The history of computer architecture stretches back nearly two centuries. It begins with Charles Babbage's Analytical Engine, a mechanical computer designed in the 1830s.
ENIAC, one of the first general-purpose electronic computers, was developed in the 1940s by John Mauchly and J. Presper Eckert. It used vacuum tubes to perform calculations.
The transistor, invented in 1947, revolutionized computer design by replacing vacuum tubes with smaller, more reliable components. This led to the development of the first commercial computers.
The stored-program concept, in which instructions are kept in the same memory as data, was first put into practice in 1948; ENIAC itself was not originally a stored-program machine and was only later modified to run in a primitive stored-program mode.
History of Computer Architecture
The first documented computer architecture appeared in the correspondence between Charles Babbage and Ada Lovelace describing the Analytical Engine. Konrad Zuse described the stored-program concept in two 1936 patent applications, proposing that machine instructions could be stored in the same storage used for data.
The term "architecture" in computer literature was first used by Lyle R. Johnson and Frederick P. Brooks, Jr. in 1959. They used the term to describe the level of detail required to discuss computer systems.
The earliest computer architectures were designed on paper and then directly built into the final hardware form. Later, computer architectures were physically built in the form of a transistor–transistor logic (TTL) computer, tested, and tweaked before committing to the final hardware form.
Here are some notable early stored-program computers:
- The IBM SSEC was publicly demonstrated on January 27, 1948.
- The ARC2 developed by Andrew Booth and Kathleen Booth at Birkbeck, University of London officially came online on May 12, 1948.
- The Manchester Baby was the first fully electronic computer to run a stored program, running a factoring program for 52 minutes on June 21, 1948.
- The ENIAC was modified to run as a primitive read-only stored-program computer on September 16, 1948.
The von Neumann architecture, proposed in 1945, describes a computer consisting of a processor, a memory unit, input/output devices, and secondary storage. This organization remains the foundation of most modern computer systems.
Early History
The early history of computer architecture spans more than a century, from mechanical designs to the first electronic machines. The first documented computer architecture appeared in the correspondence between Charles Babbage and Ada Lovelace describing the Analytical Engine.
In the 1930s, Konrad Zuse started building the computer Z1 and described in patent applications that machine instructions could be stored in the same storage used for data, a concept known as the stored-program concept.
One of the earliest documented computer architectures was that of the EDVAC, set out in John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements. This paper is a significant milestone in the history of computer architecture.
Alan Turing's Proposed Electronic Calculator for the Automatic Computing Engine, also from 1945, cited von Neumann's paper and provided more details on the design of a computer.
The term "architecture" in computer literature was first used by Lyle R. Johnson and Frederick P. Brooks, Jr. in 1959, in the context of the Stretch, an IBM-developed supercomputer. They noted that their description of formats, instruction types, hardware parameters, and speed enhancements were at the level of "system architecture".
Computer architecture prototypes were initially built directly into the final hardware form, but later, they were built as TTL computers, tested, and tweaked before committing to the final hardware form.
Definition
Computer architecture is all about finding a balance between performance, efficiency, cost, and reliability. This balance is crucial in designing a computer system that meets the needs of its users.
A good example of this balance is the instruction set architecture, which can be either simple or complex. More complex instruction sets can enable programmers to write more space-efficient programs.
However, longer and more complex instructions can take longer for the processor to decode and may be more costly to implement effectively. This increased complexity also creates more room for unreliability when instructions interact in unexpected ways.
The trade-off between complexity and reliability is a constant challenge in computer architecture.
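To make the trade-off concrete, here is a rough sketch in C; the exact instruction counts depend on the compiler and target, so the comments are illustrative rather than definitive:

```c
/* A read-modify-write on a memory location. */
void increment(long *p) {
    *p += 1;
    /* On a CISC-style ISA such as x86-64, this can often be encoded as a single
     * instruction with a memory operand (an add directly to memory), which keeps
     * the code compact but requires a more complex decoder.
     * On a RISC-style ISA such as RISC-V, it typically becomes three simple
     * instructions: load the value, add 1, and store it back. */
}
```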
Microprogrammed Control
In the realm of computer architecture, microprogrammed control is a crucial aspect that has been around for a while. It refers to the use of microinstructions to control the flow of data and instructions within a computer.
Microinstructions are small, low-level instructions used to implement the behavior of a computer's control unit. They are held in a control store, typically a read-only memory (ROM), although some designs use a writable control store, and are executed by the control unit.
The control unit is responsible for fetching, decoding, and executing instructions, and it's typically implemented using either hardwired logic or microprogrammed control. In a microprogrammed control unit, microinstructions are used to implement the control logic, whereas in a hardwired control unit, the control logic is implemented using a fixed set of logic gates.
A key difference between CALL and JUMP instructions is that CALL saves the return address (so that execution can later resume at the instruction after the call) before transferring control to a new location, whereas JUMP simply transfers control without saving anything. This distinction underlies subroutine calls in virtually every programming language.
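A minimal sketch of that difference, using a hypothetical toy machine; the pc, sp, and stack names here are invented for illustration, not taken from any real ISA:

```c
#include <stdint.h>

/* Hypothetical toy machine state: program counter, stack pointer, stack. */
static uint32_t pc;
static uint32_t stack[64];
static int sp;

/* JUMP: transfer control to 'target' without remembering where we came from. */
static void do_jump(uint32_t target) {
    pc = target;
}

/* CALL: save the return address (the instruction after the call) on the
 * stack, then transfer control, so a later RETURN can resume execution. */
static void do_call(uint32_t target, uint32_t return_address) {
    stack[sp++] = return_address;
    pc = target;
}

/* RETURN: pop the saved address and continue where the CALL left off. */
static void do_return(void) {
    pc = stack[--sp];
}
```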
Microprogrammed control units can be organized horizontally or vertically. In a horizontal microprogrammed control unit, each microinstruction is wide and drives the control signals directly, with roughly one bit per control line; in a vertical microprogrammed control unit, microinstructions are shorter and encoded, so they must pass through additional decoding logic before the control signals are produced.
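As a rough illustration (the control signals below are invented, and real control words are far wider), a horizontal microinstruction can be pictured as a bitmask with one bit per signal, while a vertical microinstruction is a short code that a decoder expands into those signals:

```c
#include <stdint.h>

/* Horizontal microinstruction: one bit per control signal (wide, no decoding). */
#define CTRL_MEM_READ   (1u << 0)
#define CTRL_MEM_WRITE  (1u << 1)
#define CTRL_ALU_ADD    (1u << 2)
#define CTRL_REG_WRITE  (1u << 3)

typedef uint32_t horizontal_uinstr;   /* e.g., CTRL_MEM_READ | CTRL_REG_WRITE */

/* Vertical microinstruction: a short encoded value that a decoder expands
 * into the same control signals, trading control-store width for decode logic. */
typedef uint8_t vertical_uinstr;

static uint32_t decode_vertical(vertical_uinstr op) {
    switch (op) {
    case 0:  return CTRL_MEM_READ | CTRL_REG_WRITE;  /* load-like micro-op  */
    case 1:  return CTRL_ALU_ADD  | CTRL_REG_WRITE;  /* add-like micro-op   */
    case 2:  return CTRL_MEM_WRITE;                  /* store-like micro-op */
    default: return 0;
    }
}
```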
Here's a brief comparison between hardwired and microprogrammed control units:
- Hardwired control: control logic built from fixed logic gates; fast, but difficult to modify or extend
- Microprogrammed control: control logic expressed as microinstructions in a control store; slower, but easier to design, modify, and extend
The performance of a computer is directly related to the efficiency of its control unit. A well-designed microprogrammed control unit can improve the performance of a computer by reducing the execution time of instructions.
In computer organization, subprograms are small programs that are used to perform a specific task. They have characteristics such as being self-contained, having a well-defined interface, and being reusable. Subprograms are an essential aspect of computer programming and are used extensively in various programming languages.
Von Neumann Architecture
The Von Neumann architecture is a fundamental concept in computer science, named after mathematician and computer scientist John von Neumann. It features a single memory space for both data and instructions, which are fetched and executed sequentially.
This architecture was proposed in 1945 by von Neumann and his colleagues, and it's based on the idea that instructions and data are stored in the same memory. The processor can access the instructions and data required for the execution of a program using dedicated connections called buses.
The Von Neumann architecture introduced the concept of stored-program computers, where both instructions and data are stored in the same memory, allowing for flexible program execution. However, this architecture also has a limitation known as the Von Neumann bottleneck.
The Von Neumann bottleneck occurs because the single bus can only access one of the two classes of memory at a time, leading to a limited throughput between the central processing unit (CPU) and memory. This bottleneck has become more of a problem over time, as CPU speed and memory size have increased much faster than the throughput between them.
Here are some key features of the Von Neumann architecture:
- Single memory space for both data and instructions
- Instructions and data are fetched and executed sequentially
- Dedicated connections called buses for accessing instructions and data
- Introduces the concept of stored-program computers
- Features a Von Neumann bottleneck
The Von Neumann architecture remains highly relevant and influential in computer design, and it's still used in many modern computers. However, modern CPUs employ techniques like caching and pipelining to improve efficiency and address the Von Neumann bottleneck.
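To make the fetch-execute cycle concrete, here is a minimal sketch of a von Neumann style machine in C; the three-instruction toy ISA is invented purely for illustration, and the key point is that a single array serves as memory for both instructions and data:

```c
#include <stdint.h>
#include <stdio.h>

/* One memory holds both instructions and data, as in the von Neumann model. */
enum { LOAD = 1, ADD = 2, HALT = 3 };   /* invented toy opcodes */
static uint8_t mem[256];

int main(void) {
    /* Program: load mem[100] into the accumulator, add mem[101], halt. */
    uint8_t program[] = { LOAD, 100, ADD, 101, HALT };
    for (int i = 0; i < 5; i++) mem[i] = program[i];
    mem[100] = 40;
    mem[101] = 2;

    uint8_t acc = 0;
    uint8_t pc = 0;
    for (;;) {
        uint8_t opcode = mem[pc++];      /* fetch */
        if (opcode == HALT) break;       /* decode + execute */
        uint8_t addr = mem[pc++];        /* operand address shares the same memory */
        if (opcode == LOAD) acc = mem[addr];
        else if (opcode == ADD) acc = (uint8_t)(acc + mem[addr]);
    }
    printf("accumulator = %d\n", acc);   /* prints 42 */
    return 0;
}
```

Because instructions and data share the one mem array, every instruction fetch and every data access goes over the same pathway, which is the contention behind the Von Neumann bottleneck described above.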
Arithmetic
Arithmetic is a fundamental aspect of computer architecture, and it's fascinating to explore its evolution. The Arithmetic Logic Unit (ALU) is the part of the processor that performs calculations and logical operations.
In the early days of computing, signed binary numbers were represented using either the 1's complement or the 2's complement method. The 1's complement of a number is formed by flipping every bit, while the 2's complement is formed by flipping every bit and then adding 1.
Virtually all modern computers use 2's complement for signed integer arithmetic, because it gives a single representation of zero and lets the same adder circuitry handle addition and subtraction of signed and unsigned values; taking the 1's complement by itself is simply a bitwise NOT operation.
Let's take a look at some examples of arithmetic operations:
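As a small worked example (in C, using 8-bit values for readability), here is how the 1's and 2's complements of a number are formed and why the 2's complement behaves like negation; treat it as an illustrative sketch:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 5;                      /* 0000 0101 */
    uint8_t ones = (uint8_t)~x;         /* 1111 1010 : 1's complement (bitwise NOT) */
    uint8_t twos = (uint8_t)(ones + 1); /* 1111 1011 : 2's complement = NOT + 1     */

    /* In 2's complement, adding x and its complement wraps around to zero,
     * so 'twos' behaves exactly like -5 in 8-bit arithmetic. */
    printf("x        = 0x%02X\n", x);                    /* 0x05 */
    printf("~x       = 0x%02X\n", ones);                 /* 0xFA */
    printf("~x + 1   = 0x%02X\n", twos);                 /* 0xFB */
    printf("x + twos = 0x%02X\n", (uint8_t)(x + twos));  /* 0x00 */
    return 0;
}
```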
One of the most significant advancements in computer arithmetic is Booth's algorithm, a method for multiplying signed binary numbers using a combination of additions, subtractions, and shifts.
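A minimal sketch of the idea, assuming 8-bit signed operands and a 16-bit product (real hardware implementations differ in width and in how the shifting registers are wired):

```c
#include <stdint.h>
#include <stdio.h>

/* Booth's algorithm for 8-bit signed operands: 'a' is the upper half of the
 * product register, 'q' holds the multiplier, and 'q1' is the extra bit
 * examined together with q's lowest bit. */
static int16_t booth_multiply(int8_t multiplicand, int8_t multiplier) {
    int8_t a = 0;
    uint8_t q = (uint8_t)multiplier;
    int q1 = 0;

    for (int i = 0; i < 8; i++) {
        int q0 = q & 1;
        if (q0 == 1 && q1 == 0) a = (int8_t)(a - multiplicand); /* bit pair 10: subtract */
        if (q0 == 0 && q1 == 1) a = (int8_t)(a + multiplicand); /* bit pair 01: add      */

        /* Arithmetic right shift of the combined register (a, q, q1). */
        q1 = q & 1;
        q  = (uint8_t)((q >> 1) | ((uint8_t)(a & 1) << 7));
        a  = (int8_t)(((uint8_t)a >> 1) | ((uint8_t)a & 0x80)); /* keep the sign bit */
    }
    return (int16_t)(((uint16_t)(uint8_t)a << 8) | q);
}

int main(void) {
    printf("%d\n", booth_multiply(7, -3));   /* prints -21 */
    printf("%d\n", booth_multiply(-6, -5));  /* prints  30 */
    return 0;
}
```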
In addition to the ALU, the data path is another crucial component of computer arithmetic. The data path is responsible for transferring data between different parts of the computer, and it plays a vital role in arithmetic operations.
The data path is typically implemented using a series of registers and buses, which allow data to be transferred efficiently between different parts of the computer.
In conclusion, arithmetic is a fundamental aspect of computer architecture, and understanding its evolution and components is essential for building efficient and powerful computers.
IEEE Number Standards
IEEE Number Standards were a crucial development in the history of computer architecture.
IEEE Standard 754 is a widely adopted standard for floating point numbers. It defines how numbers are represented in computers, which has a significant impact on how calculations are performed.
This standard has been widely adopted across the industry, and it's hard to imagine modern computers without it.
The IEEE Standard 754 defines a 32-bit (single-precision) floating point number as a combination of a sign bit (1 bit), a biased exponent (8 bits), and a mantissa, or fraction (23 bits).
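A small sketch of how those three fields can be extracted in C; it assumes that float is IEEE 754 single precision, which holds on virtually all modern hardware:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = -6.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);           /* reinterpret the 32 bits safely */

    uint32_t sign     = bits >> 31;           /* 1 bit                  */
    uint32_t exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127  */
    uint32_t mantissa = bits & 0x7FFFFF;      /* 23 bits of fraction    */

    /* value = (-1)^sign * 1.mantissa * 2^(exponent - 127) for normal numbers */
    printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    return 0;
}
```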
Computer Architecture Basics
Computer architecture is the foundation of modern computing, and understanding its basics is essential for anyone interested in computer science. The von Neumann architecture, proposed by John von Neumann and his colleagues in 1945, is a fundamental concept in computer architecture.
The von Neumann architecture consists of a processor with an arithmetic and logic unit (ALU) and a control unit, a memory unit, connections for input/output devices, and a secondary storage for saving and backing up data. This architecture revolutionized computing by allowing instructions and data to be loaded into the same memory unit.
The central processing unit (CPU) is often referred to as the "brain" of the computer, executing instructions, performing calculations, and managing data. Its architecture dictates factors such as instruction set, clock speed, and cache hierarchy, all of which significantly impact overall system performance.
Here are the key components of computer architecture:
- Central Processing Unit (CPU)
- Memory Hierarchy
- Input/Output (I/O) System
- Storage Architecture
These components work together through a system bus consisting of the address bus, data bus, and control bus, enabling the CPU to access instructions and data required for execution.
The Von Neumann Model
The Von Neumann Model is a fundamental concept in computer architecture that dates from the 1940s, first set out by mathematician John von Neumann in 1945.
This model is based on the idea that a computer has a single memory space for both data and instructions, which are fetched and executed sequentially. The Von Neumann architecture is still widely used today, despite its limitations.
One of the key features of the Von Neumann model is the use of a single bus to fetch instructions and data. This leads to the Von Neumann bottleneck, where the CPU is forced to wait for needed data to move to or from memory.
The Von Neumann bottleneck was described by John Backus in 1977, and it's a problem that has become more severe with each new generation of CPU. The bottleneck occurs because the single bus can only access one of the two classes of memory at a time, resulting in lower throughput.
In the early days of computing, the Von Neumann model was used in several computers, including the ARC2, the Manchester Baby, and the EDSAC. These machines were among the first stored-program electronic computers, and they paved the way for modern computing.
Here are some key components and techniques found in computers built on the Von Neumann model:
- Central Processing Unit (CPU): executes instructions, performs calculations, and manages data
- Memory Hierarchy: includes cache memory, random access memory (RAM), and storage devices
- Input/Output (I/O) System: enables communication between the computer and external devices
- Storage Architecture: deals with how data is stored and retrieved from storage devices
- Instruction Pipelining: breaks down instruction execution into multiple stages
- Parallel Processing: divides a task into smaller subtasks and executes them concurrently
The Von Neumann model has been influential in the development of modern computers, and it remains a fundamental concept in computer architecture today.
Basic Instructions
Basic instructions are the building blocks of computer programming, and understanding them is essential for any aspiring programmer. They are the set of commands that a computer's processor can execute directly.
A basic understanding of computer instructions is essential for programmers who want to write efficient code. This includes knowing how to use fundamental instructions like MOV, a basic data-transfer instruction found in many instruction sets.
Timing diagrams, like the one for the MOV instruction in microprocessors, help programmers visualize how instructions are executed. They show the sequence of events that occur when an instruction is processed.
Assembly language and high-level languages are two different types of programming languages. Assembly language is a low-level language that uses symbolic representations of machine code, while high-level languages are more abstract and closer to human language.
Addressing modes specify where an instruction's operands are located. Common modes include immediate (the operand is embedded in the instruction), register (the operand is in a register), direct (the instruction holds a memory address), register indirect (a register holds the memory address), and indexed (the address is computed from a base and an offset).
The choice of addressing mode depends on the specific requirements of the program and the architecture of the computer.
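As a rough analogy (how a compiler actually encodes each access depends on the target ISA, so the comments below are suggestive rather than exact), common addressing modes line up with familiar C expressions:

```c
int global_counter;                 /* direct (absolute): the instruction holds the address */

int addressing_demo(int reg_val, int *ptr, int *array, int index) {
    int a = 42;                     /* immediate: the operand is embedded in the instruction      */
    int b = reg_val;                /* register: the operand is already in a register             */
    int c = *ptr;                   /* register indirect: a register holds the memory address     */
    int d = array[index];           /* indexed: address = base register + scaled index            */
    int e = global_counter;         /* direct: a fixed address is encoded (or PC-relative on some ISAs) */
    return a + b + c + d + e;
}
```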
Instruction Set and Implementation
The instruction set architecture (ISA) is the interface between a computer's software and hardware, essentially the programmer's view of the machine. It defines how instructions are encoded and how the computer interacts with memory and itself.
An ISA is usually described in a small instruction manual, which describes how instructions are encoded and defines short mnemonic names for the instructions. These names can be recognized by an assembler, a software development tool that translates human-readable form of the ISA into a computer-readable form. Disassemblers are also widely available to isolate and correct malfunctions in binary computer programs.
A good ISA compromises between programmer convenience, size of the code, cost of the computer to interpret the instructions, and speed of the computer. During design, emulators can run programs written in a proposed instruction set and measure their size, cost, and speed to determine whether a particular ISA is meeting its goals.
Implementation of an ISA involves several steps, including logic implementation, circuit implementation, physical implementation, and design validation. Logic implementation designs the circuits required at a logic-gate level, while circuit implementation does transistor-level designs of basic elements. Physical implementation draws physical circuits, and design validation tests the computer as a whole to see if it works in all situations and all timings.
Instruction Set
An instruction set architecture (ISA) is the interface between a computer's software and hardware, essentially the programmer's view of the machine. It defines how the processor understands instructions encoded in numerical fashion, usually as binary numbers.
High-level programming languages are translated into the ISA's instructions by software tools such as compilers. This translation process is what allows a program written in a high-level language to run on a computer.
A good ISA must strike a balance between programmer convenience, size of the code, cost of the computer to interpret the instructions, and speed of the computer. This balance is essential to ensure that the ISA is meeting its goals.
The ISA defines items in the computer that are available to a program, such as data types, registers, addressing modes, and memory. Instructions locate these available items with register indexes (or names) and memory addressing modes.
Here are some key components of an ISA:
- Data types
- Registers
- Addressing modes
- Memory
The ISA is usually described in a small instruction manual, which describes how the instructions are encoded and defines short mnemonic names for the instructions. These names can be recognized by a software development tool called an assembler.
An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. Disassemblers are also widely available, usually in debuggers and software programs to isolate and correct malfunctions in binary computer programs.
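A minimal sketch of what an assembler does, using a hypothetical two-instruction ISA whose mnemonics, opcodes, and encoding are invented for illustration: it looks up each mnemonic and packs the opcode and operand into a fixed-width machine word.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical ISA: 16-bit instructions, 4-bit opcode, 12-bit operand. */
static const struct { const char *mnemonic; uint16_t opcode; } isa[] = {
    { "LOAD", 0x1 },   /* LOAD addr : load memory word into the accumulator */
    { "ADD",  0x2 },   /* ADD  addr : add memory word to the accumulator    */
};

/* Assemble one "MNEMONIC operand" pair into a 16-bit machine word. */
static int assemble(const char *mnemonic, uint16_t operand, uint16_t *out) {
    for (size_t i = 0; i < sizeof isa / sizeof isa[0]; i++) {
        if (strcmp(mnemonic, isa[i].mnemonic) == 0) {
            *out = (uint16_t)((isa[i].opcode << 12) | (operand & 0x0FFF));
            return 0;
        }
    }
    return -1;  /* unknown mnemonic */
}

int main(void) {
    uint16_t word;
    if (assemble("LOAD", 100, &word) == 0)
        printf("LOAD 100 -> 0x%04X\n", word);   /* 0x1064 */
    if (assemble("ADD", 101, &word) == 0)
        printf("ADD 101 -> 0x%04X\n", word);    /* 0x2065 */
    return 0;
}
```

A disassembler performs the reverse mapping, recovering mnemonics and operands from the binary words.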
Implementation
Implementation is a crucial step in bringing a computer design to life. It involves taking the instruction set and microarchitecture and turning them into a practical machine.
The implementation process can be broken down into several steps, including logic implementation, circuit implementation, physical implementation, and design validation.
Logic implementation designs the circuits required at a logic-gate level, which is the foundation of the computer's architecture. This is where the digital logic of the computer is defined.
Circuit implementation takes it a step further, designing the transistor-level circuits for basic elements like gates, multiplexers, and latches. It also designs larger blocks like ALUs and caches.
Physical implementation is where the computer's design is brought to life, literally. It involves placing the different circuit components in a chip floor plan or on a board and creating the wires that connect them.
Design validation is the final step, where the computer is tested as a whole to ensure it works in all situations and timings. This is a critical step, as it can reveal issues that need to be addressed before the design is finalized.
Here's an overview of the implementation steps:
- Logic implementation
- Circuit implementation
- Physical implementation
- Design validation
Design validation is typically done using logic emulators, but this can be too slow for realistic testing. To speed things up, prototypes are constructed using field-programmable gate arrays (FPGAs).
Harvard Architecture
The Harvard architecture is a type of computer architecture that keeps instructions and data in separate memories.
This design allows for simultaneous access to instructions and data, potentially improving performance, especially for tasks that involve a lot of data movement.
Separate memory units can be optimized for their specific purposes, such as instruction memory being read-only or data memory being optimized for fast read/write operations.
Many microcontrollers and digital signal processors use a modified Harvard architecture, a variation that relaxes the strict separation while keeping separate pathways or caches for instructions and data.
The Harvard architecture is used extensively in embedded computing systems, such as digital signal processing (DSP) systems.
Implementing separate storage and pathways can be more complex than the simpler von Neumann architecture, and having separate memory units can increase the overall cost of the system.
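A minimal sketch of the structural difference (the types and sizes are toy values; real Harvard machines separate the memories in hardware, not just in software): instruction memory and data memory are distinct, so an instruction fetch and a data access can proceed at the same time.

```c
#include <stdint.h>

/* Harvard-style core: two separate memories with independent ports. */
static uint16_t instruction_mem[1024];   /* often read-only in practice */
static uint8_t  data_mem[1024];

/* Because the memories are separate, fetching the next instruction and
 * reading a data operand do not compete for a single bus, unlike the
 * von Neumann sketch above where both came out of one 'mem' array. */
static void one_cycle(uint16_t pc, uint16_t data_addr,
                      uint16_t *fetched, uint8_t *operand) {
    *fetched = instruction_mem[pc];      /* instruction fetch       */
    *operand = data_mem[data_addr];      /* data access, same cycle */
}
```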