
Reverse-engineering the carry-lookahead circuit in the Intel 8008 processor

The 8008 was Intel's first 8-bit microprocessor, introduced in 1972. While primitive by today's standards, the 8008 is historically important because it essentially started the microprocessor revolution and is the ancestor of the modern x86 processor family. I've been studying the 8008's silicon die under the microscope and reverse-engineering its circuitry.

The die photo below shows the main functional blocks1 including the registers, instruction decoder, and on-chip stack storage. The 8-bit arithmetic logic unit (ALU) is on the left. Above the ALU is the carry-lookahead generator, which improves performance by computing the carries for an addition before the addition takes place. It's a bit surprising to see carry lookahead implemented in such an early microprocessor. This blog post explains how the carry circuit is implemented.

The Intel 8008 die with key functional blocks labeled. Click for a larger version.

Most of what you see in the die photo is the greenish-white wiring of the metal layer on top. Underneath the metal is polysilicon wiring, providing more connections as well as implementing transistors. The chip contains about 3500 tiny transistors, which appear as brighter yellow. The underlying silicon substrate is mostly obscured; it is purplish-gray. Around the edges of the die are 18 rectangular pads; these are connected by tiny bond wires to the external pins of the integrated circuit package (below).

An 8008 integrated circuit in an 18-pin DIP (dual inline package). The package is very scratched, but I didn't see the point in paying for mint condition for a chip I was immediately going to decap.

The 8008 was sold as a small 18-pin DIP (dual inline package) integrated circuit. 18 pins is an inconveniently small number of pins for a microprocessor, but Intel was committed to small packages at the time.2 In comparison, other early microprocessors typically used 40 pins, making it much easier to connect the data bus, address bus, control signals, and power to the processor.

Addition

The heart of a processor is the arithmetic-logic unit (ALU), the functional block that performs arithmetic operations (such as addition or subtraction) and logical operations (such as AND, OR, and XOR). Addition was the most challenging operation to implement efficiently because of the need for carries.3

Consider how you add two decimal numbers such as 8888 and 1114, with long addition. Starting at the right, you add each pair of digits (8 and 4), write down the sum (2), and pass any carry (1) along to the left. In the next column, you add the pair of digits (8 and 1) along with the carry (1), writing down the sum (0) and passing the carry (1) to the next column. You repeat the process right-to-left, ending up with the result 10002. Note that you have to add each position before you can compute the next position.

Binary numbers can be added in a similar way with a circuit called a ripple-carry adder that was used in many early microprocessors. Each bit is computed by a full adder, which takes two input bits and a carry and produces the sum bit and a carry-out. For instance, adding binary 1 + 1 with no carry-in yields 10, for a sum bit of 0 and a carry-out of 1. Each carry-out is added to the bit position to the left, just like decimal long addition.
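To make the ripple-carry behavior concrete, here is a minimal Python sketch (my own illustration, not the 8008's circuitry) of a full adder chained into an 8-bit ripple-carry adder. The point to notice is that each stage's carry must be computed before the next stage can finish.

```python
def full_adder(x, y, carry_in):
    """Add two bits plus a carry-in, returning (sum_bit, carry_out)."""
    total = x + y + carry_in
    return total & 1, total >> 1

def ripple_carry_add(a, b, bits=8):
    """Add two numbers bit by bit; each carry ripples into the next stage."""
    result, carry = 0, 0
    for i in range(bits):
        sum_bit, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= sum_bit << i
    return result, carry

# Worst case: 11111111 + 1 forces the carry to ripple through all eight stages.
print(ripple_carry_add(0b11111111, 0b00000001))  # (0, 1)
```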

The problem with ripple carry is that if you add, say, 11111111 + 1, you need to wait as the carry "ripples" through the sum from right to left. This makes addition a slow serial operation instead of a parallel operation. Even though the 8008 only performs addition on 8-bit numbers, this delay would slow the processor too much. The solution was a carry lookahead circuit that rapidly computes the carries for all eight bit positions. Then the sum can be calculated in parallel without waiting for carries to ripple through the bits. According to 8008 designer Hal Feeney, "We built the carry look-ahead logic because we needed the speed as far as the processor is concerned. So carry look ahead seemed like something we could integrate and have fairly low real estate overhead and, as you see, the whole carry look ahead is just a very small portion of the chip."

Implementing carry lookahead

The idea of carry lookahead is that if you can compute all the carry values in advance, then you can rapidly add all the bit positions in parallel. But how can you compute the carries without performing the addition? The solution in the 8008 was to build a separate circuit for each bit position to compute the carry based on the inputs.

The diagram below zooms in on the carry lookahead circuitry and the arithmetic-logic unit (ALU). The two 8-bit arguments and a carry-in arrive at the top. These values flow vertically through the carry lookahead circuit, generating carry values for each bit along the way. Each ALU block receives two input bits and a carry bit and produces one output bit. The carry lookahead has a triangular layout because successive carry bits require more circuitry, as will be explained. The 8-bit ALU has an unusual layout in order to make the most of the triangular space. Almost all microprocessors arrange the ALU in a rectangular block; an 8-bit ALU would have 8 similar slices. But in the 8008, the slices of the ALU are scattered irregularly; some slices are even rotated sideways. I've written about the 8008's ALU before if you want more details.

Closeup of the 8008 die showing the carry lookahead circuitry and the ALU.

To understand how carry lookahead works, consider three addition cases. First, adding 0+0 cannot generate a carry, even if there is a carry in; the sum is 0 (if there is no carry in) or 1 (with carry in). The second case is 0+1 or 1+0. In this case, there will be a carry out only if there is a carry in. (With no carry-in the result is 1, while with carry-in the result is 10.) This is the "propagate" case, since the carry-in is propagated to carry-out. The final case is 1+1. In this case, there will be a carry-out, regardless of the carry-in. This is the "generate" case, since a new carry is generated.

The circuit below computes the carry-out when adding two bits (X and Y) along with a carry-in. This circuit is built from an OR gate on the left, two AND gates in the middle, and an OR gate on the right. (Although this circuit looks complex, it can be implemented efficiently in hardware.) To see how it operates, consider the three cases. If X and Y are both 0, the carry output will be 0. Otherwise, the first OR gate will output 1. If carry-in is 1, the upper AND gate will output 1 and the carry-out will be 1. (This is the propagate case.) Finally, if both X and Y are 1, the lower AND gate will output 1, and the carry-out will be 1. (This is the generate case.)
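In Boolean terms, the circuit computes carry-out = ((X OR Y) AND carry-in) OR (X AND Y). Here is a tiny Python check of the three cases (my own sketch of the logic described above, not the hardware itself):

```python
def carry_out(x, y, carry_in):
    # Left OR gate feeds the "propagate" AND gate; the lower AND gate is the "generate" term.
    return ((x | y) & carry_in) | (x & y)

assert carry_out(0, 0, 1) == 0   # 0+0: never carries, even with a carry-in
assert carry_out(1, 0, 0) == 0   # propagate case: no carry-in, no carry-out
assert carry_out(0, 1, 1) == 1   # propagate case: the carry-in passes through
assert carry_out(1, 1, 0) == 1   # generate case: 1+1 always carries
```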

This circuit computes the carry-out given the carry-in and two input bits X and Y.

To compute the carry into a higher-order position, multiple instances of this circuit can be chained together. For instance, the circuit below computes the carry into bit position 2 (C2). The gate block on the left computes C1, the carry into bit position 1, from the carry-in (C0) and low-order bits X0 and Y0, as explained above. The gates on the right apply the same process to the next bits, generating the carry into position 2. For other bit positions, the same principle is used but with additional blocks of gates. For instance, the carry into position 7 is computed by a chain of seven blocks of gates. Since the circuit for each successive bit is one unit longer, the carry structure has the triangular structure seen on the die.
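Chaining that formula gives the carry into every bit position; the sketch below (my own code) shows why the circuit for bit N needs N blocks of gates. In software the loop runs sequentially, but in the 8008's hardware each bit position has its own chain, all evaluated at once.

```python
def lookahead_carries(a, b, carry_in=0, bits=8):
    """Return the carry into each bit position, plus the final carry-out."""
    carries = [carry_in]
    for i in range(bits):
        x, y = (a >> i) & 1, (b >> i) & 1
        carries.append(((x | y) & carries[i]) | (x & y))
    return carries  # carries[i] is the carry into bit i

print(lookahead_carries(0b11111111, 0b00000001))  # [0, 1, 1, 1, 1, 1, 1, 1, 1]
```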

Computing the carry into position 2 requires two stages of carry prediction.

The diagram below shows how the carry circuit for bit 2 is implemented on the die; the circuit for other bits is similar, but with more repeated blocks. In the photograph, the metal wiring on top of the die is silverish. Underneath this, the polysilicon wiring is yellow. At the bottom, the silicon is grayish. The transistors are brighter yellow; several are indicated. The schematic underneath shows the wiring of the transistors; the layout of the schematic is close to the physical layout.

Implementation of the carry lookahead circuit for bit 2.

I'll give a brief outline of how the circuit works. The 8008 is implemented with a type of transistor called a PMOS transistor. You can think of a PMOS transistor as turning on if the input is 0, and off if the input is 1.4 Instead of standard logic gates, this circuit uses a technique called dynamic logic, which takes advantage of capacitance. In the first step, the precharge signal connects -9 volts to the circuitry, precharging it. In the second step, the input signals (top) are applied, turning on various transistors. If there is a path through the transistors from the +5 supply to the output, the output will be pulled high. Otherwise, the output remains at the precharge level; the capacitance of the wires holds the -9 volts. I won't trace out the entire circuit, but the upper X/Y transistor pairs implement an OR gate since if either one is on, the carry can get through. The lower X/Y transistors implement an AND gate; if both are on, the +5 signal will get through, generating a 1.

You might wonder why this carry lookahead circuit is any faster than a plain ripple-carry adder, since the carry signal has to go through up to seven large gates to generate the last carry bit. The trick is that the entire circuit is electrically a single large gate due to the dynamic design. All the transistors are activated in parallel, and then the 5-volt signal can pass through them all rapidly.5 Although there is still a delay as this signal travels through the circuit, the circuit is faster than the standard ripple carry adder which activates transistors in sequence.

A brief history of carry circuits

The efficient handling of carries has been an issue since the earliest days of mechanical calculation. The mathematician Blaise Pascal created a mechanical calculator in 1645. This calculator used a mechanical ripple carry mechanism powered by gravity that rapidly propagated the carry from one digit to the next (video). Almost two centuries later, Charles Babbage designed the famous difference engine (1819-1842). It used a slow ripple carry; after the addition cycle, spiral levers on a rotating shaft activated each digit's carry in sequence. Babbage spent years designing a better carry mechanism for his ambitious Analytical Engine (1837), developing an "anticipating carriage" to perform all carries in parallel. With the anticipating carriage, each digit wheel had a sliding shaft that moved into position when a digit was 9. When a digit triggered a carry by moving from 9 to 0, it raised the stack of shafts, incrementing all the appropriate digits in parallel (see video).

Detail of Babbage's diagram of the "anticipating carriage" that computes carries in the Analytical Engine. I'm not sure how this mechanism works. From The Babbage Papers at the Science Museum, London, CC BY-NC-SA 4.0.

The first digital computers used ripple carry. The designer of the Bell Labs relay computer (1939) states that "the carry circuit was complicated" due to the use of binary-coded decimal (BCD). The groundbreaking ENIAC (1946) used decimal counters with ripple carry. Early binary electronic computers such as EDSAC (1949) and SEAC (1950) were serial, operating on one bit at a time, so they computed carries one bit at a time too. Early computers with parallel addition such as the 1950 SWAC (the fastest computer at the time) and the commercial IBM 701 (1952) used ripple carry.

As computers became faster in the 1950s, ripple carry limited performance so alternatives were developed. In 1956, the National Bureau of Standards patented a 53-bit adder using vacuum tubes. This design introduced the important carry-lookahead concept, as well as the idea of using a hierarchy of carry lookahead (two levels in this case). The diagram below illustrates the complexity of this adder.

Diagram of a 53-bit adder from A 1-microsecond adder using one-megacycle circuitry, 1956.

The development of supercomputers led to new carry techniques. The transistorized Atlas was built by the University of Manchester, Ferranti and Plessey in 1962. It used the influential Manchester carry chain technique, described in 1959. The Atlas vied with the IBM Stretch (1961) for the title of the world's fastest computer. The Stretch introduced high-speed techniques including the carry-select adder and the patented carry save adder for multiplication.

As with mainframes, microprocessors started with simple adders but required improved carry techniques as performance demands increased. Most early microprocessors used ripple carry, such as the 6502, Z-80, and ARM1. Carry-skip was often used for the program counter (as in the 6502 and Z-80); ripple carry was fast enough for 8-bit words but too slow for the 16-bit program counter. The ALU of the Intel 8086 (1978) used a Manchester carry chain as well as carry skip. The large transistor counts of VLSI chips permitted more complex adders, fed by research in parallel-prefix adders. The DEC Alpha 21064 (1992) combined multiple techniques: Manchester carry chain, carry lookahead, conditional sum, and carry select (details). The Hewlett-Packard PA-8000 (1995) contained over 20 adders for various purposes, including a Ling adder, a type developed at IBM in 1966 (details). The Pentium II (1997) used a 72-bit Kogge-Stone adder while the Pentium 4 (2000) used a Han-Carlson adder.6

This history shows that carry propagation was an important performance problem in the 1950s and remains an issue today with continuing research and improvements. Many different solutions have been developed, first in mainframes and later in microprocessors, growing more complex as technology advances. These approaches have tradeoffs of die area, cost, and speed, so different processors choose different implementations.

Die photo of the Intel 8008 processor. Click for a larger version.

If you're interested in the 8008, I have other articles about it describing its architecture, its ALU, its on-chip stack, bootstrap loads, and its unusual history. I announce my latest blog posts on Twitter, so follow me at @kenshirriff. I also have an RSS feed.

Notes and references

  1. The functional blocks of the 8008 processor are documented in the datasheet (below). The layout of this diagram closely matches the physical layout on the die. I've highlighted the carry lookahead block.

    Functional blocks of the 8008 processor. From the 8008 datasheet.

     

  2. According to Federico Faggin's oral history, the 8008 team was lucky to be allowed to even use an 18-pin package for the 8008. "It was a religion in Intel" to use 16-pin packages, even though other manufacturers commonly used 40- or 48-pin packages. When Intel was forced to move to 18-pin packages for the 1103 RAM chip, it "was like the sky had dropped from heaven. I never seen so [many] long faces at Intel". The move to 18 pins was beneficial for the 8008 team, which had been forced to use 16 pins for the earlier 4004. However, even 18 pins was impractically small considering the chip used 14-bit addresses. The result was address and data signals were multiplexed over 8 data pins. This both slowed the processor and made use of the chip more complicated. Intel soon gave up on small packages, using a standard 40-pin package for the 8080 processor in 1974. 

  3. I'm ignoring subtraction in this discussion because it was implemented by addition, adding a two's complement value. Multiplication and division were not implemented by early microprocessors. Interestingly, even the earliest mainframe computers implemented multiplication and division in hardware. 

  4. Most of the "classic" microprocessors were implemented with NMOS transistors. If you're familiar with NMOS gates, everything is backward with PMOS. Although PMOS has worse performance than NMOS, it was easier to manufacture at first, so the Intel 4004 and 8008 used PMOS. PMOS required fairly large negative voltages, which is why the diagram shows -9 volts and +5 volts. 

  5. I'm hand-waving over the timing of the carry lookahead circuit. An accurate analysis of the timing would require considering the capacitance of each stage, which might add an O(n²) term.

    Also note that this carry lookahead circuit is a bit unusual. A typical carry lookahead circuit (as in the 74181 ALU chip) expands out the gates, yielding much larger but flatter circuits to minimize propagation delays. On the other hand, the 8008's circuit has a lot in common with a Manchester carry chain, which uses a similar technique of passing the incoming carry through a chain of pass transistors, or potentially generating a carry at each stage. A Manchester carry chain, however, uses a single N-stage chain rather than the 8008's triangle of separate chains for each bit. A Manchester carry chain can tap each bit's carry from each stage of the chain, so only one chain is required. The 8008's carry circuit, however, lacks the transistors that block a carry from propagating backwards, so its intermediate values may not be valid.

    In any case, the 8008's carry lookahead circuit was sufficiently fast for Intel's needs. 

  6. For more information on various adders, see this presentation, another presentation and Advanced Arithmetic Techniques

How the bootstrap load made the historic Intel 8008 processor possible

Near the end of 1972, Intel introduced their first 8-bit microprocessor, the 8008. Decades later, this processor still influences computing; you probably use an x86 processor that is a descendant of the 8008. One unusual feature of the 8008 processor is its use of a "bootstrap load" or "bootstrap capacitor", a special capacitor circuit to improve performance.1 Federico Faggin, who led the development of the 8008, is the main character in this story; he invented a new way to fabricate bootstrap capacitors for the Intel 4004 and 8008 processors and says it "proved essential to the microprocessor realization" and "without [the bootstrap load], there was no microprocessor."

Die photo of the 8008 microprocessor. (Click for a larger image.) The initials HF appear on the top right for Hal Feeney, who did the chip's logic design and physical layout.

My photo above shows the tiny silicon die inside the 8008 package. You can barely see the wires and transistors that make up the chip. There are 90 bootstrap capacitors, visible as small yellow rectangles, especially in the upper center. The squares around the outside are the 18 pads that are connected to the external pins by tiny bond wires. 18 pins is a very small number for a microprocessor, but Intel was bizarrely committed to small packages at the time.2 This required inconvenient tradeoffs; the lack of multiple power pins was one factor forcing the use of bootstrap loads.

The 8008 processor's history is more complex than you might expect. Its roots are the Datapoint 2200, a popular computer introduced in 1970 as a programmable terminal. Created before the microprocessor, the Datapoint 2200 contained a board-sized CPU built from individual TTL chips. Datapoint talked with both Intel and Texas Instruments about replacing the processor board with a single MOS chip. Texas Instruments created the TMX 1795 processor in March 1971, while Intel created the 8008 around the end of 1971, but Datapoint rejected both chips for a variety of reasons. Texas Instruments abandoned the TMX 1795 after their attempts to market it failed. Intel, on the other hand, marketed the 8008 as a general-purpose microprocessor, creating the microprocessor industry.

(You might wonder how the Intel 4004 fits into this story. The Intel 4004 is architecturally unrelated to the 8008 in almost every way; despite the similar names, the 8008 is not an 8-bit version of the 4-bit 4004. After the Intel 4004 was launched in 1971, much of the 4004 team (including Faggin, Hoff, Mazor, and Feeney) moved over to the 8008 project. Because the 4004 and 8008 processors were built by the same team with the same PMOS3 process, they have some layout and circuit-level similarities, in particular the bootstrap load circuit.)

Why the bootstrap load?

The purpose of the bootstrap load is to get extra voltage out of a transistor when necessary. To explain this, I'll start by showing how an inverter works when implemented in a processor. The diagram below shows an inverter, built from a PMOS3 transistor and a load resistor (which is actually a transistor). If the input to the inverter is 0 (low), the lower transistor turns on, pulling the output high (1). But if the input is 1 (high), the output transistor turns off. In that case, the load resistor pulls the output low (0). Thus, the input signal is inverted.

How an inverter is constructed from PMOS transistors. The upper symbol indicates a PMOS transistor that is acting as a load resistor. (Based on the 8008 datasheet.)

The diagram below shows the physical implementation of an inverter in the 8008 processor. The first die photo shows the inverter as it appears in the chip. The horizontal metal wiring on top provides VDD and the input to the circuit. For the second photo, I dissolved the metal layer to reveal the two transistors that form the circuit. The schematic on the right matches the physical layout of the transistors on the die but otherwise corresponds to the schematic above. Because creating resistors in an integrated circuit is inconvenient, the load resistor is implemented by a transistor.

How an inverter appears in the 8008 processor.

There's a complication from using a transistor as a load resistor: these MOS transistors have a property called the threshold voltage VT. The problem is that when you try to pull a signal low, the transistor can't pull it all the way low. Although you'd like the signal to get pulled down to VDD (-9 volts), the threshold voltage (say -5 volts)9 means that you can only get the signal down to -4 volts. (This is one of the reasons why the 8008 requires a much larger voltage (14 volts overall) than modern integrated circuits; if you tried to run it at 5 volts, the threshold voltage would consume the entire signal.)

The diagram below explains the threshold voltage in more detail. VD, VG, and VS are the voltages on the drain, gate, and source respectively. VGS is the voltage between the gate and the source. The transistor will turn on if VGS < VT, the threshold voltage. (Inconveniently, most of these voltages are negative in a PMOS transistor, which makes things confusing.) The problem is that with a gate voltage of -9 volts and a threshold voltage of -5 volts, the transistor will only be on if VS is higher than -4 volts. Thus, the transistor can't pull VS lower than -4 volts. The only way to get VS lower is if you had a more-negative gate voltage, at least -14 volts in this case. Some chips solve this by using an additional voltage supply to provide more voltage to the gate, such as the Intel 8080 or the HP Nanoprocessor.
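Plugging in the numbers from the text (VDD = -9 volts and an assumed threshold of VT = -5 volts) shows where the -4 volt and -14 volt figures come from; this is just the arithmetic, not a device model:

```python
V_T = -5.0    # assumed threshold voltage
V_DD = -9.0   # gate voltage supplied by the -9 volt rail

# The transistor conducts only while V_GS = V_G - V_S < V_T,
# so the lowest source (output) voltage it can reach is V_G - V_T.
lowest_output = V_DD - V_T
print(lowest_output)       # -4.0 volts: the load can't pull the signal any lower

# To pull the output all the way to -9 volts, the gate would need:
required_gate = -9.0 + V_T
print(required_gate)       # -14.0 volts
```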

VD, VG, and VS are the voltages on the transistor's drain, gate, and source respectively. VGS is the voltage difference between the gate and source.

The threshold voltage isn't much of a problem when you're dealing with inverters and other gates, because the voltage levels are restored by each gate. However, there are two places where the threshold voltage is a problem: superbuffers and pass transistor logic. In these circuits (described in the footnote4), the threshold voltage drop happens twice, yielding an output that is too weak. Since these circuits are common in processors, a solution was needed: the bootstrap load. It is a way of generating more voltage for the gate to overcome the threshold voltage so the transistor can pull its output all the way to VD.

How the bootstrap load works

The bootstrap load is essentially a charge pump circuit that uses a bootstrap capacitor to boost the gate voltage. The diagram below shows the basic idea of a charge pump. On the left, a capacitor is charged to -9 volts from a voltage source. If you disconnect the voltage source and then connect the bottom of the capacitor to -9 volts, as shown on the right, the capacitor retains its charge of -9 volts. However, since the lower side of the capacitor is now at -9 volts, the upper side of the capacitor is now at -18 volts. The bootstrap load uses this -18 volts as the gate voltage, sufficient to overcome the threshold voltage.

A charge pump. On the left, the capacitor is charged to -9 volts. On the right, the bottom of the capacitor is connected to -9 volts, yielding -18 volts on top of the capacitor.

The diagram below shows the bootstrap load circuit. The circuit is similar to the inverter described earlier, but with the addition of a capacitor and a transistor. In the first diagram, a 0 input turns on the lower transistor (Q1), yielding a 1 output (+5 volts). Meanwhile, Q3 acts as a load resistor, pulling the top of the capacitor to -4 volts (not -9 volts due to the threshold voltage.) This results in -9 volts stored across the capacitor.

How the bootstrap load circuit works.

The second and third diagrams show what happens with a 1 input. The lower transistor Q1 turns off, allowing Q2 to pull the output low. With a regular inverter, -4 volts is as low as the output can go (second diagram). However, as explained earlier, the capacitor still holds -9 volts, so the top of the capacitor must be -13 volts. With -13 volts on the gate of Q2, Q2 will continue to pull the output lower, until the circuit ends up as shown on the right, with the output pulled all the way down to -9 volts. Note that the source can't get pulled down any lower than the drain, regardless of the gate voltage. (In comparison, the simple inverter described earlier could only pull the output down to -4 volts.)5
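Tracing the voltages through the two phases shows how the numbers above arise. This is just the arithmetic from the text (VDD = -9 V, VCC = +5 V, assumed VT = -5 V), not a circuit simulation:

```python
V_T, V_DD, V_CC = -5.0, -9.0, +5.0

# Phase 1: input 0, output high (+5 V). Q3 charges the capacitor's top plate,
# but the threshold drop limits it to V_DD - V_T.
cap_top = V_DD - V_T          # -4 V
v_cap = cap_top - V_CC        # -9 V stored across the capacitor

# Phase 2: input 1, Q2 pulls the output low. The capacitor keeps its -9 V,
# so the gate of Q2 tracks the falling output, staying 9 V below it.
for output in (-4.0, V_DD):   # partway down, then fully pulled to -9 V
    gate_q2 = output + v_cap
    print(output, gate_q2)    # (-4.0, -13.0) then (-9.0, -18.0)
```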

The image below shows part of Intel's schematic for the 4004 processor, showing the circuit for a standard load and the circuit for the bootstrap load, indicated by a "B" next to the resistor.

Representation of the bootstrap load on the Intel 4004 schematic. The resistor with "B" symbolizes the bootstrap load circuit next to it.

The silicon-gate bootstrap load

So far, I've discussed the bootstrap load, which was extensively used with MOS circuitry, and was patented by North American Rockwell in 1966. The invention necessary for the 4004 and 8008 processors was the extension of the bootstrap load to silicon-gate integrated circuits.

One of the key inventions that made the 8008 practical was the self-aligning silicon gate transistor.6 The diagram below shows the structure of an MOS transistor. Early MOS integrated circuits used metal-gate 7 transistors, which used metal, typically aluminum, instead of polysilicon for the gate. But at Fairchild in 1968, Faggin and Klein invented a practical way to make transistors with silicon gates. This may seem like a trivial difference, but silicon-gate transistors were better than metal-gate transistors in three important ways. First, the electrical properties of silicon-gate transistors are much better than metal-gate transistors, running faster and at lower power. Second, polysilicon provided a second layer for routing signals, making integrated circuit layouts much more compact.

Structure of a PMOS transistor.

Finally, polysilicon permitted construction of self-aligned transistors, which play an important part in the bootstrap capacitor story. Integrated circuits are constructed through a sequence of processing steps, using optical masks and photo-sensitive resist to create patterns on the surface. An integrated circuit with metal-gate transistors is constructed from the bottom up. First, the source and drain regions are doped with impurities to form P-type silicon, as shown below. In a later step, the metal gate is created between the source and the drain, using a different mask. The tricky part is making sure the gate is lined up with the source and the drain; if there's a gap, the transistor won't work. Thus, a metal gate is made larger than necessary so it will still cover the gate channel, even if the alignment of the layers is slightly off. Unfortunately, this overlap creates capacitance and harms performance.

How a photomask is used to dope regions of silicon.

On the other hand, the self-aligned gate is created in the opposite order. The polysilicon gate is created first. In a later step, the source and drain regions are doped. However, a mask isn't used to separate the source and drain from the gate. Instead, the gate itself blocks doping of the region in between the source and drain. Thus, the source and drain are automatically "self-aligned" with the gate, eliminating the excess capacitance from a too-large gate. (Why couldn't metal gates be self-aligned? Because doping the silicon requires high temperatures that would melt the metal, but polysilicon can handle the heat.)

Although self-aligned silicon gates are a major improvement over metal gates, there was one drawback: capacitors. With metal-gate transistors, a capacitor could be easily constructed by using metal and doped silicon as the plates: a large metal layer on top, doped silicon underneath, and a thin insulating oxide layer in between. (In other words, a transistor with a large gate is used as a capacitor.) With self-aligned gates, the polysilicon gate could be used as a capacitor plate in place of the metal layer. However, in the self-aligned process, the polysilicon gate blocks doping of the silicon underneath, which is good for a transistor but bad for a capacitor, since you can't dope the silicon under the polysilicon plate. (You could use an extra manufacturing step to dope the capacitor plates before creating the polysilicon gate, but this extra step would increase the cost.)

Faggin invented a solution that made capacitors practical with self-aligned gates.8 He realized that if you bias the capacitor correctly, the charge on the upper plate will create a conductive region in the silicon underneath it, even without any doping. He tried this at Fairchild and discovered that it worked. This solved the problem of how to use a bootstrap load with self-aligned silicon-gate transistors.

Closeup of a bootstrap load circuit in the 8008.

The photo above zooms in on one of the bootstrap load circuits in the 8008, used in an inverter. The diagram below shows the underlying silicon after removing the metal layer. The bootstrap capacitor is constructed by a layer of polysilicon (pinkish) over the underlying silicon, forming the capacitor plates. The transistor on the right inverts the input. The capacitor is charged by the transistor in the lower left. The load transistor is in the middle; the capacitor provides the boosted voltage to its gate. The transistors have varying sizes depending on their roles. The inverting transistor is the largest since it provides the most current. The transistor that charges the capacitor is very small in comparison because a small current can keep the capacitor charged.

The circuitry of an inverter with a bootstrap load.

This bootstrap load technique was extensively used in the 4004 and 8008 processors. The diagram below shows the bootstrap loads in the 8008 processor, indicated with a red box. The 8008 has 90 bootstrap loads, so it is a significant circuit. Many bootstrap loads are around the periphery of the chip to help drive the output pins. The instruction register (upper center) uses bootstrap loads to drive the relatively large instruction decoder (center). At the right, bootstrap loads drive the register storage (upper right) and stack storage (lower right). Other miscellaneous circuits throughout the processor also use bootstrap loads.

The bootstrap loads in the 8008 are indicated by red boxes.

Conclusion

A final question is whether the bootstrap load was a key invention that made the microprocessor possible (as embodied in the 4004 and 8008) or whether the microprocessor was inevitable regardless of features such as the bootstrap load. One view is that "the buried contact and particularly the bootstrap load, were indispensable to obtain the required speed within the available power budget." Feeney said in an 8008 oral history "that being limited on pins, limited on power supplies, whatever, that the bootstrap load became very, very critical." On the other hand, the development of the microprocessor seemed an inevitable, incremental process to many. Fairchild engineer Lee Boysel said in 1970,10 "The computer-on-a-chip is no big deal. It's almost here now... I've no doubt the whole computer will be on one chip within five years." Hal Feeney of Intel said, "at the time in the early 1970s, late 1960s, the industry was ripe for the invention of the microprocessor."

In the narrow sense, the bootstrap load made the 4004 and 8008 possible with their given size, performance, and power consumption. The bootstrap load also illustrates how the microprocessor is not a single invention, but the aggregation of many smaller inventions that made it possible. However, looking at the broader picture, microprocessors would have been only slightly hampered if the bootstrap capacitor didn't exist. There were many alternatives such as four-phase logic, static logic, higher gate voltages, an additional power supply, or using an extra mask for the capacitors. The Texas Instruments TMX 1795 provides a direct comparison, since it was built at the same time as the 8008 with the same architecture, but using metal-gate transistors instead of silicon-gate. The diagram below shows that the TMX 1795 was considerably larger than the 8008, and it had somewhat worse performance, but the point is that microprocessors would have proceeded essentially the same without the bootstrap load. In any case, by 1974, the switch to NMOS transistors and improvements in threshold voltages made bootstrap loads unnecessary. My conclusion is that the bootstrap load was a helpful innovation, but microprocessors would have proceeded along a similar path even without this invention. Once technology permitted a few thousand transistors to be constructed on an integrated circuit, the single-chip CPU was inevitable.

Comparative die sizes of the TMX 1795, 4004 and 8008 microprocessors. Note that the 4004 and 8008 are nearly the same size, while the TMX 1795 is more than twice as large. The top third of the TMX 1795 is instruction decoding and control logic, the middle is the 8-bit ALU, and the bottom is storage (stack and registers). TMX 1795 die photo courtesy of Computer History Museum.

If you're interested in the 8008, my previous article has a detailed discussion of the 8008's architecture and more die photos; I also explain the 8008's ALU. I announce my latest blog posts on Twitter, so follow me at kenshirriff. I also have an RSS feed.

Notes and references

  1. Bootstrap loads in the Intel 4004 are discussed by Insanity 4004 here and here

  2. In his oral history, Faggin describes Intel's fixation on 16-pin packages. When a memory chip required 18 pins instead of 16, it was "like the sky had dropped from heaven. I never seen so [many] long faces at Intel, over this issue, because it was a religion in Intel; everything had to be 16 pins, in those days. Everything had to be 16 pins... It was a completely silly requirements to have 16 pins." At the time, other manufacturers were using 40- and 48-pin packages, so there was no technical limitation, just a minor cost saving from the smaller package. 

  3. The classic microprocessors such as the 8080, 6502, and Z-80 were built with NMOS transistors. The earlier 4004 and 8008 used PMOS transistors, which were easier to manufacture but had poorer performance. If you're familiar with NMOS logic, PMOS logic is a mirror world, where everything is backward. PMOS used negative voltages, which were also significantly higher than the 5 volts used by standard TTL. For compatibility with TTL levels, the 8008 ran with Vcc at +5V and Vdd at -9V, so it could produce TTL-compatible outputs of roughly 0 volts and 5 volts. (See the datasheet for more details.) The 4004 required -15 volts, typically Vdd = -10V and Vss = +5V. Confusingly, the 4004 defined logic "0" as the more positive voltage and logic "1" as the more negative voltage (datasheet). 

  4. The "superbuffer" replaces the load resistor with an active transistor and is used when more current is required, for instance to drive an internal bus or an output pin. The upper transistor is driven by an inverter, so it is on when the lower transistor is off. Instead of the weak current from the load resistor/transistor, this transistor provides a high current. The problem is that the threshold voltage limits the voltage from the upper transistor. With a regular inverter, the inverter output loses VT, so it will provide -4 volts to the upper transistor's gate. Losing another VT there yields an insufficient output voltage of +1 volt instead of the desired -9 volts.

    A superbuffer provides a fast, high-current output in both directions.

    The second case where the threshold voltage drop is a problem is with a pass transistor, used for dynamic logic. The diagram below illustrates a simple pass transistor circuit. When the control signal is low, the transistor is active, passing the input signal through to the output. But when the control signal is high, the transistor stops passing the input. Instead, the previous value is held by the circuit's capacitance (shown in gray) so the output holds its previous value. Thus, pass transistors provide an efficient way of implementing temporary storage. The problem with pass transistors is the threshold voltage. If the control signal on the gate comes from a regular gate, the "on" voltage will be -4 volts due to the threshold voltage loss. The pass transistor causes a second threshold voltage loss, so the lowest it can pull its output is +1 volt, not enough for reliable operation.

    A simple pass-transistor circuit.

    The bootstrap load fixes these problems. By putting a bootstrap load on the inverter in the superbuffer or on the circuit controlling the pass transistor, the drive voltage will be close to -9 volts. Now there is only a single threshold voltage drop, leaving the output at -5 volts, sufficiently negative for reliable operation. 

  5. This discussion of the bootstrap load is a simplified explanation. The real circuit is affected by stray capacitance, transistor leakage, and other factors, so the output wouldn't be all the way to VDD. One thing I'd like to point out, though, is that you might expect the capacitor's charge to leak out through Q3 as fast as it charged. Although Q3 is treated as a resistor, it also acts as a diode, blocking the capacitor from discharging. (With the capacitor more negative, the roles of Q3's source and drain are reversed and it no longer conducts.) 

  6. The silicon-gate bootstrap capacitor exemplifies the paths of information between companies at the dawn of the microprocessor era. Practical silicon gate technology was created at Fairchild (with some earlier roots). When employees (including Faggin) left Fairchild for Intel, they took this knowledge with them. (And in some cases took "lots and lots of Fairchild internal confidential documents", see Shima oral history). From Intel, ideas spread to other companies, such as when Faggin left Intel to found Zilog, basing the Zilog Z80 on the Intel 8080.

  7. Interestingly, in 2007 Intel started using metal gates again in order to scale transistors further (details). In a way, semiconductor technology has gone full circle, back to metal gates, although now unusual metals such as hafnium are used. 

  8. In the making of the first microprocessor, Federico Faggin says, "bootstrap load was a very popular circuit design trick used in just about all MOS dynamic circuits of that time. It made possible an output signal swing that was not only equal to the power supply voltage, but was also faster than possible with normal MOS loads for the same power dissipation." Faggin describes how he invented the bootstrap load in the 4004 oral history (p11) and the 8008 oral history (p8). Also see Faggin's The MOS silicon gate technology and the first microprocessors. He describes how the bootstrap load is needed for a two-phase design, and how silicon gate technology didn't support capacitors. Faggin's site describes the bootstrap load. Bootstrap load is also described at mosgate

  9. The threshold voltage depends on various properties of the integrated circuit including the gate material and the oxide thickness. I couldn't find a specific value for the threshold voltage in the 8008 processor, but -5 volts seems like the right ballpark (and is a conveniently round number). The book MOSFET in Circuit Design discusses threshold voltages for P-channel devices.  

  10. The bootstrap load illustrates the social process through which people are assigned credit for inventions and the construction of reputation. Although Faggin had a key role in the 4004 and 8008 processors, "when he left to found Zilog he got temporarily written outside of the Intel history." (See Intel disowns Faggin and Interview with Stan Mazor.) Faggin states, "They tried to erase my name from all of my contributions, including the silicon gate technology and the first microprocessor, and attribute them to others." After lobbying efforts by Faggin's wife and the pro-Faggin website intel4004.com, Intel reluctantly gave Faggin more credit. Faggin eventually received various awards including the National Medal of Technology and Innovation in 2010, so in the end he received his (deserved) recognition.

    The point is that credit is not assigned objectively, but is a dynamic force depending on various corporate and personal forces and who tells the story. (Wikipedia is one modern arena for these conflicts.) One corrective is the book History of semiconductor engineering, which covers many of the key people in the history of integrated circuits, with little regard for the "generally accepted" history. I should make it clear that I am drawing most heavily on Faggin's writings for background on the bootstrap load, so this blog post should not be viewed as an "objective" view of who should get credit for it. It looks like the silicon-gate bootstrap load was invented simultaneously at National Semiconductor; patent 3912948 filed in 1971 by Dilip Bapat describes an identical silicon-gate bootstrap load circuit. 

Analyzing the vintage 8008 processor from die photos: its unusual counters

The revolutionary Intel 8008 microprocessor is 45 years old today (March 13, 2017), so I figured it's time for a blog post on reverse-engineering its internal circuits. One of the interesting things about old computers is how they implemented things in unexpected ways, and the 8008 is no exception. Compared to modern architectures, one unusual feature of the 8008 is that it had an on-chip stack for subroutine calls, rather than storing the stack in RAM. And instead of using normal binary counters for the stack, the 8008 saved a few gates by using shift-register counters that generated pseudo-random values. In this article, I reverse-engineer these circuits from die photos and explain how they work.

The image below shows the 8008's tiny silicon die, highly magnified. Around the outside of the die, you can see the 18 wires connecting the die to the chip's external pins. The 8008's circuitry is built from about 3500 tiny transistors (yellow) connected by a metal wiring layer (white). This article will focus on the stack circuits on the right side of the chip and how they interact with the data bus (blue).

The die of the Intel 8008 microprocessor, showing the stack and other important subcomponents.

For the 8008 processor's birthday, I'm using the date of its first public announcement, an article in Electronics on March 13, 1972 entitled "8-bit parallel processor offered on a single chip." This article described the 8008 as a complete central processing unit for use in "intelligent terminals" and stated that chips were available at $200 each.1

You might think that an intelligent terminal is a curiously specific application for the 8008 processor. There's an interesting story behind that, going back to the roots of the chip: the Datapoint 2200 "programmable terminal", introduced in June 1970. The popular Datapoint 2200 was essentially a desktop minicomputer with its processor consisting of a board full of simple TTL chips. The photo below shows the CPU board from the Datapoint 2200. The chips are gates, flip flops, decoders, and so forth, combined to build a processor, since microprocessors didn't exist at the time.

The processor board from the Datapoint 2200. The 8008 microprocessor was created to replace this board, but was never used by Datapoint. Photo courtesy of unknown source.

Processors typically use a stack to store addresses for subroutine calls, so they can "pop" the return address off the stack. This stack is usually stored in main memory. However, the Datapoint 2200 used slow shift-register memory2 instead of expensive RAM for its main storage, so implementing a stack in main memory would be slow and inconvenient. Instead, the Datapoint 2200's stack was stored in four i3101 RAM chips, providing a small stack of 16 entries. 3 4 The i3101 was Intel's very first product, and held just 64 bits. In the photo above, you can see the chips in their distinctive white packaging each with a large "i" for Intel. 5

To keep track of the top of the stack, the Datapoint 2200 used a 4-bit up/down counter chip to hold the stack pointer. The clever thing about this design is there's no separate program counter (PC) and stack; the PC is simply the value at the top of the stack. You don't need to explicitly push and pop the PC onto the stack; for a subroutine call you just update the counter and write the subroutine address to the stack.

The story of the 8008's origin is that Datapoint went to Intel and asked if Intel could build a chip that combined the stack memory and the stack pointer onto a single chip. Intel said not only could they do that, they could put the whole processor board onto a single chip! This was the start of Intel's 8008 project to duplicate the Datapoint 2200's processor board onto a chip, keeping the Datapoint 2200 instruction set and architecture.6 After various delays, Intel completed the 8008 microprocessor, but Datapoint rejected it. Intel decided to sell the 8008 as a general-purpose processor chip, sparking the microprocessor revolution. Intel improved the 8008 with the 8080 and then the 16-bit 8086, leading to the x86 architecture that dominates desktop and server computers today.

The consequence of the 8008's history is that it inherited its architecture and instruction set from the Datapoint 2200 intelligent terminal. One of these features was the fixed, internal stack. But the 8008's implementation of that stack is unusual.

Shift-register counter

The most unexpected part of the 8008's stack is how it keeps track of the current position. The straightforward way to implement the stack would be with a binary up/down counter to keep track of the current stack position (which is what the Datapoint 2200 did). But to save a few transistors, the 8008 uses a nonlinear feedback shift register instead of a counter. The result is the stack entries are accessed in a pseudo-random order! But since they are read and written in the same order, everything works out fine.

The shift register outputs are based on a de Bruijn sequence, a cyclic sequence in which every possible output occurs as a subsequence exactly once. The 8008's de Bruijn sequence is shown below. The first value (000) is underlined in red. Shifting to the blue position yields the second value (001). Proceeding around the circle clockwise yields all eight values in the sequence: 000, 001, 010, 101, 011, 111, 110, 100 and finally back to 000. Note that each value appears exactly once, but they are not in standard binary order.

This de Bruijn sequence contains all eight 3-bit values as subsequences. 000 and 001 are underlined. The Intel 8008's internal counters are built from this sequence.

At each step in the sequence, the last two bits are shifted to the left and a new bit is placed on the right. Counting down is the converse: the first two bits are shifted to the right and a new bit is placed on the left. This process can be implemented with a shift register, a circuit that allows a bit sequence to be shifted and an additional bit inserted.7

The diagram below shows how the 8008 implements the nonlinear feedback shift register counter. While it may look complex, it's a straightforward implementation of the de Bruijn sequence. The three latches in the middle form a shift register, with each latch holding one bit. To count up, each bit is shifted to the left and a new bit is added on the right (green arrows). To count down, each bit is shifted to the right and a new bit is added on the left (purple arrows). The logic gates on the left generate the "new" bit for counting down, and the gates on the right generate the new bit for counting up.

The 8008 uses the above circuit for its internal stack counter. The refresh counter is based on this, but counts up only.

The logic gates may appear complex. However, one feature of PMOS logic is that it's as simple to build an AND-OR-NOR gate as a plain NOR gate, just by wiring transistors in parallel or series. Designing the logic is also straightforward: for each triple of current bits, the de Bruijn sequence specifies the next bit. If you've studied digital logic, Karnaugh maps can be used to create the logic circuits to generate the desired next bit.
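As a sanity check on the sequence, here is a small Python model of the up/down shift-register counter. The next-bit tables are read off the de Bruijn sequence above; the code is my own illustration, not a transcription of the 8008's gates:

```python
# Next-bit tables derived from the sequence 000, 001, 010, 101, 011, 111, 110, 100.
NEXT_BIT_UP   = {0b000: 1, 0b001: 0, 0b010: 1, 0b101: 1, 0b011: 1, 0b111: 0, 0b110: 0, 0b100: 0}
NEXT_BIT_DOWN = {0b000: 1, 0b001: 0, 0b010: 0, 0b101: 0, 0b011: 1, 0b111: 0, 0b110: 1, 0b100: 1}

def count_up(state):
    """Shift left (dropping the high bit) and insert the new bit on the right."""
    return ((state << 1) & 0b110) | NEXT_BIT_UP[state]

def count_down(state):
    """Shift right and insert the new bit on the left."""
    return (state >> 1) | (NEXT_BIT_DOWN[state] << 2)

state, sequence = 0b000, []
for _ in range(8):
    sequence.append(format(state, '03b'))
    state = count_up(state)
print(sequence)  # ['000', '001', '010', '101', '011', '111', '110', '100']
assert all(count_down(count_up(s)) == s for s in range(8))  # counting down undoes counting up
```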

Inside the stack storage

The 8008 uses dynamic RAM (DRAM) for its stack storage and its registers. The other 1970s microprocessors that I've examined use static latches, so the 8008 is a bit unusual in this regard. Since Intel was primarily a RAM company at the time, I assume they wanted to leverage their RAM skills and save transistors by using DRAM.

Each bit of storage in the 8008 uses a cell with three transistors and one capacitor, called a 3T1C cell, similar to the cell in Intel's i1103 DRAM chip. The diagram below shows a closeup of the 8008's stack storage, with six DRAM cells visible. Each row is one 14-bit address in the stack. Each row has a read enable and write enable control line coming from the left. Each column stores one of the 14 bits; the column sense line is used to read and write the selected bit.

Detail of the Intel 8008 microprocessor's die, showing six storage cells for the stack registers. Each bit is stored with a DRAM cell consisting of three transistors and a capacitor.

The transistors for the first cell are labeled T1, T2 and T3. The value is stored on the capacitor labeled C. (There is no separate physical capacitor; the capacitance of the wiring is sufficient to store the bit.)

To write a bit, the write line for the desired row is pulled low, turning on T1. The desired voltage (low or high) is fed onto the sense line, passes through T1, and is stored by the capacitor. To read the value, the appropriate read line is pulled low, turning on T3. If C has a low voltage, T2 is turned on. This connects the sense line to ground through T3 and T2. On the other hand, if C has a high voltage, T2 is turned off and the sense line is not grounded. Thus, the circuitry connected to the sense line can tell what bit value is stored on C.

The inconvenience with dynamic RAM is that values can only be stored temporarily. After a few hundred microseconds, the charge stored on capacitor C will leak away and the value will be lost. The solution is a refresh circuit that periodically reads each value and writes it back, before the bit fades away. (A similar refresh process is used by your computer's RAM.) The 8008's internal RAM is refreshed at least every 240 microseconds, ensuring that bits are not lost. (Static RAM, on the other hand, uses a larger, more complex circuit for each bit, but will preserve the bit as long as the circuit is powered up.)
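As a rough illustration of how the cell and its refresh fit together, here is a toy Python model (my own sketch, not a circuit-level simulation; the 240-microsecond figure is taken from the text):

```python
class Cell3T1C:
    """Toy model of the 8008's three-transistor, one-capacitor storage cell."""
    REFRESH_LIMIT_US = 240          # the 8008 refreshes each cell at least this often

    def __init__(self):
        self.charge = None          # bit held on capacitor C (None = decayed/unknown)
        self.age_us = 0             # time since C was last written

    def write(self, bit):           # write line low: T1 passes the sense-line value onto C
        self.charge, self.age_us = bit, 0

    def read(self):                 # read line low: T2/T3 ground the sense line if C is low
        assert self.age_us < self.REFRESH_LIMIT_US, "charge leaked away before refresh"
        return self.charge

    def tick(self, microseconds):   # model the passage of time (charge slowly leaking)
        self.age_us += microseconds

    def refresh(self):              # read the bit and write it straight back
        self.write(self.read())
```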

In the 8008, the stack storage (and the registers) are refreshed by continuously stepping through each entry: reading it and writing it back. To accomplish this, a second 3-bit shift-register counter is used as a refresh counter, tracking the current position that is being refreshed. The circuit for this is the same as the stack counter, except it omits the logic to count down, as it only needs to count in one direction.9

Understanding the die photo

I'll briefly explain what you're looking at in the die photo above. The chip itself is made from a silicon wafer. Plain silicon is essentially an insulator, but by doping it with impurities, it becomes a semiconductor. The dark lines indicate the boundary between doped and undoped regions; the doped silicon in the first cell is indicated in red.

On top of the silicon is the polysilicon layer, which is the yellowish stripes. Polysilicon acts as a conductor and is used as internal wiring of the chip. More importantly, a transistor is created when polysilicon crosses doped silicon. A thin oxide layer separates the polysilicon from the silicon, forming the transistor's gate. A low voltage on the polysilicon gate causes the transistor to conduct, connecting the two sides (called source and drain) of the transistor. A high voltage on the gate turns the transistor off, disconnecting the two sides. Thus, the transistor acts as a switch, controlled by the gate.

The top layer of the chip is the metal layer, which is also used as wiring. For the photo above, I removed the metal layer with hydrochloric acid to make the underlying silicon more visible. The green, blue and gray lines indicate where the metal wiring was before being removed. Transistors T1 and T3 are connected to the sense line (blue), while transistor T2 is connected to ground (green). The read and write lines enter the circuit on the left as metal wiring, connected to polysilicon lines.

The interface between the stack and the data bus

To access memory, the address in the stack must be provided to external memory via the 8 data/address pins on the chip. These pins are connected to the stack (and other parts of the 8008) via the data bus. The die photo below shows the circuitry that interfaces the 14-bit stack storage to the 8-bit data bus.11 At the top of the photo are the metal control lines and three of the data bus lines. At the bottom are the sense lines, discussed earlier, from the stack storage. In between are the transistors (orange) that connect the data bus and the stack.

The control lines select the low (L) or high (H) half of the address. These activate the appropriate read or write transistors, connecting the appropriate stack columns to the data bus.

The stack/bus driver circuit provides the "glue" between the data bus and the stack DRAM storage.
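To sketch what the low/high selection accomplishes: a 14-bit address must cross the 8-bit bus in two transfers, the low byte during state T1 and the remaining six bits during T2 (see note 11). The Python below is only an illustration; exactly how the upper bits share the T2 byte with other signals is glossed over.

    def address_to_bus_cycles(addr14):
        """Split a 14-bit stack address into two 8-bit bus transfers:
        low half (T1) and high half (T2). The placement of the upper
        six bits within the T2 byte is simplified here."""
        assert 0 <= addr14 < (1 << 14)
        t1 = addr14 & 0xFF            # low 8 address bits
        t2 = (addr14 >> 8) & 0x3F     # remaining 6 address bits
        return t1, t2

    low, high = address_to_bus_cycles(0x2ABC)
    assert (high << 8) | low == 0x2ABC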

The transistors that write an address onto the data bus are much larger than typical transistors, appearing as vertical yellow bars in the die photo. The reason is that the data bus passes through the whole chip. Due to the length of the bus, it has relatively high capacitance, so larger, high-current transistors are required to drive a signal onto the data bus.

Near the bottom of the photo are the inverter amplifiers. Each sense line is attached to an inverter that boosts the signal from the stack storage. During refresh, this boosted signal is written back, strengthening the bit stored on the capacitor.10

Conclusion

By examining die photos, it is possible to reverse-engineer the 8008 microprocessor. One unusual feature of the 8008 is that instead of using standard binary counters internally, it saves a few gates by using shift-register counters. Although these count in a pseudo-random order rather than sequentially, the 8008 still functions correctly. One counter is used for the on-chip address stack. The 8008 also uses DRAM internally for stack storage and register storage, requiring a second counter to refresh the DRAM. Since every transistor was precious at the dawn of the microprocessor age, the 8008 made these interesting design choices to keep the circuitry compact.

If you're interested in the 8008, my previous article has a detailed discussion of the architecture, more die photos and information on how to take them. This article explains the 8008's ALU.

I announce my latest blog posts on Twitter, so follow me at kenshirriff. I also have an RSS feed.

Notes and references

  1. The first announcement of the 8008 microprocessor in Electronics is shown below (click for a larger version). The announcement called the chip a "parallel processor", a term that had a different meaning back then, indicating that the processor operated on all 8 bits at the same time. This was in contrast to serial processors (such as the Datapoint 2200) that handled one bit of the word at a time.

    The 8008 chip was announced in Electronics on March 13, 1972: "8-bit parallel processor offered on a single chip."

  2. In 1970, RAM memory chips were extremely expensive: $99.50 for an i3101 chip with just 64 bits of storage. Shift-register memory was cheaper and denser, with 512 bits of storage in an Intel 1405 chip. The big disadvantage was that the bits circulated around and around inside the chip, with only one bit available at a time. Sequential access wasn't a problem, but if you wanted to read memory out of order, you might need to wait half a millisecond for the right bit to circle around. I wrote about shift-register memories in detail here, with detailed die photos.

  3. The i3101 memory was called the 3101 due to Intel's part numbering system at the time, described in Intel Technology Journal, Q1 2001. To summarize, the first digit indicates the product family: 1xxx is PMOS, 2xxx is NMOS, 3xxx is bipolar and so forth. The second digit indicates the product type: 1 is RAM, 2 is a controller, 3 is ROM, and so forth. The last two digits are sequence numbers typically starting with 01. Thus, the first bipolar RAM was the 3101.

    During development, the 8008 chip was called the 1201, following Intel's naming scheme: the 1 indicated the chip was built from PMOS technology, the 2 indicated a custom chip and the 01 was a serial number. Fortunately, when it came time to market microprocessors, Intel decided that marketing was more important than systematic numbering: Intel's 4-bit microprocessor became the 4004 and their 8-bit microprocessor the 8008. 

  4. Intel introduced the i3101 chip in April 1969. The i3101 RAM chip was a static memory chip, rather than the dynamic RAM chips common today. It was also built from Schottky TTL technology, rather than the MOS used in modern RAM chips. Other companies, such as National Semiconductor, Signetics and Fairchild, made 64-bit memory chips compatible with the Intel i3101. However, they typically used the standard 74xx numbering scheme, calling the chip the 7489.

  5. Although the Datapoint's stack could hold 16-bit values, the Datapoint 2200 only used 13 address bits, supporting a maximum of 8K of memory. The 8008 expanded the address range to 14 bits, supporting 16K of memory, which was a huge amount at that time. However, the 8008's internal stack was only 8 values, rather than the 16 of the Datapoint 2200. 

  6. Texas Instruments heard that Intel was designing a processor for Datapoint and asked Datapoint if they could build one too. TI beat Intel to the finish, creating the TMC 1795 processor before Intel completed the 8008, largely because Intel put the 8008 on the back burner. After Datapoint rejected TI's microprocessor, TI tried to find a new customer for the chip. TI was unsuccessful, and the TMC 1795 was abandoned and mostly forgotten. I've written about the TI chip in more detail here.

  7. You may be familiar with linear-feedback shift registers (LFSRs), which can be used as pseudo-random number generators or noise generators. With N stages, an LFSR can generate 2^N−1 output values. The de Bruijn sequence is generated from a nonlinear-feedback shift register. Nonlinear-feedback shift registers are a generalization of LFSRs; by using more complex feedback circuitry than just XOR, a nonlinear-feedback shift register can generate sequences of arbitrary length. In particular, it can generate a sequence of 2^N values, while an LFSR is limited to 2^N−1.

  8. Nonlinear feedback shift registers seem pretty obscure. The only other use I've seen is the TMS 0100 calculator chip, which generates an internal sequence of length 11. For information on the theory, see The Synthesis of Nonlinear Feedback Shift Registers and Counting with Nonlinear Binary Feedback Shift Registers. The book Shift Register Sequences goes into great detail on linear and nonlinear sequences; Section VII:5 is probably most relevant, describing how to make a shift register cycle of any length.

    The TMS 1000 microcontroller saves a few gates by using an LFSR for the program counter. Instead of incrementing, the PC goes through a pseudo-random sequence. The code is stored in the ROM in the same sequence; everything works out, but it seems like a strange way to implement a program counter.

  9. I was expecting the stack counter and refresh counter to have a regular layout on the chip, with a single shift register stage repeated three times. However, on the 8008 die, the transistors are arranged irregularly, scattered around where there was room. Presumably this made the layout more compact. 

  10. Since the signal read from stack storage passes through an inverter before being written back, you might expect the bit to get flipped. The explanation is that transistor T2 in the storage cell inverts the value on C. Thus, the value read from a sense line is inverted compared to the value written on the sense line. The inverter amplifier provides a second inversion, restoring the original value. 

  11. Each 8008 instruction takes multiple clock cycles to execute. An instruction is broken into one or more machine cycles; each machine cycle typically corresponds to one memory access for instruction or data. Each machine cycle consists of up to 5 states (T1 through T5). An address is transmitted to memory during state T1 and T2, and the memory location is read or written during T3. Each T state requires two clock cycles, so an 8008 instruction takes a minimum of 10 clock cycles. The Intel 8008 user's manual provides detailed timings. 

Reverse-engineering the surprisingly advanced ALU of the 8008 microprocessor

A computer's arithmetic-logic unit (ALU) is the heart of the processor, performing arithmetic and logic operations on data. If you've studied digital logic, you've probably learned how to combine simple binary adder circuits to build an ALU. However, the 8008's ALU uses clever logic circuits that can perform multiple operations efficiently. And unlike most 1970's microprocessors, the 8008 uses a complex carry-lookahead circuit to increase its performance.

The 8008 was Intel's first 8-bit microprocessor, introduced 45 years ago.1 While primitive by today's standards, the 8008 is historically important because it essentially started the microprocessor revolution and is the ancestor of the x86 processor family that you are probably using right now.2 I recently took some die photos of the 8008, which I described earlier. In this article, I reverse-engineer the 8008's ALU circuits from these die photos and explain how the ALU functions.

Inside the 8008 chip

The image below shows the 8008's tiny silicon die, highly magnified. Around the outside of the die, you can see the 18 wires connecting the die to the chip's external pins. The rest of the chip contains the chip's circuitry, built from about 3500 tiny transistors (yellow) connected by a metal wiring layer (white).

Die photo of the 8008 microprocessor, showing important functional blocks.

Many parts of the chip work together to perform an arithmetic operation. First, two values are copied from the registers (on the right side of the chip) to the ALU's temporary registers (left side of the chip) via the 8-bit data bus. The ALU computes the result, which is stored back into the accumulator register via the data bus. (Note that the data bus splits and goes around both sides of the ALU to simplify routing.) The carry lookahead circuit generates the carry bits for the sum in parallel for higher performance.3 This is all controlled by the instruction decode logic in the center of the chip that examines each machine instruction and generates signals that control the ALU (and other parts of the chip).

The Arithmetic-Logic Unit

The 8008's ALU implements four functions: Sum, AND, XOR and OR. The Sum operation adds two 8-bit numbers. The remaining three operations are standard Boolean logic operations. The AND operation sets an output bit if the bit is set in the first AND the second number. OR checks if a bit is set in the first OR the second number (or both). XOR (exclusive-or) checks if a bit is set in the first OR the second number (but not both).

The concept of carries during addition is a key part of the ALU. Binary addition in a processor is similar to grade-school long addition, except with binary numbers instead of decimal. Starting at the right, each column of two numbers is added and there can be a carry to the next column. Thus, in each column, the ALU adds two bits as well as a carry bit.

In most early microprocessors, addition of each column needs to wait until the column to the right has been added and the carry is available. The carry "ripples" through the bits, right to left, slowing the addition. The 8008, however, uses a fast carry-lookahead circuit3 to generate the carries for all 8 columns in parallel before the addition happens. Then all the columns can be added in parallel, without waiting for the carry to "ripple" through the sum. This carry-lookahead circuit is an unusual feature to see in an early microprocessor due to its complexity.
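The idea can be sketched in Python as follows (a behavioral illustration, not the 8008's dynamic Manchester-style carry chain described in note 3): each bit position either generates a carry (both inputs are 1) or propagates an incoming carry (exactly one input is 1), and every carry can be written directly in terms of these signals and the carry-in, so all eight carries can be evaluated at once.

    def lookahead_carries(a, b, carry_in=0):
        """Compute all carries of an 8-bit addition from generate/propagate
        signals, so no carry depends on the previous carry rippling in."""
        g = [(a >> i) & (b >> i) & 1 for i in range(8)]       # generate
        p = [((a >> i) ^ (b >> i)) & 1 for i in range(8)]     # propagate
        carries = [carry_in]                                   # carry into bit 0
        for i in range(8):
            # Fully expanded, c[i+1] depends only on g, p, and carry_in.
            c = g[i]
            for j in range(i - 1, -1, -1):
                if all(p[k] for k in range(j + 1, i + 1)):
                    c |= g[j]
            if all(p[k] for k in range(i + 1)):
                c |= carry_in
            carries.append(c)
        return carries          # carries[8] is the carry out of bit 7

    def add8(a, b, carry_in=0):
        c = lookahead_carries(a, b, carry_in)
        result = sum((((a >> i) ^ (b >> i) ^ c[i]) & 1) << i for i in range(8))
        return result, c[8]

    assert add8(0x8F, 0x01) == (0x90, 0)
    assert add8(0xFF, 0x01) == (0x00, 1)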

Since the 8008 is an 8-bit processor, the ALU operates on two eight-bit arguments. Most 8-bit processors (including the 8008) use a "bit-slice" construction for the ALU, with a one-bit ALU slice repeated eight times. Each one-bit ALU slice takes two input bits and the carry-in bit, and produces the output bit. In most 8-bit processors, the bit-slice ALU is arranged by stacking 8 rectangular ALU slices to form a compact, regular block. However, the 8008 has its eight ALU slices arranged in an irregular fashion—some blocks are even sideways—as shown in the diagram below. The motivation for this is that the carry lookahead circuit takes up a triangular space on the chip. To fit the remaining space better, the 8008's ALU is arranged into its unusual triangular layout.

Arrangement of the eight ALU slices on the 8008 microprocessor die. Unlike most processors, the 8008's ALU slices are arranged in a haphazard triangular arrangement. This fits better with the triangular carry-lookahead circuit above the ALU.

Zooming in on the die photo, we can look at one of the ALU slices and see how the circuitry is constructed. The chip is built from three layers (to simplify slightly). The topmost layer is the metal wiring. It is the most visible feature, and looks metallic (not surprisingly). In the detail below, you can see the horizontal and vertical metal traces. The polysilicon layer is underneath the metal layer and appears yellow/orange under the microscope. Polysilicon can act as wiring, but more importantly it forms the gates of the transistors, switching them on and off. The bottom layer is the grayish silicon die itself, but it is hard to see under the other layers.

Die photo of the 8008 processor, zoomed in on the circuit for one bit of the ALU.

In the diagram above, the carry c and the complemented a and b inputs enter through the metal wires at the top. The ALU output is at the bottom. The control signals are horizontal metal lines. The circuit is powered by the Vcc (+5 volts) and Vdd (-9 volts) metal lines. The brighter yellow polysilicon regions are transistors. Each gate in the circuit requires a "load resistor" connected to Vdd to pull its output low; for improved performance, these are implemented with transistors rather than resistors.

Removing the metal layer with acid makes the silicon and polysilicon layers more visible, as shown below.6 The chip is formed on a silicon wafer with regions of it "doped" with impurities to create regions of semiconducting silicon. You can see dark lines along the border between doped silicon and undoped silicon. A transistor is formed where a yellowish polysilicon wire crosses the doped silicon. The transistor forms a switch between the two silicon sides, controlled by the polysilicon gate. Each ALU slice contains 20 transistors; the diagram below points out two of them.5

With the metal layer removed from the 8008 processor die, the underlying silicon is visible. The photo shows bit 1 of the 8008's ALU.

Simulating one slice of the ALU

By examining the die photos carefully, you can map out the ALU slice's 20 transistors and their connections. From this, you can reverse-engineer the gates that make up the circuit. I explained in my previous article how PMOS gates are structured, so I won't go into the details here. The result is the schematic below, showing one bit of the ALU. Each ALU slice takes two inputs (a and b) and the input carry c, and outputs one result bit. There are three mode lines (m1, m2 and m3) that select one of the four ALU operations.7

The schematic below is interactive. First, select an operation and the table will update with results for the eight different inputs. Next, click a row in the table, and the schematic will update, showing how the ALU computes that row. (Note that the a and b inputs to the ALU are inverted, indicated by an overbar.)

While this ALU slice looks like it is made of many gates, physically it is only three gates: two large, multilevel AND-OR-NAND gates and one NAND gate. The AND-OR-NAND logic is implemented on the chip as a single complex gate, rather than by combining simpler gates, since a single large gate provides better performance with less circuitry than multiple small gates. One feature of MOS logic is that it's just as easy to form an AND-OR-NAND gate (for instance) as a plain NAND gate.
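Behaviorally, each slice computes the following (a Python sketch of what the slice does, selecting the function by name rather than reproducing the AND-OR-NAND gate network; on the chip the three mode lines m1, m2 and m3 make the selection, with the values listed in note 7).

    def alu_slice(op, a, b, carry_in):
        """One-bit ALU slice: two input bits, a carry-in, one result bit.
        'op' stands in for the decoded mode lines m1/m2/m3 (note 7)."""
        if op == "SUM":
            return a ^ b ^ carry_in        # low-order bit of a + b + carry_in
        if op == "AND":
            return a & b
        if op == "OR":
            return a | b
        if op == "XOR":
            return a ^ b                   # SUM with the carry-in forced to 0
        raise ValueError(op)

    # Eight such slices form the 8-bit ALU; for SUM, the carries come from
    # the lookahead circuit rather than from the neighboring slice.
    assert alu_slice("SUM", 1, 1, 0) == 0
    assert alu_slice("XOR", 1, 1, 0) == 0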

Understanding the ALU logic

The 8008's ALU circuit above looks like a mysterious collection of gates, but eventually I figured out the structure behind it. The starting point is a full adder that handles the Sum operation. (A full adder adds three input bits (a, b and c) and outputs the (low-order) sum bit and a carry bit.) The full adder is then heavily modified to support the logic operations, yielding the ALU from the previous section. The logic operations are implemented by using the mode lines to block parts of the circuit, yielding XOR, AND or OR, rather than the more complex Sum.

The diagram below strips down the 8008's ALU circuit to reveal the full adder "hidden" inside. The gate in red generates the carry-out from the three inverted inputs, using relatively straightforward logic. (Since the 8008 uses carry-lookahead, this carry-out signal isn't passed to the next ALU slice, but just used to generate the ALU output.) If you examine the possible sum cases, you will see that the sum bit is almost always just the carry-out inverted, except for the 0+0+0 and 1+1+1 cases. Thus, the sum bit can be generated by inverting the carry-out and handling the two exceptional cases.8 The two gates indicated below handle the exceptions by forcing the sum output to the correct value.

Simplified 8008 ALU slice, showing the full adder circuit.
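The trick is easy to verify exhaustively. The Python sketch below (my formulation, not the gate structure itself) builds the sum bit from the inverted carry-out, patches the two exceptional inputs, and checks the result against the textbook definition.

    def full_adder_via_carry(a, b, c):
        """Full adder built as the text describes: compute the carry-out
        first, invert it for the sum, and force the sum for the two
        exceptional inputs 0+0+0 and 1+1+1."""
        carry_out = (a & b) | (a & c) | (b & c)   # majority of the inputs
        sum_bit = 1 - carry_out                   # usually the inverted carry
        if a == b == c:                           # 0+0+0 -> 0, 1+1+1 -> 1
            sum_bit = a
        return sum_bit, carry_out

    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                s, co = full_adder_via_carry(a, b, c)
                assert s == a ^ b ^ c              # textbook sum
                assert co == (a + b + c) // 2      # textbook carry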

Comparing the full adder with the full ALU circuit earlier shows how the mode lines support the logic operations. Once you have a full adder, generating XOR is simply a matter of setting the carry-in to 0, which is done by the m3 control line. For the OR and AND operations, mode lines m3 and m2 respectively disable all of the circuit except the gates labeled in green.9 Thus, if you start with a full-adder and extend it to support XOR, AND and OR, the 8008's ALU circuit is a logical result.

Intel's earlier 4004 microprocessor had a simple ALU that only supported addition and subtraction, not any logic operations.10 Interestingly, the 4004's ALU circuit is almost identical to the full adder circuit shown above. So it's very likely that Intel designed the 8008 ALU by extending the 4004 ALU as described above. This would explain why the 8008's ALU generates carries internally, even though the carry lookahead circuit made this redundant.11

The 8008's ALU logic is very similar to the Z80's ALU,12 although the Z80's ALU is (surprisingly) 4 bits (details). The 8085 uses a different complex gate arrangement. The 6502 on the other hand, uses an entirely different approach: straightforward circuits for addition, AND, OR, XOR and shift-right, using pass-transistor multiplexers to select the operation.

Instruction decoding: how the ALU knows what operation to do

The 8008 executes 8-bit instructions, which move data, perform I/O, branch, call subroutines, and so forth. The instruction decoding logic examines the instruction and determines what operation to perform, generating about 30 control signals.13 Over a quarter of the instructions perform ALU operations, and the instruction set is carefully designed so three bits of the instruction specify which of the eight operations to perform.14 By examining these bits, the instruction decoder generates the ALU's mode control lines m1, m2 and m3.

Looking at AND instructions illustrates how this works. All AND instructions have the bit pattern xx100xxx (where x is either 0 or 1). For instance, the instruction to AND with memory is 10100111 and the instruction to AND with a constant is 00100100. When the instruction decode circuit matches this pattern, it pulls the m1 control line low, which causes the ALU to perform an AND operation.7 Other bit patterns generate the other ALU control signals.15
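The matching can be expressed as a simple mask-and-compare, sketched in Python below (an illustration of the PLA's function, not its circuitry): only the non-x positions matter, just as only those positions have a transistor in the row.

    def matches(opcode, pattern):
        """Check an 8-bit opcode against a pattern such as 'xx100xxx',
        where 'x' is a don't-care bit, written MSB first."""
        bits = format(opcode, "08b")
        return all(p in ("x", b) for p, b in zip(pattern, bits))

    assert matches(0b10100111, "xx100xxx")      # AND with memory (octal 247)
    assert matches(0b00100100, "xx100xxx")      # AND with a constant
    assert not matches(0b10110111, "xx100xxx")  # an OR instruction instead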

Part of the 8008's instruction decode PLA. The three indicated transistors match opcode pattern XX100XXX, indicating an AND instruction.

The diagram above shows part of the instruction decode circuit. The instruction bits (and their complements) are on yellow polysilicon wires running vertically through the circuit. Each row matches a bit pattern, with a transistor connected to each instruction bit to be matched. (The doped silicon regions forming transistors are the black outlines. Circles are connections between a transistor and the row's metal line.) For example, the three transistors marked with arrows match bit 3 low, bit 4 low, and bit 5 high, detecting the AND instruction pattern. Thus, the processor uses the grid of transistors in the instruction decoder to determine the meaning of each instruction.

Loose ends: Subtraction and rotating

The ALU implements a Sum operation, so you might wonder how subtraction is implemented. With two's complement arithmetic, the CPU can subtract by flipping all the bits of the second value and adding it (along with an extra 1). The ALU uses two temporary registers to hold the two operands, since the ALU can't read the operands from the register file and write the result back simultaneously. One of the temporary registers has the feature that its value can be fed to the ALU either directly or inverted. The subtraction instructions generate a signal causing the temporary register to provide the inverted value to the ALU, so the ALU performs subtraction.
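In other words, the subtraction is carried out on the adder as a − b = a + NOT(b) + 1. The Python below is a generic sketch of that arithmetic; exactly how the 8008 supplies the extra +1 and derives its borrow is not modeled here.

    def sub8(a, b):
        """8-bit subtraction on an adder: invert the second operand and add,
        with the two's-complement +1 supplied as a carry-in of 1."""
        return (a + ((~b) & 0xFF) + 1) & 0xFF

    assert sub8(0x40, 0x01) == 0x3F
    assert sub8(0x00, 0x01) == 0xFF    # wraps around: a borrow occurred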

One important operation in most processors is rotating or shifting the bits in a value, to the left or to the right. In most of the microprocessors I've examined, shifting is performed by the ALU.16 The 8008, on the other hand, implements the rotate logic in the register access circuit, on the opposite side of the chip from the ALU. When reading a register, the bits can be shifted one position left or right by a simple circuit before going onto the data bus.
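A one-bit rotation itself is trivial, as the Python sketch below shows; the 8008's rotate instructions also involve the carry flag, which this illustration ignores.

    def rotate_left(value):
        """Rotate an 8-bit value left by one position (bit 7 wraps to bit 0)."""
        return ((value << 1) | (value >> 7)) & 0xFF

    def rotate_right(value):
        """Rotate an 8-bit value right by one position (bit 0 wraps to bit 7)."""
        return ((value >> 1) | ((value & 1) << 7)) & 0xFF

    assert rotate_left(0b10000001) == 0b00000011
    assert rotate_right(0b10000001) == 0b11000000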

History of the 8008

The Intel 8008 is important historically since it is the ancestor of the dominant Intel x86 architecture that you're probably using right now.2 I wrote a detailed article for the IEEE Spectrum on early microprocessor history, so I'll just give the outline of the 8008's complicated history here.

The 8008 copies the instruction set and architecture of the Datapoint 2200, a popular minicomputer introduced in 1970 as a programmable terminal.17 As was typical for minicomputers, the Datapoint 2200 contained a CPU built from individual TTL chips, filling up a circuit board. Datapoint contracted with both Intel and Texas Instruments to build a single-chip CPU that would replace this processor board, while keeping the same architecture and instruction set.

The Datapoint 2200 computer. The 8008 microprocessor was built to implement the Datapoint 2200's architecture and instruction set. Photo courtesy of Austin Roche.

Texas Instruments was first to build a 2200-compatible microprocessor, creating the TMC 1795 chip. Intel got their version, the 8008, working a bit later, around the end of 1971. Datapoint rejected both processors, instead updating the Datapoint 2200 to use the 74181 TTL ALU chip. Texas Instruments couldn't find a new customer for the TMC 1795 and abandoned it. Intel, on the other hand, came up with the idea of selling the 8008 as a well-supported general-purpose processor. The 8008 led to the 8080, the 8085, 8086, and Intel's x86 line, which still retains some features of the 8008.

Conclusion

Although the 8008 was a very early microprocessor, its ALU was more advanced than you might expect. In particular, it used a complex carry-lookahead circuit for higher performance. Unfortunately, even with the carry-lookahead circuit, the 8008 was slower than the TTL-based Datapoint 2200 processor it was supposed to replace; addition took 20µs on the 8008, compared to 16µs on the original Datapoint 2200 and just 3.2µs on the upgraded Datapoint 2200. This illustrates the speed advantage that TTL had over MOS in the early 1970s. To us, a microprocessor may seem obviously better than a board of chips, but this wasn't always the case.

If you're interested in the 8008, my previous article has a detailed discussion of the architecture, more die photos and information on how to take them, and information on semiconductor history, so take a look.

I announce my latest blog posts on Twitter, so follow me at kenshirriff. I also have an RSS feed.

Notes and references

  1. The 8008 chip was publicly announced in an article in Electronics on March 13, 1972, entitled "8-bit parallel processor offered on a single chip", offering the chips for $200 each. 

  2. If you're not using an x86 processor right now, you're probably using an ARM processor. Don't feel neglected, though, since I've reverse-engineered the ARM-1 too. (Although there are many more ARM chips out there than x86, analytics show 71% of my readers are on x86.)  

  3. Using a carry-lookahead circuit avoids the delay of a standard ripple-carry adder, where the carries propagate through the sum. The 8008's carry lookahead is based on the Manchester carry chain, but with a separate carry chain for each carry, yielding the triangular structure you see on the die. For performance, the carry chain is implemented with dynamic logic, depending on wire capacitance, rather than with standard Boolean gates. The 74181 ALU chip, in comparison, uses a different carry-lookahead scheme implemented with standard logic. I plan to write more about the 8008's carry lookahead later.

  4. The 8008 implements eight different arithmetic/logic functions: Add, Add with carry, Subtract, Subtract with borrow, AND, XOR, OR, and Compare.14 These are implemented in terms of the ALU's four basic operations. Subtraction is performed by inverting the second argument. The operations without carry/borrow clear the carry-in bit. Compare is simply a subtraction that doesn't store the result; it just sets the flags with the status. Thus, the four fundamental operations of the ALU are used to implement eight different arithmetic/logic operations. 

  5. Note that the 8008 uses PMOS transistors, rather than the faster NMOS transistors in later microprocessors such as the 8080, 6502 and Z80. If you're familiar with NMOS circuits, PMOS can be confusing since everything is backwards. PMOS transistors turn on if the gate is low, and typically pull the output high. Vdd in PMOS is negative, and "ground" is positive. The "pull-up resistor" in a PMOS gate pulls the output down. A PMOS NAND gate has transistors in parallel (compared to serial for an NMOS NAND gate). A PMOS NOR gate has transistors in serial (compared to parallel for an NMOS NOR gate). 

  6. The metal layer of the chip is protected by silicon dioxide passivation layer. The professional way to remove this layer is with dangerous hydrofluoric acid. Instead, I used Armour Etch glass etching cream, which is slightly safer and can be obtained at craft stores. I applied the etching cream to the die and wiped it for four minutes with a Q-tip. (Since the cream is designed for frosting glass, it only etches in spots. It must be moved around to obtain a uniform etch.) After this, I soaked the die in hydrochloric acid (pool acid from the hardware store) overnight to dissolve the metal. This was probably too long, since the edges of the polysilicon were eaten away in places. 

  7. The following values are used for the three mode lines to select the ALU function:

    Operation   m1   m2   m3
    Sum          1    1    1
    And          0    1    0
    Or           1    0    0
    Xor          1    1    0

  8. A more straightforward way of generating the sum bit is by xoring the three inputs: a⊕b⊕c. Unfortunately, an XOR gate is relatively difficult to implement with Boolean logic, so designers will often try to avoid XOR. 

  9. You might wonder why the OR operation is implemented with an AND gate, and vice versa. Since the inputs and the output of the OR gate are inverted, this is equivalent to an AND gate (by De Morgan's laws), and similarly for the AND gate. 

  10. Strictly speaking, the 4004 microprocessor has an AU (arithmetic unit), not an ALU (arithmetic/logic unit), since it doesn't do logical operations. Since the 4004 was designed for a calculator, logical operations weren't required. 

  11. The 8008's full adder generates the carry-out first, and generates the sum from that. In contrast, the typical full adder circuit combines two half adders to generate the sum and carry-out separately. If the typical full adder circuit had been used in the 8008, the carry-out logic could easily be omitted. 

  12. To see the similarity between the Z80's ALU circuit and the 8008's, you need to swap AND and OR gates. (Apply De Morgan's laws since the 8008's ALU inputs are inverted.) In the Z80, the carry-out comes from the ALU rather than a carry-lookahead circuit, so the control lines are somewhat different. But the fundamental ALU circuit is otherwise the same between the 8008 and Z80, which is not surprising since Federico Faggin worked on both chips. 

  13. Instruction decoding is based on a Programmable Logic Array (PLA), an arrangement of transistors that efficiently implements logic gates. These gates match bit patterns and generate the appropriate control signals for the rest of the chip. The 8008's PLA has 16 input lines that flow vertically through the PLA. Each row in the PLA matches a bit pattern and generates a control signal output.

    In more detail, each row output line is pulled low by a load resistor/transistor to Vdd. The transistors are connected between the row line and Vcc (+5V). The bit lines are connected to the transistor's gate. If any bit line is low (indicating a mismatch), the PMOS transistor turns on, pulling the row line high. Thus, if there is no mismatch, the control line is low, and if there is a mismatch, the control line is high. In other words, each row is a NAND gate with instruction bit inputs.

    The input lines are ordered as follows: bit 3, bit 3 complement, 4, 4', 5, 5', 0, 0', 1, 1', 2, 2', 6, 6', 7, 7'. This order may seem strange, but there's a reason for it. In the 8008, the ALU operation is selected by bits 3, 4 and 5 of the instruction. By putting those bits on the left side of the PLA, they are closer to the ALU. Some rows of the PLA actually decode two instructions: bits 3, 4 and 5 are decoded on the left side, generating an ALU control signal, while the remaining bits are decoded on the right side generating a different control signal. This increases the PLA density and saves space on the chip. 

  14. The 8008's instruction set is designed around octal. Among other things, there are 8 ALU operations, 8 registers and 8 conditionals. In octal, the ALU instructions have the value 2ar, where a is the ALU operation to perform (0 through 7) and r is the register to use (0 through 7, where 7 indicates memory). The octal structure originates with the Datapoint 2200, which decoded instructions with TTL 7442 BCD chips that decoded groups of three bits. This octal structure persisted in descendants of the 8008, including the Z80 and x86. Unfortunately, these instruction sets are almost always presented in hexadecimal, which hides the underlying structure. 

  15. The instruction decoder generates all the signals required by the ALU. As described above, AND matches xx100xxx, pulling the m1 control signal low. An OR opcode has the bit pattern xx110xxx, which causes the instruction decode circuit to pull the m2 control line low. An XOR instruction has the bit pattern xx101xxx. The m3 control line is pulled low for patterns xx10xxxx or xx1x0xxx, matching AND, OR or XOR instructions. The subtract (with and without borrow) instructions match xx01xxxx, generating a signal that inverts the second argument. 

  16. Different processors use a variety of techniques for shifting. In the Z80, shifting is performed as data enters the ALU. The 6502 performs a left shift with "A plus A", and has a path inside the ALU for right shifts; the 8085 is similar. The ARM-1 has a barrel shifter next to the ALU that performs arbitrary shifts. 

  17. The instruction set of the Datapoint 2200 is described in the Reference Manual. The 8008 has a couple of minor changes. For instance, the 8008 has increment and decrement instructions that are not present in the 2200.