Examining a vintage RAM chip, I find a counterfeit with an entirely different die inside

A die photo of a vintage 64-bit TTL RAM chip came up on Twitter recently, but the more I examined the photo the more puzzled I became. The chip didn't look at all like a RAM chip or even a TTL chip, and in fact appeared partially analog. By studying the chip's circuitry closely, I discovered that this RAM chip was counterfeit and had an entirely different die inside. In this article, I explain how I analyzed the die photos and figured out what it really was.

The die photo above is part of Project 54/74, an ambitious project to take die photos of every chip in the popular 7400 series of TTL chips (and the military-grade 5400 versions). The 74LS189 was an early RAM chip (1976) that held just 64 bits: sixteen 4-bit words. This photo interested me because I had recently written about Intel's first product, the 64-bit 3101 memory chip (1969). In my photo below of the 3101, you can see the 16 rows and 4 columns of memory cells forming a regular pattern that takes up most of the chip. The 74LS189 was an improved version of the 3101 RAM chip, so the two die photos should have been very similar. But the two photos were entirely different and the 74LS189 die didn't have 64 of anything. This just didn't make sense.

Die photo of the Intel 3101 64-bit RAM chip. Click for a larger image.

A closer examination of the chip brought more confusion. I usually start analyzing a chip by figuring out which of the pins are power, inputs, and outputs, and cross-referencing with the datasheet to find the function of each pin. The power and ground pins are easy to spot, since these are connected to thick metal traces that feed every part of the chip. Most 7400-series chips have the power and ground on diagonally-opposite corners of the chip.1 The die photo, however, shows the power and ground separated by just 5 positions. This immediately rules out the possibility that the chip is the advertised 74LS189, and makes it unlikely to be a 7400-series chip at all. In addition, the transistors all looked wrong. A chip in the 74LSxx series is built from bipolar transistors, which are fairly large and have a distinctive appearance. The transistors in the die photo looked like much smaller and simpler CMOS transistors.

Some visible features on the die of the alleged 74LS189 chip. These features don't match a RAM chip.

The chip also contained a complex resistor network, not the simple resistors you'd expect on a TTL chip. The resistor network (along with the large, complex transistors next to it) led me to suspect that this chip had analog circuitry as well as digital logic. I thought it might be an analog-to-digital converter (ADC), but after looking at some ADC datasheets, I decided that wasn't the case. The chip had way too many inputs, for one thing.

The first big clue was when I studied the resistor network carefully. In the photo below, I've marked the resistors with light or dark blue lines. They are all exactly the same length, giving them the same resistance (R). Some were connected as pairs to get a resistance of exactly 2R. I noticed they were connected in a pattern of R-2R-R-2R-... which forms a R-2R resistor ladder network. This structure is used for digital to analog conversion (DAC): you feed bits into the network and you get out a voltage corresponding to the value. The chip had two of these ladders, forming two 4-bit digital-to-analog converters.

The resistors in the center of the die form two R-2R ladders, which are simple digital-to-analog converters.
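
To see how an R-2R ladder converts bits to a voltage, here is a minimal Python sketch of the standard R-2R math (not the chip's actual circuit, and the 5-volt reference is just an assumed value): each bit contributes a binary-weighted fraction of the reference, so a 4-bit ladder produces one of 16 evenly spaced output levels.

    # Minimal sketch of a 4-bit R-2R DAC: each bit contributes a
    # binary-weighted fraction of the (assumed) reference voltage.
    def r2r_dac(bits, vref=5.0):
        value = 0
        for bit in bits:            # bits[0] is the most significant bit
            value = value * 2 + bit
        return vref * value / 2 ** len(bits)

    print(r2r_dac([1, 0, 1, 1]))    # 11/16 of Vref = 3.4375 V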

What values were going into the digital-to-analog converters? The middle of the die photo contained two small matrices, which I recognized as ROMs, each holding about 24 four-bit words. Perhaps the values in the ROMs were being fed to the DAC. Each row of the ROM had one section (on the right below) to decode 5 address bits, and a second section (on the left) to output the associated 4 data bits. Each data row has a transistor for 1 or no transistor for 0. The decoder is arranged in pairs with one transistor present out of each pair, either matching a 0 address or matching a 1 address. Thus, by looking at the chip, we can read the values in the ROMs.

Detail of a ROM in the chip. Each row stores four bits of data. The pattern of square metal contacts shows the data bits. On the right, the address decode circuit matches the address for the row.

Normally a ROM has sequential rows, so you can see the decoder counting in binary, but this decoder was different. Addresses in the ROM were arranged as 10011, 11001, 01100, ... Each address was generated by shifting the previous one to the right and adding a new bit on the left, e.g. 10011 -> 11001. This suggested the ROM addresses were generated by a linear-feedback shift register (LFSR) rather than a binary counter. The motivation is that a shift register takes up less space on the chip than a binary counter; if you don't need the addresses in the normal order, this is a good tradeoff. There were a couple of strange things about the ROM: some addresses appeared to be missing and some addresses performed a sort of "wild card" match, but I'll ignore that for now. Also, the two ROMs were similar but not quite identical.
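
The shift-register address sequence is easy to model. Here is a small Python sketch of the idea; the feedback taps (bits 0 and 3) are purely a guess for illustration, chosen because they happen to reproduce the first few addresses above, not something read off the die.

    # Sketch of a 5-bit shift-register address generator used in place of a counter.
    # The feedback taps are an assumption for illustration, not the chip's actual logic.
    def shift_register_addresses(seed=0b10011, taps=(0, 3), count=6):
        state = seed
        for _ in range(count):
            yield state
            new_bit = 0
            for t in taps:
                new_bit ^= (state >> t) & 1
            state = (new_bit << 4) | (state >> 1)   # shift right, insert the new bit on the left

    for addr in shift_register_addresses():
        print(f"{addr:05b}")        # 10011, 11001, 01100, ...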

Looking at the data in the ROM, I noticed the rightmost bit was present for a while, then absent, and finally present again, while the other bits jumped around. That suggested the rightmost bit was the high-order bit. I extracted the data, and after swapping a couple bits got the curve below, a somewhat distorted sine wave.

By visually reading the values from the ROM, we can extract a waveform. But it's strangely distorted.

So, the mystery chip had two ROMs with sine-ish curves and two digital-to-analog converters. Clearly it's not a RAM chip, but what is it? I looked at function/waveform generator chips, but they didn't seem to match. Could it be a sound synthesis chip (like the 76477 or a Yamaha synthesizer chip)? They didn't seem to match the chip's characteristics either. Why would the chip have a bunch of inputs and an output with two sine wave channels? After puzzling for a long time, I thought of Touch-Tone phone dialing.

DTMF: dialing a Touch-Tone phone

Perhaps I should explain how Touch-Tone phones work. Technically known as Dual-Tone Multi-Frequency signaling (DTMF), Touch-Tone was introduced in 1963 to replace rotary-dial phones with push button dialing. Each button press generates two tones of specific frequencies, which indicate the pressed button to the telephone switching system. Specifically, there is one tone for each row on the keypad and one tone for each column, and a button generates the two corresponding tones.2
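
As a concrete illustration of the two-tone scheme (using the standard DTMF frequencies, not anything specific to the chip on this die), here is a short Python sketch that produces the signal for a key by summing its row tone and column tone.

    import math

    # Standard DTMF frequencies: one per keypad row, one per keypad column.
    ROW_FREQS = [697, 770, 852, 941]        # Hz
    COL_FREQS = [1209, 1336, 1477, 1633]    # Hz
    KEYPAD = ["123A", "456B", "789C", "*0#D"]

    def dtmf_samples(key, sample_rate=8000, duration=0.1):
        """Samples for one key: the sum of its row tone and column tone."""
        for r, row in enumerate(KEYPAD):
            if key in row:
                f_row, f_col = ROW_FREQS[r], COL_FREQS[row.index(key)]
                break
        else:
            raise ValueError("not a keypad key")
        n = int(sample_rate * duration)
        return [0.5 * math.sin(2 * math.pi * f_row * t / sample_rate) +
                0.5 * math.sin(2 * math.pi * f_col * t / sample_rate)
                for t in range(n)]

    samples = dtmf_samples("5")     # key "5" mixes the 770 Hz and 1336 Hz tones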

A Touch-Tone telephone. Photo courtesy of Retro00064.

Mostek introduced the MK5085 Touch-Tone dialer chip in 1975.3 This chip revolutionized the construction of Touch-Tone phones: instead of using eight carefully-tuned, expensive oscillators, the phone could generate the tones with a cheap integrated circuit. The MK5085 was soon followed by a series of Mostek integrated dialer chips with slightly different functions4 as well as versions from other manufacturers.5

A quick web search found a Touch-Tone chip datasheet. The pinout of this chip matched the die photo with the power, input and output pins in the right places. The datasheet said the chip was metal-gate CMOS (not TTL), which matched the appearance of the die. Finally, the datasheet's block diagram matched the functional blocks I could see on the chip.

Package of the counterfeit memory chip, labeled 74LS189. Courtesy of Robert Baruch.

This was pretty conclusive: the mystery die was not a RAM chip but an entirely unrelated DTMF dialing chip. This 74LS189 chip was counterfeit; someone had relabeled the DTMF die as a Texas Instruments 74LS189 chip.

How the DTMF chip works

Now that I had identified the chip, I wanted to understand more about how it works. It turns out that it uses some interesting mathematics and circuitry to generate the tones. The chip needs to generate two tones of the right frequencies based on the 4 row inputs and 4 column inputs from the keypad. It generates these tones by starting with a 3.579545 MHz11 frequency and dividing it down to two lower frequency clocks. Each clock is used to step through the sine-wave lookup table in ROM, generating a sine wave of the desired frequency. Finally, the two sine waves are combined to produce the output.

By looking at the output frequencies listed in the datasheet, we can deduce what is happening internally. For instance, to generate the 1639.0 Hz tone, you can divide the 3.579545 MHz input by 2184. (Reducing a frequency by an integer factor is straightforward in hardware: count the input pulses and reset every time you reach 2184.) Similarly, the other output frequencies can be generated by dividing by the integers 2408, 2688, 2968, 3808, 4200, 4648 and 5152. Dividing by numbers this large would require inconveniently large counters, but I noticed these numbers are all divisible by 56, yielding quotients 39, 43, 48, 53, 68, 75, 83 and 92. These smaller numbers are much more practical to divide by in hardware.
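
The arithmetic is easy to check with a few lines of Python; this just reproduces the calculation above (the divisor list comes from the datasheet frequencies, nothing here is read from the chip itself).

    from math import gcd
    from functools import reduce

    CRYSTAL = 3_579_545   # Hz, the colorburst crystal frequency
    divisors = [2184, 2408, 2688, 2968, 3808, 4200, 4648, 5152]

    for d in divisors:
        print(d, round(CRYSTAL / d, 1), "Hz")        # 1639.0, 1486.5, ... 694.8 Hz

    common = reduce(gcd, divisors)
    print(common)                                    # 56
    print([d // common for d in divisors])           # [39, 43, 48, 53, 68, 75, 83, 92]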

This suggests a straightforward hardware implementation: divide the 3.579545 MHz clock by 2. Then divide by 68, 75, 83 or 92 (depending on the row input), using a 7-bit counter. Finally, iterate through a 28-word ROM to generate the sine wave, yielding the 28-step sine wave described in the datasheet. Similarly, the column frequencies can be generated by dividing by 39, 43, 48 or 53 (using a 6-bit counter) depending on the column input.

At this point, I had reverse-engineered how the chip operated. Or had I? A closer look at the chip revealed 5-bit and 6-bit counters, one bit too small for the necessary divisors. What was going on? How could the chip divide by 68 with a 6-bit counter?

The diagram below shows divider circuitry for the row output, showing the 6-bit shift-register counter. Also visible is the circuit to detect when the counter should be reset, based on which of the four keypad rows is selected.7 The column circuitry is similar, but with a 5-bit counter.

Divider circuitry for the row signal, on the lower right of the die. The input frequency is divided by a particular value depending on which of the four keyboard rows is selected. The counter is implemented with a shift register. The LFSR logic generates the new bit shifted in. The count end check circuitry controls the count length for the selected row. The single button check verifies that exactly one button is pressed.

More investigation showed that multiple companies made pin-compatible DTMF chips, but they all generated slightly different frequencies.5 Although the chips seemed like clones, they were all implemented in different ways, dividing the input frequency differently, yielding outputs that were unique (but all within the phone system's tolerance). By repeating the mathematical analysis, I could reverse-engineer each manufacturer's implementation and figure out the divisors and ROM sizes. (Details in footnotes.10)

I found that the divisors for the MK5089 design would fit in the counters I saw on the chip. Specifically, it divides the input frequency by 4 and then divides row frequencies by 33, 36, 40 or 44 (values that fit in 6 bits) and the column frequencies by 17, 19, 21 or 23 (values that fit in 5 bits). The row output ROM has 29 values, while the column output ROM has 32 values. This nicely fit the counter sizes I saw on the die. It also explains why the two ROMs on the die are slightly different.8
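
A quick sketch (again just arithmetic, using the divisors and ROM sizes deduced above) confirms that these parameters reproduce the MK5089's published output frequencies.

    CRYSTAL = 3_579_545   # Hz

    def tone(divisor, rom_steps, prescale=4):
        # output frequency = crystal / prescaler / divisor / steps per sine cycle
        return CRYSTAL / prescale / divisor / rom_steps

    for d in (44, 40, 36, 33):                       # row tones, 29-entry ROM
        print("row", round(tone(d, 29), 1), "Hz")    # 701.3, 771.5, 857.2, 935.1 Hz
    for d in (23, 21, 19, 17):                       # column tones, 32-entry ROM
        print("col", round(tone(d, 32), 1), "Hz")    # 1215.9, 1331.7, 1471.9, 1645.0 Hz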

Understanding the silicon

I reverse-engineered parts of the chip by closely examining the silicon circuits, so I'll explain some of the silicon-level structures. The chip is built mostly from CMOS13, but the structures are a bit more complex than you see in textbooks. The basic idea of CMOS is that it is built from MOS transistors, both PMOS and NMOS, connected in a complementary way (thus the name CMOS). To oversimplify, an NMOS transistor turns on when the input is high, and can pull the output low. A PMOS transistor is the opposite; it turns on when the input is low, and can pull the output high.

The diagram below shows the structure of a metal-gate MOS transistor. Electricity flows between the source and the drain, under control of the gate. The metal gate is separated from the silicon by an insulating oxide layer. (The Metal / Oxide / Silicon layers give it the name MOS.) For a PMOS transistor, the source and drain are P-type silicon while the base silicon is N-type. An NMOS transistor is opposite: the source and drain are N-type silicon while the base silicon is P-type.

A metal-gate MOSFET transistor.

The diagram below shows a CMOS inverter on the chip, built from a PMOS transistor and an NMOS transistor. The first photo shows the metal layer. By dissolving the metal in acid, the silicon is revealed in the second photo. In combination, they reveal the inverter's structure, as shown in the cross-section diagram. You can see the metal gates for the PMOS and NMOS transistors, as well as the silicon regions for the source and drain.12 The black spots are contacts between the metal and silicon, where they are connected.

A CMOS inverter is built from a PMOS transistor and an NMOS transistor.

Note that the NMOS transistor must be embedded in P-type silicon. To achieve this, the transistor is placed in a "P well", a region of P-doped silicon. A grounded "guard ring" surrounds the P well to help isolate it. The chip contains multiple P wells, which typically hold multiple NMOS transistors.

Logic gates (NAND, NOR) are constructed by combining multiple transistors in a similar way (details). CMOS transistors can also be configured to pass or block a signal (details), a technique used to build the shift registers in the chip. These circuits are straightforward to recognize if you examine the chip closely, allowing the circuitry to be reverse engineered; the shift-register counter shown earlier is one example.

The DTMF chip is both digital and analog. The diagram below shows the 4-bit digital-to-analog converter for the column tone. (This circuit is in the upper-left of the die; the similar row tone circuit is in the upper right.) The circuit takes 4 bits from the ROM, passes them through a buffer, and then four transistors drive the R-2R resistor ladder digital-to-analog converter that was discussed earlier. The resulting analog voltage forms the synthesized sine wave. Note that the transistors are scaled to provide the necessary current; the "8x" transistor is eight times the size of the "1x" transistor. The NMOS transistors are in a P-well, as described earlier.

This circuit on the DTMF chip converts a 4-bit digital value from the ROM into an analog voltage.

The die has some unusual structures, metal squares and larger loops that at first glance don't seem connected to anything. I've never seen these described before, so I'll explain what they are. They provide power and ground to parts of the circuit without direct wiring to the power or ground pins. Integrated circuits typically have extensive wiring in the metal layer to provide power and ground to all the circuits that need them. This chip, however, eliminates some of this wiring by using the substrate as a power connection and using the guard rings as ground connections. The photo below shows metal loops that provide a bridge between the positive substrate and a circuit that requires positive voltage.

Metal loops are used to get positive voltage (Vcc) from the substrate and feed it to circuits that need it.

The metal loops below provide a bridge between the negative guard ring and the circuitry that requires ground. As far as I can tell, there's no reason to make these links a loop rather than a straight connection.

Metal loops connect the guard ring (at ground potential) to circuits that need a ground connection.

Conclusion

The chip turned out to be a Touch-Tone DTMF dialer, most likely a knockoff MK5089, repackaged as a 74LS189 RAM chip. Why would someone go to the effort of creating counterfeit memory chips that couldn't possibly work? The 74LS189 is a fairly obscure part, so I wouldn't have expected counterfeiting it to be worth the effort. The chips sell for about a dollar on eBay, so there's not a huge profit opportunity. However, IC counterfeiting is a widespread problem14. For instance, 15% of replacement semiconductors purchased by the Pentagon are estimated to be counterfeit. With counterfeiting this widespread, even an obscure chip like the 74LS189 can be a target.

As for Robert Baruch's purchase of the chip, he contacted the eBay seller who gave him a refund. The seller explained that the chip must have been damaged in shipping! (Clearly you should pack your chips carefully so they don't turn into something else entirely.)

Follow me on Twitter: @kenshirriff to find out about my latest blog posts. I also have an RSS feed.

Thanks to Robert Baruch for the die photos. His high-resolution photos are here and here.

Notes and references

  1. A few unusual 7400-series chips (such as the 7473 flip flop) don't have the power and ground pins diagonally opposite, but in the middle. On the die, however, these pins are still symmetrically opposite. This simplifies routing of power and ground on the die. 

  2. Touch-Tone keypads normally have four rows and three columns, but the system supports a fourth column. The fourth column is used for some special network purposes and requires a special keypad. 

  3. The Touch-Tone chip was patented, which later led to a complex patent battle.

  4. Mostek later introduced a second generation of dialer chips with the MK5380. Instead of an R-2R D/A converter, it used a network of resistors with taps selected to generate the sinusoidal voltages. That is, instead of using a ROM to fit the sine curve to 16 uniform voltage steps, 16 unequal voltage levels were selected to fit the sine curve. This was described in patent 4,446,436. The datasheet for the NTE1690 chip says it uses a "resistive ladder network", which is probably the same thing. 

  5. Many manufacturers made Touch-Tone chips that were compatible with the MK5089, often giving them similar part numbers. Some of them are TP5089, MV5089, UM95089, TCM5089, MK5089, and NTE1690 chips. While these DTMF chips seem interchangeable, surprisingly they use entirely different designs internally. Careful examination of the datasheets shows that they output slightly different frequencies. For instance, for the lowest tone the TP5089 has a frequency of 694.8 Hz, while the S2559 outputs 699.1 Hz and the NTE1690 outputs 701.3 Hz, all slightly off from the official 697 Hz.  

  6. Touch-Tone keypads have an unusual internal structure. A standard calculator keypad has a grid of switches. In contrast, a Touch-Tone keypad has 8 switches (4 row, 4 column) and each button closes two switches (so it is known as 2-of-8). Thus, while a calculator normally scans the rows and reads the columns, the output of a Touch-Tone keypad can be read directly. Some DTMF chips include scanning circuitry so a calculator-style keypad can be used. 

  7. Conceptually, the counter is reset when the appropriate value is reached. However, since it is implemented with a linear-feedback shift register, only the input bit can be changed, rather than resetting entirely. That is, the counter jumps ahead (by one bit flip) at the proper point so the number of counts is the desired value. Strictly speaking, this makes the counter a nonlinear-feedback shift register. 

  8. My original readout of the ROM gave a distorted sine wave, but with further analysis I figured out the problem. I had noticed that the address patterns didn't always follow the shifted sequence from the LFSR. In addition, some addresses weren't fully decoded, in effect providing "wild card" addresses. Looking more closely, I realized that the wild card addresses would fill in the gaps in the sequence. The reason was that the ROM designers had used a shortcut to make the ROM smaller. For example, if address 00111 stored the value 13 and address 00011 also stored the value 13, these two rows in the ROM could be collapsed into one: decoding the address 00?11 to the value 13. (Strictly speaking, this makes it a PLA, not a ROM.) Essentially, the ROM could sometimes combine the same value on the ascending and descending parts of the sine wave. When I filled in the missing entries, the resulting sine waves looked much better. This also showed that the two ROMs held 29 and 32 entries (as required by the mathematics) and explained why the two ROMs were slightly different on the die. 

  9. You might know that an LFSR will get stuck on all-zeros, so it can only use 2^n-1 of the possible 2^n values. So how can the chip's 5-bit LFSR access all 32 entries in the ROM? The solution is that it's a non-linear feedback shift register (NLFSR), slightly more complicated than an LFSR. In particular, there is a row in the PLA that detects the all-zero entry and keeps the sequence from getting stuck there (as would happen with an LFSR). 

  10. Each DTMF chip's datasheet lists slightly different output frequencies. By factoring these frequencies, I could reverse-engineer the internal design of the chip—the divisors it used and the ROM sizes. The table below gives these values for four different chip designs. Each output frequency is generated by dividing the crystal frequency (3.579545 MHz) by the scale factor, the appropriate divisor, and the points per cycle. Note that the output frequencies are all close to the correct frequencies, but not an exact match.

    Chip: TP5089 (scale factor 2, 28 points per cycle)
      Row divisors 92, 83, 75, 68 give 694.8, 770.1, 852.3, 940.0 Hz
      Column divisors 53, 48, 43, 39 give 1206.0, 1331.7, 1486.5, 1639.0 Hz

    Chip: S2559 (scale factor 2, 32 points per cycle)
      Row divisors 80, 73, 66, 59 give 699.1, 766.2, 847.4, 948.0 Hz
      Column divisors 46, 42, 38, 34 give 1215.9, 1331.7, 1471.9, 1645.0 Hz

    Chip: MK5089, MV5089 (scale factor 4; 29 points per cycle for rows, 32 for columns)
      Row divisors 44, 40, 36, 33 give 701.3, 771.5, 857.2, 935.1 Hz
      Column divisors 23, 21, 19, 17 give 1215.9, 1331.7, 1471.9, 1645.0 Hz

    Chip: UM95089 (scale factor 4, 16 points per cycle)
      Row divisors 80, 73, 66, 59 give 699.1, 766.2, 847.4, 948.0 Hz
      Column divisors 46, 42, 38, 34 give 1215.9, 1331.7, 1471.9, 1645.0 Hz

    Correct frequencies: rows 697, 770, 852, 941 Hz; columns 1209, 1336, 1477, 1633 Hz

  11. You might wonder why they picked 3.579545 MHz for the crystal, as that seems like a strange frequency. That's the NTSC colorburst frequency, used by color televisions for complex technical reasons. Since the crystals were made by the millions for color televisions, they were inexpensive and easy to obtain. 

  12. In the die photo, the source of an NMOS transistor connected to ground is much darker. I assume this is due to a different doping level, perhaps to pull the P well to ground. 

  13. While most of the circuitry in the chip is CMOS, other parts use NMOS or PMOS logic to simplify the circuitry. For instance, the ROMs have NMOS transistors for the address decode and PMOS for the data storage. Another example is the circuitry to detect multiple button presses. For the four row buttons, there are six double-press combinations which are detected by an AND-OR-INVERT gate with 6 AND gates. This is built as a single complex NMOS gate, with a pull-up resistor. The diagram below shows how it is structured. (A similar circuit checks the column inputs for double presses.)

    The circuitry to detect multiple button presses is built from NMOS, not CMOS.

  14. Two interesting articles about finding counterfeit semiconductors come from SparkFun and Bunnie Studios. For articles on counterfeiting, see this and this.

Inside Intel's first product: the 3101 RAM chip held just 64 bits

Intel's first product was not a processor, but a memory chip: the 31011 RAM chip, released in April 1969. This chip held just 64 bits of data (equivalent to 8 letters or 16 digits) and had the steep price tag of $99.50.2 The chip's capacity was way too small to replace core memory, the dominant storage technology at the time, which stored bits in tiny magnetized ferrite cores. However, the 3101 performed at high speed due to its special Schottky transistors, making it useful in minicomputers where CPU registers required fast storage. The overthrow of core memory would require a different technology—MOS DRAM chips—and the 3101 remained in use into the 1980s.3

This article looks inside the 3101 chip and explains how it works. I received two 3101 chips from Evan Wasserman and used a microscope to take photos of the tiny silicon die inside.4 Around the outside of the die, sixteen black bond wires connect pads on the die to the chip's external pins. The die itself consists of silicon circuitry connected by a metal layer on top, which appears golden in the photo. The thick metal lines through the middle of the chip power the chip. The silicon circuitry has a grayish-purple color, but is largely covered by the metal layer. Most of the chip contains a repeated pattern: this is the 16x4 array of storage cells. In the upper left corner of the chip, the digits "3101" in metal identify the chip, but "Intel" is not to be found.

Die photo of the Intel 3101 64-bit RAM chip. Click for a larger image.

Overview of the chip

The 3101 chip is controlled through its 16 external pins. To select one of the chip's 16 words of memory, the address in binary is fed into the chip through the four address pins (A0 to A3). Memory is written by providing the 4-bit value on the data input pins (D1 to D4). Four data output pins (O1 to O4) are used to read memory; these pins are inverted as indicated by the overbar. The chip has two control inputs. The chip select pin (CS) enables or disables the chip. The write enable pin (WE) selects between reading or writing the memory. The chip is powered with 5 volts across the Vcc and ground pins.

The diagram below shows how the key components of the 3101 are arranged on the die. The RAM storage cells are arranged as 16 rows of 4 bits. Each row stores a word, with bits D1 and D2 on the left and D3 and D4 on the right. The address decode logic in the middle selects which row of storage is active, based on the address signals coming from the address drivers at the top. At the bottom, the read/write drivers provide the interface between the storage cells and the data in and out pins.

Block diagram of the 3101 RAM chip.

Transistors

Transistors are the key components in a chip. The 3101 uses NPN bipolar transistors, different from the MOS transistors used in modern memory chips. The diagram below shows one of the transistors in the 3101 as it appears on the die. The slightly different tints in the silicon indicate regions that have been doped to form N and P type silicon with different semiconductor properties. The cross-section diagram illustrates the internal structure of the transistor. On top (black) are the metal contacts for the collector (C), emitter (E), and base (B). Underneath, the silicon has been doped to form the N and P regions that make up the transistor.

A key innovation of the 3101 was using Schottky transistors (details), which made the 3101 almost twice as fast as other memory chips.5 In the cross section, note that the base's metal contact touches both the P and N regions. You might think this shorts the two regions together, but instead a Schottky diode is formed where the metal contacts the N layer.6

The structure of an NPN Schottky transistor inside the Intel 3101 chip.

The 3101 also used many multiple-emitter transistors. While multiple-emitter transistors may seem strange, they are common in bipolar integrated circuits, especially TTL logic chips. A multiple-emitter transistor simply has several emitter regions embedded in the base region. The die photo below shows one of these transistors with the collector on the left, followed by the base and two emitters.

A multiple-emitter transistor from the Intel 3101 chip.

Driving the data output pins requires larger, high-current transistors. The image below shows one of these transistors. The central rectangle is the base, surrounded by the C-shaped emitter in the middle and the large collector on the outside. Eight of these high-current transistors are also used to drive the internal address select lines.

For the high-current output, the Intel 3101 chip uses larger transistors.

Diodes

While examining the 3101 chip, I was surprised by the large number of diodes on the chip. Eventually I figured out that the chip used DTL (diode-transistor logic) for most of its logic rather than TTL (transistor-transistor logic) that I was expecting. The diagram below shows one of the diodes on the chip. I believe the chip builds diodes using the standard technique of connecting an NPN transistor as a diode.

Presumed structure of a diode inside the 3101 chip. I believe this is a regular diode, not a Schottky diode.

Resistors

The die photo below shows several resistors on the 3101 die. The long, narrow snaking regions of p-type silicon provide resistance. Resistors in integrated circuits are inconveniently large, but are heavily used in the 3101 for pull-up and pull-down resistors. At the right is a square resistor, which has low resistance because it is very wide.7 It is used to route a signal under the metal layer, rather than functioning as a resistor per se.

Resistors inside the 3101 chip.

The static RAM cell

Now that I've explained the individual components of the chip, I'll explain how the circuitry is wired together for storage. The diagram below shows the cell for one bit of storage with the circuit diagram overlaid. Each cell consists of two multi-emitter transistors (outlined in red) and two resistors (at the top). The horizontal and vertical wiring connects cells together. This circuit forms a static RAM cell, basically a latch that can be in one of two states, storing one data bit.

The circuitry of one storage cell of the 3101 RAM chip. The two multiple-emitter transistors are outlined in red.

Before explaining how this storage cell works, I'll explain a simpler latch circuit, below. This circuit has two transistors cross-connected so if one transistor is on, it forces the other off. In the diagram, the left transistor is on, which keeps the right transistor off, which keeps the left transistor on. Thus, the circuit will remain in this stable configuration. The opposite state—with the left transistor off and the right transistor on—is also stable. Thus, the latch has two stable configurations, allowing it to hold a 0 or a 1.

A simple latch circuit. The transistor on the left is on, forcing the transistor on the right off, forcing the transistor on the left on...

To make this circuit usable—so the bit can be read or modified—more complex transistors with two emitters are used. One emitter is used to select which cell to read or write, while the other emitter is used for the read or write data. This yields the schematic below, which matches the storage cell die photo diagram above.

The RAM cell used in the Intel 3101 is based on multiple-emitter transistors. The row select lines are raised to read/write the row of cells. Each data line accesses a column of cells.

Multiple storage cells are combined into a grid to form the memory. One word of memory consists of cells in the same row that share select lines. All the cells in a column store the same bit position; their data lines are tied together. (The bias line provides a voltage level to all cells in the memory.8)

Note that unlike the simplified cell, the circuit above doesn't have an explicit ground connection; to be powered, it requires a low input on either the select or data/bias lines. There are three cases of interest:

  • Unselected: If the negative row select line is low, current flows out through the row select line. The data and bias lines are unaffected by this cell.
  • Read: If the negative row select line is higher than the data and bias lines, current will flow out the data line if the left transistor is on, and out the bias line if the right transistor is on. Thus, the state of the cell can be read by examining the current on the data line.
  • Write: If the negative row select line is higher and the data and bias lines have significantly different voltages, the transistor on the lower side will switch on, forcing the cell into a particular state. This allows a 0 or 1 to be written to the cell.

Thus, by carefully manipulating the voltages on the select lines, data lines and the bias line, one row of memory can be read or written, while the other cells hold their current value without influencing the data line. The storage cell and the associated read/write circuitry are essentially analog circuits rather than digital since the select, data, and bias voltages must be carefully controlled voltages rather than logic levels.

The address decode logic

The address decode circuitry determines which row of memory cells is selected by the address lines.11 The interesting thing about this circuitry is that you can easily see how it works just by looking at the die photo. The address driver circuitry sends the four address signals along with their complements on eight metal traces through the chip. Each storage row has a four-emitter transistor. In each row you can see four black dots, which are the connections between emitters and address lines. A row will be selected if all the emitter inputs are high.9 A dot on an address line (e.g. A0) will "match" a 1, while a dot on the complemented address line (e.g. the complement of A0) will match a 0, so each row matches a unique four-bit address. In the die photo below, you can see the decoding logic counting down in binary for rows 15 down to 11;10 the remainder of the circuit follows the same pattern.

The address decode logic in the Intel 3101 RAM chip. Each row matches four address lines to decode one of the 16 address combinations. You can see the value counting down in binary.
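
The decoding scheme is simple enough to model in a few lines of Python; this sketch captures the logic (one emitter tapping either the true or complemented line for each address bit, with the row selected only if every tapped line is high), not the transistor-level circuit.

    def row_selected(address_bits, taps):
        """address_bits: four 0/1 values. taps: True if the emitter taps the true
        address line, False if it taps the complemented line."""
        for a, tap_true in zip(address_bits, taps):
            line = a if tap_true else 1 - a   # the complemented line is high when the bit is 0
            if line != 1:
                return False
        return True

    # A row tapping A3 and A2 true, A1 and A0 complemented, matches only address 1100.
    taps = [True, True, False, False]
    matches = [addr for addr in range(16)
               if row_selected([(addr >> b) & 1 for b in (3, 2, 1, 0)], taps)]
    print(matches)   # [12]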

Some systems that used the 3101

The 64-bit storage capacity of the 3101 was too small for a system's main memory, but the chip had a role in many minicomputers. For example, the Burroughs D Machine was a military computer (and the source of the chips I examined). It used core memory for its main storage, but a board full of 3101 chips provided high-speed storage for its microcode. The Xerox Alto used four 3101 chips to provide 16 high-speed registers for the CPU, while the main memory used slower DRAM chips. Interdata used 3101 chips in many of its 16- and 32-bit minicomputers up until the 1980s.12

The 3101 was also used in smaller systems. The Diablo 8233 terminal used them as RAM.13 The Datapoint 2200 was a "programmable terminal" that held its processor stack in fast 3101 chips rather than the slow main memory which was built from Intel 1405 shift registers.

The CPU of the Datapoint 2200 computer was built from a board full of TTL chips. The four white chips in the lower center-right are Intel 3101 RAM chips holding the stack. Photo courtesy of Austin Roche (I think).

How I created the die photos

To get the die photos, I started with two chips that I received thanks to Evan Wasserman and John Culver. The pins on the chips had been crushed in the mail, but this didn't affect the die photos. The chips had two different lot numbers that indicate they were manufactured a few months apart. Strangely, the metal lids on the chips were different sizes and the dies were slightly different. For more information, see the CPU Shack writeup of the 3101.

Two 3101 RAM chips. The chip on the right was manufactured slightly later and has a larger lid over the die.

Popping the metal lid off the chips was easy—just a tap with a hammer and chisel. This revealed the die inside.

With the lid removed, you can see the die of the 3101 RAM chip and the bond wires connected to the die.

Using a metallurgical microscope and Hugin stitching software (details), I stitched together multiple microscope photos to create an image of the die. The metal layer is clearly visible, but it obscures the silicon underneath, making it hard to determine the chip's circuitry. The photo below shows a closeup of the die showing the "3101" part number.

The die photo of the Intel 3101 shows mostly the metal layer.

I applied acid14 to remove the metal layer. This removed most of the metal, revealing the silicon circuitry underneath. Some of the metal is still visible, but thinner, appearing transparent green. Strangely, the number 3101 turned into 101; apparently the first digit wasn't as protected by oxide as the other digits.

Treatment with acid dissolved most of the metal layer of the 3101 chip, revealing the silicon circuits underneath.

Below is the complete die photo of the chip with the metal layer partially stripped off. (Click it for a larger version.) This die photo was most useful for analyzing the chip. Enough of the metal was removed to clearly show the silicon circuits, but the remaining traces of metal showed most of the wiring. The N+ silicon regions appear to have darkened in this etch cycle.

Die photo of the Intel 3101 64-bit RAM chip with metal layer partially stripped off.

I wanted to see how the chip looked with the metal entirely removed so I did a second etch cycle. Unfortunately, this left the die looking like it had been destroyed.

After dissolving most of the oxide layer, the die looks like a mess. (This is a different region from the other photos.)

I performed a third etch cycle. It turns out that the previous etch hadn't destroyed the die, but just left a thin layer of oxide that caused colored interference bands. The final etch removed the remaining oxide, leaving a nice, clean die. Only a ghost of the "101" number is visible. The contacts between the metal layer and the silicon remained after the etch; they may be a different type of metal that didn't dissolve.

The metal and oxide have been completely removed from the 3101 die, showing the silicon layer.

Below is the full die photo with all the metal stripped off. (Click it for a full-size image.)

Die photo of the Intel 3101 64-bit RAM chip with metal layer stripped off.

Conclusion

The 3101 RAM chip illustrates the amazing improvements in integrated circuits driven by Moore's Law.15 While the 3101 originally cost $99.50 for 64 bits, you can now buy 16 gigabytes of RAM for that price, two billion times as much storage. If you built a 16 GB memory from two billion 3101 chips, the chips alone would weigh about 3000 tons and use over a billion watts, half of Hoover Dam's power. A modern 16GB DRAM module, in comparison, uses only about 5 watts.

As for Intel, the 3101 RAM was soon followed by many other memory products with rapidly increasing capacity, making Intel primarily a memory company that also produced processors. However, facing strong competition from Japanese memory manufacturers, Intel changed its focus to microprocessors and abandoned the DRAM business in 1985.16 By 1992, the success of the x86 processor line had made Intel the largest chip maker, justifying this decision. Even though Intel is now viewed as a processor company, it was the humble 3101 memory chip that gave Intel its start.

Thanks to Evan Wasserman and John Culver for sending me the chips. John also did a writeup of the 3101 chip, which you can read at CPU Shack.

Notes and references

  1. You might wonder why Intel's first chip had the seemingly-arbitrary number 3101. Intel had a highly-structured naming system. A 3xxx part number indicated a bipolar product. A 1 for the second digit indicated RAM, while the last two digits (01) were a sequence number. Fortunately, the marketing department stepped in and gave the 4004 and 8008 processors better names. 

  2. Memory chips started out very expensive, but prices rapidly dropped. Computer Design Volume 9 page 28, 1970, announced a price drop of the 3101 from $99.50 to $40 in small volumes. Ironically, the Intel 3101 is now a collector's item and on eBay costs much more than the original price—hundreds of dollars for the right package. 

  3. Several sources say that the 3101 was the first solid state memory, but this isn't accurate. There were many companies making memory chips in the 1960s. For instance, Texas Instruments announced the 16-bit SN5481 bipolar memory chip in 1966 (Electronics, V39 #1, p151) and Transitron had the TMC 3162 and 3164 16-bit RAM (Electrical Design News, Volume 11, p14). In 1968, RCA made 72-bit and 288-bit CMOS memories for the Air Force (document, photo). Lee Boysel built 256-bit dynamic RAMs at Fairchild in 1968 and 1K dynamic RAMs at Four Phase Systems in 1969 (timeline and Boysel presentation). For more information on the history of memory technology, see timeline and History of Semiconductor Engineering, p215. Another source for memory history is To the Digital Age, p193. 

  4. From my measurements, the 3101 die is about 2.39mm by 3.65mm. Feature size is about 12µm. 

  5. If you've used TTL chips, you probably used the 74LSxx family. The "S" stands for the Schottky transistors that make these chip fast. These chips were "the single most profitable product line in the history of Texas Instruments" (ref). 

  6. The Schottky diode in the Schottky transistor is formed between the base and collector. This diode prevents the transistor from becoming saturated, allowing it to switch faster. 

  7. The resistance of an IC resistor is proportional to the length divided by the width. The sheet resistance of a material is measured in the unusual unit of ohms per square. You might think it should be per square nanometer or square mm or something, but since the resistance depends on the ratio of length to width, the unit cancels out. 

  8. The bias line is shared by all the cells. For reading, it is set to a low voltage. For writing, it is set to an intermediate voltage: higher than the data 0 voltage, but lower than the data 1 voltage. The bias voltage is controlled by the write enable pin.

    More advanced chips use two data lines instead of a bias line for more sensitivity. A differential amplifier compares the currents on the two data lines to distinguish the tiny change between a zero bit and a one bit. However, the 3101 uses such high currents internally that this isn't necessary; it can read the data line directly. 

  9. If my analysis is correct, when a row is selected, the address decode logic raises both the positive row select and negative row select lines by about 0.8 volts (one diode drop). Thus, the cell is still powered by the same voltage differential, but the voltage shift makes the data and bias lines active. 

  10. Address lines A3 and A2 are reversed in the decoding logic, presumably because it made chip layout simpler. This has no effect on the operation of the chip since it doesn't matter whether the physical word order matches the binary order. 

  11. The 3101 has a chip select pin that makes it easy to combine multiple chips into a larger memory. If this pin is high, the chip will not read or write its contents. One strange thing about the address decoding logic is that each pair of address lines is driven by a NAND gate latch. There's no actual latching happening, so I don't understand why this circuit is used.

    How the 3101 implements this feature is a bit surprising. The chip select signal is fed into the address decoding circuit; if the chip is not selected, both A0 and the complement A0 are forced low. Thus, none of the rows will match in the address decoding logic and the chip doesn't respond. 

  12. The Interdata 7/32 (the first 32-bit minicomputer) used 3101 chips in its memory controller. (See the maintenance manual page 338.) The Interdata 16/HSALU used 3101 chips for its CPU registers. (See the maintenance manual page 259.) As late as 1982, the Interdata 3210 used 3101 chips to hold cache tags (see manual page 456). On the schematics note that part number 19-075 indicates the 3101. 

  13. The Diablo 8233 terminal used 3101A (74S289) chips as RAM for its discrete TTL-based processor (which was more of a microcontroller) that controlled the printer. (See maintenance manual page 187.) This system was unusual since it contained both an 8080 microprocessor and a TTL-based processor. 

  14. The metal layer of the chip is protected by silicon dioxide passivation layer. The professional way to remove this layer is with dangerous hydrofluoric acid. Instead, I used Armour Etch glass etching cream, which is slightly safer and can be obtained at craft stores. I applied the etching cream to the die and wiped it for four minutes with a Q-tip. (Since the cream is designed for frosting glass, it only etches in spots. It must be moved around to obtain a uniform etch.) Next, I applied a few drops of hydrochloric acid (pool acid from the hardware store) to the die for a few hours. 

  15. Moore's law not only describes the exponential growth in transistors per chip, but drives this growth. The semiconductor industry sets its roadmap according to Moore's law, making it in some sense a self-fulfilling prophecy. See chapter 8 of Technological Innovation in the Semiconductor Industry for a thorough discussion. 

  16. Intel's 1985 Annual Report says "It was a miserable year for Intel" and discusses the decision to leave the DRAM business. 

Bitcoin mining on a vintage Xerox Alto: very slow at 1.5 hashes/second

I've been restoring a Xerox Alto minicomputer from the 1970s and figured it would be interesting to see if it could mine bitcoins. I coded up the necessary hash algorithm in BCPL (the old programming language used by the Alto) and found that although the mining algorithm ran, the Alto was so slow that it would take many times the lifetime of the universe to successfully mine bitcoins.

Bitcoin mining on a vintage Xerox Alto computer.

The Alto was a revolutionary computer designed at Xerox PARC in 1973 to investigate personal computing. It introduced high-resolution bitmapped displays, the GUI, Ethernet and laser printers to the world, among other things. In the photo above, the Alto computer is in the lower cabinet. The black box is the 2.5 megabyte disk drive. The Alto's unusual portrait display and an early optical mouse are on top.

How Bitcoin mining works

Bitcoin, a digital currency that can be transmitted across the Internet, has attracted a lot of attention lately. The Bitcoin system can be thought of as a ledger that keeps track of who owns which bitcoins, and allows them to be transferred from one person to another. The revolutionary feature of Bitcoin is there's no central machine or authority keeping track of things. Instead, the "blockchain" is stored across thousands of machines on the Internet, and the system works with nobody in charge.

To ensure everyone agrees on which transactions are valid, Bitcoin uses a process called mining—about every 10 minutes a block of outstanding transactions is mined, which makes the block "official". Bitcoin mining is designed to take an insanely huge amount of computational effort to mine a block, so nobody can take over the mining process. Miners compete against each other, generating trillions and trillions of "hashes" until someone finds a lucky one that succeeds in mining a block. It's hard to visualize just how difficult the hashing process is: finding a valid hash is less likely than finding a single grain of sand out of all the sand on Earth.

Bitcoin mining is based on cryptography, with a "hash function" that converts a block into an essentially random hash value. If the hash starts with 17 zeros,1 the block is successfully mined and is sent into the Bitcoin network. Most of the time the hash isn't successful, so the miner will modify the block slightly and try again, over and over billions of times. About every 10 minutes someone will successfully mine a block, and the process starts over. It's kind of like a lottery, where miners keep trying until someone "wins".

As a side-effect, mining adds new bitcoins to the system. For each block mined, miners currently get 12.5 new bitcoins (currently worth about $30,000) as well as fees, which encourages miners to do the hard work of mining blocks. With the possibility of receiving $30,000 every 10 minutes, miners invest in datacenters full of specialized mining hardware using huge amounts of electricity2.

Structure of a Bitcoin block. The data in yellow is hashed to yield the block hash, which becomes the identifier for the block. The block is linked to the previous block by including the previous block's hash, forming the blockchain. The Merkle root is a hash of all the transactions in the block.

The diagram above shows what actually goes into a block that is mined. The yellow part is the block header (which gets hashed), and it is followed by the transactions that go into the block. Each block contains the hash of the previous block, causing all the blocks to be linked together forming the blockchain. You can see that for the block above, the hash is successful because it starts with lots of zeros: 0000000000000000e067a478024addfecdc93628978aa52d91fabd4292982a50. The "Merkle root" is a hash of all the transactions that go into the block; this ensures that none of the mined transactions can be changed. The nonce is an arbitrary number; each attempt at mining the block changes the nonce.

To summarize the mining process: you collect new Bitcoin transactions and create a header (as in the diagram above). You do the cryptographic hash of the block. If by some incredible chance the result starts with 17 zeros you send the block into the Bitcoin network and "win" $30,000 in bitcoin. Otherwise, you change the nonce and try again. Probably none of the nonce values will work, so you change something else in the header and start over. If someone else succeeds in mining the block, you start over with the new previous block hash and new transactions.
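
In code, the whole mining loop boils down to a few lines. Here is a hedged Python sketch of the idea, with a placeholder header and a deliberately easy difficulty; real mining uses an 80-byte header assembled from live network data and a difficulty target far beyond this.

    import hashlib

    def mine(header_without_nonce: bytes, difficulty_zeros: int):
        """Try nonces until the double SHA-256 of the header starts with
        enough hex zeros (a simplified stand-in for the real difficulty target)."""
        prefix = "0" * difficulty_zeros
        for nonce in range(2**32):
            header = header_without_nonce + nonce.to_bytes(4, "little")
            h = hashlib.sha256(hashlib.sha256(header).digest()).digest()
            if h[::-1].hex().startswith(prefix):   # Bitcoin displays hashes byte-reversed
                return nonce, h[::-1].hex()
        return None   # no nonce worked; change something else in the header and retry

    # Toy example: 76 placeholder header bytes plus the 4-byte nonce, easy difficulty.
    print(mine(b"\x00" * 76, difficulty_zeros=2))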

I've simplified a lot of details above. For in-depth information on Bitcoin and mining, see my articles Bitcoins the hard way and Bitcoin mining the hard way.

The SHA-256 hash algorithm used by Bitcoin

Next, I'll discuss the hash function used in Bitcoin, which is based on a standard cryptographic hash function called SHA-256.3 The SHA-256 algorithm is so simple you can literally do it by hand, but it manages to scramble the data entirely unpredictably. The SHA-256 hash algorithm takes input blocks of 512 bits (i.e. 64 bytes), combines the data cryptographically, and generates a 256-bit (32 byte) output.

The SHA-256 algorithm consists of a simple round repeated 64 times. The diagram below shows one round, which takes eight 4-byte inputs, A through H, performs a few operations, and generates new values for A through H. As can be seen from the diagram, only A and E are changed in a round, while the others are just shifted over. Even so, after 64 rounds the input data will be completely scrambled, generating the unpredictable hash output.

The SHA-256 algorithm is pretty simple, about a page of pseudocode and can be easily implemented on a computer, even one as old as the Alto, using simple arithmetic and logic operations.5

SHA-256 round, from Wikipedia created by kockmeyer, CC BY-SA 3.0.

The dark blue boxes mix up the values in non-linear ways that are hard to analyze cryptographically. (If you could figure out a mathematical shortcut to generate successful hashes, you could take over Bitcoin mining.) The Ch "choose" box chooses bits from F or G, based on the value of input E. The Σ "sum" boxes rotate the bits of A (or E) to form three rotated versions, and then sum them together modulo 2 (exclusive-OR). The Ma "majority" box looks at the bits in each position of A, B, and C, and selects 0 or 1, whichever value is in the majority. The red boxes perform 32-bit addition, generating new values for A and E. The input data enters the algorithm through the Wt values. The Kt values are constants defined for each round.4
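
These mixing functions are just a handful of 32-bit bit operations. Here they are in Python, following the standard SHA-256 definitions (the same functions my BCPL code implements).

    MASK = 0xFFFFFFFF   # SHA-256 operates on 32-bit words

    def rotr(x, n):     # rotate a 32-bit word right by n bits
        return ((x >> n) | (x << (32 - n))) & MASK

    def ch(e, f, g):    # "choose": each bit of E selects the bit from F or G
        return ((e & f) ^ (~e & g)) & MASK

    def maj(a, b, c):   # "majority": the bit value that appears in at least two inputs
        return (a & b) ^ (a & c) ^ (b & c)

    def sigma0(a):      # Σ0: XOR of three rotations of A
        return rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)

    def sigma1(e):      # Σ1: XOR of three rotations of E
        return rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)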

Implementing SHA-256 in BCPL

I implemented SHA-256 in BCPL, a programming language that was a precursor to C. It's a lot like C with syntax changes, except the only type is 16-bit words. My SHA-256 code is in sha256.bcpl. The snippet below (the choose function) will give you an idea of what BCPL looks like. Each value is two words; BCPL does array access with !1 instead of [1]. Like C++, comments are indicated with a double slash. Unlike C, BCPL uses words for xor and not.

   // ch := (e and f) xor ((not e) and g)
   ch!0 = (e!0 & f!0) xor ((not e!0) & g!0)
   ch!1 = (e!1 & f!1) xor ((not e!1) & g!1)

The mining is done in bitcoin.bcpl: it creates a Bitcoin header (from hardcoded values), substitutes the nonce, and calls the SHA-256 code to hash the header twice. One interesting feature of the code is the structure definition for the Bitcoin header in BCPL (below), similar to a C struct. It defines a two word field for version, 16 words for prevHash, and so forth; compare with the Bitcoin structure diagram earlier. Interestingly, ^1,16 indicates an array with indices from 1 to 16 inclusive. BCPL is not 0-indexed or 1-indexed, but lets you start array indices at arbitrary values.7

structure HEADER:
[
version^1,2 word
prevHash^1,16 word
merkleRoot^1,16 word
timestamp^1,2 word
bits^1,2 word
nonce^1,2 word
]
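
For comparison, here's roughly the same 80-byte layout built with Python's struct module; the field values here are just placeholders:

   import struct

   # 4 + 32 + 32 + 4 + 4 + 4 bytes = the 80-byte Bitcoin header, little-endian
   header = struct.pack('<I32s32sIII',
                        2,             # version (placeholder)
                        b'\x00' * 32,  # prevHash (placeholder)
                        b'\x00' * 32,  # merkleRoot (placeholder)
                        0,             # timestamp (placeholder)
                        0,             # bits (placeholder)
                        0)             # nonce
   assert len(header) == 80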

The line below shows how structures are accessed in BCPL; it initializes one word of the header, using the slightly strange BCPL syntax. >>HEADER sort of casts the header variable to the HEADER structure described earlier. Then .prevHash^1 accesses the first word of the prevHash field. Also note that #13627 is an octal value; BCPL inconveniently doesn't include hex constants.6

header>>HEADER.prevHash^1 = #13627
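
Converting the 32-bit SHA-256 constants into pairs of 16-bit octal words is mechanical; a little Python helper along the lines of the script mentioned in note 6 might look like this:

   def to_octal_words(x):
       # split a 32-bit constant into two 16-bit words, formatted as BCPL octal
       return '#%o, #%o' % (x >> 16, x & 0xFFFF)

   print(to_octal_words(0x428a2f98))   # prints '#41212, #27630'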

The screenshot below shows the output as the program runs. The number on the left is each nonce in sequence as it is tested. The long hex number on the right is the resulting hash value. Each nonce results in a totally different hash value, due to the cryptographic hash algorithm.

On the Alto screen, each line shows a nonce value and the resulting hash.

On the Alto screen, each line shows a nonce value and the resulting hash.

Performance

The Alto can hash about 1.5 blocks per second, which is exceedingly slow by Bitcoin standards. At this speed, mining a single block on the Alto would take about 5000 times the age of the universe. The electricity would cost about 2x10^16 dollars. And you'd get 12.5 bitcoins (₿12.5) worth about $30,000. Obviously, mining Bitcoin on a Xerox Alto isn't a profitable venture.
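
Those numbers are easy to check, using the roughly 1-in-3x10^21 odds from note 1 and assuming a few hundred watts of power at typical electricity prices:

   hashes_needed = 3e21               # odds of a winning hash, from note 1
   seconds = hashes_needed / 1.5      # at 1.5 hashes per second: about 2e21 seconds
   age_of_universe = 4.35e17          # seconds (about 13.8 billion years)
   print(seconds / age_of_universe)   # roughly 5000

   watts = 300                        # assumed power draw: a few hundred watts
   kwh = watts * seconds / 3.6e6      # joules converted to kilowatt-hours
   print(kwh * 0.10)                  # roughly 2e16 dollars at an assumed $0.10/kWh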

In comparison, a USB stick miner performs 3.6 billion hashes per second. The Alto cost about $12,000 to manufacture in 1973 (about $60,000 in current dollars), while the stick miner costs $200. The Alto used hundreds of watts, while the stick miner uses about 4 watts. The enormous difference in performance is due to the huge increase in computer speed since the 1970s described by Moore's law, as well as the giant speed gain from custom Bitcoin mining hardware.

The Alto wasn't a particularly fast machine, performing about 400,000 instructions per second. The Alto's instruction set lacks many of the operations you'd find on a modern processor. For instance, the SHA-256 algorithm makes heavy use of Boolean operations including exclusive-OR and OR. These are pretty basic instructions that you'd find on even something as primitive as the 6502, but the Alto doesn't have them. Instead, these operations are implemented with an inefficient subroutine call that does a sequence of operations with the same effect.
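
For instance, exclusive-OR can be synthesized from AND and complement; the subroutine does something equivalent to this sketch (written in Python rather than Alto assembly):

   def xor16(a, b):
       # a OR b, built from AND and complement by De Morgan's law
       a_or_b = ~(~a & ~b)
       # a XOR b is (a OR b) AND NOT (a AND b); mask to the Alto's 16-bit words
       return (a_or_b & ~(a & b)) & 0xFFFF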

In addition, SHA-256 heavily uses bit shift and rotate operations. Modern processors typically have a "barrel shifter" that lets you shift by as many bits as you want in one step. The Alto's shift instructions, on the other hand, only shift a single bit. Thus, to shift by, say, 10 bits, the Alto code calls a subroutine that performs 10 separate shift instructions. The result is that a shift operation is much slower than you might expect.
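
In other words, the shift subroutine boils down to something like this sketch of the idea (again in Python, not the actual Alto routine):

   def shift_left(x, n):
       # the hardware can only shift one bit at a time, so shifting by n
       # takes n separate single-bit shifts on a 16-bit word
       for _ in range(n):
           x = (x << 1) & 0xFFFF
       return x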

You can see the Alto's arithmetic-logic board below. The Alto didn't use a microprocessor but instead built a CPU from simple TTL chips. You can see that even providing single-bit shifts required 8 separate chips—it's not surprising that the Alto doesn't do more complex shift operations.

The Alto doesn't have a microprocessor, but a CPU built from individual TTL chips. The ALU board has chips for arithmetic, chips for shifting, and chips for registers.

The Alto doesn't have a microprocessor, but a CPU built from individual TTL chips. The ALU board has chips for arithmetic, chips for shifting, and chips for registers.

I should point out that I'm not trying to write the best possible mining code for the Alto, and there are plenty of optimizations that one could do.8 For instance, writing the code in microcode would speed it up considerably, but Alto microcode is very hard to understand, let alone write. My blog post on generating the Mandelbrot set on the Alto discussed Alto performance optimizations in detail, so I won't say more about optimization here.

Conclusion

The screenshot below shows a successful hash, ending in a bunch of zeros.9 (I also display an image to show off the Alto's high-resolution bitmapped display.) Since the Alto would take well beyond the lifetime of the universe to find a successful hash, you might wonder how I found this. For this demonstration I simply used as input a block that had been successfully mined in the past, specifically block #286819. Thus, the algorithm succeeded quickly, but since it was an old block, I didn't make any money off it.

The algorithm found a successful hash, indicated by all the zeros at the end. Bitcoin graphic source probably MoneyWeek.

The algorithm found a successful hash, indicated by all the zeros at the end. Bitcoin graphic source probably MoneyWeek.

My code is on github if you want to look at BCPL code or try it out.

Notes and references

  1. At current difficulty, about 1 in 3x10^21 hashes will be successful at mining a block; a valid hash must start with approximately 17 zeros. The mining difficulty changes periodically to keep the time between blocks at approximately 10 minutes. As mining hardware gets faster, the difficulty factor is automatically updated to make mining more difficult so miners can never really catch up. 

  2. A while back I estimated that Bitcoin mining uses about as much electricity as the entire country of Cambodia. One paper puts mining's energy consumption comparable to Ireland's electricity usage. 

  3. Bitcoin uses "double SHA-256" which simply consists of applying the SHA-256 function twice.  

  4. The K constants used in the SHA-256 algorithm are provided by the NSA. You might worry that the NSA cleverly designed these constants to provide a backdoor. However, to prove that these are just arbitrary random constants, the NSA simply used the fractional parts of the cube roots of the first 64 primes.
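
For example, the first constant can be checked with a couple of lines of Python:

   # first 32 bits of the fractional part of the cube root of 2, the first prime
   frac = 2 ** (1 / 3) % 1
   print(hex(int(frac * 2 ** 32)))   # 0x428a2f98, the first SHA-256 round constant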

  5. While SHA-256 is easy to implement, that's not the case for all the cryptography used by Bitcoin. To create a Bitcoin transaction, the transaction must be signed with elliptic curve cryptography. This requires 256-bit modular arithmetic, which is about as hard as it sounds. Even a simple implementation is 1000 lines of C. I decided that porting this to BCPL was too much effort for me. 

  6. I wrote a simple Python script to convert the many 32-bit hexadecimal constants used by SHA-256 to 16-bit octal constants. It's a good thing that hex has almost entirely replaced octal; hex digits line up neatly with bytes and 16-bit words, while octal digits don't.

  7. Some people claim that BCPL arrays are 0-based. However, arrays in BCPL structures can start at an arbitrary value. I start with 1 because that's what the Alto code typically does. (This caused me no end of confusion until I realized the indices weren't zero-based.) 

  8. The code could be made 33% faster by taking advantage of an interaction between SHA-256 and the Bitcoin header structure. Bitcoin performs a SHA-256 hash twice on the 80-byte header. Since SHA-256 only handles 64 bytes at a time, the first hash requires two SHA-256 cycles. The second hash takes a third SHA-256 cycle. However, when mining, the only thing that changes from one attempt to the next is the nonce value in the header. It happens to be in the second half of the header, which means the SHA-256 cycle performed on the first half of the header can be done once and then reused. Thus, the double SHA-256 hash can be done with two SHA-256 cycles instead of three. Bitcoin mining usually performs this optimization, but I left it out of the code to make the code less confusing. 

  9. You might wonder why Bitcoin successful hashes start with a bunch of zeros, while the displayed hash ends with a bunch of zeros. The reason is that Bitcoin reverses the byte order of the SHA-256 output. If you look closely, you'll see that the displayed hash matches the hash in the Bitcoin block diagram if you reverse bytes (i.e. pairs of hex digits).
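
In Python terms, with a toy value ending in zero bytes:

   raw = bytes.fromhex('1234abcd' + '00' * 4)   # toy digest ending in four zero bytes
   # Bitcoin displays the hash with its bytes reversed, so the zeros lead
   print(raw[::-1].hex())                       # prints '00000000cdab3412'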