Op amp on the Moon: Reverse-engineering a hybrid op amp module

I recently obtained a mysterious electronic component in a metal can, flatter and slightly larger than a typical integrated circuit.1 After opening it up and reverse engineering the circuit, I determined that this was an op amp built for NASA in the 1960s using hybrid technology. It turns out that the development of this component ties together several important people in the history of semiconductors, and one of these op amps is on the Moon.

The module was packaged inside a TO-8 metal can, which is wider and flatter than a typical metal can IC. It is just a bit narrower than a dime.

To determine what this component did and how it worked, I sawed the top off the metal can with a jeweler's saw, revealing the circuitry inside. There wasn't an integrated circuit inside but a larger hybrid module, built from tiny individual transistors on a ceramic substrate. In the photo below, the ceramic wafer has grayish conductive traces printed on it, similar to a printed circuit board. Individual silicon transistors (the smaller shiny squares) are attached to the traces on the ceramic. Thin gold wires connect the components together, and connect the circuit to the external pins.

Sawing off the top of the metal can reveals the hybrid circuitry inside. For scale, the package is slightly smaller than a dime.

Hybrid circuitry was widely used in the 1960s before complex circuits could be put on an integrated circuit. (The popular IBM System/360 computers (1964), for instance, were built from hybrid modules rather than ICs.) Although integrated circuit op amps were first produced in 1963, hybrids could avoid limitations of IC manufacturing and produce better performance, so hybrids remained popular in the 1970s and even 1980s.

At first, I couldn't identify this part, so I asked op amp expert Walt Jung for help. He identified the "a" on the package as the logo of Amelco, which helped me track down the rather obscure 2404BG op amp manufactured by this now-forgotten company.2 This part sold in 1969 for $58.50 each (equivalent to about $300 today). In comparison, you can get a modern JFET quad op amp for under 25 cents.

Some op amp history

The op amp is one of the most popular components of analog circuits because of its flexibility and versatility. An op amp takes two input voltages, subtracts them, multiplies the difference by a huge value (100,000 or more), and outputs the result as a voltage. In practice, a feedback circuit forces the inputs to be nearly equal; with an appropriate feedback circuit, an op amp can be used as an amplifier, a filter, integrator, differentiator, or buffer, for instance. A key figure in the early development of op amps was George Philbrick who started a company of the same name. The commercial history of the op amp started in 1952 when Philbrick introduced the K2-W op amp, a two-tube module that made op amps popular.3
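
To make the feedback idea concrete, here's a quick numeric sketch (my own illustration with arbitrary resistor values, not anything from the module): even if the op amp's open-loop gain varies wildly, the closed-loop gain of a simple non-inverting stage stays pinned near 1 + Rf/Rg.

    # Closed-loop gain of an ideal non-inverting op amp stage.
    # Vout = A * (Vin - Vfb), with feedback Vfb = Vout * Rg / (Rf + Rg),
    # which solves to Vout/Vin = A / (1 + A*B) where B = Rg / (Rf + Rg).
    def closed_loop_gain(a, rf=99_000, rg=1_000):
        b = rg / (rf + rg)              # feedback fraction
        return a / (1 + a * b)

    for a in (10_000, 100_000, 1_000_000):   # widely varying open-loop gains
        print(f"A = {a:>9,}  ->  gain = {closed_loop_gain(a):.2f}")
    # All three results land close to 1 + Rf/Rg = 100; the higher A, the closer.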

I'll now jump to Jean Hoerni, who founded Amelco. One of the key events in the history of Silicon Valley was the 1957 departure from Shockley Semiconductor of eight employees, known as the "traitorous eight". They founded Fairchild, which led to dozens of startups and the growth of Silicon Valley. (Moore and Noyce, two of the eight, later left Fairchild to found Intel.) Physicist Jean Hoerni, of the traitorous eight, worked at Fairchild to improve transistors and succeeded beyond anyone's expectations. In 1959, he invented the planar transistor, which revolutionized semiconductor fabrication. (The planar process is essentially the technique used in modern transistors and ICs, using masks and diffusion on a flat silicon die.) Interestingly, the transistors in the op amp module (below) look identical to Hoerni's original teardrop-shaped planar transistors. Transistors from the 1970s and later look entirely different, so it was a bit surprising to find Hoerni's original design in use in this module.

An NPN transistor inside the hybrid module. Tiny bond wires are connected to the base and emitter, while the collector is on the underside.

Hoerni left Fairchild in 1961 and helped found a company called Amelco. It focused on semiconductors for space applications, avoiding direct competition with Fairchild. Linear (analog) integrated circuits were a major product for Amelco, with Amelco building op amps for Philbrick (the pioneering op amp company). Amelco also manufactured discrete transistors using Hoerni's planar process. At Amelco, Hoerni developed a technique to build a type of transistor called a JFET using his planar process, and these transistors became one of Amelco's most popular products. The key benefit of a JFET is that the input current to the transistor's gate is extraordinarily small, an advantage for applications such as op amps. Amelco used Hoerni's JFETs in the industry's first JFET op amp, a high-performance part.

Bob Pease,4 a famous analog circuit designer, ties these threads together. In the 1960s, Bob Pease designed op amps for Philbrick, including the Q25AH hybrid FET op amp (1965). Amelco manufactured this op amp for Philbrick, so Bob Pease visited Amelco to help them with some problems. The story (here and here) is that during his visit Bob Pease got in a discussion with some Amelco engineers about NASA's requirements for a new low-power, low-noise amplifier. Bob Pease proceeded to design an op amp during his coffee break that met NASA's stringent requirements. This op amp was used in a seismic probe that Apollo 12 left on the Moon in 1969, so there's one of these op amps on the Moon now. Amelco marketed this op amp as the 2401BG.

As for the 2404BG I disassembled, its circuitry is very similar to Bob Pease's 2401BG design5, so I suspect he designed both parts. The 2404BG op amp also made it to the Moon; it was used in the high voltage power supply of the Lunar Atmosphere Composition Experiment (LACE). LACE was a mass spectrometer left on the Moon by the Apollo 17 mission in 1972. (LACE determined that even though the Moon has almost no atmosphere, it does have some helium, argon, and possibly neon, ammonia, methane and carbon dioxide.)

In 1966, Amelco merged with Philbrick, forming Teledyne Philbrick Nexus, which after some twists and turns was eventually acquired by Microchip Technology in 2000. (Among other things, Microchip produces the AVR microcontrollers used in the Arduino.)

Inside the hybrid op amp

In this section, I'll describe the construction and circuitry of the 2404BG op amp in more detail. The photo below shows a closeup of the ceramic wafer and the components on it. The grayish lines on the ceramic are conductive circuit traces. Most of the squares are NPN and PNP transistors, each on a separate silicon die. The underside of the die is the transistor's collector, connected to a trace on the ceramic. Tiny gold wires are attached to the emitter and base of the transistor, wiring it into the circuit. The two rectangular transistors in the lower right are the JFETs. The large square in the middle is a collection of resistors, and a single resistor is in the upper right. Note that unlike integrated circuits that can be mass-produced on a wafer, hybrid modules required a large amount of expensive mechanical processing and wiring to mount and connect the individual components.

A closeup of the hybrid module.

I reverse engineered the circuitry of the op amp module and generated the schematic below.6 This circuit is fairly simple as op amps go, with about half the components of the classic 741 op amp. The inputs are buffered by the JFETs (green). The differential pair (blue) amplifies the input, directing current down one side of the pair or the other. The current source (red) generates a tiny fixed current for the differential pair using a current mirror circuit. The second stage amplifier (orange) provides additional amplification. The output transistors (purple) are set up in a class AB configuration to drive the output. The remaining components (uncolored) bias the output transistors. External capacitors on the compensation pins (8 and 9) prevent the op amp from oscillating.

Schematic of the 2404BG op amp.

Most of the resistors are on the single die in the middle of the module; this die is 1.7mm (1/16") on a side. The zig-zag shapes are thin-film resistors constructed from tantalum deposited on an oxide-coated silicon wafer. (One advantage of hybrid circuitry over integrated circuits was more accurate and better quality resistors.) The resistance is proportional to the length, so the meandering shapes allowed larger resistors to fit on the die. Around the outside of the die are metal pads; the bond wires attached to the pads connected the resistors to other parts of the circuit. Note the small circle to the left of the upper right pad; one innovation at Amelco was "mark-in-mark" targets to align the masks used for different layers of a chip.

The die in the middle of the module contains multiple resistors.
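
As a rough illustration of why those meandering shapes matter (my own toy numbers, not measurements of this die): a thin-film resistor's value is its sheet resistance times the number of "squares" of film (length divided by width), so folding a long trace back and forth packs far more resistance into the same small area.

    # Thin-film resistance: R = R_sheet * (length / width), i.e. "ohms per square".
    # Illustrative values only -- not measurements of the Amelco resistor die.
    r_sheet = 50.0        # ohms per square, plausible for a tantalum film
    width = 0.01          # trace width in mm
    for name, length_mm in (("straight run", 1.5), ("meander", 20.0)):
        squares = length_mm / width
        print(f"{name}: {squares:.0f} squares -> {r_sheet * squares / 1000:.1f} kohms")
    # straight run: 150 squares -> 7.5 kohms
    # meander: 2000 squares -> 100.0 kohms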

The current source circuit needed a very high-valued resistor, so it used a separate resistor die (below). This resistor used a longer, thinner trace to produce a higher resistance than the resistors on the previous die. Note the circular alignment target in the lower right. The die for this resistor is 0.8mm on a side.

This resistor controlled current through the op amp. The bond wire in the upper left was knocked off the pad during photography.

The photo below shows one of the junction FET transistors used in the op amp. The metal fingers connect to source and drain regions. The gate (green) is connected underneath. This design is almost identical to the first planar JFET that Hoerni invented in 1963. It was initially difficult to produce high-quality JFETs on an integrated circuit, which motivated the production of hybrid JFET op amps. It wasn't until 1974 that National Semiconductor engineers developed the ion implantation technique for fabricating consistent, high-quality JFETs and used this "BIFET" technique to build better JFET op amp integrated circuits.

A FET transistor inside the module. The die is 0.6×0.3mm.

The diagram below compares the structure of the NPN and PNP transistors in the module, with photos at top and a cross-section diagram below.

Each transistor starts with a square die of silicon, which is doped with impurities to form N and P regions with different properties. The N and P doped silicon show up as different colors under the microscope. The shiny metal layer on top is visible, with one bond wire attached to the central emitter. A second bond wire is attached to the base region surrounding the emitter; the "teardrop" shape provides a wider area to attach the base wire. The underside of the die is the collector, which makes contact with the wiring on the ceramic wafer. The NPN transistor follows the straightforward planar structure. The PNP transistor, however, required an extra "annular ring" to operate at the op amp's higher voltages.7

Comparison of NPN and PNP transistors in the module. Each transistor is 0.5mm on a side. Approximate cross-sections are shown below.

Conclusions

This random component that I opened up turned out to have a more interesting history than I expected. It ties together the early days of op amps with Philbrick, Bob Pease's analog circuit development, now-forgotten Amelco, and NASA's scientific experiments on the Moon. The transistors inside this module were built using Hoerni's original planar designs, providing a glimpse into the development of the planar process that revolutionized semiconductors. Finally, this op amp shows the capabilities of hybrid technology, now almost completely eliminated by integrated circuits.

If you enjoyed this look inside a hybrid op amp, you may also like my analysis of another JFET op amp and the famous 741 op amp. I announce my latest blog posts on Twitter, so follow me at @kenshirriff. I also have an RSS feed. Thanks to op amp guru Walt Jung for help identifying the module.

Notes and references

  1. The module was packaged in a standard 12-pin TO-8 package. Most metal can integrated circuits are in the smaller TO-5 package, but the larger hybrid circuits require more room. 

  2. The "15818" on the package is a CAGE code, a NATO identifier used to track suppliers. Originally, 15818 was assigned to Amelco; due to mergers, this number now shows up as TelCom Semiconductor

  3. Several sources provided much of the information for this blog post. The book History of Semiconductor Engineering discusses in great detail the history of various semiconductor companies and the people involved. For an extremely detailed history of op amps, including the development of JFET op amps in the 1970s, see Op Amp History by Walt Jung, along with his Op Amp Applications Handbook. IC Op-Amps Through the Ages also has a history of op amps. 

  4. Bob Pease wrote a popular column "Pease Porridge" on analog circuits. He also wrote books such as Troubleshooting Analog Circuits

  5. Bob Pease's article What’s All This 2401BG Stuff, Anyhow? (page 54) provides a schematic of the 2401BG (below). Comparing the schematics, the 2401BG is very similar to the 2404BG that I examined. (I've colored the functional blocks to match my 2404BG schematic to make comparison easier.)

    Bob Pease's schematic of the 2401BG hybrid op amp that he designed for NASA.

    The main difference is the output stage: the 2401BG takes the output directly from the second amplifying pair (with a current mirror at the bottom to sink current), while the 2404BG adds a class AB output stage. The 2401BG also has a separate current mirror for the bases of the input NPN transistors. 

  6. After I reverse-engineered the op amp schematic, I found a 1968 databook with a schematic for an Amelco hybrid op amp. The two schematics are almost identical, except the databook schematic includes two compensation capacitors, which are external on the 2404BG.

    Photo of an Amelco hybrid op amp.

    The databook provided the above photo of the hybrid op amp, which is completely different from the 2404BG I examined. The databook did not give a part number (which is unusual for a databook), so I suspect this was a version of the 2404BG under development at the time. 

  7. You'd expect NPN and PNP transistors to be symmetrical, but the PNP transistors needed to be different to support high-voltage operation. The problem was that an interaction between the P region and the silicon dioxide on top caused N-type properties in a thin layer of the weakly-doped P region. At higher voltages, this could cause the transistor to short out. The solution was to create a strongly-doped P+ "annular ring" to interrupt this unwanted N behavior. Details in Jack Haenichen oral history and patent 3226611

Inside the Apollo Guidance Computer's core memory

The Apollo Guidance Computer (AGC) provided guidance, navigation and control onboard the Apollo flights to the Moon. This historic computer was one of the first to use integrated circuits, containing just two types of ICs: a 3-input NOR gate for the logic circuitry and a sense amplifier IC for the memory. It also used numerous analog circuits built from discrete components using unusual cordwood construction.

The Apollo Guidance Computer. The empty space on the left held the core rope modules. The connectors on the right communicate between the AGC and the spacecraft.

We1 are restoring the AGC shown above. It is a compact metal box with a volume of 1 cubic foot and weighs about 70 pounds. The AGC had very little memory by modern standards: 2048 words of RAM in erasable core memory and 36,864 words of ROM in core rope memory. (In this blog post, I'll discuss just the erasable core memory.) The core rope ROM modules (which we don't have)2 would be installed in the empty space on the left. On the right of the AGC, you can see the two connectors that connected the AGC to other parts of the spacecraft, including the DSKY (Display/Keyboard).3

By removing the bolts holding the two trays together, we could disassemble the AGC. Pulling the two halves apart takes a surprising amount of force because of the three connectors in the middle that join the two trays. The tray on the left is the "A" tray, which holds the logic and interface modules. The tangles of wire on the left of the tray are the switching power supplies that convert 28 volts from the spacecraft to 4 and 14 volts for use in the AGC. The tray on the right is the "B" tray, which holds the memory circuitry, oscillator and alarm. The core memory module was removed in this picture; it goes in the empty slot in the middle of the B tray.

The AGC is implemented with dozens of modules in two trays. The trays are connected through the three connectors in the middle.

Core memory overview

Core memory was the dominant form of computer storage from the 1950s until it was replaced by semiconductor memory chips in the early 1970s. Core memory was built from tiny ferrite rings called cores, storing one bit in each core. Cores were arranged in a grid or plane, as in the highly-magnified picture below. Each plane stored one bit of a word, so a 16-bit computer would use a stack of 16 core planes. Each core typically had 4 wires passing through it: X and Y wires in a grid to select the core, a diagonal sense line through all the cores for reading, and a horizontal inhibit line for writing.4

Closeup of a core memory (not AGC). Photo by Jud McCranie (CC BY-SA 4.0).

Each core stored a bit by being magnetized either clockwise or counterclockwise. A current in a wire through the core could magnetize the core with the magnetization direction matching the current's direction. To read the value of a core, the core was flipped to the 0 state. If the core was in the 1 state previously, the changing magnetic field produced a voltage in the sense wire threaded through the cores. But if the core was in the 0 state to start, the sense line wouldn't pick up a voltage. Thus, forcing a core to 0 revealed the core's previous state (but erased it in the process).

A key property of the cores was hysteresis: a small current had no effect on a core; the current had to be above a threshold to flip the core. This was very important because it allowed a grid of X and Y lines to select one core from the grid. By energizing one X line and one Y line each with half the necessary current, only the core where both lines crossed would get enough current to flip and other cores would be unaffected. This "coincident-current" technique made core memory practical since a few X and Y drivers could control a large core plane.
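
Here's a toy model of coincident-current selection (my own sketch with made-up current values, not AGC numbers): each X or Y line carries only a bit more than half the flipping threshold, so only the core at the intersection of the two energized lines receives enough current to flip.

    # Toy model of coincident-current selection; current values are invented.
    THRESHOLD = 1.0       # current needed to flip a core
    HALF_SELECT = 0.6     # current on a single X or Y line: not enough alone

    def current_at(core_x, core_y, sel_x, sel_y):
        """Current seen by the core at (core_x, core_y) when X line sel_x
        and Y line sel_y are energized."""
        current = 0.0
        if core_x == sel_x:
            current += HALF_SELECT
        if core_y == sel_y:
            current += HALF_SELECT
        return current

    for core in [(3, 5), (3, 9), (8, 5), (8, 9)]:
        i = current_at(*core, sel_x=3, sel_y=5)
        print(core, "flips" if i >= THRESHOLD else "unaffected")
    # Only (3, 5) flips; every other core sees at most a half-select current.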

The AGC's erasable core memory system

The AGC used multiple modules in the B tray to implement core memory. The Erasable Memory module (B12) contained the actual cores, 32768 cores to support 2048 words; each word was 15 bits plus a parity bit. Several more modules contained the supporting circuitry for the memory.5 The remainder of this article will describe these modules.

The erasable memory module in the Apollo Guidance Computer, with the supporting modules next to it. Image courtesy of Mike Stewart.

The photo below shows the Erasable Memory module after removing it from the tray. Unlike the other modules, this module has a black metal cover. Internally, the cores are encapsulated in Silastic (silicone rubber), which is then encapsulated in epoxy. This was intended to protect the delicate cores inside, but it took NASA a couple tries to get the encapsulation right. Early modules (including ours) were susceptible to wire breakages from vibrations. At the bottom of the modules are the gold-plated pins that plug into the backplane.

The erasable core memory module from the Apollo Guidance Computer.

Core memory used planes of cores, one plane for each bit in the word. The AGC had 16 planes (which were called mats), each holding 2048 bits in a 64×32 grid. Note that each mat consists of eight 16×16 squares. The diagram below shows the wiring of the single sense line through a mat. The X/Y lines were wired horizontally and vertically. The inhibit line passed through all the cores in the mat; unlike the diagonal sense line it ran vertically.

The sense line wiring in an AGC core plane (mat). The 2048 cores are in a 64×32 grid.
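
A quick sanity check of the mat numbers (my own arithmetic):

    cores_per_mat = 64 * 32                       # 2048 cores: one bit per word
    total_cores = 16 * cores_per_mat              # 32768 cores = 2048 words x 16 bits
    squares_per_mat = cores_per_mat // (16 * 16)  # eight 16x16 squares per mat
    print(cores_per_mat, total_cores, squares_per_mat)   # 2048 32768 8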

Most computers physically stacked the core planes on top of each other, but the AGC used a different mechanical structure: the mats (planes) were accordion-folded to fit compactly into the module, as shown in the diagram below. (Each of the 16 mats is outlined in cyan.) When folded, the mats formed a block (oriented vertically in the diagram below) that was mounted horizontally in the core module.

This folding diagram shows how 16 mats are folded into the core module. (Each cyan rectangle indicates a mat.)

The photo below shows the memory module with the cover removed. (This is a module on display at the CHM, not our module.) Most of the module is potted with epoxy, so the cores are not visible. The most noticeable features are the L-shaped wires on top. These connect the X and Y pins to 192 diodes. (The purpose of the diodes will be explained later.) The diodes are hidden underneath this wiring in two layers, mounted horizontally cordwood-style. The leads from the diodes are visible as they emerge and connect to terminals on top of the black epoxy.

The AGC's memory module with the cover removed. This module is on display at the CHM. Photo courtesy of Mike Stewart.

Marc took X-rays of the module and I stitched the photos together (below) to form an image looking down into the module. The four rows of core mats in the folding diagram correspond to the four dark blocks. You can also see the two rows of diodes as two darker horizontal stripes. At this resolution, the wires through the cores and the tangled mess of wires to the pins are not visible; these wires are very thin 38-gauge wires, much thinner than the wires to the diodes.

Composite X-ray image of the core memory module. The stitching isn't perfect in the image because the parallax and perspective changed in each image. In particular, the pins appear skewed in different directions.

The diagram below shows a cross-section of the memory module. (The front of the module above corresponds to the right side of the diagram.) The diagram shows how the two layers of diodes (blue) are arranged at the top, and are wired (red) to the core stack (green) through the "feed thru". Also note how the pins (yellow) at the bottom of the module rise up through the epoxy and are connected by wires (red) to the core stack.

Cross-section of memory module showing internal wiring. From Apollo Computer Design Review page 9-39 (Original block II design.)

Addressing a memory location

The AGC's core memory holds 2048 words in a 64×32 matrix. To select a word, one of the 64 X select lines is energized along with one of the 32 Y select lines. One of the challenges of a core memory system is driving the X and Y select lines. These lines need to be driven at high current (hundreds of milliamps). In addition, the read and write currents flow in opposite directions, so the lines need bidirectional drivers. Finally, the number of X and Y lines is fairly large (64 + 32 for the AGC), so using a complex driver circuit on each line would be too bulky and expensive. In this section, I'll describe the circuitry in the AGC that energizes the right select lines for a particular address.

The AGC uses a clever trick to minimize the hardware required to drive the X and Y select lines. Instead of using 64 X line drivers, the AGC has 8 X drivers at the top of the matrix, and 8 at the bottom of the matrix. Each of the 64 select lines is connected to a different pair of top and bottom drivers, so energizing one top driver and one bottom driver produces current through a single X select line. As a result, only 8+8 X drivers are required rather than 64.6 The Y drivers are similar, using 4 on one side and 8 on the other. The downside of this approach is that 192 diodes are required to prevent "sneak paths" through multiple select lines.7

Illustration of how "top" and "bottom" drivers work together to select a single line through the core matrix. Original diagram here.

The diagram above demonstrates this technique for the vertical lines in a hypothetical 9×5 core array. There are three "top" drivers (A, B and C), and three "bottom" drivers (1, 2 and 3). If driver B is energized positive and driver 1 is energized negative, current flows through the core line highlighted in red. Reversing the polarity of the drivers reverses the current flow, and energizing different drivers selects a different line. To see the need for diodes, note that in the diagram above, current could flow from B to 2, up to A and finally down to 1, for instance, incorrectly energizing multiple lines.
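
In software terms, each select line is identified simply by its (top driver, bottom driver) pair. Here's a small sketch of the idea (mine, not the AGC's actual wiring order), along with the diode count explained in footnote 7:

    # Sketch of selecting one of 64 X lines with 8 "top" and 8 "bottom" drivers.
    # The real AGC wiring order may differ; the point is the pairing.
    def drivers_for(x_line):
        return divmod(x_line, 8)        # (top driver, bottom driver)

    def x_line_for(top, bottom):
        return top * 8 + bottom         # 8 * 8 = 64 distinct lines

    print(drivers_for(42))              # (5, 2): energize top driver 5 and bottom driver 2
    print(x_line_for(5, 2))             # 42

    # Each select line needs two diodes (read and write currents flow in opposite
    # directions), blocking sneak paths through unselected lines:
    print(2 * (64 + 32))                # 192 diodes for 64 X and 32 Y lines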

The address decoder logic is in tray "A" of the AGC, implemented in several logic modules.9 The AGC's logic is entirely built from 3-input NOR gates (two per integrated circuit), and the address decoder is no exception. The image below shows logic module A14. (The other logic modules look essentially the same, but the internal printed circuit board is wired differently.) The logic modules all have a similar design: two rows of 30 ICs on each side, for 120 ICs in total, or 240 3-input NOR gates. (Module A14 has one blank location on each side, for 118 ICs in total.) The logic module plugs into the AGC via the four rows of pins at the bottom.10

Much of the address decoding is implemented in logic module A14. Photo courtesy of Mike Stewart.

The diagram below shows the circuit to generate one of the select signals (XB6—X bottom 6).11 The NOR gate outputs a 1 if the inputs are 110 (i.e. 6). The other select signals are generated with similar circuits, using different address bits as inputs.

This address decode circuit generates one of the select signals. The AGC has 28 decode circuits similar to this.

Each integrated circuit implemented two NOR gates using RTL (resistor-transistor logic), an early logic family. These ICs were costly; they cost $20-$30 each (around $150 in current dollars). There wasn't much inside each IC, just three transistors and eight resistors. Even so, the ICs provided a density improvement over the planned core-transistor logic, making the AGC possible. The decision to use ICs in the AGC was made in 1962, amazingly just four years after the IC was invented. The AGC was the largest consumer of ICs from 1962 to 1965 and ended up being a major driver of the integrated circuit industry.

Each IC contains two NOR gates implemented with resistor-transistor logic. From Schematic 2005011.
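
Since the NOR gate is the AGC's only logic element, every other function, including the address decoders, has to be composed from NORs. A small illustration of the principle (my own sketch; I haven't traced whether the real decoder uses separate inverters or taps complemented register outputs):

    # Everything in the AGC's logic is built from 3-input NOR gates.
    def nor3(a, b, c=0):
        return 0 if (a or b or c) else 1

    def inv(a):                 # a NOR with one input used is just an inverter
        return nor3(a, 0, 0)

    # Decode address bits (s3, s2, s1) == (1, 1, 0), i.e. 6, using only NORs:
    # invert the bits that should be 1, then NOR everything together; the
    # output is 1 only when all three NOR inputs are 0.
    def select_6(s3, s2, s1):
        return nor3(inv(s3), inv(s2), s1)

    for addr in range(8):
        bits = ((addr >> 2) & 1, (addr >> 1) & 1, addr & 1)
        print(addr, select_6(*bits))    # prints 1 only for addr == 6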

The die photo below shows the internal structure of the NOR gate; the metal layer of the silicon chip is most visible.12 The top half is one NOR gate and the bottom half is the other. The metal wires connect the die to the 10-pin package. The transistors are clumped together in the middle of the chip, surrounded by the resistors.

Die photo of the dual 3-input NOR gate used in the AGC. Pins are numbered counterclockwise; pin 3 is to the right of the "P". Photo by Lisa Young, Smithsonian.

Erasable Driver Modules

Next, the Erasable Driver module converts the 4-volt logic-level signals from the address decoder into 14-volt pulses with controlled current. The AGC has two identical Erasable Driver modules, in slots B9 and B10.5 Two modules are required due to the large number of signals: 28 select lines (X and Y, top and bottom), 16 inhibit lines (one for each bit), and a dozen control signals.

The select line driver circuits are simple transistor switching circuits: a transistor and two resistors. Other circuits, such as the inhibit line drivers, are a bit more complex because the shape and current of the pulse need to be carefully matched to the core module. This circuit uses three transistors, an inductor, and a handful of resistors and diodes. The resistor values are carefully selected during manufacturing to provide the desired current.

The erasable driver module, front and back. Photo courtesy of Mike Stewart.

This module, like the other non-logic modules, is built using cordwood construction. In this high-density construction, components were inserted into holes in the module, passing through from one side of the module to the other, with their leads exiting on either side. (Except for transistors, with all three leads on the same side.) On each side of the module, point-to-point wiring connected the components with welded connections. In the photo below, note the transistors (golden, labeled with Q), resistors (R), diodes (CR for crystal rectifier, with K indicating the cathode), large capacitors (C), inductor (L), and feed-throughs (FT). A plastic sheet over the components conveniently labels them; for instance, "7Q1" means transistor Q1 for circuit 7 (of a repeated circuit). These labels match the designations on the schematic. At the bottom are connections to the module pins. Modules that were flown on spacecraft were potted with epoxy so the components were protected against vibration. Fortunately, our AGC was used on the ground and left mostly unpotted, so the components are visible.

A closeup of the Erasable Driver module, showing the cordwood construction. Photo courtesy of Mike Stewart.

Current Switch Module

You might expect that the 14-volt pulses from the Erasable Driver modules would drive the X and Y lines in the core. However, the signals go through one more module, the Current Switch module, in slot B11 just above the core memory module. This module generates the bidirectional pulses necessary for the X and Y lines.

The driver circuits are very interesting as each driver includes a switching core in the circuit. (These cores are much larger than the cores in the memory itself.)13 The driver uses two transistors: one for the read current, and the other for the write current in the opposite direction. The switching core acts kind of like an isolation transformer, providing the drive signal to the transistors. But the switching core also "remembers" which line is being used. During the read phase, the address decoder flips one of the cores. This generates a pulse that drives the transistor. During the write phase, the address decoder is not involved. Instead, a "reset" signal is sent through all the driver cores. Only the core that was flipped in the previous phase will flip back, generating a pulse that drives the other transistor. Thus, the driver core provides memory of which line is active, avoiding the need for a flip flop or other latch.

The current switch module. (This is from the CHM as ours is encapsulated and there's nothing to see but black epoxy.) Photo courtesy of Mike Stewart.

The diagram below shows the schematic of one of the current switches. The heart of the circuit is the switching core. If the driver input is 1, winding A will flip the core when the set strobe is pulsed. This produces pulses on the other windings; the positive pulse on winding B will turn on transistor Q55, pulling the output X line low for reading.14 The output is connected via eight diodes to eight X top lines through the core. A similar bottom select switch (without diodes) will pull X bottom lines high; the single X line with the top low and the bottom high will be energized, selecting that row. For a write, the reset line is pulled low, energizing winding D. If the core had flipped earlier, it will flip back, generating a pulse on winding C that will turn on transistor Q56, and pull the output high. But if the core had not flipped earlier, nothing happens and the output remains inactive. As before, one X line and one Y line through the core planes will be selected, but this time the current is in the opposite direction for a write.

Schematic of one of the current switches in the AGC. This switch is the driver for X top line 0. The schematic shows one of the 8 pairs of diodes connected to this driver.
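
In other words, each switching core behaves like a one-bit latch that remembers whether its driver was selected during the read phase. A simplified behavioral model (my own; it ignores all the actual pulse shaping and circuit values):

    # Simplified model of one current-switch driver.  The switching core
    # remembers whether it was flipped during the read phase; only that
    # driver fires again during the write (reset) phase.
    class SwitchDriver:
        def __init__(self):
            self.core_set = False

        def read_phase(self, selected):
            if selected:                 # the address decoder flips this core
                self.core_set = True
                return "read pulse"      # drives the read transistor
            return None

        def write_phase(self):
            if self.core_set:            # only a previously-flipped core flips back
                self.core_set = False
                return "write pulse"     # drives the write transistor
            return None                  # unselected drivers stay quiet

    drivers = [SwitchDriver() for _ in range(8)]
    print(drivers[3].read_phase(selected=True))     # 'read pulse'
    print([d.write_phase() for d in drivers])       # only driver 3 fires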

The photo below shows one of the current switch circuits and its cordwood construction. The switching core is the 8-pin black module between the transistors. The core and the wires wound through it are encapsulated with epoxy, so there's not much to see. At the bottom of the photo, you can see the Malco Mini-Wasp pins that connect the module to the backplane.

Closeup of one switch circuit in the Current Switch Module. The switching core (center) has transistors on either side.

Sense Amplifier Modules

When a core flips, the changing magnetic field induces a weak signal in the corresponding sense line. There are 16 sense lines, one for each bit in the word. The 16 sense amplifiers receive these signals, amplify them, and convert them to logic levels. The sense amplifiers are implemented using a special sense amplifier IC. (The AGC used only two different ICs, the sense amplifier and the NOR gate.) The AGC has two identical sense amplifier modules, in slots B13 and B14; module B13 is used by the erasable core memory, while B14 is used by the fixed memory (i.e. core rope used for ROM).

The signal from the core first goes through an isolation transformer. It is then amplified by the IC and the output is gated by a strobe transistor. The sense amplifier depends on carefully-controlled voltage levels for bias and thresholds. These voltages are produced by voltage regulators on the sense amplifier modules that use Zener diodes for regulation. The voltage levels are tuned during manufacturing by selecting resistor values and optional diodes, matching each sense amplifier module to the characteristics of the computer's core memory module.

The photo below shows one of the sense amp modules. The eight repeated units are eight sense amplifiers; the eight other sense amplifiers are on the other side of the module. The reddish circles are the pulse transformers, while the lower circles are the sense amplifier ICs. The voltage regulation is in the middle and right of the module. On top of the module (front in the photo) you can see the horizontal lines of the nickel ribbon that connects the circuits; it is somewhat similar to a printed circuit board.

Sense amplifier module with top removed. Note the nickel ribbon interconnect at the top of the module.

The photo below shows a closeup of the module. At the top are two amplifier integrated circuits in metal cans. Below are two reddish pulse transformers. An output driver transistor is between the pulse transformers.15 The resistors and capacitors are mounted using cordwood construction, so one end of the component is wired on this side of the module, and one on the other side. Note the row of connections at the top of the module; these connect to the nickel ribbon interconnect.

Closeup of the sense amplifier module for the AGC. The sense amplifier integrated circuits are at the top and the reddish pulse transformers are below. The pins are at the bottom and the wires at the top go to the nickel ribbon, which is like a printed circuit board.

The diagram below shows the circuitry inside each sense amp integrated circuit. The sense amp chip is considerably more complex than the NOR gate IC. The chip receives the sense amp signal inputs from the pulse transformer and the differential amplifier amplifies the signal.16 If the signal exceeds a threshold, the IC outputs a 1 bit when clocked by the strobe.

Circuitry inside the sense amp integrated circuit for the AGC.
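
Functionally, each sense amplifier boils down to: amplify the differential input, check whether its magnitude clears a threshold (the pulse can be of either polarity; see note 16), and gate the result with the strobe. A hedged sketch with made-up gain and threshold values:

    # Simplified sense-amplifier behavior; gain and threshold are invented.
    GAIN = 100.0
    THRESHOLD = 1.0

    def sense_bit(v_plus, v_minus, strobe):
        amplified = GAIN * (v_plus - v_minus)   # differential amplification
        detected = abs(amplified) > THRESHOLD   # core pulses can be either polarity
        return 1 if (detected and strobe) else 0

    print(sense_bit(0.025, 0.005, strobe=True))   # flipped core: 20 mV pulse -> 1
    print(sense_bit(0.001, 0.000, strobe=True))   # no flip, just noise -> 0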

Writes

With core memory, the read operation and write operation are always done in pairs. Since a word is erased when it is read, it must then be written, either with the original value or a new value. In the write cycle, the X and Y select lines are energized to flip the core to 1, using the opposite current from the read cycle.

Since the same X and Y select lines go through all the planes, all bits in the word would be set to 1. To store a 0 bit, each plane has an inhibit line that goes through all the cores in the plane. Energizing the inhibit line in the opposite direction to the X and Y select lines partially cancels out the current and prevents the core from receiving enough current to flip it, so the bit remains 0. Thus, by energizing the appropriate inhibit lines, any value can be written to the word in core. The 16 inhibit lines are driven by the Erasable Driver modules.
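
Putting the read and write halves together, one memory cycle can be sketched like this (my simplification: planes are just arrays, and the inhibit line is modeled as simply holding a bit at 0):

    # Sketch of one AGC memory cycle: destructive read of all 16 planes,
    # then a write that restores the old word or stores a new one.
    WORD_BITS = 16

    def memory_cycle(planes, x, y, new_word=None):
        # Read phase: force the selected core in every plane to 0 and sense it.
        read_word = [planes[b][x][y] for b in range(WORD_BITS)]
        for b in range(WORD_BITS):
            planes[b][x][y] = 0
        # Write phase: drive every plane's selected core toward 1, but energize
        # the inhibit line on each plane whose bit should remain 0.
        word = read_word if new_word is None else new_word
        for b in range(WORD_BITS):
            planes[b][x][y] = 1 if word[b] else 0
        return read_word

    planes = [[[0] * 32 for _ in range(64)] for _ in range(WORD_BITS)]
    memory_cycle(planes, 10, 7, new_word=[1, 0] * 8)   # store a test pattern
    print(memory_cycle(planes, 10, 7))                 # read it back (and restore it)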

The broken wire

During the restoration, we tested the continuity of all the lines through the core module. Unfortunately, we discovered that the inhibit line for bit 16 is broken internally. NASA discovered in early testing that wires could be sheared inside the module, due to vibrations between the silicone encapsulation and the epoxy encapsulation. They fixed this problem in the later modules that were flown, but our module had the original faulty design. We attempted to find the location of the broken wire with X-rays, but couldn't spot the break. Time-domain reflectometry suggests the break is inconveniently located in the middle of the core planes. We are currently investigating options to deal with this. Marc has a series of AGC videos; the video below provides detail on the broken wire in the memory module.

Conclusion

Core memory was the best storage technology in the 1960s and the Apollo Guidance Computer used it to get to the Moon. In addition to the core memory module itself, the AGC required several modules of supporting circuitry. The AGC's logic circuits used early NOR-gate integrated circuits, while the analog circuits were built from discrete components and sense amplifier ICs using cordwood construction.

The erasable core memory in the AGC stored just 2K words. Because each bit in core memory required a separate physical ferrite core, density was limited. Once semiconductor memory became practical in the 1970s, it rapidly replaced core memory. The image below shows the amazing density difference between semiconductor memory and core memory: 64 bits of core take about the same space as 64 gigabytes of flash.

Core memory from the IBM 1401 compared with modern flash memory.

I announce my latest blog posts on Twitter, so follow me @kenshirriff for future articles. I also have an RSS feed. See the footnotes for Apollo manuals17 and more information sources18. Thanks to Mike Stewart for supplying images and extensive information.

Notes and references

  1. The AGC restoration team consists of Mike Stewart (creator of FPGA AGC), Carl Claunch, Marc Verdiell (CuriousMarc on YouTube), and myself. The AGC that we're restoring belongs to a private owner who picked it up at a scrap yard in the 1970s after NASA scrapped it. For simplicity I refer to the AGC we're restoring as "our AGC".

    The Apollo flights had one AGC in the command module (the capsule that returned to Earth) and one AGC in the lunar module. In 1968, before the Moon missions, NASA tested a lunar module (with astronauts aboard) in a giant vacuum chamber in Houston to ensure that everything worked in space-like conditions. We believe our AGC was installed in that lunar module (LTA-8). Since this AGC was never flown, most of the modules are not potted with epoxy. 

  2. We don't have core rope modules, but we have a core rope simulator from the 1970s. Yes, we know about Francois; those are ropes for the earlier Block I Apollo Guidance Computer and are not compatible with our Block II AGC. 

  3. Many people have asked if we talked to Fran about the DSKY. Yes, we have. 

  4. There were alternative ways to wire a core plane. Using a diagonal sense wire reduced the noise picked up from the X and Y pulses, but some systems used a horizontal sense wire instead. Some core systems used the same wire for sense and inhibit (which simplified manufacturing), but that made noise rejection more complex.

  5. If you look carefully at the pictures of modules installed in the AGC, the Erasable Driver module in B10 is upside down. This is not a mistake, but how the system was designed. I assume this simplified the backplane wiring somehow, but it looks very strange. 

  6. The IBM 1401 business computer, for example, used a different approach to generate the X and Y select lines. To generate the 50 X select signals, it used a 5×10 matrix of cores (separate from the actual memory cores). Two signals into the matrix were energized at the same time, flipping one of the 50 cores and generating a pulse on that line. Thus, only 5+10 drivers were needed instead of 50. The Y select signals were similar, using an 8×10 matrix. Details here

  7. The AGC core memory required 192 diodes to prevent sneak paths, where a pulse could go backward through the wrong select lines. Each line required two diodes since the lines are driven one direction for read and the opposite for write. Since there are 64 X lines and 32 Y lines, 2×(64+32) = 192 diodes were required. These diodes were installed in two layers in the top of the core memory module. 

  8. The memory address is mapped onto the select lines as follows. The eight X bottom signals are generated from the lowest address bits, S01, S02 and S03. (Bits in a word are numbered starting at 1, not 0.) Each decoder output has a NOR gate to select a particular bit pattern, along with four more NOR gates as buffers. The eight X top signals are generated from address bits S04, S05, and S06. The four Y bottom signals are generated from address bits S07 and S08. The eight Y top signals are generated from address bits EAD09, EAD10, and EAD11; these in turn were generated from S09 and S10 along with bank select bits EB9, EB10 and EB11. (The AGC used 12-bit addresses, allowing 4096 words to be addressed directly. Since the AGC had 38K of memory in total, it had a complex memory bank system to access the larger memory space.) 

  9. For address decoding, the X drivers were in module A14, the Y top drivers were in A7 and the Y bottom drivers in A14. The memory address was held in the memory address register (S register) in module A12, which also held a bit of decoding logic. Module A14 also held some memory timing logic. In general, the AGC's logic circuits weren't cleanly partitioned across modules since making everything fit was more important than a nice design. 

  10. One unusual thing to notice about the AGC's logic circuitry is there are no bypass capacitors. Most integrated circuit logic has a bypass capacitor next to each IC to reduce noise, but NASA found that the AGC worked better without bypass capacitors. 

  11. The "Blue-nose" gate doesn't have the pull-up resistor connected, making it open collector. It is presumably named after its blue appearance on blueprints. Blue-nose outputs can be connected together to form a NOR gate with more inputs. In the case of the address decoder, the internal pull-up resistor is not used so the Erasable Driver module (B9/B10) can pull the signal up to BPLUS (+14V) rather than the +4V logic level. 

  12. The AGC project used integrated circuits from multiple suppliers, so die photos from different sources show different layouts.  

  13. The memory cores and the switching core were physically very different. The cores in the memory module had a radius between 0.047 and 0.051 inches (about 1.2mm). The switching cores were much larger (either .249" or .187" depending on the part number) and had 20 to 50 turns of wire through them. 

  14. For some reason, the inputs to the current switches are numbered starting at 0 (XT0E-XT7E) while the outputs are numbered starting at 1 (1AXBF-8AXBF). Just in case you try to understand the schematics. 

  15. The output from the sense amplifiers is a bit confusing because the erasable core memory (RAM) and fixed rope core memory (ROM) outputs are wired together. The RAM has one sense amp module with 16 amplifiers in slot B13, and the ROM has its own identical sense amp module in slot B14. However, each module only has 8 output transistors. The two modules are wired together so 8 output bits are driven by transistors in the RAM's sense amp module and 8 output bits are driven by transistors in the ROM's sense amp module. (The motivation behind this is to use identical sense amp modules for RAM and ROM, but only needing 16 output transistors in total. Thus, the transistors are split up 8 to a module.) 

  16. I'll give a bit more detail on the sense amps here. The key challenge with the sense amps is that the signal from a flipping core is small and there are multiple sources of noise that the sense line can pick up. By using a differential signal (i.e. looking at the difference between the two inputs), noise that is picked up by both ends of the sense line (common-mode noise) can be rejected. The differential transformer improved the common-mode noise rejection by a factor of 30. (See page 9-16 of the Design Review.) The other factor is that the sense line goes through some cores in the same direction as the select lines, and through some cores the opposite direction. This helps cancel out noise from the select lines. However, the consequence is that the pulse on the sense line may be positive or may be negative. Thus, the sense amp needed to handle pulses of either polarity; the threshold stage converted the bipolar signal to a binary output. 

  17. The Apollo manuals provide detailed information on the memory system. The manual has a block diagram of the AGC's memory system. The address decoder is discussed in the manual starting at 4-416 and schematics are here. Schematics of the Erasable Driver modules are here and here; the circuit is discussed in section 4-5.8.3.3 of the manual. Schematics of the Current Switch module are here and here; the circuit is discussed in section 4-5.8.3.3 of the manual. Sense amplifiers are discussed in section 4-5.8.3.4 of the manual with schematics here and here; schematics are here and here

Accounting machines, the IBM 1403, and why printers standardized on 132 columns

Have you ever wondered why 132 characters is such a common width for printers? Many printers produced lines of 132 characters, such as the groundbreaking Centronics 101 dot-matrix printer (1970), the ubiquitous DECwriter II terminal (1975), the Epson MX-80 dot-matrix printer (1981), and the Apple Daisy Wheel Printer (1983). Even CRT terminals such as the DEC VT100 (1978) supported 132 columns. But why the popularity of 132 columns?1

After researching this question, I've concluded that there are two answers. The first answer is that there isn't anything special about 132 columns. In particular, early printers used an astoundingly large variety of line widths including 50, 55, 60, 70, 73, 80, 88, 89, 92, 100, 118, 120, 128, 130, 136, 140, 144, 150 and 160 characters.2 This shows there was no strong technical or business reason to use 132 columns. Instead, 132 columns became a de facto standard due to the popularity of the IBM 1401 computer and its high-performance 1403 line printer, which happened to print 132 columns.

The second, more interesting, answer is that a variety of factors in the history of data processing, some dating back a century, led to standardization on several sizes for printed forms. One of these sizes became standard line printer paper holding 132 columns.

The IBM 1401 computer and the 1403 printer

The first printer to use 132 columns appears to be the IBM 1403 line printer, which provided output for the IBM 1401 business computer. The IBM 1401 was the most popular computer of the early 1960s, largely due to its low price. Earlier computers had been limited to large corporations due to their high cost; the IBM 705 business computer rented for $43,000 a month (almost $400,000 in current dollars). But the IBM 1401 could be rented for $2500 per month, opening up the market to medium-sized businesses that used it for payroll, inventory, accounting and many other business tasks. As a result, over 10,000 IBM 1401 computers were in use by the mid-1960s.

The IBM 1403 printer in front of the popular 1401 business computer (right) and 729 tape drives (left).

The IBM 1403 printer was an important part of the 1401's success. This high-speed line printer could print 600 lines per minute of high-quality text, said to be the best print quality until laser printers.10 "Even today, [the 1403 printer] remains the standard of quality for high-speed impact printing," at least according to IBM. By the late 1960s, half of the world's continuous forms were printed on IBM 1403 printers.3

Because the IBM 1403 printer was so popular, its 132-column format became a de facto standard, supported by later printers and terminals for backward compatibility. The 14 7/8"×11" green-bar paper that it used4 remains popular to this day, available at office supply stores.5

Accounting machines / tabulators

Now I'll discuss the history that led up to 132 columns on 14 7/8" paper. The key actor in this story is the electric accounting machine or tabulator. While these machines are mostly forgotten now, they were the cornerstone of business data processing in the pre-computer era (history). Tabulators date back to the 1890 census when Herman Hollerith invented these machines to tabulate (i.e. count)6 data stored on punch cards. Later tabulators used relays and electromechanical counters to sum up values, were "programmed" for different tasks by a plugboard of wires, and could process 150 punch cards per minute.

The IBM 403 electric accounting machine. Note the programming plugboard at left with yellow wires. The printer carriage is on top. Cards are fed into the hopper to the left.

By 1943, tabulators were popular with businesses and governments; IBM had about 10,000 tabulators in service. These machines were complex, able to handle conditionals while adding or subtracting three levels of subtotals and formatting their alphanumeric output. Accounting machines were used for a wide variety of business data processing tasks such as accounting, inventory, billing, issuing checks, printing shipping labels or even printing W-2 tax forms. While these machines were designed for businesses, tabulators were also pressed into service for scientific computation in the 1930s and 1940s, most famously for nuclear bomb simulations during the Manhattan Project.

IBM 285 accounting machine (1933)

The earliest tabulators displayed the results on mechanical counters, so an operator had to write down the results after each subtotal (details). The development of the tabulator printing unit in the 1920s eliminated this inconvenient manual step. One popular printing tabulator was the IBM 285, introduced in 1933. This machine printed values using 3 to 7 "print banks", where each bank consisted of 10 numeric type bars.7 The output below shows 7-column output, generated by a 285 tabulator with 7 print banks.

Output from the IBM 285 Electric Accounting Machine, showing its 7 columns of counter output. This output is standard typewriter spacing (6 lines per inch), double-spaced. Headings are pre-printed on the form, not printed by the tabulator.

The character spacing was 5/32" (a value that will be important later), yielding columns 1 7/8" wide. This spacing was about 50% wider than standard typewriter spacing (10 characters per inch) even though the tabulator used standard typewriter line spacing (6 lines per inch). As you can see from the output above, this caused large gaps between the characters. So why did the accounting machine use a character spacing of 5/32"? To understand that, we have to go back a decade.
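
For what it's worth, here is the arithmetic behind "about 50% wider" (my own calculation; it actually comes out a bit over 56%):

    from fractions import Fraction

    tab_pitch = Fraction(5, 32)           # tabulator character spacing, inches
    typewriter_pitch = Fraction(1, 10)    # 10 characters per inch

    print(float(1 / tab_pitch))                       # 6.4 characters per inch
    print(float(tab_pitch / typewriter_pitch - 1))    # 0.5625, i.e. about 56% wider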

Early IBM punch cards had 45 columns with round holes spaced 5/32" apart.8 The image below shows one of these cards. Each column contained one hole, representing a digit from 0 to 9. One machine used with punch cards was the "interpreter". It read a card and printed the card's contents at the top of the card above the holes. The interpreter used a 45-column print mechanism with type bars spaced 5/32" apart to match the holes.

An IBM 45-column punch card from the early 1920s. This card used round holes, unlike the rectangular holes on "modern" 80-column punch cards. From Electric Tabulating Machines.

In 1928, IBM introduced the "modern" punch card, which held 80 columns of data (below). These cards used rectangular holes so the holes could be closer together (0.087" spacing). However, IBM kept many of the mechanisms designed for 45-column cards, along with their 5/32" spacing. The result was mismatched products like the IBM 550 Interpreter (1930) that read an 80-column punch card and printed 45 characters at the top of the card. As a result, the characters didn't line up with the holes, as you can see below.9 Likewise, the 285 accounting machine used a type bar printer with 5/32" spacing, even though it used 80-column cards.

The IBM 550 card interpreter read data punched into an 80-column card and printed 45 columns of that data at the top of the card.

IBM 405 (1934) and 402 (1948) accounting machines

The IBM 285 tabulator could only print digits, but in 1934, IBM introduced the 405, a tabulator that could print alphanumeric information, followed by the improved 402 accounting machine in 1948. Alphanumeric output greatly expanded the uses of the tabulator, as it could print invoices, address labels, employee records, or other forms that required alphanumeric data. The IBM 405 had 88 type bars that moved vertically to print a line of output (below).18 Note the gap for a ribbon guide between the two blocks of type bars.

The IBM 405 accounting machine printed with type bars: 43 alphanumeric ("alphamerical") type bars on the left, and 45 numeric-only type bars on the right. From Electric punched card accounting machines.

The figure below shows sample output from a 405 tabulator, showing alphanumeric characters on the left side. As with the earlier tabulators, the 5/32" character width resulted in widely separated characters. Note that the headers and boxes were not printed by the tabulator, but were pre-printed on the form.

Output from the IBM 405 tabulator, showing a billing statement. Apparently cocaine was a common product back then. (Electronic Accounting Machines page 17-19.)

At first forms were hand-fed sheets of paper, but for convenience these were soon replaced by continuous-feed forms.12 To keep forms from slipping out of alignment, holes were added along the sides so forms could be fed by pin-feed or tractor-feed mechanisms. These forms often used a removable 1/2" perforated strip on each side containing the feed holes.22 Thus, the hole-to-hole width was 1/2" less than the overall width, and the printable region was 1" less than the overall width.

Businesses would order customized forms for their particular needs, but these forms were usually produced in standardized widths, given below.11 Surprisingly, these arbitrary-seeming form sizes are still standard sizes available today. Many of the standard form widths are round numbers such as 8" and 11", but there are also strange numbers such as 12 27/32" and 18 15/16".13 I explain most of these sizes in the footnotes.1516 Note that most of the unusual widths are multiples of the 5/32" character width (hole-to-hole); I've highlighted these in yellow. I believe making the width a multiple of 5/32" was a deliberate choice.14
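
As a rough check of this claim, here is a small Python sketch (my own, using the similar width list repeated in footnote 13) that subtracts the 1/2" taken up by the margin strips and tests which widths come out as a whole number of 5/32" character positions hole-to-hole.

    # For each standard form width, compute the hole-to-hole width (overall
    # width minus 1/2", per the text above) in units of the 5/32" character
    # width, and flag the widths that come out as an exact whole number.
    from fractions import Fraction as F

    widths = ["4 3/4", "5 3/4", "6 1/2", "8", "8 1/2", "9", "9 1/2", "9 7/8",
              "10 5/8", "11", "11 3/4", "12", "12 27/32", "13 5/8", "14 7/8",
              "15", "15 1/2", "16", "16 3/4", "17 25/32"]

    def inches(w):
        whole, _, frac = w.partition(" ")
        return F(whole) + (F(frac) if frac else 0)

    for w in widths:
        chars = (inches(w) - F(1, 2)) / F(5, 32)
        mark = "  <- exact" if chars.denominator == 1 else ""
        print(w + '"', "->", float(chars), "characters hole-to-hole" + mark)

Running this, the exact matches are mostly the odd-looking widths (9 7/8", 11 3/4", 12 27/32", 13 5/8", 14 7/8", 15 1/2", 16 3/4", plus 8"), while round numbers such as 11", 12", and 16" are not exact.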

Standard form widths, from the 402 manual, page 151.

The 402's 88-character output fit exactly onto a 14 7/8" wide form, while also being a multiple of 5/32" (hole-to-hole).17 I believe that this was the reason that 14 7/8" paper became a standard. This width is the dimension of standard green-bar line printer paper still used today, so this dimension is very important. Note that this paper size became a standard before commercial computers even existed.

IBM 407 accounting machine (1949)

The successor to the IBM 402 accounting machine was the IBM 407 accounting machine, introduced in 1949. The most important feature from our perspective was the move from type bars to type wheels. Each type wheel had 47 characters (letters, numbers and symbols) around the circumference and rotated at high speed to print the correct character.19 The tabulator used 120 of these wheels to print a line of 120 characters.

Type wheel from an IBM 407 accounting machine.

The narrow type wheels enabled the 407 to print 10 characters per inch (standard typewriter pica pitch). The output below shows how the tabulator could issue checks using pre-printed forms. Note that the 407's output looks like normal typing compared to the widely spaced characters of the earlier 405 and 402.

Sample output from an IBM 407 accounting machine. Character spacing is much more natural than the earlier 402 output. Sprocket-fed forms are now common. Figure 128 from Manual of Operation.

The 407 operating manual described how to design forms for the 407,20 and listed eleven standard form sizes (below).21 Despite the switch from 5/32" characters to much narrower 0.1" characters, most of the new standard form widths matched the earlier 402 widths (indicated in green). Many of the previous strange form widths (such as 17 25/32") were dropped, but 13 5/8" and 14 7/8" were preserved, which will prove important.

Standard widths for forms for the IBM 407. From 407 Operating Manual page 187.

The IBM 1403 printer (1959) and its 132 columns

Finally, we arrive at the 1403 line printer (1959). This printer supported line widths of 100, 120, and 132 characters at 10 characters per inch. The 120-character line is obviously useful for backward compatibility with the 407. But what about 132 characters?

Note that the 13 5/8" form conveniently fit the 407's (or 1403's) 120 character line with a small margin.23 The next-larger standard form width was 14 7/8". The increase of 1.25 inches allows you to add 12.5 characters.24 Thus, the jump from 120 to 132 characters was an obvious product improvement since it makes use of the next standardized form width. One objection is that 130 seems like a more sensible round number—the UNIVAC printer used 130 characters per line—so why not use 130 instead of 132? Due to the complex alignment between the 1403's chain and the print hammers, a line width divisible by 3 (such as 132) works out better.25 I suspect this is the primary reason that the IBM 1403 used 132 characters rather than 130.26 A width of 128 might seem better because of binary, but it's not; the 1401 was a decimal machine so there's no benefit to 128.27
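
Here is a small sketch of that arithmetic (my own calculation, using the earlier rule that the printable region is 1" narrower than the overall form width).

    # How many 0.1" characters physically fit in the printable region of the
    # two surviving standard form widths (printable region = width - 1").
    from fractions import Fraction as F

    for width in (F(13) + F(5, 8), F(14) + F(7, 8)):
        printable = width - 1
        print(float(width), "inch form ->", printable // F(1, 10), "characters fit")
    # 13.625 -> 126 fit (the 407/1403 used 120, which is divisible by 3)
    # 14.875 -> 138 fit (132 = 120 + 12 fits with margin and is divisible by 3)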

The IBM 1403 printer generating a Mandelbrot set on standard 14 7/8"×11" green-bar paper. The IBM 1401 computer is at the left.

Conclusion

To summarize my hypothesis,28 the 132-character line on 14 7/8" wide paper has its roots in the dimensions of punch cards over a century ago. IBM's early 45-column punch cards resulted in the creation of a printing mechanism with a wide character spacing of 5/32" to match the punch card hole spacing. Even though IBM moved to 80-column cards in 1928, accounting machines continued to use 5/32" characters in the 1930s and 1940s. This resulted in standardized form widths, most importantly 14 7/8" which fit a line of 88 characters. In 1949, IBM's tabulators moved to a standard 10 characters per inch spacing. With that character size and 14 7/8" paper, a 132-character line is natural, and this was implemented on the IBM 1403 printer in 1959.

Because the 1403 printer was wildly popular, 132-character lines on 14 7/8" paper became a de facto standard supported by many other companies. This is why, even though punch cards are long obsolete, you can easily buy 14 7/8" green-bar line printer paper to this day.

I announce my latest blog posts on Twitter, so follow me at @kenshirriff for future articles. I also have an RSS feed. I've written about accounting machines before, here and here, if you want to learn more. Thanks to Dag Spicer and Sydney Olson (CHM) and Max Campbell (IBM Archives) for research assistance.

Notes and references

  1. I've been wondering about 132 columns for a long time. I asked the 1401 restoration team about 132 columns a while ago, but didn't get any solid answers. Retrocomputing StackExchange discussed the source of 132 columns last year, but I find the answers unconvincing. 

  2. It's interesting to look at the history of printers and their assorted line widths.

    IBM kept old printing technology around for decades. The print mechanism from the 407 (1949) was reused in the IBM 716 and 717 printers for the IBM 700 series vacuum tube mainframes (1952+) and the IBM 7000 series transistorized mainframes (1958+). The IBM 407 was also used as an output unit for the IBM 650 drum-based computer (1955) and IBM 305 RAMAC disk-based computer (1956). The IBM 1132 printer for the low-end IBM 1130 computer (1965) also used the 407's print mechanism.

    IBM introduced high-speed wire printers (i.e. dot matrix) in 1955, which output 1000 lines per minute; the 719 printer had 60 column output, while the 730 had 120 columns. Unlike modern dot matrix printers, these printers had multiple print heads (30 or 60) for high speed. Unfortunately, these printers were unreliable and a commercial failure. (For details see pages 484-486 of IBM's Early Computers and the Manual of Operation.)

    The RAMAC system (1956) used the IBM 370 printer (unrelated to the later IBM System/370 mainframe), which printed 80 character lines at 10 characters per inch. This printer was very slow; it took about 2 seconds to print a line using a single octagonal typestick (manual).

    In 1970, IBM introduced the IBM System/370 along with the IBM 3211, a new high-speed (2000 lines per minute) line printer. This printer had 132 columns or optionally 150. It was similar to a chain printer except it used a train of character slugs.

    I don't have as much information on non-IBM printers, but in 1954, Remington Rand introduced the first high-speed drum printer, the "High Speed Off Line Printer System" for the UNIVAC computer. This printer produced 600 lines per minute at 130 characters per line. The printer used 18 kW of power and was cooled by 8 gallons per minute of chilled water (details, details). As for other manufacturers, Anelex produced a 120-column line printer. Bull had a printer with 80 print bars and a 92-character alphabetical printer. Powers-Samas had a 140-character dot matrix printer, the Samastronic, in the 1950s. Burroughs introduced a fast (1000 lines per minute) dot matrix printer (called a wire printer) in 1954 that printed 100-character lines. The Burroughs 220 High-Speed Printer System (1959) used a drum to produce 120-character lines at 1225 lines per minute.

    For an extensive history of printers, see Print Unchained: 50 Years of Digital Printing, 1950-2000 and Beyond. IBM's Early Computers has a detailed discussion of the history and development of IBM printers (Chapter 12.4). It doesn't mention the reason behind different line lengths, unfortunately. For information on printing dimensions of IBM's printers of the 1970s, see Form Design Reference Guide for Printers. More information on early printers can also be found in The U.S. Computer Printer Industry.

  3. The estimate that half of the continuous forms volume was printed on IBM 1403 printers is from Print Unchained: 50 Years of Digital Printing, 1950-2000 and Beyond page 102. The estimate is attributed to "one observer" at "some point in the latter 1960s." The IBM 1403 had a long life; IBM 360 and IBM 370 mainframe systems used it into the 1970s. 

  4. As a measure of the popularity of 14 7/8" forms, in 1971 that width was estimated to make up 50% of forms. (Computer Industry Annual, 1971, p309.) 

  5. Although line printers are best known for using 14 7/8" wide paper, the IBM 1403 printer supported forms from 3 1/2" to 18 3/4" wide; see IBM 1403 Printer Component Description pages 11 and 12. Note that the printable region is 13.2", so forms can be much wider than the printable region. 

  6. Confusingly the word "tabulator" was used for two totally different types of machine. Originally, a "tabulator" was a person, "one who tabulates, or draws up a table or scheme" (OED, 1885). The first type of machine using the name "tabulator" was Hollerith's punch-card machine that processed punch cards for the 1890 census, leading to the Hollerith Integrating Tabulator. Note that these tabulators generated output on counter dials (below); they didn't print any output, tabular or otherwise.

    Hollerith Electric Tabulating System (replica). Cards were manually fed into the reader on the right, and results were counted on the dials.

    The second type of tabulator was the tabulating typewriter (1890). These devices were simply typewriters with tab stops to make it easier to produce tables. (The "tabulating key" on these typewriters is the root of the "tab" key on the modern computer keyboard.) The decimal tabulator (1896) added multiple tab keys that would tab to the proper location for a 1-digit number, 2-digit number, 3-digit number, etc.

    Underwood 6 typewriter with decimal tabulator (1934). Inset shows the decimal tab keys enlarged: "Tab Stop Clear", ".", "1", "10", "100", "1000", "Tab Stop Set". Interestingly, the platen scale shows 132 tick marks. Photo courtesy of J Makoto Smith.

    Later IBM punch card tabulators included a printer and printed tabular output, so they were tabulators in both senses. Soon afterward, IBM stopped calling them tabulators and changed the name to Electric Accounting Machine or EAM (1934). 

  7. In the 285 tabulator, the last type bar in a print bank often had an asterisk or "CR" symbol rather than numbers. An asterisk was used to indicate a subtotal, and "CR" indicated a credit (negative number). 

  8. Why did IBM's early punch cards have 45 columns with 5/32" spacing? See The Punched Card for history. The short answer is 28-column cards (from the 1890 census) used 1/8" holes with 1/8" between holes. The gap between holes was halved to 1/16" for 36-column cards, and halved again to 1/32" for 45-column cards, yielding 5/32" spacing. 
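
    A quick check of that progression with exact fractions (a sketch; the column pitch is taken as the hole diameter plus the gap between holes):

        # Column pitch = 1/8" hole diameter + gap, with the gap halved at each
        # increase in card capacity, per the account above.
        from fractions import Fraction as F

        for columns, gap in [(28, F(1, 8)), (36, F(1, 16)), (45, F(1, 32))]:
            print(columns, "columns:", F(1, 8) + gap, "inch pitch")
        # 28 columns: 1/4"   36 columns: 3/16"   45 columns: 5/32"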

  9. IBM's early interpreters printed 45 columns of numeric-only data, on short 34-column cards, 45-column cards, or "modern" 80-column cards (details). (While 45-column cards were originally thought to have almost unlimited capacity to meet all requirements, the increase to 80 columns was soon necessary.) In the 1950s IBM introduced alphabetic interpreters that could print 60 columns of alphanumeric information on a punch card. The IBM 552 interpreter used 60 type bars. The IBM 557 interpreter (1954) switched to 60 typewheels. Apparently, the IBM 557 had reliability problems and later 60-column interpreters went back to type bars: the IBM 548 (1958) and the IBM 552 (1957).

  10. The high quality of the IBM 1403's print was largely due to the use of a horizontally rotating chain. Earlier printers used type bars, type wheels, or drums. These approaches have the problem that any error in timing or positioning results in characters that are shifted vertically, resulting in objectionably wavy text. On the other hand, positioning errors with a type chain are horizontal, and people are much less sensitive to type that is spaced unevenly. 

  11. Why would customers care about standard form sizes? The 407 reference manual stated that forms of standard sizes can be obtained more quickly and economically from forms manufacturers than non-standard sizes. In addition, when using the pin-feed platen, the platen dimensions had to match the form width. IBM had standard platens to match the standard form sizes (see 923 parts catalog page 16), but non-standard forms required a custom platen.

  12. Accounting machines added support for continuous forms in several steps. On early tabulators, the operator needed to stop the machine and manually advance the paper to the top of each form. The IBM 921 Automatic Carriage (1935) provided a motorized mechanism to automatically jump to the top of a form or a particular position on the form. But even with an automatic carriage, the operator needed to ensure the forms didn't slip out of alignment (especially multiple-copy forms with carbon paper). Standard Register Co. solved this problem in 1931 with the pin-feed platen driving forms with feed holes along the edges. IBM tabulators supported these forms with a pin-feed platen or the IBM F-2 Forms Tractor (407 Manual p70). By 1936, Machine Methods of Accounting stated, "The use of continuous forms in business has been increasing at a rapid pace in recent years due to the perfection of more and better mechanical devices for handling such forms."  

  13. The standard form widths had a long lifetime, with most of them still available. The book Business Systems Handbook (1979) has a list of typical widths for continuous forms: 4 3/4", 5 3/4", 6 1/2", 8", 8 1/2", 9", 9 1/2", 9 7/8", 10 5/8", 11", 11 3/4", 12", 12 27/32", 13 5/8", 14 7/8", 15", 15 1/2", 16", 16 3/4", 17 25/32". (18 15/16" is the only standard IBM size missing from this list.) Even though IBM dropped many sizes in their 407 standard list (such as 12 27/32" and 17 25/32"), they were unsuccessful in killing off these sizes.

  14. A major part of my analysis is that the standard form hole-to-hole width is typically divisible by 5/32" (although not always). I couldn't find a stated reason for this, but I have a theory. To support continuous forms, pin wheel assemblies (below) are attached to the ends of the platen cylinder. Consequently, the hole-to-hole distance is determined by the platen width. It makes sense for the platen width to be a multiple of 5/32" so characters fit exactly. The distance from the edge of the platen to the pin centers appears to be 5/32". (I measured pixels in photos to determine this; I don't have solid proof.) Thus, the hole-to-hole distance will also be a multiple of 5/32".

    The pin-feed platen consists of two pin wheels that go on the end of the platen cylinder. Adapted from IBM Operators Reference Guide (1959) page 80.

    Many of the standard IBM form widths are divisible not only by the character width (5/32") but also by the width of 4 characters (5/8"). I found a mention in Machine Methods of Accounting (page 17-1) that the IBM 405's original friction-feed carriage was adjustable in units of 4 characters, held in position by a notched rod. This suggests that these widths were easier to configure for mechanical reasons.

  15. The IBM 285 tabulator was configured with 3 to 7 print banks, each 1 7/8" wide. (See Machine Methods of Accounting: Electric Tabulating Machines page 14.) I believe these were the source of the standard form widths 8", 9 7/8", 11 3/4", 13 5/8" and 15 1/2" (after adding some margin). Many of the other standard sizes are nice round numbers (e.g. 11" and 16"). 18 15/16" was probably selected to yield 18" paper (Arch B) with the punched margins (actually 1/16" smaller than 18" so the hole-to-hole width is a multiple of 5/32"). I can't come up with any plausible explanation for 17 25/32", but it may be related somehow to 17" ledger paper (ANSI B) or perhaps untrimmed paper sizes (SRA2, Demy).

    The 12 27/32" width was derived from loose-leaf accounting binders, which date back to 1896. In 1916, the Manufacturers of Loose Leaf Devices held a meeting in Atlantic City to establish standard sizes. They decided on 9 1/4"×11 7/8", 11 1/4"×11 7/8" and 7 1/2"×10 3/8". The standardization was successful since the smaller two sizes are still available today. To support the 11 7/8" ledgers, IBM apparently shaved off 1/32" to make the hole-to-hole width divisible by 5/32", yielding 11 27/32". Adding the 1/2" punched margins on each side results in the standard form width 12 27/32". While loose leaf may not seem exciting now, Office Appliances (1917) has a remarkable description of the victory of loose-leaf ledgers over "Prejudice, Indifference, Distrust" so they now "stand supreme as Leaders in modern progressiveness" in the "battle for Modern Efficiency". 
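
    Here is a minimal check of that derivation (a sketch using the relation from the main text that the hole-to-hole width is the overall form width minus 1/2"):

        # Shave 1/32" from the 11 7/8" ledger sheet, add the two removable 1/2"
        # margin strips, and check the hole-to-hole width against 5/32".
        from fractions import Fraction as F

        body = F(11) + F(7, 8) - F(1, 32)                # 11 27/32"
        overall = body + 1                               # 12 27/32" standard width
        print(overall, (overall - F(1, 2)) / F(5, 32))   # 411/32 and 79: exact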

  16. Standardized tabulator form sizes were so prevalent that special business form rulers were produced to help design business forms. These rulers had markings indicating standard form widths and 5/32" and 0.1" scales for tabulator character spacing. These rulers are still available.

  17. Since the 14 7/8" standardized width is very important, I'll discuss the math in more detail. The 405 accounting machine had 88 type bars, but there was one blank space (for a ribbon guide) between the alphanumeric and numeric type bars. Thus, the printing region was 89 × 5/32" = 13 29/32" wide. (As mentioned before, this just fits (probably by design) onto a 14" unperforated page.) Since standard perforated forms had 1/2" marginal perforations on each side to remove the holes, reasonable form widths would be 14 29/32" or 15". These values are not divisible (hole-to-hole) by 5/32"14. However, since the 402's characters have excessive white space around them, the characters still fit if we trim off 1/32" from the width. This yields a 13 7/8" line. Hole-to-hole, this is 14 3/8", divisible by 5/32" and even better 5/8". Adding the perforated margin, this yields 14 7/8" width as the "best" size to support the 405's 88-character output. (This seemed like random math, even to me, at first. But since the same approach explains the 12 27/32" width, I'm reasonably confident it's the right approach.) 
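
    Carrying out the same steps with exact fractions (just a sketch of the arithmetic above):

        # 88 type bars plus one ribbon-guide gap, at 5/32" per position; trim
        # 1/32", then check hole-to-hole divisibility and add the 1" of margins.
        from fractions import Fraction as F

        line = 89 * F(5, 32)              # 13 29/32" printing region
        trimmed = line - F(1, 32)         # 13 7/8" after trimming 1/32"
        hole_to_hole = trimmed + F(1, 2)  # 14 3/8"
        print(hole_to_hole / F(5, 32))    # 92 character widths (exact)
        print(hole_to_hole / F(5, 8))     # 23 four-character units (exact)
        print(trimmed + 1)                # 119/8 = 14 7/8", the standard width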

  18. Why did the 405 accounting machine (1934) and 402 accounting machine (1948) have 88 print positions? The accounting machines had a 43-column alphabetical print unit and a 45-column numeric-only print unit (below), with a 5/32" gap between for the ribbon. I think these type bar print units were derived from the 45-column type bar print unit used for the IBM 550 interpreter (1930), since they have the same 5/32" character spacing. But that raises the question of why two columns are missing from the alphabetical print unit. The 405's line of 88 characters with a gap between the units just fits onto a 14" page. (A 14" page without holes, friction-fed.) This is mentioned in Alphabetic Accounting Machines (1936) page 17-2, so it's presumably deliberate. So I think they used 88 columns instead of 90 in order to fit on 14" paper. 

  19. I talked to an old IBM engineer who serviced a company's collection of 407 tabulators as a new engineer, but after cleaning the type wheels he didn't put enough oil on them. A couple weeks later, the type wheels started seizing up and would hit the platen at high speed, sawing notches into the platen. (The platen is the roller that the paper goes around.) He used up IBM's entire East Coast collection of platens to fix these tabulators. He was afraid IBM would fire him over this, but they were supportive of engineers and he stayed for a long career. 

  20. The design of business forms was a complex task. The book Business Systems Handbook (1979) has a 30-page chapter discussing forms design, including how to meet customer needs, determining the size and layout of the form, and ideas on form techniques for various purposes. 

  21. In the 1970s, IBM still had 11 standard form widths, but there were a couple changes from the 407 era. An IBM patent mentions this in a vague way; apparently the 16 3/4" width was dropped and 11" added. 

  22. The diagram below gives some information on the dimensions of forms for the IBM 407 accounting machine.

    IBM's recommended specifications for forms. From Reference Manual, IBM 407 Accounting Machine.

    One important takeaway from this diagram is that the printing width is 1" less than the overall form width. It's interesting to note that the hole width is 5/32", exactly the same as the character width on a 402 accounting machine. 

  23. A width of 13 5/8" gives a printable region of 12 5/8". This fits 120 characters with 5/8" of extra space. A margin makes it easier to align the paper so characters fit between the perforations. Note that a margin is more important with 0.1" characters than 5/32" characters because the wider characters are already surrounded by white space. 

  24. Using 14 7/8" paper gives a printable region of 13 7/8" (13.875"), so you could fit 138 characters on a line. I couldn't find any 138-character printers, but several used 136-character lines. CDC in particular liked 136 columns, as shown by the CDC 501 line printer and later CDC 580. The book Print Unchained said that 136 columns was a European width, but I haven't been able to find any line printer models fitting this. Later dot matrix printers from Epson, Datapro and other companies often supported 136 columns.

  25. The timing of the 1403 printer is fairly complicated; I've made an animation here to illustrate it. The important thing for the current discussion is that every third character position gets a chance to print in sequence, so the printer makes three "subscans" to cover all the character positions. Thus, it makes sense for the line length to be a multiple of 3 so all the subscans are equal. Obviously it's possible for the 1403 printer to support a line length that isn't a multiple of 3, since some 1403 printer models supported 100-character lines. 

  26. This may be numerology, but it seems that IBM liked increasing print capacity in steps of 12. On the IBM 285 accounting machine, this made sense since each print bank was 12 characters wide. The IBM 407 accounting machine had 120 columns. The IBM 1443 printer (1962) had line lengths of 120 or 144 characters (2*12 characters more). So it seems plausible that the 132 column line was considered reasonable as it added one more 12-character column. 

  27. You might think that 128 characters per line would be more convenient since it's a power of 2. However, the IBM 1401 was a decimal (BCD) machine with decimal addressing. (For instance, it had 4000 characters of storage, not 4096.) Since it needed to count the line in decimal, there is no hardware advantage to 128. 

  28. To summarize the summary: 88 × 5/32" ≈ 132 × 0.1" (with a bit of margin)
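
    Or, checked with exact fractions:

        from fractions import Fraction as F
        print(88 * F(5, 32), ">=", 132 * F(1, 10))   # 55/4 (13.75") >= 66/5 (13.2")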