Software woven into wire: Core rope and the Apollo Guidance Computer

Onboard the Apollo spacecraft, the revolutionary Apollo Guidance Computer helped navigate to the Moon and land on its surface. One of the first computers to use integrated circuits, the Apollo Guidance Computer was lightweight enough and small enough (70 pounds and under a cubic foot) to fly in space. An unusual feature that contributed to its small size was core rope memory, a technique of physically weaving software into high-density storage. In this blog post, I take a close look at core rope and the circuitry that made it work.1

Detail of core rope memory wiring from an early (Block I) Apollo Guidance Computer. Photo from Raytheon.

The Apollo Guidance Computer (AGC) had very little memory by modern standards: 2048 words of RAM in erasable core memory and 36,864 words of ROM in core rope memory. In the 1960s, most computers (including the AGC) used magnetic core memory for RAM storage, but core ropes were unusual and operated differently. Erasable core memory and core rope both used magnetic cores, small magnetizable rings. But while erasable core memory used one core for each bit, core rope stored an incredible 192 bits per core, achieving much higher density.2 The trick was to put many wires through each core (as shown above), hardwiring the data: a 1 bit was stored by threading a wire through a core, while the wire bypassed the core for a 0 bit. Thus, once a core rope was carefully manufactured, using a half-mile of wire, data was permanently stored in the core rope.

The Apollo Guidance Computer. The empty space on the left held the core rope modules. The connectors on the right linked the AGC to the rest of the spacecraft.

We3 are restoring the Apollo Guidance Computer shown above. The core rope modules (which we don't have)4 would be installed in the empty space on the left. On the right of the AGC, you can see the two connectors that connected the AGC to other parts of the spacecraft, including the DSKY (Display/Keyboard). By removing the bolts holding the two trays together, we could disassemble the AGC. Pulling the two halves apart takes a surprising amount of force because of the three connectors in the middle that join the two trays. The tray on the left is the "A" tray, which holds the logic and interface modules. The tray on the right is the "B" tray, which holds the memory circuitry, oscillator, and alarm. The six core rope modules go under the metal cover in the upper right. Note that the core ropes took up roughly a quarter of the computer's volume.

The AGC is implemented with dozens of modules in two trays. The trays are connected through the three connectors in the middle.

How core rope works

At a high level, core rope is simple: a sense wire goes through a core to indicate a 1, or bypasses the core to indicate a 0. Selecting a particular core activates the sense wires threaded through that core, yielding the desired data bits.

Magnetic cores have a few properties that made core memory work.7 A strong current along a wire through a core magnetizes the core, either clockwise or counterclockwise depending on the direction of the current. Normally the cores were all magnetized in one direction, called the "reset" state; a core magnetized in the opposite direction was in the "set" state. When a core flips from one state to another, the changing magnetic field induces a small voltage in any sense wires through the core. A sense amplifier detects this signal and produces a binary output.

The key advantage of core rope is that many sense wires pass through a single core, so you can store multiple bits per core and achieve higher-density storage. (In the case of the AGC, each core has 192 sense wires passing through (or around) it5, so each core stored 12 words of data.) This is in contrast to regular read/write core memory, where each core held one bit.

Core rope used an unusual technique to select a particular core to flip and read. Instead of directly selecting the desired core, inhibit lines blocked the flipping of every core except the desired one. In the diagram below, the current on the set line (green) would potentially flip all the cores. However, various inhibit lines (red) have a current in the opposite direction. This cancels out the set current in all the cores except #2, so only core #2 flips.

This diagram illustrates how core rope memory worked. Simplified diagram from MIT's Role in Project Apollo vol III, Fig. 3-12

In the diagram above, only the sense lines (blue) passing through core #2 pick up an induced voltage. Thus, the weaving pattern of the sense lines controls what data is read from core #2. To summarize, the inhibit lines control which core is selected, and the sense wires woven through that core control what data value is read.

The inhibit lines are driven from the address lines and arranged so that all inhibit lines are inactive only for the desired core. For any other address, at least one inhibit line is activated, preventing that core from flipping and being read.6
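
To make this selection scheme concrete, here is a minimal Python sketch of the readout logic. It is my own illustration rather than the AGC's actual circuitry: it simply treats an inhibit line as active for every address bit in which a core's address differs from the selected address.

```python
# Toy model of core rope readout, in the spirit of the diagram above.
# threading[i][j] == 1 means sense wire j is threaded through core i
# (a stored 1 bit); 0 means the wire bypasses that core.

def read_rope(threading, address, n_addr_bits=7):
    word = [0] * len(threading[0])
    for core in range(len(threading)):
        # An inhibit line cancels the set current in every core whose
        # address differs from the selected address in that bit, so
        # only the addressed core is free to flip.
        inhibited = any(((core >> b) & 1) != ((address >> b) & 1)
                        for b in range(n_addr_bits))
        if not inhibited:                       # only this core flips
            for j, threaded in enumerate(threading[core]):
                if threaded:                    # sense wire j threads it,
                    word[j] = 1                 # so it picks up a pulse
    return word

# 128 cores, each storing one 16-bit word in this simplified model.
rope = [[(i >> (j % 7)) & 1 for j in range(16)] for i in range(128)]
print(read_rope(rope, 5))    # recovers the threading pattern of core 5
```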

The AGC's core ropes

The Apollo Guidance Computer contained six core rope modules, each storing 6 kilowords of program information. The AGC was a 15-bit machine: each word consisted of 15 data bits and a parity bit. (While a word that isn't a power of two may seem bizarre now, computers in the 1960s were designed with whatever word size fit the problem.8) Each module contained 512 cores, each storing 12 words of data. That is, each module had 192 (12×16) sense wires going either through or around each core. Each group of 16 sense wires for a word was called a "strand", so there were 12 strands.
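
The organization multiplies out as stated; here is a quick sanity check of those numbers in Python, simply restating the figures above:

```python
# Multiplying out the core rope organization described above.
modules          = 6
cores_per_module = 512
strands          = 12     # 12 words stored in each core
bits_per_word    = 16     # 15 data bits plus a parity bit

sense_wires_per_core = strands * bits_per_word       # 192
words_per_module     = cores_per_module * strands    # 6144 (6K words)
total_words          = modules * words_per_module    # 36864 (36K words)
print(sense_wires_per_core, words_per_module, total_words)
```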

AGC with rope modules. A rope module is partially inserted into one of the 6 slots in the AGC. Photo © Draper Labs via Caltech.

The photo above shows how the core rope modules slid into the Apollo Guidance Computer; the pins on the end of each module meshed with connectors in the AGC. The core rope module below (and its companion) held an early Lunar Module program called Retread 50. We took our AGC to the Computer History Museum to read the data from these modules and we put the results online.

This core rope module held the Retread 50 software for the Apollo Guidance Computer. This module is from the Computer History Museum.

The 512 cores in each module were arranged physically as two layers of 256 cores (but electrically as four planes of 128 cores).9 A set and reset line went through all the cores in a plane, allowing a particular plane in the module to be selected.6 The photo below shows the interior of a core rope module. One layer of 256 cores is visible, with the tiny wires threaded through them. (The second layer of 256 cores is underneath.) Note that the cores only take up about half the module space. Surrounding the cores are hundreds of resistors and diodes that were used to select the desired word.10 These components were mounted using cordwood construction, installed vertically through holes in the module.

Inside a core rope. The 32×8 cores visible form a layer of 256 cores; a second layer is underneath. Photo from CHM/Raytheon CN-4-421-C.

The photo below shows one of the Retread 50 modules from the Computer History Museum with the cover removed. The cores were encased (potted) in protective epoxy to protect them during flight, so the cores are not visible.

A core rope module with the cover removed. This module is at the Computer History Museum.

Manufacturing the core rope

Wiring of the core rope was a tedious process that took about 8 weeks and cost $15,000 per module. As a result, the computer code needed to be frozen months in advance and last-minute patches to the code were not possible.11 The core ropes (and the AGC) were manufactured by Raytheon in Waltham, Massachusetts. Many of the women building the ropes were hired from the local textile industry for their sewing skills; other skilled women came from the Waltham Watch Company, a company that also helped with the high-precision gyroscopes used on the Apollo missions.12

Much of the core rope wiring (address inhibit wires, set, reset, etc.) was the same for all core rope modules. Two women passed a needle back and forth through the cores to create this wiring. The needle was hollow and contained the necessary length of wire. The clip above (from Computers for Apollo via xpascal) shows this process.

"Space age needleworker 'weaves' core rope memory for guidance computers used in Apollo missions.
Memory modules will permanently store mission profile data on which critical maneuvers in space are based.
Core rope memories are fabricated by passing needle-like, hollow rod containing a length of fine wire through cores in the module frame.
Module frame is moved automatically by computer-controlled machinery to position proper cores for weaving operation. Apollo guidance computer and associated display keyboard are produced at Raytheon Company plant in Waltham, Massachusetts."
Caption and photo are from a Raytheon document, courtesy of 
Transistor Museum.

"Space age needleworker 'weaves' core rope memory for guidance computers used in Apollo missions. Memory modules will permanently store mission profile data on which critical maneuvers in space are based. Core rope memories are fabricated by passing needle-like, hollow rod containing a length of fine wire through cores in the module frame. Module frame is moved automatically by computer-controlled machinery to position proper cores for weaving operation. Apollo guidance computer and associated display keyboard are produced at Raytheon Company plant in Waltham, Massachusetts." Caption and photo are from a Raytheon document, courtesy of Transistor Museum.

To store the desired binary data, the core rope's sense lines were threaded through or around cores in the proper sequence. Originally, this wiring was done entirely manually, which was slow and error-prone. Raytheon improved the process by combining automated positioning with manual threading. First, the program's assembly code was fed into an assembler called YUL that produced a Mylar punched tape. An automated system (shown above and below) read this tape and moved the proper core into position, step by step. A woman manually threaded the sense line through an aperture into the indicated core. The aperture then jogged down to pull the wire around a nylon pin, moving the wire out of the way for the next sense wire to be threaded. Once all the cores were threaded, the nylon pins were removed and the final core rope module was tested by an automated system, again controlled by punched tape.

A woman wiring sense lines in a core rope. She is threading the wire through a white circular aperture that indicates the core to wire. Source: Raytheon CN-4-20C / Smithsonian Institution WEB15435-2016.

Core rope circuitry

Core rope required a lot of digital and analog circuitry to drive and read the ropes. This section briefly describes this circuitry and shows the modules that implemented it. The bottom four modules in the picture below (Sense Amplifier, Strand Select, and two Rope Drivers) implemented the analog circuitry. Logic modules (in the "A" tray shown earlier) decoded the address into rope, module, and strand select signals. We carefully tested the analog and digital modules individually before powering up the AGC.

Tray B of the Apollo Guidance Computer. The erasable memory module is the large black module. Most of the other modules on the left are support for the erasable (RAM) and fixed (core rope) memory. The core rope modules would slide into the right hand side under the metal cover.

Sense Amplifier Modules

When a core flipped (either in fixed memory or erasable memory), the changing magnetic field induced a weak signal in a sense line, one sense line for each bit in the word. This signal needed to be amplified and converted to a logic signal; this was the job of the sense amplifiers. The sense amplifiers were implemented using a custom sense amplifier IC. (The AGC used only two different types of integrated circuits, the sense amplifier and a dual NOR gate.) The AGC had two identical sense amplifier modules; one (in slot B13) was used by the erasable core memory, while the other (B14) was used by the fixed core rope memory.

The photo below shows a sense amp module. Eight sense amplifiers are visible and eight other sense amplifiers are on the other side of the module. The sense amplifiers required carefully-tuned voltage levels for bias and thresholds so the modules included voltage regulation circuitry (center and right in photo). On top of the module (front in the photo), you can see the horizontal lines of the nickel ribbon that connected the circuits; it is somewhat similar to a printed circuit board.

Sense amplifier module with the top removed. Note the nickel ribbon interconnect at the top of the module.

The closeup photo below shows the module's cordwood construction. In this high-density construction technique, components were inserted into holes in the module. Resistors and capacitors passed through from one side of the module to the other, with one lead on either side. On each side of the module, components were connected by point-to-point wiring. This wiring was welded, not soldered. White insulating sleeves were placed over the wires to prevent short circuits.

Closeup of the sense amplifier module for the AGC. The sense amplifier integrated circuits are at the top and the reddish pulse transformers are below. The pins are at the bottom and the wires at the top go to the nickel ribbon, which is like a printed circuit board.

Near the top of the photo are two amplifier integrated circuits in metal cans. Below are two reddish pulse transformers. An output driver transistor is between the pulse transformers.13 Only the ends of resistors are visible, due to the cordwood construction. At the top of the module are connections to the nickel ribbon interconnect. Modules that were flown on spacecraft were potted in plastic so the components were protected against vibration. Since our AGC was used on the ground, most modules were unpotted and the components are visible.

Address decoding

Address decoding for the core rope required a fair amount of logic for two reasons. The first problem was that the AGC's instructions used 12-bit addresses, which could only address 4 kilowords of storage. Since the AGC had 36 kilowords of fixed memory and 2 kilowords of erasable memory, it used a complex system of bank registers and mapping logic to convert a 12-bit address into the correct physical memory location. The second problem was that each core rope module held 6 kilowords, which is not a power of two. Thus, moderately complex decoding circuitry was required to generate module and strand select signals from the address.
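
The sketch below shows just the arithmetic of that decomposition: how a flat word index maps onto modules, strands, planes, and cores in the organization described in this article. The mapping is a hypothetical illustration of my own; the real AGC's bank registers and decoding logic assigned address bits differently.

```python
# Illustrative arithmetic for locating a word of fixed memory.
# This is NOT the AGC's actual decoding; it only shows how 36K words
# decompose onto the module / strand / plane / core organization.

WORDS_PER_MODULE = 6 * 1024    # 6 kilowords per rope module
CORES_PER_MODULE = 512         # four logical planes of 128 cores
STRANDS          = 12          # 12 words stored in each core

def locate(word_index):
    module = word_index // WORDS_PER_MODULE      # which rope module
    offset = word_index % WORDS_PER_MODULE
    strand = offset // CORES_PER_MODULE          # which sense-wire group
    core   = offset % CORES_PER_MODULE           # which core in the module
    plane  = core // 128                         # which logical plane
    return module, strand, plane, core % 128     # core within the plane

print(locate(0))        # (0, 0, 0, 0)
print(locate(36863))    # (5, 11, 3, 127): the last word of fixed memory
```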

The core rope addressing, control, and timing logic is spread across several logic modules. Most of the address decoding was implemented in logic module A15. Other core rope logic is in modules A6, A8, and A14. Photo courtesy of Mike Stewart.

The AGC's logic circuitry (including the processor) was implemented with NOR gates. Each integrated circuit implemented two NOR gates using RTL (resistor-transistor logic), an early logic family. These ICs were costly, at $20-$30 each (around $150 in current dollars). There wasn't much inside each IC, just six transistors and eight resistors. Even so, the ICs provided a density improvement over the planned core-transistor logic, making the AGC practical. The decision to use ICs in the AGC was made in 1962, amazingly just four years after the IC was invented. The AGC was the largest consumer of ICs from 1962 to 1965 and ended up being a major driver of the integrated circuit industry.

Each IC contained two NOR gates implemented with resistor-transistor logic. From SCD 2005011.

Strand select

The strand select module in position B15. Photo from Mike Stewart.

The address decoding logic described above produced signals indicating the desired strand and module. These logic-level signals needed to be converted to 14-volt pulses to drive the core rope modules. This task was performed by the strand select module, which consisted of transistor driver circuits using NPN and PNP transistors. The resistors in these circuits were individually selected to produce exactly the desired currents.

A closeup of the strand select module, showing its cordwood construction.

Rope driver

The rope driver modules generated the high-current pulses (up to 450mA) necessary to flip the cores. Like the strand select module, the rope driver modules used NPN and PNP transistor driver circuits with carefully-selected resistors to ensure the desired current. They also used inductors to control the pulse shape, necessary to keep switching noise from overwhelming the small signals generated by the cores. The modules generated 16 inhibit signals (7 address bits and parity, along with complements) as well as two core set signals and four core reset signals.

The rope driver modules in B16 and B17 provided high-current pulses to the rope module. Photo from Mike Stewart.

The core rope simulator

Unfortunately, the core ropes were missing from the Apollo Guidance Computer we are restoring. Instead, this AGC has core rope simulator boxes in place of the ropes. The purpose of these simulator boxes was to feed code into an AGC for development and ground testing without requiring a new core rope to be manufactured every time. The simulator allowed an external computer to supply data words in place of the core rope, allowing the AGC to run arbitrary programs.

The core rope simulators are installed in the left side of the AGC in place of the real core ropes. Two round connectors on the left allowed the simulators to be connected to an external computer that provided the data.

The simulator consists of two boxes that plugged into the AGC's core rope slots. These boxes are visible in the upper-left side of the AGC above, with round military-style connectors for connection to the external computer. One box exported address information each time the AGC performed a core rope read, while the other box fed 16 data bits into the AGC, "tricking" the AGC into thinking it had read from core rope. I built an interface from the core rope simulator boxes to a Beaglebone and will write more about that project later.

Conclusion

Core memory was the most common storage technology for computers in the 1960s. However, the Apollo Guidance Computer also used core ropes for read-only storage, an uncommon storage technique that achieved high density but required considerable labor. Core ropes made it possible to fit complex software into the compact physical space of the AGC. While 36K of code seems ludicrously small by modern standards, it held enough code to navigate and land on the Moon. And now, decades later, we can recover this code from core rope modules and learn more about it.

Marc has a series of AGC videos; the video below discusses the core rope simulators. I announce my latest blog posts on Twitter, so follow me @kenshirriff for future articles. I also have an RSS feed. See the footnotes for more information sources15. Thanks to Mike Stewart for supplying images and extensive information.

Notes and references

  1. Prototype core rope memories had a single bundle of wire that looked similar to a rope. The final core rope memories didn't look as rope-like, but kept the name. 

  2. The core rope memory achieved a density of 1500 bits per cubic inch, including the driving hardware and packaging. This was about 5 times the density of erasable core memory. See MIT's Role in Project Apollo vol III pages 91 and 274.

    The cores in core rope were not regular ferrite cores, but permalloy ribbon wound around a non-magnetic steel bobbin. If you're used to ferrite cores, winding a metallic ribbon around a bobbin may seem like a strange way to make a core, but that was how the earliest cores were built, until ferrite cores started to be used in the early 1950s. The ferromagnetic materials used in wound-ribbon cores were developed by Germany in World War II and used by the German navy in magnetic amplifiers. After the war, American personnel brought back the material along with a rolling mill, and started US manufacturing under the name Deltamax. In other words, core memory had its origin in Nazi technology captured by the US. See Memories that shaped an industry, pages 39-40, 52, 87, 90.

    The cores used in the core rope were rather large, .249" in diameter, about 5 times the diameter of the cores used in the erasable core memory. (In comparison, some IBM 360 mainframes of the same era used tiny cores that were .021" in diameter.) When a rope core flipped, it yielded a fairly large voltage pulse of 215-430 mV. Properties of the cores were defined in NASA specification control drawing SCD-1006320.

  3. The AGC restoration team consists of Mike Stewart (creator of FPGA AGC), Carl Claunch, Marc Verdiell (CuriousMarc on YouTube) and myself. The AGC that we're restoring belongs to a private owner who picked it up at a scrap yard in the 1970s after NASA scrapped it. For simplicity, I refer to the AGC we're restoring as "our AGC".

    The Apollo flights had one AGC in the command module (the capsule that returned to Earth) and one AGC in the lunar module. In 1968, before the Moon missions, NASA tested a lunar module with astronauts aboard in a giant vacuum chamber in Houston to ensure that everything worked in space-like conditions. We believe our AGC was installed in that lunar module (LTA-8). Since this AGC was never flown, most of the modules were not potted with epoxy. 

  4. Yes, we know about Francois and the rope modules he read; those are ropes for the earlier Block I Apollo Guidance Computer and are not compatible with our Block II AGC. Also, many people have asked if we talked to Fran about the DSKY. Yes, we have. 

  5. Mike Stewart pointed out that a core wouldn't have all 192 sense wires passing through it. Because the system used odd parity, at most 15 of the 16 bits can be high. Thus, at most 180 sense wires would pass through a core. 

  6. The Apollo Guidance Computer had one additional pair of inhibit lines for address parity, so all non-matching cores would have at least two inhibit lines activated. (If a core was one line away from being activated, the parity would also be different, yielding two active inhibit lines.) The purpose of this was to ensure that every non-selected core received two inhibit signals and was solidly inhibited. Otherwise, a core with just one inhibit line high might receive a bit of net set current, changing its magnetism slightly and introducing noise.
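
    This property is easy to verify: appending a parity bit guarantees that any two distinct addresses differ in at least two bit positions, so every non-selected core sees at least two active inhibit lines. A quick check in Python (my own sketch; whether the hardware used odd or even address parity doesn't change the argument):

    ```python
    # Verify: with 7 address bits plus a parity bit, any two distinct
    # addresses differ in >= 2 positions, so >= 2 inhibit lines are
    # active through every non-selected core.

    def with_parity(addr):
        bits = [(addr >> b) & 1 for b in range(7)]   # 7 address bits
        bits.append(1 - sum(bits) % 2)               # odd-parity bit
        return bits

    def active_inhibit_lines(core_addr, selected_addr):
        # One inhibit line is active for each bit position where the
        # core's address differs from the selected address.
        return sum(a != b for a, b in
                   zip(with_parity(core_addr), with_parity(selected_addr)))

    assert all(active_inhibit_lines(c, s) >= 2
               for c in range(128) for s in range(128) if c != s)
    ```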

    Note that 7 address lines select one of 128 cores. Each module consists of four (logical) planes of 128 cores, yielding 512 cores. To select one of the four planes, four different reset lines are used to reset just the cores in the desired plane. Thus, only that plane is read. Two set lines are used, one to set cores in planes A and B, and one for planes C and D. To avoid setting cores in two planes, the reset line in the undesired plane was activated at the same time as the set line, blocking the set in that plane. The obvious approaches would be to use four set lines (one per plane), or one set line (and use reset to block the others). I don't know why they used two set lines.

    Each module also had a "clear" line that passed through all the cores. This was similar to reset, but was needed due to a complexity of the AGC's opcode decoding. To fit more opcodes into a 15-bit instruction, the AGC used "quartercode" instructions. The idea was that some instructions didn't make sense on a fixed memory address, such as increment. The AGC would perform an entirely different instruction in that case, allowing a larger instruction set. The problem was that by the time instruction decoding had decided that the instruction didn't apply to the specified address, the read of fixed memory had already started, and the core was set. The "clear" line allowed this core to be reset so it wouldn't interfere with the desired read. I don't know why the existing reset lines couldn't be used for this purpose. (Summary and details.) The schematic is here.

  7. If you are familiar with regular core memory (i.e. erasable RAM core memory), there are many similarities, but also many important differences. First, erasable core memory was arranged in a grid, and a particular core was selected by energizing an X line and a Y line in the grid. Second, erasable core memory stored a single bit per core, with the direction of magnetization indicating a 0 or 1. Core rope, on the other hand, stored many bits per core, with a 0 or 1 depending on if a sense wire goes through the core or not. The cores in core rope were much larger, since about 200 wires went through each core, while erasable core memory typically had 3 wires through each core. Finally, erasable core memory used the inhibit line for writing a 0, while core rope used inhibit lines for addressing. 

  8. For more information on why the Apollo Guidance Computer used 15-bit words, see MIT's Role in Project Apollo vol III page 32. The short answer is they required about 27-32 bits accuracy for navigation computations, and about 15 bits for control variables. Using a 15-bit word for small values and double-precision values for navigation provided sufficient accuracy. A 14-bit word was too small. A 17- or 18- bit word would simplify some things but increase costs. 

  9. Core rope memory in the earlier Block I Apollo Guidance Computer was configured slightly differently. The memory was folded into 4 layers of 128 cores instead of 2 layers of 256 cores. As a result, the Block I rope modules had a different shape: roughly square in cross-section, unlike the flat Block II modules. In early Block I modules, each core had 128 sense wires (8 words) threaded through it, yielding 4K words per module. With 6 modules, an early Block I AGC had 24K words of rope storage. In later Block I AGCs, the rope modules provided more storage: 192 sense wires (12 words) per core, yielding 6K words per module. Thus, 24K of rope storage required just 4 modules. For the Block II computers used for the Moon missions, 6 modules × 512 cores per module × 192 bits per core ÷ 16 bits per word = 36864 (36K) words in total. 
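
    All three configurations multiply out as quoted; a quick check, just restating the figures above:

    ```python
    # Rope capacity for each AGC configuration described above.
    def rope_words(modules, cores_per_module, sense_wires_per_core, bits=16):
        return modules * cores_per_module * sense_wires_per_core // bits

    print(rope_words(6, 512, 128))   # early Block I: 24576 (24K) words
    print(rope_words(4, 512, 192))   # later Block I: 24576 (24K) words
    print(rope_words(6, 512, 192))   # Block II:      36864 (36K) words
    ```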

  10. Selection of a particular module and strand was done through a resistor/diode biasing network using hundreds of resistors and diodes in each module. The diagram below shows how this network operated. The selected strand line was pulled high, while the selected module line was pulled low. This resulted in current (red) through the selected strand and through the sense amplifier transformer. The remaining diodes were reverse-biased, so no current flowed. A voltage pulse on the sense wire through the selected core perturbed the current flow, resulting in a voltage imbalance across the sense amplifier transformer. This signal was detected by the sense amplifier, resulting in a 1 bit. (This circuit is rather confusing; you might expect the circuit to be a loop through the sense wire and the sense amplifier transformer, but instead, the currents are flowing towards the 0-volt module line in the middle. Study the directional arrows carefully to see how the current flows. The current from a flipped core is essentially superimposed on this current flow.)

    A particular strand and module were selected by a resistor/diode network. The non-selected diodes were reverse-biased, blocking those signals from the sense amplifiers. Based on MIT's Role in Project Apollo vol III, Fig. 3-13

    The resistor and diodes in the left green box were repeated once for each strand (i.e. 192 times in each module), while the resistor and diodes in the right box were repeated once for each sense line (i.e. 16 times in each module). 

  11. The need to freeze the software design weeks in advance was viewed as a feature: "The inability to change the program without rebuilding one or more modules provides an effective management tool for the control of software changes. It also provides another incentive to make the software error free." See MIT's Role in Project Apollo vol III page 274. Much of the information in this section on core rope manufacturing is from One Giant Leap. The 1965 video Computers For Apollo shows the AGC manufacturing process (including core ropes) in detail. 

  12. There are interesting gender issues behind the manufacture of core rope and core memory that I'll only mention briefly. The core rope has been referred to as LOL memory, referencing the "Little Old Ladies" who assembled it, but this name erases the women of color who also assembled core ropes. (I'm also not convinced that the LOL name was used at the time.) The software for a particular flight was managed by a "rope mother", who was generally male (although the famous Margaret Hamilton was rope mother on LUMINARY). Also see Making Core Memory: Design Inquiry into Gendered Legacies of Engineering and Craftwork.

    Women weaving a core rope. Raytheon photo, via BBC.

  13. The sense amplifier output circuitry is a bit confusing because the erasable core memory (RAM) and fixed rope core memory (ROM) sense amp outputs were wired together to connect to the CPU. The RAM had one sense amp module with 16 amplifiers in slot B13, and the ROM had its own identical sense amp module in slot B14. However, each module only had 8 output transistors. The two modules were wired together so 8 output bits are driven by transistors in the RAM's sense amp module and 8 output bits are driven by transistors in the ROM's sense amp module. (The motivation behind this was to use identical sense amp modules for RAM and ROM, but only needing 16 output transistors in total. Thus, the transistors are split up 8 to a module.) 

  14. I'll give a bit more detail on the sense amps here. The key challenge with the sense amps is that the signal from flipping a core is small and there are multiple sources of noise that the sense line can pick up. By using a differential signal (i.e. looking at the difference between the two inputs), noise that is picked up by both ends of the sense line (common-mode noise) can be rejected. The differential transformer improved the common-mode noise rejection by a factor of 30. (See page 9-16 of the Design Review.) The other factor is that the sense line goes through some cores in the same direction as the select lines, and through some cores the opposite direction. This helps cancel out noise from the select lines. However, the consequence is that the pulse on the select line may be positive or may be negative. Thus, the sense amp needed to handle pulses of either polarity; the threshold stage converted the bipolar signal to a binary output.
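
    As a rough sketch of that signal chain (with illustrative voltage and threshold values of my choosing, not the real circuit's):

    ```python
    # Sketch of the sense-amp behavior described above: a differential
    # input rejects common-mode noise, and the threshold stage accepts
    # a pulse of either polarity. Values are illustrative only.

    THRESHOLD = 0.15   # volts; rope cores gave roughly 215-430 mV pulses

    def sense(v_plus, v_minus):
        differential = v_plus - v_minus   # common-mode noise cancels here
        return 1 if abs(differential) > THRESHOLD else 0  # either polarity

    noise = 0.5   # identical noise picked up by both ends of the sense line
    print(sense(0.3 + noise, 0.0 + noise))    # 1: a genuine core flip
    print(sense(-0.3 + noise, 0.0 + noise))   # 1: flip of opposite polarity
    print(sense(0.0 + noise, 0.0 + noise))    # 0: noise alone is rejected
    ```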

    Schematic of the circuitry inside a sense amplifier IC.

    The sense amplifier (above) was a custom integrated circuit designed by Sperry Rand in 1962. This chip pushed the state of the art for analog ICs and it may be the first integrated circuit amplifier. The sense amp chips initially cost $200 each, equivalent to $1700 now. 

A look at IBM S/360 core memory: In the 1960s, 128 kilobytes weighed 610 pounds

I recently received a vintage core memory array, part of an IBM System/360 mainframe computer. These arrays were used in a 128-kilobyte core memory system that filled a large cabinet weighing 610 pounds.1 This article explains how core memory worked, how this core array was used in mainframes, and why core memory was so bulky.

A 64 KB core array from the IBM S/360 Model 50. There are 18 core planes stacked front-to-back. The blue cables are the sense/inhibit lines. Driver cards are plugged into the front of this array.

The IBM System/360 was a groundbreaking family of mainframe computers introduced in 1964, and much of the success of the System/360 was due to core memory technology. The S/360 was an extremely risky "bet-the-company" project that cost IBM over $5 billion. The project was nearly derailed as the operating system OS/360 grew out of control: it was originally targeted for 16 KB systems, but grew to require 32 KB and then 64 KB. Fortunately, IBM was able to build larger core memories at a price that customers could still afford, so the operating system was usable.2 The System/360 project ended up being a huge success and ensured IBM's dominance of the computer industry for the next two decades. (For more about the System/360, see my recent article.)

How core memory worked

Core memory was the dominant form of computer storage from the 1950s until it was replaced by semiconductor memory chips in the early 1970s. Core memory was built from tiny ferrite rings called cores, storing one bit in each core. Each core stored a bit by being magnetized either clockwise or counterclockwise. A core was magnetized by sending a pulse of current through a wire threaded through the core. The magnetization could be reversed by sending a pulse in the opposite direction. Thus, each core could store a 0 or 1.

To read the value of a core, a current pulse flipped the core to the 0 state. If the core was in the 1 state previously, the changing magnetic field created a voltage in a sense wire. But if the core was already in the 0 state, the magnetic field wouldn't change and the sense line wouldn't pick up a voltage. Thus, the value of the bit in the core could be read by resetting the core to 0 and testing the tiny voltage on the sense wire. (An important side effect was that the process of reading a core erased its value so it needed to be rewritten.)

Closeup of an IBM 360 Model 50 core plane. There are three wires through each core. The X and Y wires select one core from the grid. The green wire is the sense/inhibit line. These cores are 30 mils in diameter (.8mm).

Using a separate wire for each core would be impractical, but in the 1950s a technique called "coincident-current" was developed that used a grid of wires to select a core. This depended on a special property of cores called hysteresis: a small current had no effect on a core, but a current above a threshold would magnetize the core. This allowed a grid of X and Y lines to select one core from the grid. By energizing one X line and one Y line each with half the necessary current, only the core where both lines crossed would get enough current to flip while the other cores were unaffected.

A Model 50 core plane is arranged as a grid of cores. The Y lines run horizontally. X and sense/inhibit lines run vertically. The sense/inhibit lines form loops at the top and bottom. Each of the four vertical pairs of blocks has separate sense/inhibit lines. Each core plane was about 10¾ × 6¾ × ⅛ inches.

To store a word of memory, multiple core planes were stacked together, one plane for each bit in the word. The X and Y drive lines passed through all the core planes, selecting one bit of the word from each plane. Each plane had a separate sense line to read that bit.7 The IBM core stack below stored a 16-bit word along with two parity bits, so there were 18 core planes.

The core memory consisted of 18 core planes stacked in horizontal layers. Connections to the edge of each plane are visible at the front. The cores themselves are not visible in this assembled array.

Writing to core memory required additional wires called the inhibit lines, one per core plane. In the write process, a current passed through the X and Y lines, flipping the selected cores (one per plane) to the 1 state, storing all 1's in the word. To write a 0 in a bit position, the plane's inhibit line was energized with half current, opposite to the X line. The currents canceled out, so the core in that plane would not flip to 1 but would remain 0. Thus, the inhibit line inhibited the core from flipping to 1.

To summarize, a typical core memory plane had four wires through each core: X and Y drive lines, a sense line, and an inhibit line. These planes were stacked to form an array, one plane for each bit in the word. By energizing an X line and a Y line, one core in each plane could be magnetized, either for reading or writing. The sense line was used to read the contents of the bit, while the inhibit line was used to write a 0 (by inhibiting the writing of a 1).
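
This read/write cycle can be captured in a few lines of Python. Below is a toy model of my own devising (currents are in units of the half-select current, and a core flips only at the full two units), not IBM's actual circuit:

```python
# Toy model of coincident-current core memory. A core flips only when
# it sees at least 2 units of current (two half-select currents).

N = 4    # a tiny 4x4 core plane

def pulse(plane, x_sel, y_sel, direction, inhibit=False):
    """Energize one X line and one Y line with half current each.
    direction=+1 drives cores toward 1 (set), -1 toward 0 (reset).
    The inhibit line (used when writing a 0) opposes the drive current."""
    sensed = False
    for x in range(N):
        for y in range(N):
            current = (x == x_sel) + (y == y_sel) - (1 if inhibit else 0)
            target = 1 if direction > 0 else 0
            if current >= 2 and plane[x][y] != target:
                plane[x][y] = target     # the core flips...
                sensed = True            # ...inducing a sense-line pulse
    return sensed

def read(plane, x, y):
    # Destructive read: drive the core toward 0 and see if it flipped.
    return int(pulse(plane, x, y, direction=-1))

def write(plane, x, y, bit):
    # The set pulse flips the core to 1 unless the inhibit line cancels it.
    pulse(plane, x, y, direction=+1, inhibit=(bit == 0))

plane = [[0] * N for _ in range(N)]
write(plane, 1, 2, 1)
print(read(plane, 1, 2), read(plane, 1, 2))  # "1 0": reading erases the bit
```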

Some interesting features of the IBM core memory stack

The IBM core memory I examined was fairly advanced, so there were some enhancements to the generic core memory described above. This core memory used the same wire for sense and inhibit, so there were three wires through each core instead of four, as you can see in the earlier closeup photo. This made manufacturing simpler, but complicated the circuitry. In addition, the core plane has some unusual features to reduce the amount of noise picked up by the sense wire, making it feasible to detect the tiny voltage in the sense wire. First, each plane had four sense/inhibit wires, not one. Since a sense wire only passed through 1/4 of the plane, it picked up less noise.7 In addition, the sense wire was shifted between the top half of the plane and the bottom half, so noise induced by an X line in one half would be canceled out in the second half. The photo below shows the sense wire (green) shifting over.

This closeup of the core plane shows the X and Y wires (red) and the sense wire (green) threaded through cores. Note that the sense wire shifts over two columns in the middle of the plane to reduce noise. Building planes with this shift used a patented manufacturing technique. Wires are connected to the frame of the core plane on the right.

Each core plane in this memory array was rectangular, with 130 Y drive lines and 256 X drive lines. Since there was a core at each intersection, this yielded 33,280 cores. You might notice that this isn't a power of 2; the core plane held 32,768 cores for regular storage (32K) along with 512 extra cores for I/O storage. This extra storage was called "bump" storage. It was not part of the address space but was accessed through special circuitry.3
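
The plane's odd dimensions work out exactly, using the figures above:

```python
x_lines, y_lines = 256, 130
total_cores  = x_lines * y_lines      # 33,280 cores in the plane
main_storage = 32 * 1024              # 32,768 cores of regular storage
print(total_cores - main_storage)     # 512 cores of "bump" storage
```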

After passing through all the planes, the drive lines reached this circuit board. The Y lines (left and right sides) were returned to the drive circuitry through the yellow cables in the center. The X lines (top and bottom) were sent back through the stack for phase reversal.

The X and Y drive lines in a core array pass through all the planes in the stack. Core arrays typically used jumper wires between the core planes, requiring a large number of soldered connections. One innovation in IBM's core memory design was to weld the planes together directly. Alternating pins along the edges of the plane were bent up or down and welded to the neighboring plane, simplifying manufacturing. This structure is shown in the photo below.

Eighteen core planes were stacked to store two bytes along with two parity bits. Each plane had metal pins that were alternately bent up and down, and welded to tabs on the neighboring planes. The black and yellow wires connected the lines to the driver circuitry. The X lines are visible in this photo.

Systems using this core memory

I'll now take a detour to describe the systems that used this core array, and then discuss the circuitry that supported the array. IBM used several different core memory systems in the S/360 line. The core array in this article was used in the Model 40, Model 50, and the FAA's IBM 9020 air traffic control system.

IBM System/360 Model 40

The Model 40 was a popular midrange computer for scientific and commercial applications and was one of IBM's most profitable computers. It typically rented for about $9,000-$17,000 per month and brought IBM over a billion dollars in revenue by 1972. To achieve better performance than the low-end models, the Model 40 used a two-byte datapath; the core memory system was designed to fetch two bytes at a time rather than one.

The IBM S/360 Model 40. Photo source unknown.

The IBM S/360 Model 40 was a compact system (for the time), with the computer in one frame known as the main frame. This frame held the circuit cards that made up the CPU, along with the power supplies, microcode (stored on metalized mylar sheets read by transformers), and core memory storage. (In contrast, large 360 systems might have dozens of frames.) The 128 KB core memory unit was mounted in the front right of the Model 40's frame, behind the console. The Model 40 could support an additional 128 KB of core memory, but this required a second storage frame, a five-foot by two-foot cabinet weighing 610 pounds.

IBM System/360 Model 50

IBM S/360 Model 50. The console was attached to the main frame, about 5 feet deep. The storage frame and power frame are the black cabinets at the back. Photo from Pinterest.

The Model 50 was a powerful mid-range machine in the System/360 lineup, significantly faster than the Model 40. The Model 50 typically rented for about $18,000-$32,000 per month. The diagram below shows how the Model 50 consisted of three frames: the CPU frame (main frame) in front with the console, a power frame holding the power supplies, and the storage frame. In the photo above, the main frame is orange and about 2.5 feet wide by 5 feet deep. The power frame is the black cabinet behind the main frame, about 5 feet wide and 2 feet deep. The storage frame is the same size, on the left behind the women. The storage frame could hold up to 256 KB; by adding a second storage frame behind it, the Model 50 could be expanded to 512 KB.

This diagram shows the three frames that made up the basic S/360 Model 50. Source: Model 50 Maintenance Manual page 138.

The FAA's IBM 9020 multiprocessor system

The core memory array I examined was from an air traffic control system called the IBM 9020. In the mid-1960s, the FAA realized that computerization was necessary to handle increased air traffic. From the early 1970s to the 1990s, the FAA used the IBM 9020 to track flights and integrate radar data. The 9020 was a multiprocessor system designed for reliability, consisting of up to 12 mainframes connected together, driving dozens of air traffic displays (the classic round CRT displays). The system used modified Model 65 computers to process data and used modified Model 50 computers for I/O control (essentially expensive DMA controllers).4

FAA center in Jacksonville using multiple IBM mainframes. Three Model 65s are along the left wall, while three Model 50s are towards the back. The control panel in front of the Model 50s is not a computer but a system monitoring panel. From FAA: A historical perspective, chapter 4.

The complete core memory system

The stack of core planes isn't enough to implement a working memory; a lot of circuitry is required to generate the appropriate X and Y signals, amplify the sense line signals for reads, and drive the inhibit lines for writes. In this section, I explain how the stack of core memory planes was used as part of a full memory system.

Each X and Y line through the core plane required two transistor drivers, one to generate the current pulse for reading and one to generate the opposite current pulse for writing. Thus, with 128 X lines and 128 Y lines,5 a total of 512 drive transistors was required. These were provided by 16 "driver gate" cards, each with 32 drive transistors, plugged into each core stack.10 The photo below shows the 16 driver gate boards plugged into the core array.

Core memory array with transistor driver boards.

The photo below shows a closeup of one of the transistor "driver gate" cards, with a transistor and diode for each line.6 Cards like this with discrete transistors were unusual in the IBM System/360, which for the most part used SLT modules, hybrid modules somewhat like integrated circuits.

Each core array used 16 of these driver cards for the X and Y drive lines. Each card had 32 drive transistors, as well as diodes.

The core array and the drive transistors generated significant heat so the assembly was cooled by a fan mounted underneath. Plastic covers over the boards directed the airflow, as well as providing protection for the boards. The photo below shows the core memory mounted on a metal frame with the fan attached.

Core memory array with the fan mounted below and plastic covers installed. The core planes are at the back and the circuit cards are at the front. This assembly is about 2 feet tall.

The photo below shows the 128 KB unit, consisting of two core arrays (on the left), along with about 62 small circuit cards of supporting circuitry. This unit was rather bulky, almost three feet long and two feet high.8 Over half of these circuit cards were sense preamplifiers to read the weak signals from the core planes.7 Other cards decoded the address to select the right lines9, handled timing, or did other tasks. The slower Model 40 computer accessed one of the two arrays at a time, reading 16 bits (a half-word). In contrast, the Model 50 accessed both arrays in parallel, reading a full 32-bit word at once for higher performance.

128 KB core module assembly. Two core memory stacks are at the left, and the supporting circuitry is on the right. Fans (black) are at the bottom. Photo from IBM's 360 and Early 370 Systems.

The diagram below shows how two of these core memory units (i.e. four stacks of 18 planes) were installed in the Model 50's storage frame, providing 256 kilobytes of storage. This frame was about 5 feet by 2 feet and 6 feet tall and weighed 1150 pounds. The storage frame also held the optional "storage protect" feature, which protected memory blocks from access by other programs. Note that even though the core planes themselves were fairly compact, the entire storage frame was rather large. This diagram also illustrates why the cabinets were called frames: each was built from a frame of metal bars with side panels hung off the frame to enclose it.

The storage frame of the Model 50 held two 128-kilobyte core memory units (red and blue), along with other memory circuitry. Diagram based on the Parts Catalog.

Other IBM S/360 memory systems

I've described the core memory used in the Model 40 and Model 50, but high-end models used even larger core memory systems based on different core planes. The photo below shows a high-performance Model 85 system. The four cabinets in front are IBM 2365 Processor Storage; each one held 256 kilobytes of core memory and weighed over a ton. High-end systems could also use the 2385 Processor Storage holding 2 megabytes of memory in a sprawling 400 square foot unit that weighed almost 8 tons. The IBM 2361 Large Capacity Storage (LCS) also held 2 megabytes; it was slower but weighed just one ton. It used large 4-foot core planes that looked more like screen doors than typical core planes.1

IBM System/360 Model 85. The double-H cabinet in the middle is the processor. Each cabinet in the front left held 256 kilobytes of storage. Photo from IBM.

Conclusion

Computers in the early 1950s used memory technologies such as mercury delay lines and Williams tubes that were slow and held little data. Core memory was much superior, and it helped drive the rise of the computer era in the late 1950s and 1960s. As manufacturing technology improved, the price of core memory rapidly dropped, from several dollars per bit to a penny per bit. By 1970, IBM was producing over 20 billion cores per year.

However, even with its steady improvements, core memory was not able to survive the introduction of integrated circuits and semiconductor memory in the late 1960s. In 1968, IBM switched its development efforts from core memory to semiconductor memory. This led to the introduction in 1971 of the world's first commercial computer with semiconductor memory, the IBM S/370 Model 145. The capacity of integrated circuit memories grew exponentially as their price fell, as described by Moore's law. As a result, semiconductors took over the memory market from magnetic cores by the end of the 1970s. Now, thanks to DRAM memories, modern computers have memory measured in gigabytes rather than kilobytes and memory comes in a small DIMM module rather than a large cabinet.

I announce my latest blog posts on Twitter, so follow me @kenshirriff for future articles. I also have an RSS feed. I've written before about core memory in the IBM 1401 and core memory in the Apollo Guidance Computer. Thanks to Robert Garner for supplying the core array. Thanks to Gio Wiederhold and Marianne Siroker for research assistance.

More information

In the video below, Marc, Carl, and I wired up a different type of IBM core memory plane and manually read and wrote a bit. It was harder than we expected; the signal from a flipping core is very small and hard to distinguish from noise.

The book Memories That Shaped an Industry describes the history of core memory in detail, with a focus on IBM. The book IBM's 360 and early 370 systems thoroughly describes the history of the S/360. The memory system used in the System/360 Model 40 is explained in Model 40 Functional Units, page 141 onwards. See ibm360.info for documents on the FAA's IBM 9020 system.

Notes and references

  1. The weight of additional memory depended on the computer model. For the Model 40, adding 128K to get the "H" configuration required the addition of Frame 2, weighing 610 pounds. For the Model 50, the first 256K ("H" configuration) fit in Frame 2 (1150 pounds), while the next 256K ("I" configuration) required the addition of Frame 4, which weighed 1500 pounds. So, depending on the particular computer, 128K weighed 575, 610, or 750 pounds. For details, see the physical planning guide, which provides the dimensions and weight of the various S/360 components. 

  2. The book Memories That Shaped an Industry discusses how IBM's leadership in core memory development made the IBM System/360 possible. IBM's near-disaster in developing software for the S/360 led to the legendary book The Mythical Man-Month by Fred Brooks, who managed development of the S/360 hardware and operating system. 

  3. Each core plane used in the Model 40 and 50 computers had an extra 512 bits of "bump" storage. This extra storage held information on I/O operations (the "unit control word") without using the main storage. The computers also had "local storage", a separate small core storage used for registers. 

  4. The computers used in the IBM 9020 system were based on the S/360 Model 50 and Model 65, but had modifications to operate in a networked high-availability system. For instance, most console controls were disabled except during maintenance to avoid accidental button presses. The computers also included address translation so they could access multiple shared external storage units, allowing failover of storage. The Model 65's instruction set was extended with highly specialized instructions such as CVWL (convert weather lines) that converted weather data coordinates for the displays, as well as instructions for multiprocessing. (Details in FETOM.)

    The IBM 9020 network architecture, using Model 50 computers as I/O controllers connected to Model 65 computers, may seem pointlessly complex. It turns out that is the case. According to Brooks, IBM originally designed a reasonable FAA system with just Model 50 computers. However, the (somewhat arbitrary) design specification created by MITRE required separate processing and I/O tiers in the network, so IBM added the Model 65 computers even though they were unnecessary and made the system less reliable. 

  5. The core plane had 256 X lines and 130 Y lines, but the circuitry drove 128 X lines and 128 Y lines, so you might wonder why the numbers don't match. There are two factors here. First, a technique called "phase reversal"11 used each X line twice, in opposite directions, so the 256 physical X lines through the core plane only required 128 X drivers, reducing the amount of drive hardware required. Second, the 2 extra Y lines for bump storage were driven by separate circuitry.  
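
    The bookkeeping in this note can be restated as a quick sanity check (all counts from the text):

        x_drivers = 128
        x_physical = x_drivers * 2        # each driven X wire makes two passes through the plane
        y_driven = 128
        y_bump = 2                        # bump-storage Y lines, driven by separate circuitry
        assert x_physical == 256          # physical X lines
        assert y_driven + y_bump == 130   # physical Y lines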

  6. An X or Y line was connected to a read gate card at one end and a write gate card at the other end; these lines carried a high-current (360 mA) pulse for 400-700 nanoseconds to flip a core. The current path through a line was as follows: the +60V supply was connected through a terminating resistor to a "terminator gate", a transistor that controlled the timing; from there, the current passed through a diode on a gate card, through the cores, and through a transistor on the gate card at the other end. Finally, the current passed through a driver card to ground. (The gate card is the card full of transistors plugged directly into the array, while the driver card does part of the address decoding.) The memory system is explained in Model 40 Functional Units, page 141 onwards.

  7. One unusual feature of this core plane is that it had four separate sense/inhibit lines, each covering 1/4 of the plane. This reduced the length of each sense line and thus reduced the noise it picked up, but required four times as many amplifiers to read the sense lines. Since there were 18 planes (18 bits) in the core array, 72 sense pre-amplifiers were required for each of the two core arrays. The pre-amplifiers were differential amplifiers, amplifying the difference between the two sense line inputs. (The idea is that noise on both inputs will cancel out, yielding just the desired signal.) The outputs from all the pre-amplifiers were fed into 18 "final amplifiers", yielding the 18-bit output (2 bytes + 2 parity bits) from the array. The sense and inhibit lines were shared in this core plane, so there were also 72 inhibit lines. One circuit card implemented both the sense preamp and the inhibit driver for 4 lines, so the two stacks of core planes required 36 Sense Preamp and Z [i.e. inhibit] Driver cards.
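
    The component counts follow directly from these figures; here they are as a short Python sketch (all numbers from the text):

        planes_per_array = 18        # 2 bytes + 2 parity bits
        lines_per_plane = 4          # four sense/inhibit lines, each covering 1/4 of the plane
        arrays = 2

        preamps_per_array = planes_per_array * lines_per_plane   # 72 pre-amplifiers per array
        total_lines = preamps_per_array * arrays                 # 144 shared sense/inhibit lines
        cards = total_lines // 4     # one card handles the preamp + inhibit driver for 4 lines
        print(preamps_per_array, cards)   # 72, 36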

  8. The diagram below shows the dimensions of the 128 KB core memory unit, containing two sets of core planes. Each colored block is a core array of 18 planes, corresponding to the array shown at the beginning of this article.

    The 128 KB core memory unit contained two arrays and was almost three feet long.

  9. The Model 50 core unit with two core arrays held 128 kilobytes as 32K words of 32 bits plus parity. Addressing one of 32K words required 15 address bits, decoded as follows. For X, 4 bits selected one of 16 "gate decoder lines" and 3 bits selected one of 8 "drivers". These two selections were combined in the transistor matrix to select one of the 128 X drive lines. The Y address was decoded similarly, with 7 bits selecting one of 128 Y lines. One bit controlled "phase reversal", selecting the polarity of the Y drive line. Although the System/360 was byte-addressable, accessing a specific byte in a word was done by the processor, rather than by the memory system. Because the Model 40 read half-words at a time from memory, it used a slightly different decoding scheme. One address bit was used to select between the two core arrays in the unit, and only one array was accessed at a time.
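
    For concreteness, here is a minimal sketch of this decoding in Python. The field widths come from the note, but the bit positions within the address are my assumption; the hardware's actual field layout may differ:

        def decode(address):
            assert 0 <= address < 32 * 1024      # 32K words
            phase  = address & 1                 # 1 bit: Y drive-line polarity (phase reversal)
            y_line = (address >> 1) & 0x7F       # 7 bits: one of 128 Y lines
            x_drv  = (address >> 8) & 0x07       # 3 bits: one of 8 drivers
            x_gate = (address >> 11) & 0x0F      # 4 bits: one of 16 gate decoder lines
            x_line = x_gate * 8 + x_drv          # combined in the transistor matrix: one of 128 X lines
            return x_line, y_line, phase

        print(decode(0))       # (0, 0, 0)
        print(decode(32767))   # (127, 127, 1)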

  10. Each gate driver card had 32 transistors in a grid, with 8 inputs to the transistor emitters and 4 inputs to the transistor bases. By activating one base input and one emitter input, the corresponding transistor turned on, energizing the corresponding line. Thus, each card allowed 32 lines to be controlled by selecting one of 4 base inputs and one of 8 emitter inputs. Each card also had 32 diodes that provided the current into the appropriate line. If a transistor was activated, the diode on the card connected to the opposite end of the line sourced the current.
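
    The 4x8 selection matrix is easy to model; here is a sketch (the particular mapping from base/emitter inputs to line numbers is my assumption):

        def select_line(base_input, emitter_input):
            # One of 4 base inputs and one of 8 emitter inputs turn on exactly
            # one of the card's 32 transistors; assume row-major line numbering.
            assert 0 <= base_input < 4 and 0 <= emitter_input < 8
            return base_input * 8 + emitter_input

        # Every (base, emitter) pair selects a distinct one of the 32 lines.
        assert len({select_line(b, e) for b in range(4) for e in range(8)}) == 32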

  11. One subtlety of the coincident-current design is that if the wires pass through the core in opposite directions, the currents subtract instead of add. In the diagram below, the wires pass through the left core in the same direction so the currents coincide. But in the right core, the wires pass through in opposite directions, so the currents cancel out. This is important because neighboring cores are rotated 90° to prevent magnetic coupling. In order for currents to coincide, the direction of current must be reversed in every other line. To accomplish this reversal, lines through the core stack were wired alternating bottom-to-top versus top-to-bottom.

    If currents pass through a core in the same direction, they add. This is the principle behind "coincident-current" core memory. However, if currents pass through a core in opposite directions (as on the right), the currents cancel.

    The "phase reversal" technique used this cancellation to cut the number of X drive lines in half for this core memory plane. The trick was to run the X lines through half of the core plane normally, and then through the other half of the core plane "backward". When an X line and a Y line are activated, two cores will receive both X and Y currents. But only one of these cores will receive the currents in the same direction and will flip; for the other core, the currents will cancel out. On the other hand, by reversing the current through the Y line, the opposite cancellation will occur and the other core will be selected. Thus, phase reversal allows the system to support twice as many cores with essentially the same driver hardware, just switching the Y current direction as needed.

    On the left, the coincident currents select a core in segment A. By reversing the direction of the Y current, a core in segment B is selected instead. With this phase reversal technique, one wire went through two X rows. Diagram based on Model 40 Functional Units page 147.
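
    A toy model makes the selection rule concrete. This is a minimal sketch under the geometry described above, using normalized half-select currents; it is not a circuit-level model:

        HALF = 0.5   # each drive line carries a half-select current (normalized)

        def core_flips(x_current, y_current):
            # A core flips only when the two half-currents coincide (add);
            # currents in opposite directions cancel and the core stays put.
            return abs(x_current + y_current) >= 1.0

        # The X wire runs forward (+HALF) through segment A and backward
        # (-HALF) through segment B. The Y current direction picks the segment.
        for y in (+HALF, -HALF):
            print(f"Y current {y:+}: A flips={core_flips(+HALF, y)}, "
                  f"B flips={core_flips(-HALF, y)}")
        # Y +0.5 selects segment A; Y -0.5 selects segment B.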