The Texas Instruments TMX 1795: the (almost) first, forgotten microprocessor

The first 8-bit microprocessor, the TMX 1795 had the same architecture as the 8008 but was built months before the 8008. Never sold commercially, this Texas Instruments processor is now almost forgotten even though it had a huge impact on the computer industry. In this article, I present the surprising history of the TMX 1795 in detail, look at other early processors, and explain how the TMX 1795 almost became the first microprocessor. (Originally I thought the TMX 1795 was the first microprocessor, but it appears that the 4004 slightly beat it.)

The Texas Instruments TMX 1795 microprocessor. Courtesy of Computer History Museum.

The story starts with the Datapoint 2200[1], a "programmable terminal" sized to fit on a desktop. While originally sold as a terminal, the Datapoint 2200 was really a minicomputer that could be programmed in BASIC or PL/B. Some people consider the Datapoint 2200 the first personal computer as it came out years before systems such as the Apple II or even the Altair.

The Datapoint 2200 programmable terminal / computer. Photo by Ecksemmess CC BY-SA 3.0 via Wikimedia Commons.

The Datapoint 2200 had an 8-bit processor built out of dozens of TTL chips, which was the normal way of building computers at the time. The photo below shows the processor board. Keep in mind that there's no processor chip—the whole board is the processor, with a chip or two for each register, a few chips for the adder, a few chips to decode instructions, a few chips to increment the program counter, and so forth. [28] Nowadays, we think of MOS chips as high-performance and building a CPU out of TTL chips seems slow and backwards. However, in 1970, TTL logic was much faster than MOS. Even operating one bit at a time as a serial computer, the Datapoint 2200 performed considerably faster than the 8008 chip, unless it needed to wait for the slow serial memory.

The processor board from the Datapoint 2200. The 8008 was built to replace this board. Image courtesy of zuigadrummer.

While building the Datapoint 2200, its designers were looking for ways to make the processor board smaller and generate less heat. Datapoint met with Intel in December 1969, and what happened next depends on whether you listen to Intel or Datapoint. Intel's story is that Datapoint asked if Intel could build memory chips for the processor stack that had an integrated stack pointer register. Intel engineer Stan Mazor told Datapoint that Intel could not only do that, but could put the whole 2200 processor board on a chip.[2][3] Datapoint's story is that Datapoint founder Gus Roche and designer Jack Frassanito suggested to Intel's co-founder Robert Noyce that Intel build a single-chip CPU with Datapoint's design,[4] but Noyce initially rejected the idea, thinking that a CPU chip wouldn't have a significant market.

In any case, Intel ended up agreeing to build a CPU chip for Datapoint using the architecture of the Datapoint 2200.[5] Intel developed a functional specification for the chip by June 1970 and then put the project on hold for six months. During this time, there was a mention of the future 8008 chip in Electronic Design (below)—I suspect this is the first public mention of the 8008. You might expect there was a race to build the first microprocessor, so you may be surprised that both the 4004 and 8008 projects were put on hold for months. Meanwhile, Datapoint built a switching power supply for the 2200[6], which eliminated the heating concerns, and was planning to start producing the 2200 with the processor board of TTL chips. Thus, Datapoint wasn't particularly interested in the 8008 anymore.

First description of the Intel 8008 processor in print. Electronic Design, Oct 25 1970.

A Texas Instruments salesman learned that Intel was building a processor for Datapoint and asked if Texas Instruments could build them one too. Datapoint gave TI the specifications and told them to go ahead. Texas Instruments initially came up with a three-chip design, but switched to a single-chip CPU after Datapoint pointedly asked, "Can't you build it on one chip like Intel?" Texas Instruments started building this CPU for Datapoint around April 1970, and the chip became the TMX 1795.

There's a lot of debate about just how much information about Intel's design was given to Texas Instruments. The main TI engineer on the project, Gary Boone, says they received hints that Intel was doing better, but didn't improperly receive any proprietary information. According to Intel, though, Texas Instruments received Intel's detailed design documents through Datapoint. For instance, the TI processor copied an error in Intel's documentation, leaving the TI chip with broken interrupt handling.[7]

The TI chip was first mentioned in March 1971 in Businessweek magazine, in a short paragraph calling the chip a "milestone in LSI [Large-Scale Integration]" for jamming the CPU onto a single chip.[8] A few months later, the chip received a big media launch with an article and multi-page advertising spread in Electronics (below), complete with die photos of the TMX 1795.

Article on the TMX 1795 and two pages from the TI advertising section featuring the chip. Note the die photos of the TMX 1795. Electronics, June 7 1971.

The article, entitled "CPU chip turns terminal into stand-alone machine", described how the chip would make the Datapoint 2200 computer much more powerful. "The 212-by-224 mil chip turns the 2200 into a complete computer that doesn't have to be connected to a time-sharing system." The components of the chip are "similar to units previously available separately, but this is the first time that they've been combined monolithically", consolidated "into a single chip". The chip and 2K of memory would cost about $100. This "central processor on a chip" would make the new Datapoint 2200 "a powerful computer with features the original one couldn't offer."

That didn't happen. Datapoint tested the TMX 1795 chip and rejected it for four reasons. First, the chip and memory didn't tolerate voltage fluctuations of more than 50mV. Second, the TMX 1795 required a lot of support chips (although not as many as the 8008 would), reducing the benefit of a single-chip CPU. Third, Datapoint had solved the heat problem with a switching power supply.[6] Finally, Datapoint had just about completed the 2200 Version II, with a much faster parallel implementation of the CPU. The TMX 1795 (operating in parallel) was slightly faster than the original serial Datapoint 2200, but the 2200 Version II was much faster than the TMX 1795. (This illustrates the speed advantage of TTL chips over MOS at the time.)

Intel engineers provided another reason for the commercial failure of the TMX 1795: the chip was too big to manufacture cost-effectively. I created the diagram below to compare the TMX 1795, 4004, and 8008 at the same scale. The TMX 1795 is larger than the 4004 and 8008 combined! One reason is that Intel had silicon-gate technology, which in effect allowed three layers of circuitry instead of two. But even taking that into account, Texas Instruments didn't seem to put much effort into the layout, which Mazor calls "pretty sloppy techniques" and "throwing some blocks together".[9] While the 4004 and especially the 8008 are densely packed, the TMX 1795 chip has copious unused and wasted space.

Comparative die sizes of the TMX 1795, 4004 and 8008 microprocessors. Note that the 4004 and 8008 are nearly the same size, while the TMX 1795 is more than twice as large. The top third of the TMX 1795 is instruction decoding and control logic, the middle is the 8-bit ALU, and the bottom is storage (stack and registers). TMX 1795 die photo courtesy of Computer History Museum.

As well as rejecting the TMX 1795, Datapoint also decided not to use the 8008 and gave up their exclusive rights to the chip. Intel, of course, commercialized the 8008, announcing it in April 1972. Two years later, Intel released the 8080, a microprocessor based on the 8008 but with many improvements. (Some people claim that the 8080 incorporates improvements suggested by Datapoint, but a close examination shows that later Datapoint architectures and the 8080 went in totally different directions.) The 8080 was followed by the x86 architecture, which was designed to extend the 8080. Thus, if you're using an x86 computer now, you're using a computer based on the Datapoint 2200 architecture.[10]

Some sources dismiss the TMX 1795 as a chip that never really worked. However, the video below shows Gary Boone demonstrating the TMX 1795 in 1996. A TMX 1795 board was installed in a laptop (probably a TI LT286) for the purpose of the demo. It runs a simple text editor, a sort program, a simple budget spreadsheet, and Fibonacci numbers. The demo isn't particularly thrilling, but it shows that the TMX 1795 was a functional chip.

Considering the size of Intel and the microprocessor market, Datapoint's decision to give up exclusive rights to the 8008 seems like a huge blunder, possibly "one of the worst business decisions in history". However, it's unlikely that Datapoint would have sold 8008 chips, given that they were a computer company, not a chip company like Intel.[11] In addition, Intel had plans to produce microprocessors even without the rights to the 4004 or 8008.[12]

After rejecting the TMX 1795 (and the 8008), Datapoint continued to build processors out of TTL chips until the early 1980s. While these processors were faster and more powerful than microprocessors for a surprisingly long time, eventually Moore's law led to processors such as the 80286, which outperformed Datapoint at a lower cost. Under heavy competition from PCs, Datapoint's stock crashed in 1982, followed by a hostile takeover in 1984. The company limped along before going bankrupt in 2000. Given that Datapoint designed the architecture used in the 8008, it's ironic that Datapoint was killed by x86 microprocessors, direct descendants of the 8008.

The TMX 1795 microprocessor installed in a circuit board. This board was used in a laptop for the 1996 demo.

Unlike Intel, who commercialized the 8008 chip, Texas Instruments abandoned the TMX 1795 after Datapoint's rejection. The chip would have disappeared without a trace, except for one thing, which had a huge impact on the computer industry.

The "Dallas Legal Firm" and "TI v. Everybody"[13]

Texas Instruments figured out early on that patent litigation and licensing fees could be very profitable. After (co-)inventing the integrated circuit and receiving patents on it, Texas Instruments engaged in bitter patent battles, earning the nickname "the Dallas legal firm" for their "unethical and unprofessional legal tactics".[13] Texas Instruments continued their legal practices with the TMX 1795, receiving multiple patents on it, issued between 1973 and 1985.[14][15]

Needless to say, Intel was not happy that Texas Instruments patented the TMX 1795, since building a single-chip processor for Datapoint was Intel's idea.[16] Intel was even unhappier that Texas Instruments had used parts of Intel's specification when designing and patenting the TMX 1795.[7][17] Intel had wanted to patent the 4004,[18] but their patent attorney told them that it wasn't worth it, since the idea of putting a computer on a chip was fairly obvious. Likewise, Datapoint had considered patenting the single-chip microprocessor but was told by their patent attorney that there was nothing patentable in the idea.[3]

In order to extract substantial licensing fees, Texas Instruments sued multiple companies using their microprocessor and microcontroller patents (including the TMX 1795 patent) in a case that Gordon Bell called "TI v. Everybody".[13] Dell decided to fight back in a "bet the company" lawsuit.[14] The lawsuit dragged on for years and was about to go to trial when the case suddenly turned against Texas Instruments.

Lee Boysel of Four-Phase Systems had built a 24-bit MOS-based minicomputer in 1970, as will be discussed in more detail below. The computer had a 9-chip CPU, but in an amazing hack, Boysel took one of the three 8-bit arithmetic/logic chips and was able to build a working microcomputer from it. Since this chip predated the TMX 1795 by a year, it torpedoed Texas Instruments' case, and the case never went to trial. As a result, many people consider the Four-Phase AL1 to be the first microprocessor. However, as I'll explain below, the demo wasn't quite what most people think.

The Four-Phase AL1 running as a single-chip processor in a patent litigation demo. From Boysel's EECS presentation.

Is the TMX 1795 really the first microprocessor?

There's a fair bit of argument about what counts as the first microprocessor. Several candidates for first microprocessor were introduced in a short period of time between 1968 and 1971. These are all interesting chips, but most of them have been forgotten. In this section, I'll discuss the various candidates, but first I'll look at whether it even makes sense to consider the microprocessor an invention.

Some hardware background will help the following discussion. The transistors you're probably most familiar with are bipolar transistors—they are fast, but bipolar integrated circuits can't contain large numbers of transistors. The TTL chips used in the Datapoint 2200 and other systems are built from bipolar transistors. A later technology produced MOS transistors, which are slower than bipolar but can now be squeezed onto a chip by the millions or billions. The final term is LSI, or Large-Scale Integration, referring to an integrated circuit containing a large number of components: 100 gates or more. The introduction of MOS/LSI is what made it possible to build a processor with a few chips or a single chip, rather than a board full of chips.

The inevitability of microprocessors

One perspective is that the microprocessor isn't really an invention, but rather something that everyone knew would happen, and it was just a matter of waiting for the technology and market to be correct. This view is convincingly presented in Schaller's thesis,[19] which has some interesting quotes:
The idea of putting the computer on a chip was a fairly obvious thing to do. People had been talking about it in the literature for some time.—Ted Hoff, 4004 designer
At the time in the early 1970s, late 1960s, the industry was ripe for the invention of the microprocessor.—Hal Feeney, 8008 designer
The question of ‘who invented the microprocessor?’ is, in fact, a meaningless one in any non-legal sense.—Microprocessor Report

I largely agree with this perspective. It was obvious in the late 1960s that a CPU would eventually be put on a chip, and it was just a matter of time for the density of MOS chips to improve to the point that it was practical. In addition, in the 1960s, MOS chips were slow, expensive, and unreliable[11]—a computer built out of a bunch of bipolar chips was obviously better, and this included everything from the IBM 360 mainframe to the PDP-11 minicomputer to the desktop Datapoint 2200. At first, a MOS-based computer only made sense for low-performance applications (calculators, terminals), or when high density was required (aerospace, calculators).

To summarize this view, the microprocessor wasn't anything to specifically invent, but just something that happened when MOS technology improvements and a marketing need made it worthwhile to build a single chip processor.

Defining "microprocessor"

Picking the first microprocessor is largely a linguistic exercise in how you define "microprocessor". It also depends on how you define "first": this could be first design, first manufactured chips, first sales, or first patent. But I think for reasonable definitions, the TMX 1795 is first.

There's no official definition of a microprocessor. Various sources define a microprocessor as a CPU on a chip, or an arithmetic-logic unit (ALU) on a chip, or on a few chips. One interesting perspective is that "microprocessor" is basically a marketing term driven by the need of companies like Intel and Texas Instruments to give a label to their new products.[11]

In any case, I consider a microprocessor to be a CPU on a single chip, including the ALU, control, and registers. Storage and I/O are generally outside the chip. There will generally be additional support and interface chips such as buffers, latches, and clock generation. I also consider it important that a microprocessor be programmable as a general-purpose computer. This, I think, is a reasonable definition of a microprocessor.

One architecture that I don't consider a microprocessor is a microcoded system, where the control unit is separate and provides micro-instructions to control the ALU and the rest of the system. In this system, the microcode can be provided by a ROM and a latch steps through the micro-instructions. Since the ALU doesn't need to do instruction decoding, it can be a much simpler chip than a full-blown CPU. I don't think it's fair to call it a microprocessor.
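
To make this concrete, here's a minimal sketch of the ROM-plus-latch scheme just described, with a hypothetical microcode encoding (not any real chip's): each ROM word supplies an ALU control signal plus the address of the next microinstruction, so nothing resembling instruction decoding happens in the "processor" itself.

```python
# Hypothetical microcode ROM: each word holds an ALU control signal
# and the address of the next microinstruction. A latch (the address
# register) steps through the ROM; the ALU chip just obeys.
MICROCODE_ROM = [
    ("load_a", 1),   # address 0
    ("add_b", 2),    # address 1
    ("store", 0),    # address 2: loop back to the start
]

latch = 0  # microinstruction address latch
for _ in range(6):  # run six microcycles
    control_signal, next_address = MICROCODE_ROM[latch]
    print(f"microcycle: drive ALU with '{control_signal}'")
    latch = next_address
```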

Timeline of early microprocessors

There are several processors that are frequently argued to be the first microprocessor, and they were created in a span of just a few years. I created the timeline below to show when they were developed. In the remainder of this article, I describe the different processors in detail.

Timeline of early MOS/LSI processors.

Four-Phase AL1

If one person could be considered the father of MOS/LSI processors, it would be Lee Boysel. While working at Fairchild, he came up with the idea of a MOS-based computer and methodically designed and built the necessary cutting-edge chips (ROM in 1966, ALU in 1967, DRAM in 1968). Along the way he published several influential articles on MOS chips, as well as a 1967 "manifesto" explaining how a computer comparable to the IBM 360 could be built from MOS.

Four-Phase AL4 arithmetic-logic chip (variant of AL1)

Boysel left Fairchild and started Four-Phase Systems in October 1968 to build his MOS-based system. In 1970, he demoed the System/IV, a powerful 24-bit computer. The processor used 9 MOS chips: three 8-bit AL1 arithmetic / logic chips, three microcode ROMs, and three RL random logic chips. This computer sold very well and Four-Phase became a Fortune 1000 company before being acquired by Motorola in 1981.

Die photo of Four-Phase AL1 arithmetic-logic chip. Courtesy of Computer History Museum.

As described earlier, Boysel used an AL1 chip as a processor in a courtroom demonstration system in 1995 to show prior art against TI's patents. Given this demonstration, why don't I consider the AL1 to be the first microprocessor? It used an AL1 chip as the processor, along with ROM, RAM, I/O, and some address latches, so it seems like a single-chip CPU. But I've investigated this demonstration system closely, and while it was a brilliant hack, there's also some trickery. The ROM and its associated latch are actually set up as a microcode controller, providing 24 control lines to the rest of the system. The ROM controls memory read/write, selects an ALU operation, and provides the address of the next microcode instruction (there's no program counter). After close examination, it's clear that the AL1 chip is acting as an arithmetic/logic chip (thus the AL1 name), and not as a CPU.

There are a few other things that show the AL1 wasn't working as a single-chip computer. The die photo published as part of the trial has the components of the AL1 chip labeled, including "Instruction Register 23 bits". However, that label is entirely fictional—if you study the die photo closely, there's no instruction register or 23 bits there, just vias where the ground lines pass under the clock lines. I can only conclude that this label was intended to trick people at the trial. In addition, the AL1 block diagram used at the trial has a few subtle changes from the originally-published diagram, removing the program counter and adding various interconnections. I examined the code (microcode) used for the trial, and it consists of super-bizarre microcode instructions that bear no resemblance to the AL1's original instruction set.

Detail of AL1 die photo showing fictional 'Instruction Register 23 bits' label.

While the demo was brilliant and wildly successful at derailing the Texas Instruments lawsuits, I don't see it as showing the AL1 was a single-chip microprocessor. It showed that combined with a microcode controller, the AL1 could be used as a barely-functioning processor. In addition, you could probably use a similar approach to build a processor out of an earlier ALU chip such as the 74181 or Fairchild 3800, and nobody is arguing that those are microprocessors.

Looking at the dates, it appears that Viatron (described below) shipped their MOS/LSI computer a bit before Four-Phase, so I can't call Four-Phase the first MOS/LSI computer. However, Four-Phase did produce the first computer with semiconductor memory (instead of magnetic core memory), and thus the first all-semiconductor computer.

Viatron

Viatron is another interesting but mostly forgotten company. It began as a hugely-publicized startup founded in November 1967. About a year later, they announced System 21, a 16-bit minicomputer with smart terminals, tape drives, and a printer, built from custom MOS chips. The plan was volume: by building a large number of systems, they hoped to produce the chips inexpensively and lease the systems at amazingly low prices—computer rental for $99 a month.[20] Unfortunately, Viatron ran into poor chip yields, delays, and price increases. As a result, the company went spectacularly bankrupt in March 1971.

The Viatron System 21: color display, terminal keyboard, 'robot' printer, and computer. From Viatron brochure, via bitsavers.org.

Viatron is literally the originator of the "microprocessor"—they were the first to use the word, in their October 1968 announcement of the 2101 microprocessor. However, this microprocessor wasn't a chip—it was an entire smart terminal, leasing for the incredibly low price of $20 a month. Viatron used the term microprocessor to describe the whole desktop unit, complete with keyboard and tape drives. Inside the microprocessor cabinet were a bunch of boards—the processor itself consisted of 18 custom MOS chips on 3 boards, with more boards of custom MOS and CMOS chips for the keyboard interface, tape drive, memory, and video display.

The 3-board processor inside the 2101 was specialized for its terminal role. It read and wrote multiple I/O control lines, moved data between I/O devices and memory, updated the display, and provided serial input and output.[20] The processor was very limited, not even providing arithmetic. Nonetheless, I think the Viatron 2101 "microprocessor" can be considered the first (multichip) MOS/LSI processor, shipping before the Four-Phase System/IV.

CPU board #2 of three from the Viatron System 21 terminal. Top row holds two RAR register chips and six ROM chips. Bottom chips are the IBR multiplexer, flag chip, and ROM multiplexer. Photo courtesy of UMMR.

Viatron also built an advanced general-purpose 16-bit computer, the 62-pound 2140 minicomputer, which leased for $99 a month and came with a Fortran compiler. It had 4K 16-bit words of core memory and two 16-bit arithmetic units. The microcoded processor had an extensive instruction set, including multiply and divide operations, and supported 48-bit arithmetic. Coming on the market slightly before the Four-Phase computer, the Viatron 2140 appears to be the first MOS/LSI general-purpose computer. Unfortunately, sales were poor and the 2140 project ended in 1973.

MP944 / F-14 CADC

The Central Air Data Computer was a flight control system for the F-14 fighter, using the MP944 MOS/LSI chipset developed between 1968 and 1970. This computer processed information from sensors and generated outputs for instrumentation and aircraft control. The main operation it performed was computing polynomial functions on the inputs. This chipset was designed by Ray Holt, who argues on his website (firstmicroprocessor.com) that this 20-bit serial computer should be considered the first microprocessor.

Block diagram of the F14A CADC computer. Module 1 performs multiplication, module 2 performs division, and module 3 performs special logic functions. From Architecture Of a Microprocessor.

The architecture of this computer is pretty unusual; it consists of three functional modules: a multiplier, a divider, and "special logic". Each functional unit has a microcode ROM (including an address register) that provides a 20-bit microinstruction, a data steering unit (SL) that selects between 13 data inputs and performs addition, the arithmetic chip (multiply (PMU), divide (PDU) or special logic (SLF)), and a small RAM chip for storage (RAS). Each data line transfers a 20-bit fixed-point value, shifted serially one bit at a time. The main purpose of the SLF (special logic function) chip is to clamp a value between upper and lower bounds. It also converts Gray code to binary[21] and performs other logic functions.[22]

I don't consider this a microprocessor since the control, arithmetic, and storage are split across four separate chips in each functional unit.[23] Not only is there no CPU chip, there's not even a general-purpose ALU chip. Computer architecture expert David Patterson says, "No way Holt's computer is a microprocessor, using the word as we mean it today."[24] Even if you define a microprocessor as including a multi-chip processor, Viatron beat the CADC by a few months. While the CADC processor is very interesting, I don't see any way that it can be considered the first microprocessor.

Intel 4004

The well-known Intel 4004 is commonly considered the first microprocessor, but I believe the TMX 1795 beat it. I won't go into the details of how Busicom contracted with Intel to have the 4004 built for a calculator, since the story is well-known.[25] I did a lot of research into the dates of the 4004 to determine which was first: the 4004 or the TMX 1795. According to the 4004 oral history, the first successful 4004 chip was working at the end of February 1971 and shipped to Busicom in March. TI wrote a draft announcement with photos of the TMX 1795 on February 24, 1971, and the chip was written up in Businessweek in March. The TMX 1795 was delivered to Datapoint in the summer, and TI applied for a patent on August 31. The 4004 wasn't announced until November 15.

To summarize, the dates are very close, but it appears that the TMX 1795 chip was built first (assuming the chip was working for the Feb 24 writeup) and announced first, while the 4004 was delivered to customers first. On the other hand, Federico Faggin claims that the 4004 was a month or two before the TMX 1795.[17] However, the TMX 1795 was patented; I assume that someone would have mentioned it in all the patent litigation if the 4004 really beat the TMX 1795 (rather than resorting to a demo built from the Four-Phase AL1). Based on the evidence, I conclude that the TMX 1795 was slightly before the 4004 as the first microprocessor built, while the 4004 is clearly the first microprocessor sold commercially. Texas Instruments claims on their website: "1971: Single-chip microprocessor invented", and I agree with this claim.

Intel 8008

Many people think of the Intel 8008 as the successor to the 4004, but the two chips are almost entirely independent and were developed roughly in parallel. In fact, some of the engineers on the 4004 worried that the 8008 would come out first because the 8008 project consisted of one chip, compared to the four chips of the 4004 project. The 8008 was originally called the 1201 under Intel's naming scheme because it was the first custom MOS chip Intel was developing. The 4004 would have been the 1202, except Faggin, a key engineer on the project, convinced management that 4004 was a much better name. The 1201 was renamed the 8008 before release to fit the new naming pattern.

According to my research, the 8008 may be the first microprocessor described in print. I found a reference to it (although without the 8008 name) in a four-paragraph article in Electronic Design on Oct 25, 1970, discussing Intel's chip under development for the Datapoint 2200. The article briefly describes the chip's instruction set, architecture, and performance. It said the processor would be used in the 2200 "smart terminal" (which of course didn't happen), and said the chip was scheduled for January 1971 delivery (it slipped, and the chip wasn't officially announced until April 1972).

Gilbert Hyatt's microcontroller patent

The story of how Gilbert Hyatt obtained a broad patent covering the microcontroller in 1990 and lost it a few years later is complex, but I will try to summarize it here. The story starts with the founding of Micro-Computer Incorporated in 1968. Hyatt built a 16-bit serial computer out of TTL chips and sold it as a numerical control computer. He had plans to build this processor as a single chip, but before that could happen, the company went out of business in 1971. Mr. Hyatt claims that investors Noyce and Moore (of Intel fame) cut off funding because "their motive was to sell the company and take the technology."

The Nu-troller IV CNC machine using Gilbert Hyatt's 16-bit processor built from TTL chips. Photo from Numerical Control Society Proceedings, 1971.

In 1990, seemingly out of nowhere, Gilbert Hyatt received a very general patent (4942516) covering a computer with ROM and storage on a single chip. Hyatt had filed a patent on his computer in 1969, and due to multiple continuations, he didn't receive the patent until 1990.[15] This patent caused considerable turmoil in the computer industry since pretty much every microcontroller was covered by this patent. Hyatt ended up receiving substantial licensing fees until Texas Instruments challenged the patent a few years later and the patent office canceled Hyatt's key patent claims.[26] In any case, Gilbert Hyatt's microprocessor was never built (except in TTL form), there was no design for it, and the patent didn't provide any information on how to put the computer on a chip. Thus, while this computer built from TTL chips is interesting, it never became a microprocessor.

TMS 0100 calculator-on-a-chip / microcontroller

Texas Instruments created the TMS 1802NC calculator-on-a-chip in 1971; this was the first chip in the TMS 0100 series.[27] This chip included program ROM, storage, control logic and an ALU that performed arithmetic on 11-digit decimal numbers under the control of 11-bit opcodes.

The TMS 1802 calculator chip, first chip in the TMS 0100 series. Photo courtesy of datamath.org.

While the TMS 0100 series was usually called a calculator-on-a-chip, it was also intended for microcontroller tasks. The patent describes "Programming of the calculator system for non-calculator functions", including digital volt meter, tax-fare meter, scale, cash register operations, a controller, arithmetic teaching unit, clock, and other applications. As the first "computer-on-a-chip", the TMS 0100 gave Texas Instruments several important microcontroller patents, which they used in patent litigation (including the Dell case described earlier).[14] (The key difference between a microcontroller and a microprocessor is that the microcontroller includes the storage and program ROM, while the microprocessor has them externally.)

The TMX 1795 (first microprocessor) and TMS 0100 (first microcontroller) were both developed by Gary Boone and team (Mike Cochran, Jerry Vandierendonck, and others) at Texas Instruments almost simultaneously, which is a remarkable accomplishment. The TMS1802NC / TMS 0100 was announced September 17, 1971.

In 1974, Texas Instruments released the successor to the TMS 0100 series, the TMS 1000 series, and marketed it as a microcontroller. Externally, the TMS 1000 series had I/O similar to the TMS 0100 series, but internally it was entirely different. The 11-bit opcodes of the TMS 0100 were replaced by 8-bit opcodes and the 11-digit decimal storage was replaced by 4-bit binary storage. Some sources call the TMS 1000 series the first microcontroller or first microprocessor. This is entirely wrong and based on confusion between the two series. Confusing the TMS 0100 and TMS 1000 is like confusing the 8008 and 8080: the latter is a related, but entirely new chip.

Conclusions

Because the TMX 1795 wasn't commercially successful, the chip is almost forgotten, even though it has an important historical role. I've uncovered some history about this chip and taken a detailed technical look at other chips that are sometimes considered the first microprocessor. The "first microprocessor" title depends on how exactly you define a microprocessor, but the TMX 1795 is first under a reasonable definition—a CPU-on-a-chip. It's interesting, though, how multiple MOS/LSI processor chips were built in a very short span once technology permitted, and how most of them are now almost entirely forgotten. In a future article, I'll look at the implementation and circuitry of the TMX 1795 in detail.

Thanks to Austin Roche for detailed information on Datapoint. Thanks to K. Kroslowitz of the Computer History Museum for obtaining TMX 1795 photos for me; the chip is so obscure, there were no photos of it on the internet up until now.

Notes and references

[1] The Datapoint Corporation was founded in 1968 as CTC (Computer Terminal Corporation). CTC later changed its name to Datapoint, as the name of its product was much better known than the company name itself. For simplicity, I'll use Datapoint instead of CTC to refer to the company in this article.

[2] The Computer History Museum's Oral History Panel on the Development and Promotion of the Intel 8008 Microprocessor discusses the history of the 8008 in great detail. The story of the initial idea to build a single chip for Datapoint is on page 2. Texas Instruments' chip development is on page 3-4. The use of little-endian format is discussed on page 5. TI's chip is discussed on page 6. Automated design of TI's chip is on page 25.

[3] The Computer History Museum's Oral History of Victor (Vic) Poor provides a lot of history of Datapoint. Page 34 describes Stan Mazor suggesting that Intel put Datapoint's processor on a single chip. Page 43 describes the TI chip and its noise issues. Page 46 explains how Datapoint's patent attorney told them there was nothing patentable about the single-chip microprocessor.

[4] Much of the information on Datapoint comes from the book Datapoint: The Lost Story of the Texans Who Invented the Personal Computer Revolution. The story of Datapoint suggesting a single-chip CPU to Noyce is on pages 70-72.

[5] The 8008 processor was originally given the number 1201 under Intel's numbering scheme. The first digit indicated the type of circuitry: 1 for p-MOS. The second digit indicated the type of chip: 2 for random logic. The last two digits were a serial number. For some reason, the 4004 was numbered after the 8008 and would have been the 1202. Fortunately, its developers argued that 4004 would be a better name for marketing reasons. The 1201 was later renamed the 8008 to fit this pattern. Thus, the 8008 is often thought of as a successor to the 4004, even though the chips were developed in parallel and have totally different architectures.

[6] A switching power supply is much more efficient than the less complex linear power supplies commonly used at the time, so it generates much less heat. The Datapoint 2200 used a push-pull topology switching power supply. Steve Jobs called the Apple II's power supply "revolutionary", saying "Every computer now uses switching power supplies, and they all rip off Rod Holt's design." Note that the Datapoint 2200 with its switching power supply came out 6 years before the Apple II. I've written a lot more about the history of switching power supplies here. (By the way, don't confuse Ray Holt of the CADC with Rod Holt of Apple.)

[7] According to Ted Hoff[18], Intel had a flaw in the original interrupt handling specification for the 8008 and TI copied that error in the TMX 1795, demonstrating that TI was using Intel specifications. In particular, when the 8008 processor is interrupted, a RESTART instruction can be forced onto the bus, redirecting execution to the interrupt handler. The stack pointer must be updated by the RESTART instruction to save the return address, but Intel didn't include that in the initial specification. (The RESTART instruction is not part of the original Datapoint architecture.)

I've verified from the patent that the RESTART logic in the TMX 1795 doesn't update the stack pointer, so interrupt handling is broken and there's no way to return from an interrupt. (The interrupt handling section of the TMX 1795 patent is kind of a mess. It discusses a "CONTINUE" instruction that doesn't exist.) According to Ted Hoff, this demonstrates that Texas Instruments was using Intel's proprietary specification without entirely understanding it.

[8] The text of the TMX 1795 announcement in Businessweek, March 27 1971, p52:
"Computer Terminal Corp., of San Antonio, Tex., has designed a remote cathode-ray computer terminal no bigger than a typewriter that also functions as a powerful minicomputer. In what must rank as a milestone in LSI, Texas Instruments has managed to jam this terminal's entire central processing unit- the equivalent of 3,100 MOS transistors-on a single custom chip roughly 2 in. square."

[9] In the Intel 8080 Oral History, the layout of the TMX 1795 is criticized on page 35.

[10] One enduring legacy of the Datapoint 2200 is the little-endian storage used by Intel x86 processors, which is backwards compared to most systems. Because the Datapoint 2200 had a serial processor, it accessed bits one at a time. For arithmetic, it needed to start with the lowest bit, in order to handle carries (the same as long addition starts at the right). As a consequence of this, Datapoint 2200 instructions had the low-order byte before the high-order byte. There's no need for a processor accessing bits in parallel to be little endian: processors such as the 6800 and 8051 use the more natural big-endian format. But all the microprocessors descended from the 8008 (8080, Z80, x86) kept the little-endian format used by Datapoint. (See also 8008 Oral History, page 5.)
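
To see why serial, carry-first arithmetic pushes you toward low-order-first storage, here's a minimal sketch of a bit-serial adder (an illustration of the principle, not Datapoint's actual circuit): the carry out of each bit position feeds the next, so the machine must see the least-significant bit first.

```python
# Bit-serial addition: bits arrive least-significant first, one per
# step, and the carry ripples forward -- which is why the low-order
# byte naturally comes first in a serial machine's memory layout.
def serial_add(a_bits, b_bits):
    carry, result = 0, []
    for a, b in zip(a_bits, b_bits):         # LSB first
        result.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))  # carry into the next step
    return result + [carry]                  # result, also LSB first

# 6 (011 LSB-first) + 3 (110 LSB-first) = 9 (1001 LSB-first)
print(serial_add([0, 1, 1], [1, 1, 0]))      # [1, 0, 0, 1]
```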

[11] The perspective that Four-Phase and Intel treated the microprocessor differently because Four-Phase was a computer manufacturer and Intel a chip manufacturer is discussed at length in When is a microprocessor not a microprocessor? in Exposing Electronics. This also goes into the history of Boysel and Four-Phase. It contains the interesting remark that the Texas Instruments litigation turned an old integrated circuit (the Four-Phase AL1) into a new microprocessor. Related discussion is in the book To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS Technology.

[12] While designing the 4004, Intel had a little-known backup plan in case the 4004 turned out to be too complex to build. This backup plan would also allow Intel to sell processors even though Busicom had exclusive rights to the 4004. (The 4004 was built under contract to calculator manufacturer Busicom, who had exclusive rights to the 4004 (which they later gave up). Federico Faggin explains (Oral History) that while Busicom had exclusive rights to use the 4004, they didn't own the intellectual property, so Intel was free to build similar processors.) This backup plan was the simpler 4005 chip. While the 4004 had 16 registers and an on-chip stack, the 4005 just had the program counter, a memory address register, and an accumulator, using external RAM for registers. When the 4004 chip succeeded, Intel didn't need the 4005 and licensed it to a Canadian company, MicroSystems International, which released the chip as the MF7114 in the second half of 1972. Sales were poor and the MF7114 was abandoned in 1973, so the chip is almost unknown today. The history of the MF7114 is described in detail in The MIL MF7114 Microprocessor.

[13] The description "TI versus Everybody trial" is from The Evolution to the Computer History Museum by Gordon Bell, p28. Texas Instruments was referred to as "The Dallas Legal Firm" by the CEO of Cypress Semiconductor, according to History of Semiconductor Engineering, pp. 194-195.

[14] Texas Instruments received several broad patents on the TMX 1795. 3,757,306: "Computing Systems CPU" covers a CPU on a single chip with external memory. 4,503,511: "Computing system with multifunctional arithmetic logic unit in single integrated circuit" covers an ALU, registers, and logic on a chip. 4,225,934: "Multifunctional arithmetic and logic unit in semiconductor integrated circuit" describes an ALU on a single chip with a parallel bus.

The Texas Instruments v. Dell litigation featured multiple patents. The TMX 1795 patent in the litigation was 4,503,511: "Computing system with multifunctional arithmetic logic unit in single integrated circuit"; the other TMX 1795 patents were not part of the litigation. Several were TMS 0100 calculator/microcontroller patents: 4,326,265: "Variable function programmed calculator", 4,471,460: "Variable function programmed system", 4,471,461: "Variable function programmed system", 4,485,455: "Single-chip semiconductor unit and key input for variable function programmed system". Finally there were some miscellaneous patents: 3,720,920: "Open-ended computer with selectable I/O control", 4,175,284: "Multi-mode process control computer with bit processing", RE31,864: "Self-test feature for appliance or electronic systems operated by microprocessor".

The broader lawsuit Texas Instruments v. Daewoo, et al was against computer manufacturers Cordata (formerly Corona Data Systems), Daewoo, and Samsung. It went on from 1990 to 1993, and ended up with the companies needing to license the patents. The Dell lawsuit, Texas Instruments v. Dell, also went from 1990 to 1993 but ended in a settlement favorable to Dell after Boysel's demonstration of the AL1 chip acting as a single-chip CPU in 1992.

[15] It may seem strange that someone can get a patent a decade or two after their invention. This is accomplished through a "continuation", which lets you file updated patents with additional claims. This process can be dragged out for decades, resulting in a submarine patent.

Patents used to be good for 17 years from the date they were granted, no matter how delayed. This delay can make a patent much more valuable; there are a lot more companies to sue over a microprocessor patent in 1985 than in 1971, for instance. Plus, if you have a similar non-delayed patent too, it's like having a free extension on the patent. US patents are now valid for 20 years from filing, eliminating submarine patents (except for those still in the system).

[16] Ted Hoff's article Impact of LSI on future minicomputers, IEEE International Convention Digest, Mar. 1970, discusses the difficulty of building LSI parts that can be used in large (and thus cost-effective) volumes. He suggests that since a MOS chip can hold 1000 to 6000 devices, a standardized CPU could be built on a single LSI chip and sold for $10 to $20.

[17] The 4004 Oral History has information on the 4004 timeline. Federico Faggin says that the TI chip was a month or two after the 4004 (page 32). Page 33 discusses the interrupt problem on the TMX 1795.

[18] Interview with Marcian (Ted) Hoff (archived) provides a lot of background on the development of the 4004. It describes how by October 1969 they were committed to building the 4004 as a computer on a chip. The first silicon for the 4004 was in January 1971, and by February 1971 the chip was working. In May 1971, Busicom ran into financial difficulties and negotiated a lower price for the 4004 in exchange for giving up exclusive rights to the chip. He describes how at the Fall Joint Computer Conference, many customers would argue that the 4004 wasn't a computer but just a bit slice; after looking at the datasheet, they realized that it was a computer. Ted Hoff also describes the origins of the 8008, saying that he and Stan Mazor proposed the single-chip processor to Datapoint, much to Vic Poor's surprise, but later Vic Poor claimed that he had planned a single-chip processor all along.

[19] The thesis Technological Innovation in the Semiconductor Industry by Robert R. Schaller, 2004, has several relevant chapters. Chapter 6 analyzes the history of the integrated circuit in detail. Chapter 7, The Invention of the Microprocessor, Revisited, provided a lot of background for this article. Chapter 8 is a detailed analysis of Moore's Law.

[20] By carefully studying the Viatron terminal schematics, I uncovered details about the multi-chip processor in the Viatron terminal. The processor handled 8-bit characters and was programmed in 12-bit microcode, 512 words stored in ROM chips. It had three data registers (IBR, TEMP, and AUX), and two microcode ROM address registers (RAR and RAAR). Arithmetic operations appear to be entirely lacking from the processor. The memory was built from shift register memory chips and was used for the display. The Viatron price list is in the Viatron System 21 Brochure.

[21] The Gray code is a way of encoding values in binary so only one bit changes at a time. This is useful for mechanical encoding because it avoids errors during transitions. For instance, if you use binary to encode the position of an aircraft control, as it moves from 3 to 4 the binary values are 011 and 100. If the first bit changes before the rest, you get 111 (i.e. 7) and your plane may crash. With Gray code, 3 and 4 are encoded as 010 and 110. Since only one bit changes, it doesn't matter if the bits don't change simultaneously—you either have 3 or 4 and no bad values in between.
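
The standard conversion is simple enough to sketch in a few lines of Python (a generic illustration; the MP944's SLF chip implemented this in logic gates):

```python
def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# The example from the text: 3 and 4 encode as 010 and 110,
# differing in just one bit.
print(format(binary_to_gray(3), "03b"))  # 010
print(format(binary_to_gray(4), "03b"))  # 110
print(gray_to_binary(0b110))             # 4
```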

[22] Ray Holt's firstmicroprocessor.com calls the SLF (special logic function) chip the CPU. In the original paper, this chip was not called the CPU and was only described briefly. In the paper, each of the three multi-chip functional units is called a CPU. It's clear that the SLF chip was recently renamed the CPU just to support the claim that the CADC was the first microprocessor.

[23] The MP944 chips had considerably fewer transistors than the 4004: 1063 in the PMU, 1241 in the PDU, 743 in the SLF, and 771 in the SLU, compared to 2300 in the 4004.

[24] David Patterson's analysis of the CADC computer can be found on the firstmicroprocessor.com website.

[25] The inventors of the 4004 wrote a detailed article about the chip: The history of the 4004. Other articles with details on the 4004's creation are The birth of the microprocessor and The Microprocessor.

[26] For more information on Gilbert Hyatt's patent, see Chip Designer's 20-Year Quest and For Texas Instruments, Some Bragging Rights, Inventor battling U.S. over patents from '70s and Gilbert Who? An obscure inventor's patent may rewrite microprocessor history.

The specific legal issues and maneuvering over Hyatt's patent are complex, but described in the appeal summary and Berkeley Technology Law Journal. If you try to follow this, note that Boone's '541 application and '541 patent are two totally different things, even though they have the same title and end in 541. The presentation Patent litigations that shaped their industries provides an overview of the litigation over the "Single Chip Computer" and other inventions.

[27] Note that the TMS 0100 is actually a series of chips (TMS 01XX) and likewise the TMS 1000 is also a series. Confusingly, the first chip in the TMS 0100 series was the TMS 1802NC calculator chip, which was renamed the TMS 0102; despite its name, it was not in the TMS 1000 series.

[28] The Datapoint 2200 was a serial processor—while it was an 8-bit processor, it operated on one bit at a time, had a one-bit ALU, and a one-bit internal bus. While this seems bizarre from our perspective, implementing a processor serially was a fairly common way to reduce the cost of a processor; the PDP-8/S was another serial minicomputer. (This should not be confused with the Motorola MC14500B, which genuinely is a one-bit processor designed for simple control applications.)

12-minute Mandelbrot: fractals on a 50 year old IBM 1401 mainframe

When I found out that the Computer History Museum has a working IBM 1401 computer[1], I wondered if it could generate the Mandelbrot fractal. I wrote a fractal program in assembly language and the computer chugged away for 12 minutes to create the Mandelbrot image on its line printer. In the process I learned a bunch of interesting things about the IBM 1401, which I discuss in this article.

The IBM 1401 mainframe computer (left) at the Computer History Museum printing the Mandelbrot fractal on the 1403 printer (right). Note: this is a line printer, not a dot matrix printer.

The IBM 1401 computer was announced in 1959, and went on to become the best-selling computer of the mid-1960s, with more than 10,000 systems in use. The 1401 leased for $2500 a month[2] (about $20,000 in current dollars), a low price that let many more companies use computers. Even a medium-sized business could use the 1401 for payroll, accounting, inventory, order processing, invoicing, analysis, and many other tasks. The 1401 was called the Model T of the computer industry due to its low price and great popularity.[3] Even for its time, the IBM 1401 had only moderate performance, especially compared to a high-end business computer like the IBM 7080 (rental fee: $48,000 a month).[2] But the IBM 1401 became hugely popular because of its affordability, reliability, ease of use, high-quality printer, and stylish appearance.[4]

The 1401 was an early all-transistorized computer. These weren't silicon transistors, though, they were germanium transistors, the technology before silicon. The transistors and other components were mounted on circuit boards about the size of a playing card. These boards were called Standard Modular System (SMS) boards and each one provided a function such as a flip flop or simple logic functions. The IBM 1401 could contain thousands of SMS cards, depending on the features installed - the basic system had about 933 cards[5], while the system I used has 2881 SMS cards. (For more information on SMS cards, see my earlier article.)

SMS cards inside the IBM 1401. These cards are part of the tape drive control, amplifying signals read from tape.

The SMS cards plug into racks (which IBM confusingly calls "gates") that fold out from the computer as shown below. The 1401 is designed for easy maintenance - to access a gate, you just grab the handle and it swings out from the computer, exposing the wires and boards for maintenance. At the bottom of the gate, wiring harnesses connect the gate to other parts of the computer.[6] There are 24 of these gates in total.

The IBM 1401 computer is built from thousands of SMS circuit cards. This open rack (called a gate) shows about 150 SMS cards.

Unusual features of the IBM 1401

It's interesting to look at old computers because they do things very differently. Some of the unusual features of the IBM 1401 are that it used decimal arithmetic and 6-bit characters, it had arbitrary-length words, and additional instructions were available for a rental fee.

The IBM 1401 is based on decimal arithmetic, not binary. Of course it uses 0's and 1's internally, but numbers are stored as digits using binary coded decimal (BCD). The number 123 is stored as three characters: '1', '2', and '3'. If you add 7 and 8, you get the digit 1 and the digit 5. Addresses are in decimal, so storage is in multiples of 1000, not 1024: the system with 16K of memory stores exactly 16,000 characters. All arithmetic is done in base-10. So if you divide two numbers, the IBM 1401 does base-10 long division, in hardware.
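
Here's a sketch of that digit-at-a-time decimal behavior in Python (the 1401 does this in hardware, of course; the function below just models the visible effect):

```python
def decimal_add(a: str, b: str) -> str:
    """Add numbers stored as strings of decimal digits, the way the
    1401 works through a field: rightmost digit first, carrying in
    base 10."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        out.append(str(digit))
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

print(decimal_add("7", "8"))      # 15: digit 5 with a carry into 1
print(decimal_add("123", "877"))  # 1000
```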

The IBM 1401 does not use bytes.[9] Instead, it uses 6-bit BCD storage. Every character is stored as a 4-bit BCD digit with two extra bits called "zone bits", named A and B.[7] The two extra zone bits allow upper-case letters (and a few special symbols) to be stored, as well as digits.[8] Using a byte as the unit of operation didn't become popular until later with the IBM System/360; in the early 1960s, computers often used strange word sizes such as 13, 17, 19, 22, 26, 33, 37, 41, 45, and 50 bit words.[9]
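
As a rough sketch of how the six bits fit together (an illustration only; the real 1401 character set has more special cases than this partial mapping):

```python
# Rough sketch of 6-bit BCD characters: zone bits B,A over numeric
# bits 8,4,2,1. Digits use zone 00; letters combine a zone with a
# digit value (e.g. 'A' = zones BA + 1). This partial mapping is for
# illustration; the real 1401 code has additional special cases.
def bcd_char(zone_b: int, zone_a: int, numeric: int) -> str:
    return format((zone_b << 5) | (zone_a << 4) | numeric, "06b")

print(bcd_char(0, 0, 5))  # '000101': the digit 5
print(bcd_char(1, 1, 1))  # '110001': the letter A
```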

The photo below shows the core memory module from the IBM 1401, with 4,000 characters of memory. Each bit is stored in a tiny donut-shaped ferrite core with wires running through it. The core module is more complex than you'd expect, with 16 layers (frames) in total. Eight frames hold the 6 bits of data, plus the word mark bits (explained below) and parity bits. Six frames hold data from the card reader brushes and the print hammers, for data and error checking.[10] The remaining two frames are just used for wiring.

The 4000 character core memory module from the IBM 1401 computer requires a huge amount of wiring.

Probably the most unusual feature of the 1401 is that it uses variable-length words, with word marks indicating each word. You might expect that variable-length words would let you use words of perhaps 8, 16, and 32 bits. But the IBM 1401 permitted words of arbitrarily many characters, up to the total size of memory! For instance, an instruction could move a 47-character string, or add 11-digit numbers. (Personally, I think it's easier to think of it as variable-length fields, rather than variable-length words.)

The word mark itself is a bit that is set on a memory location to indicate the boundary of a word (i.e. field).[11] An instruction on the IBM 1401 processes data through memory sequentially until it hits a word mark. It's important to remember that word marks are not part of the characters, but more like metadata, so they remain as new data records are read in and processed.[11] The main motivation behind variable-length words was to save expensive core memory, since each field length can be fit to the exact size required.
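
Here's a sketch of the word-mark idea, modeling each memory location as a character plus a mark bit; a field is addressed by its rightmost character, and an operation scans leftward until it hits the mark (a simplified model, not cycle-accurate 1401 behavior):

```python
# Memory as (character, word_mark) pairs. A field is addressed by its
# rightmost character; the word mark sits on its leftmost character.
memory = [("0", True), ("4", False), ("2", False),   # field "042" at 0-2
          ("9", True), ("9", False), ("9", False)]   # field "999" at 3-5

def read_field(mem, addr):
    """Scan right to left from addr, stopping at the word mark."""
    chars = []
    while True:
        ch, mark = mem[addr]
        chars.append(ch)
        if mark:                      # hit the field's left boundary
            return "".join(reversed(chars))
        addr -= 1

print(read_field(memory, 2))  # '042'
print(read_field(memory, 5))  # '999'
```

An arithmetic instruction combines this scan with the digit-by-digit decimal carry sketched earlier, so a single add can process a field of any length.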

Another interesting thing about the IBM 1401 is that many instructions were extra-price options. The "advanced programming" feature provided new instructions for moving records, storing registers, and using index registers; this required the installation of 105 new SMS cards and (coincidentally) cost $105 a month. Even the comparison instruction cost extra. Because the 1401 uses BCD, you can't just subtract two characters to compare them as you would on most processors. Instead, the 1401 uses a bunch of additional circuitry for comparison, about 37 SMS cards for which you pay $75 a month.[12] Renting the printer buffer feature for $375 a month added a separate core storage module, 267 more SMS cards, and two new instructions. The bit test instruction cost only $20 a month and additional card punch control instructions were $25 a month. If you bought one of these features, an IBM engineer would install the new cards and move some wires on the backplane to enable the feature. The wire-wrapped backplanes made it relatively easy to update the wiring in the field.

The 1401 could be expanded up to 16,000 characters of core memory storage: 4,000 characters in the 1401 itself, and 12,000 characters in a 1406 expansion box, about the size of a dishwasher. The 12K expansion sold for $55,100 (about $4.60 per character), or rented for $1,575 a month. (You can see why using memory efficiently was important.) Along with the expanded memory came additional instructions to manipulate the larger addresses.

The tiny magnetic cores providing storage inside the IBM 1401's 4,000 character memory.

The tiny magnetic cores providing storage inside the IBM 1401's 4,000 character memory. Wires pass through each core to read and write memory. You can see multiple layers of cores in this photo.

One feature you'd expect a computer to have is a subroutine call instruction and a stack. The 1401 has neither. To call a subroutine on the IBM 1401, you jump to the start of the subroutine. The subroutine then stores the return address into a jump instruction at its end, actually modifying the code, so at the end of the subroutine it jumps back to the caller.[13] If you want recursion, you're on your own.
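
The calling convention can be sketched in a few lines of Python (a loose model, with labeled instructions standing in for core locations): the caller patches the branch at the subroutine's end before jumping in.

    # Loose model of the 1401 subroutine linkage: before jumping to the
    # subroutine, store the return address into the branch instruction
    # at the subroutine's end, modifying the program itself.
    program = {'SUB_EXIT': 'B 0000'}        # placeholder branch in the code

    def call_subroutine(return_address):
        program['SUB_EXIT'] = 'B %04d' % return_address   # patch the exit
        # ... subroutine body runs, then executes program['SUB_EXIT'],
        # which now branches back to the caller ...

    call_subroutine(1234)
    print(program['SUB_EXIT'])              # 'B 1234'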

Some advanced features of the 1401

Compared to modern computers, the IBM 1401 is extremely slow and limited. But it's not as primitive as you might expect and it has several surprisingly advanced features.

One complex feature of the IBM 1401 is Editing, which is kind of like printf implemented in hardware. The Edit instruction takes a number such as 00123456789 and a format string. The computer removes leading zeros and inserts commas as needed, producing an output such as 1,234,567.89. With the optional Expanded Editing feature (just $20/month more), you can obtain floating asterisks (******1,234.56) or a floating dollar sign ($1,234.56), which is convenient for printing checks. Keep in mind that this formatting is not done with a subroutine; it is implemented entirely in hardware, with the formatting applied by discrete transistors.
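
Here's a very loose Python approximation of what the Edit instruction accomplishes (my sketch; the real instruction is driven by a control word in memory and handles many more cases):

    # Very loose approximation of the Edit operation: suppress leading
    # zeros, insert commas, and place the decimal point, turning
    # 00123456789 into 1,234,567.89.
    def edit(digits):
        whole, cents = digits[:-2], digits[-2:]
        return '{:,}'.format(int(whole or '0')) + '.' + cents

    print(edit('00123456789'))   # 1,234,567.89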

Another advanced feature of the IBM 1401 is extensive checking for errors. With tens of thousands of components on thousands of boards, many things can go wrong. The 1401 catches malfunctions so they don't cause a catastrophe (such as printing million dollar payroll checks). First, the memory, internal data paths, instruction decode, and BCD conversion are all protected by parity and validity checks. The ALU uses qui-binary addition to detect arithmetic errors. The card reader reads each card twice and compares the results.[10] The 1401 verifies the printer's operation on each line. (The read, punch, and print checks use the additional core memory planes discussed earlier.) As a result, the 1401 turned out to be remarkably reliable.

Because the IBM 1401 has variable word length, it can perform arbitrary-precision arithmetic. For instance, it can multiply or divide thousand-digit numbers with a single instruction. Try doing that on your Intel processor! (I tried multiplying 1000-digit numbers on the 1401; it takes just under a minute.) Hardware multiply/divide is another extra-cost feature; to meet the 1401's price target, IBM made it an option with the relatively steep price of $325 per month. You do get a lot of circuitry for that price, though - about 246 additional SMS cards installed in two gates.[14] And remember, this is decimal multiplication and division, which is much more difficult to implement in hardware than binary.

The 1401 I used is the Sterling model, which supports arithmetic on pounds/shillings/pence, a surprising thing to see implemented in hardware. (Up until 1971, British currency was expressed in pounds, shillings, and pence, with 12 pence in a shilling and 20 shillings in a pound. This makes even addition complicated, as tourists often discovered.) By supporting currency arithmetic in hardware, the 1401 made code faster and simpler.[15]
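
For a sense of what the Sterling hardware has to do, here's the carry logic for adding two pounds/shillings/pence amounts, as a Python sketch (uncompressed representation; the actual hardware also handles the compressed storage formats described in note 15):

    # Adding two (pounds, shillings, pence) amounts: carries are base 12
    # for pence and base 20 for shillings, not base 10.
    def add_lsd(a, b):
        pounds, shillings, pence = (x + y for x, y in zip(a, b))
        shillings += pence // 12        # 12 pence to a shilling
        pence %= 12
        pounds += shillings // 20       # 20 shillings to a pound
        shillings %= 20
        return pounds, shillings, pence

    print(add_lsd((1, 19, 11), (0, 0, 1)))   # (2, 0, 0)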

A maze of wire-wrapped wires connects the circuits of the IBM 1401 computer.

A maze of wire-wrapped wires on the back of a gate connects the circuits of the IBM 1401 computer. The wiring was installed by automated machinery, but the wiring could be modified by field engineers as needed.

Implementing the Mandelbrot in 1401 assembly language

Writing the Mandelbrot set code for the 1401 was a bit tricky since I did it in assembly language (called Autocoder). The hardest part was keeping track of word marks. Another complication was that the 1401 doesn't have native floating point arithmetic, so I used fixed point: I scaled each number by 10000, so an integer could represent 4 decimal places. The 1401 is designed for business applications, not scientific applications, so it's not well-suited for fractal generation. But it still got the job done.
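
The same fixed-point scheme looks like this in Python (a sketch that mirrors the inner loop of the Autocoder below; products of two scaled values carry the scale factor twice, so they're divided by 10000 to get back in scale, and I gloss over the sign handling that the MZ instructions perform):

    SCALE = 10000                        # 4 decimal places as an integer
    TOO_BIG = 4 * SCALE * SCALE          # 4, scaled by 10000 twice

    def escapes(x0, y0, max_iter=24):
        zr, zi = x0, y0
        for _ in range(max_iter):
            zr2, zi2 = zr * zr, zi * zi  # scaled by 10000 twice
            if zr2 + zi2 > TOO_BIG:
                return True              # point escapes: not in the set
            zr, zi = (zr2 - zi2) // SCALE + x0, 2 * zr * zi // SCALE + y0
        return False

    print(escapes(-22000, -9990))        # x=-2.2, y=-0.999: escapes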

The 1401 didn't need to be programmed in assembly language - it supports languages such as Fortran and COBOL - but I wanted the full 1401 experience. It does amaze me though that you can run a COBOL compiler on a machine with just 4,000 characters of memory. The Fortran compiler required a machine with 8,000 memory locations; in order to fit, it ran in 63 separate phases.

The assembly language code for the Mandelbrot fractal is shown below. The first part of the code defines constants and variables with DCW. This is followed by three nested loops to loop over each row, each column, and the iterations for each pixel. Some of the instructions in the code are M (multiply), A (add), S (subtract), and C (compare). Comments start with asterisks.

               JOB  MANDELBROT
     *GENERATES A MANDELBROT SET ON THE 1401
     *KEN SHIRRIFF  HTTP://RIGHTO.COM
               CTL  6641
               ORG  087
     X1        DCW  001  *INDEX 1, COL COUNTER TO STORE PIXEL ON LINE
               ORG  333
     *
     *VALUES ARE FIXED POINT, I.E. SCALED BY 10000
     *Y RANGE (-1, 1). 60 LINES YIELDS INC OF 2/60*10000
     *
     YINC      DCW  333
     XINC      DCW  220          *STEP X BY .0220
     *
     *Y START IS -1, MOVED TO -333*30 FOR SYMMETRY
     *
     Y0        DCW  -09990       *PIXEL Y COORDINATE
     *
     *X START IS -2.2
     *
     X0INIT    DCW  -22000       *LEFT HAND X COORDINATE
     X0        DCW  00000        *PIXEL X COORDINATE
     ONE       DCW  001
     ZR        DCW  00000        *REAL PART OF Z
     ZI        DCW  00000        *IMAGINARY PART OF Z
     ZR2       DCW  00000000000  *ZR^2
     ZI2       DCW  00000000000  *ZI^2
     ZRZI      DCW  00000000000  *2 *ZR *ZI
     ZMAG      DCW  00000000000  *MAGNITUDE OF Z: ZR^2 + ZI^2
     TOOBIG    DCW  00400000000  *4 (SCALED BY 10000 TWICE)
     I         DCW  00           *ITERATION LOOP COUNTER
     ROW       DCW  01
     ROWS      DCW  60
     COLS      DCW  132
     MAX       DCW  24           *MAXIMUM NUMBER OF ITERATIONS
     *
     *ROW LOOP
     *X1 = 1  (COLUMN INDEX)
     *X0 = -2.2 (X COORDINATE)
     *
     START     LCA  ONE, X1     *ROW LOOP: INIT COL COUNT
               LCA  X0INIT, X0  *X0 = X0INIT
               CS   332         *CLEAR PRINT LINE
               CS               *CHAIN INSTRUCTION
     *
     *COLUMN LOOP
     *
     COLLP     LCA  @00@, I     *I = 0
               MCW  X0, ZR      *ZR = X0
               MCW  Y0, ZI      *ZI = Y0
     *
     *INNER LOOP:
     *ZR2 = ZR^2
     *ZI2 = ZI^2
     *IF ZR2+ZI2 > 4: BREAK
     *ZI = 2*ZR*ZI + Y0
     *ZR = ZR2 - ZI2 + X0
     *
     INLP      MCW  ZR, ZR2-6   *ZR2 =  ZR
               M    ZR, ZR2     *ZR2 *= ZR
               MCW  ZI, ZI2-6   *ZI2 =  ZI
               M    ZI, ZI2     *ZI2 *= ZI
               MCW  ZR2, ZMAG   *ZMAG = ZR^2
               A    ZI2, ZMAG   *ZMAG += ZI^2
               C    TOOBIG, ZMAG  *IF ZMAG > 4: BREAK
               BH   BREAK
               MCW  ZI, ZRZI-6  *ZRZI = ZI
               M    ZR, ZRZI    *ZRZI = ZI*ZR
               A    ZRZI, ZRZI  *ZRZI = 2*ZI*ZR
               MCW  ZRZI-4, ZI  *ZI = ZRZI (/10000)
               MZ   ZRZI, ZI    *TRANSFER SIGN
               A    Y0, ZI      *ZI += Y0
               S    ZI2, ZR2    *ZR2 -= ZI2
               MCW  ZR2-4, ZR   *ZR = ZR2 (/10000)
               MZ   ZR2, ZR     *TRANSFER SIGN
               A    X0, ZR      *ZR += X0
     *
     *IF I++ != MAX: GOTO INLP
     *
               A    ONE, I      *I++
               C    MAX, I      *IF I != MAX THEN LOOP
               BU   INLP
               MCW  @X@, 200&X1  *STORE AN X INTO THE PRINT LINE
     BREAK     C    X1, COLS    *COL LOOP CONDITION
               A    ONE, X1
               A    XINC, X0    *X0 += 0.0220
               BU   COLLP
               W                *WRITE LINE
     *
     *Y0 += YINC
     *IF ROW++ != ROWS: GOTO ROWLP
     *
               C    ROW, ROWS   *ROW LOOP CONDITION
               A    ONE, ROW
               A    YINC, Y0    *Y0 += 0.0333
               BU   START
     FINIS     H    FINIS       HALT LOOP
               END  START

I compiled and ran the code with the ROPE compiler and simulator before using the real computer.[16] The cards were punched automatically by an IBM 029 keypunch controlled by a PC through a bunch of USB-controlled relays. The photo below shows the keypunch in operation. Each blank card drops down from the feeder in the upper right. The card is punched as it moves to the left. Punched cards are then flipped up and stacked in the upper left area (empty in this picture).

An IBM 029 keypunch preparing a card deck that generates the Mandelbrot fractal.

An IBM 029 keypunch preparing a card deck that generates the Mandelbrot fractal.

The resulting card deck is shown below, along with the output of execution. The program fits onto just 16 cards, but the card format is a bit unusual. The machine code for the Mandelbrot program is punched into the left half of each card, with code such as M384417A395417. An interesting thing about the 1401 is that the machine code is almost human-readable. M384417 means Move field from address 384 to address 417. A395417 means Add the number at address 395 to the number at address 417. The text on these cards is the actual machine code that gets executed, not the assembly code. Since the machine is character-based, not binary, there's no difference between the characters "428" and the address 428.
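
To show just how readable the machine code is, here's a hypothetical decoder in Python for those two instruction shapes (the decoder and its wording are mine, not IBM's):

    # Hypothetical decoder for the two instruction shapes shown above:
    # a one-character opcode followed by two 3-digit addresses.
    def decode(instruction):
        op, a, b = instruction[0], instruction[1:4], instruction[4:7]
        meaning = {'M': 'Move field', 'A': 'Add'}
        return '%s from address %s to address %s' % (meaning[op], a, b)

    print(decode('M384417'))   # Move field from address 384 to address 417
    print(decode('A395417'))   # Add from address 395 to address 417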

The card deck to generate the Mandelbrot fractal on the IBM 1401 computer.

The card deck to generate the Mandelbrot fractal on the IBM 1401 computer, along with the output. The white stripe through the fractal near the right is where a hammer in the printer malfunctioned.

If you look at the right half of the cards, there's something totally different going on, with text like L033540,515522,5259534. There's no operating system, so, incredibly, each card has code to copy its contents into the right place in memory (L instruction), add the word marks (, instruction), and load the next card. In other words, the right hand side of each card is a program that runs card-by-card to load into memory the program on the left hand side of the card deck, which is executed after the last card is loaded.[17]
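
The scheme can be sketched like this (a simplified Python model; the actual card formats and addressing are more involved):

    # Simplified model of the self-loading deck: reading a card executes
    # the loader code on its right half, which copies the machine code
    # from the left half into memory and sets its word marks.
    memory, word_marks = {}, set()

    def load_card(machine_code, load_address, mark_addresses):
        for i, ch in enumerate(machine_code):    # the L instruction's effect
            memory[load_address + i] = ch
        word_marks.update(mark_addresses)        # the ',' instruction's effect

    # hypothetical card: two instructions loaded at address 333
    load_card('M384417A395417', 333, {333, 340})
    print(memory[333], 333 in word_marks)        # M True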

To run the program, first you hit the "Power On" button on the IBM 1401 console. Relays clunk for a moment to power up the system and then the computer is ready to go (unlike modern computers that take so long to boot). You put the cards into the card reader and hit the "Load" button. The cards fly through the reader at the remarkable speed of 800 cards per minute so the Mandelbrot program loads in just over a second. The console starts flickering as the program runs, and every few seconds the line printer hammers out another line of the fractal. After 12 minutes of execution, the fractal is done. (Interestingly enough, the very first picture of a Mandelbrot set was printed on a line printer in 1978.[18])

The console of an IBM 1401 mainframe.

The console of the IBM 1401 mainframe. The top half shows the data flow through the computer, from storage to the B and A registers and the logic unit. Each 6-bit value is displayed as 1248ABC, where A and B are zone bits and C is the check (parity) bit. On the right, "OP" shows the operation being executed. Below are knobs to manually access memory. On the left, the "Start Reset" button clears an error, such as the card read failures I would hit. At the bottom are the important buttons to turn the computer on and off. Note the Emergency Off handle that immediately cuts the power.

Conclusions

Writing a Mandelbrot program for the IBM 1401 was an interesting project. You think a bit differently about programming when using decimal numbers and keeping track of word marks. But I have to say that comparing the performance with a current machine - not to mention the storage capacity - makes me appreciate Moore's Law.

The Computer History Museum in Mountain View runs demonstrations of the IBM 1401 on Wednesdays and Saturdays. It's amazing that the restoration team was able to get this piece of history working, so if you're in the area you should definitely check it out. The schedule is here. Tell the guys running the demo that you heard about it from me and maybe they'll run my prime number program or Pi program. You probably wouldn't want to wait for the Mandelbrot to run.

Thanks to the Computer History Museum and the members of the 1401 restoration team, Robert Garner, Ed Thelen, Van Snyder, and especially Stan Paddock. The 1401 team's website (ibm-1401.info) has a ton of interesting information about the 1401 and its restoration.

Notes and references

[1] The Computer History Museum has two working 1401 computers: the "German 1401" and the "Connecticut 1401" (based on where they came from). I used the German 1401 since the Connecticut 1401 was undergoing card reader maintenance at the time.

[2] While the $2500 per month rental rate is quoted in many places, the price could climb to $10,000 a month for a full system with multiple tape drives. The price varied greatly depending on the 1401 model, the amount of memory, and the peripherals (tape drives, card reader, printer, disk drive). The minimal configuration (1401 Model A, 1402 card reader, and 1403 printer) went for $2,475 a month (or could be purchased for $125,600 - about $1 million accounting for inflation). A "recommended" configuration with 8K of memory, processor options, and another printer went for $4,610 a month. Tape drives boosted the price further: $980 a month for the interface and $1,100 a month for each 729 IV tape drive. A 4000 character memory expansion cost $575 per month.

Detailed information on computers of this era, including rental rates, is available in the thousand-page survey A Third Survey of Domestic Electronic Digital Computing Systems, Report No. 1115, March 1961 (1401 page). The basic IBM rental price was for one 8-hour shift (176 hours a month). The computers included a time counter, and users were billed extra if they went over the allotted time. Customers often paid a higher rental fee so they could run 24/7.

[3] The comment that the 1401 became the Model-T of the computer industry is from the article IBM System/360, by IBM VP Bob Evans. One piece of trivia from that article is that the IBM 1620 rented for $1,600 a month, making it the first IBM system renting for less than its model number.

[4] The IBM 1401 has a very distinctive style, especially compared to earlier IBM computers (such as the 650 or 704) with a very utilitarian, industrial appearance. The sleek, modernist style of the 1401 isn't arbitrary, but the result of a detailed industrial design process. The book The Interface: IBM and the Transformation of Corporate Design has a very interesting discussion of the effort IBM put into industrial design. Edgar Kaufmann, Jr came up with important design ideas that were developed by Eliot Noyes. Some design concepts were recessed pedestals for a feeling of floating and lightness, the concealment of most of the circuitry, expressing the "inherent drama" of computers, the carefully controlled color scheme, and modern materials for the cabinets. The tape drives in particular were wildly successful at expressing the "inherent drama" of computing, to the point that spinning tape drives became a movie cliche (tvtropes: Computer Equals Tapedrive).

[5] The number of SMS cards in an IBM 1401 depends on the model, the options installed, engineering changes (i.e. fixes) applied to the system, and the amount of memory in use. I got the number 1206 by analyzing the SMS plug chart and counting 933 basic cards, 267 Sterling basic cards, 6 power supply cards, and 11 cards for storage support. This machine is the Sterling model, so it is slightly more complex than the regular model.

[6] The IBM 1401 has 32 "potential" gates: 16 on the front and another 16 on the back, but only 24 of these are gates with circuitry. The two potential panels in the upper left are taken up by the control panel, which swings out to reveal the core memory behind it. Four panels have power supplies behind them (although much of the power supply is inside the card reader, strangely). Two more spots are occupied by the surprisingly thick cables connecting the 1401 to peripherals. This leaves 24 swing-out gates; some may be unused, depending on the optional features installed.

[7] The zone bits are closely related to the zone punches in IBM punch cards. The top row of a punch card is the 12 (Y) zone, and the row beneath it is the 11 (X) zone. A number has one hole punched in the card row corresponding to the number (rows 0 through 9). A character usually has two holes punched: 1 through 9 for the BCD value, and a zone punch for the zone bits. The zone punch is card zone 11 for zone bit B set, card zone 12 for zone bits A and B, or card row 0 for zone bit A.

There are a few complications, though, that mess up this pattern. First, for characters outside the 0-9 range, two digit punches are used: 8 and the digit for the low three bits. (e.g. '#' is stored as bits 8, 2, and 1, so it is punched as 8 and 3.) Second, because card row 0 is used both for the digit 0 and as a zone punch, there is a conflict and the value 0 is treated as 10 in certain conditions (and punched as 8 and 2). Because a blank has no punches and is stored as 0 internally, the digit 0 is stored as 10. Different IBM systems treat these corner cases differently. Custom features were available for the 1401 to provide compatibility as needed.

[8] The zone bits are used for a few things in addition to letters. A zone bit is added to the low-order digit of a number to indicate the sign of the number. Memory addresses are expressed as three digits, which would allow access to 1000 locations; by using zone bits, the three digit address can reach 16,000 locations. The zone bits also track overflow in arithmetic operations.

[9] Originally byte referred to the group of bits used to encode a character, even if it wasn't 8 bits (see Planning a Computer System: Project Stretch, p40). Some examples of unusual word lengths: The RCA 601 supported 6, 8, 12, 16-bit, or variable-length words. SPEC used 13-bit words. The Hughes Airborne Computer used 17-bit words, while the Hughes D Pat used 19-bit words and the Hughes M 252 used 20-bit words. The RW 300 used 18-bit words, while the RW 400 used 26-bit words. The Packard Bell 250 used 22-bit words. UNIVAC 1101 used 24-bit words. ALWAC II used 32 bits plus sign (33 bits). COMPAC used 37 bits (36 + sign). AN/MJQ used 41-bit words. SEAC used 45 bits (44 plus sign). AN/FSQ 31 used 48 bit words. ORACLE used 50-bit words. The Rice University computer used 54-bit words. Details on these computers are in A Third Survey of Domestic Electronic Digital Computing Systems.

[10] The card reader reads each card twice and verifies that the hole count is the same for both reads. If the counts don't match, the card reader detects the error and stops. In more detail, each card is read "sideways", a row of 80 positions at a time. Two bits keep the status of each column. One bit is turned on if there is any hole. The other bit is toggled for each hole. (Thus, it's not exactly a count, simplifying the logic.) The process is reversed on the second read, so both bits will end up back at 0 for a correct read.

Since the next card is already getting read as the first card is getting verified, two sets of bits are needed, one for the first card and one for the second card. Thus, four planes of 80 bits each are used in total to verify card reads.
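
My reading of the mechanics, as a Python sketch (two status bits per column; the second read reverses the process, so both bits return to 0 when the reads agree):

    # Sketch of the per-column check bits: one bit records that any hole
    # was seen, the other toggles on each hole. The second read reverses
    # the process; anything nonzero at the end signals a misread.
    def first_read(holes):
        return (1 if any(holes) else 0, sum(holes) % 2)

    def second_read(bits, holes):
        any_bit, toggle_bit = bits
        if any(holes):
            any_bit ^= 1                     # turned back off
        toggle_bit ^= sum(holes) % 2         # toggled back
        return any_bit, toggle_bit           # (0, 0) means the reads match

    print(second_read(first_read([1, 0, 1]), [1, 0, 1]))   # (0, 0): good read
    print(second_read(first_read([1, 0, 1]), [1, 0, 0]))   # (0, 1): error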

Each of the 240 brushes in the card reader has a separate wire that goes through a specific core in the 1401's core memory. Likewise, each of the 132 print hammers in the printer is wired directly to an individual core. Thus, there are thick cables containing hundreds of wires between the IBM 1401 and the card reader and the printer.

[11] There are several details of wordmarks I'll point out. The IBM 1401 is obviously big-endian, since that's how numbers are punched on cards. Since arithmetic operations need to start with the lowest-order digit, they start at the "end" of the number and work backwards through memory to the highest-order digit. The consequence is an instruction is given the address of the end of the field and progresses to lower addresses until it hits the word mark, which is at the beginning of the field. This seems backwards if you're a C programmer, where you start at the beginning of a string and go forwards until you hit the end.

Word marks are also used to indicate the start of each instruction. Instructions can be 1 to 8 characters long, and the presence of a word mark controls the length. Bootstrapping the word marks for the first instructions loaded into the computer requires some tricks.

[12] The comparison logic is more complex than you'd expect. Surprisingly, the comparison order doesn't match the binary order of characters. Also, comparisons aren't implemented with subtraction (like most processors). Instead, logic first determines if the characters are special characters or not - special characters are before regular characters (with some exceptions: for example, - is between I and J). Then a lot of AND-OR logic performs basically brute-force comparison by looking at various bit patterns. The results of a comparison can be seen on the control panel in the Logic box. The optional compare logic is shown on the Intermediate Level Diagrams (ILD), page 37.

[13] Self-modifying code, where the program changes its own instructions, was common in the past. A guide to IBM 1401 programming, 1961, has a whole chapter (6) on this, discussing how "we are able to operate on instructions in storage just as though they were data". Treating code as data wasn't done only by Lisp programmers. In fact, the book calls the ability of a program to modify itself "by all odds the most important single feature of the stored program concept." As well as subroutine returns, IBM 1401 programmers used self-modifying code for indexing, address computation, and complex conditional branching. On current machines, self-modifying code is rare because it's harder to debug and messes up the instruction pipeline.

[14] For details on how the multiply and divide operations work internally, see the optional feature manual. This circuit has some complicated optimizations. For example, to speed up the repeated additions, it will add the doubled value instead if appropriate. But doubling a decimal value takes a fairly complicated circuit (unlike binary doubling, which is trivial). And there's error checking to make sure nothing goes wrong in the doubling.
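
The doubling trick can be sketched in Python (my reconstruction of the idea, not the actual hardware algorithm): for each multiplier digit d, add the decimal-doubled multiplicand d//2 times, plus the multiplicand once more if d is odd, so at most 5 additions are needed per digit instead of 9.

    # Sketch of multiplication by repeated addition with doubling: for
    # multiplier digit d, add the (decimal-doubled) multiplicand d//2
    # times, plus the multiplicand once if d is odd.
    def multiply(multiplicand, multiplier):
        total = 0
        for place, digit in enumerate(reversed(str(multiplier))):
            d = int(digit)
            shifted = multiplicand * 10 ** place
            doubled = shifted + shifted      # decimal doubling in hardware
            for _ in range(d // 2):
                total += doubled
            if d % 2:
                total += shifted
        return total

    print(multiply(123, 45))   # 5535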

[15] The Sterling circuitry to support £sd math is even more complicated because shillings and pence are stored in a compressed form. The obvious representation is a two-digit field for pence (0 to 11) and a two-digit field for shillings (0 to 19). But to save precious memory and storage space, the BSI standard and incompatible IBM standard use one-digit fields and special characters. A knob on the control panel selects which standard to use. The Sterling hardware must perform arithmetic on this compressed representation, as well as handling the non-decimal bases of shillings and pence.

This knob on the control panel of the IBM 1401 computer selects the storage mode for pence and shillings.

This knob on the control panel of the IBM 1401 computer selects the storage mode for pence and shillings.

[16] If you want to write a program for the 1401, instructions on using the ROPE simulator are here. It's a simple IDE that lets you edit assembly code (which is called Autocoder), assemble it, and then run it on the simulator. Take a look at A guide to IBM 1401 Programming and Programming the 1401 if you want to understand how to program the 1401. The 1401 Reference Manual is also useful for understanding what the instructions do.

[17] Each card also has a four-digit sequence number in the last columns. This lets you re-sort the cards if you happen to drop the deck and scramble the program.

[18] The first picture of the Mandelbrot set appears in a 1978 paper by Brooks and Matelski, prior to Mandelbrot's work. (Thanks to Robert Garner for pointing this out.) There's some controversy over who "really" discovered the Mandelbrot set. See Who Discovered the Mandelbrot Set? in Scientific American for a discussion.

A database of SMS cards: The technology inside IBM's 1960s mainframes

IBM's mainframes of the 1960s are based on an interesting technology - Standard Modular System cards or SMS cards. These cards, created in the early years of transistorized computing, each implement a simple circuit on a board about the size of a playing card. You probably think of silicon as the basis for computers, but SMS cards use earlier transistors based on a semiconductor called germanium[1]. Eventually, of course, silicon took over, but in the early 1960s it was germanium, not silicon, that ran computers.

SMS cards were originally created for the IBM 7030 Stretch supercomputer.[2] The idea of SMS cards was to have a few standard cards that formed the building blocks that could be combined to build a computer. Each card provides a basic function such as a flip flop or a few logic gates. The board below, for example, implements three AND gates[3], using a relatively slow type of logic called diode-transistor logic that only requires one transistor per gate. You can see the transistors (silver circles) and diodes on the right, while resistors and inductors[4] are on the left.

This SMS card is used in the IBM 1401 for arithmetic. This card is a JGVW triple AND gate.

This SMS card is a JGVW triple AND gate. Note the card code "JG VW" is embossed in the lower left corner of the card.

SMS cards became the basis of IBM's systems and were used on computers such as the IBM 1401, IBM 1620, and IBM 7000 series, as well as in tape drives, printers, and other peripheral devices. SMS cards continued to be used well into the 1970s, even after integrated circuits made discrete transistors obsolete: advanced mainframes of the 1970s such as the IBM 370 still used multiple SMS cards for power supply regulation.

The picture below shows the back of an SMS card. The 16 gold-plated contacts on the end plug into a socket in the computer, connecting the card. The circuit board pattern is considerably more complex than necessary; depending on the components installed, one circuit board can implement several different SMS cards, reducing manufacturing costs.

An IBM SMS card (type DGU) with a playing card for scale.

An IBM SMS card (type DGU) with a playing card for scale.

Since each SMS card is so basic, it takes thousands of them to build a computer. The picture below shows an IBM 1401 mainframe, a very popular business computer of the 1960s. One of the racks of cards (which IBM confusingly calls a "gate") is open, showing more than 125 SMS cards plugged in. The circuitry in this specific gate below implements timing and logic functions (such as addition and subtraction). For maintenance, each gate swings out of the computer simply by pulling on the handle above the fan. The 1401 has 24 gates like this full of SMS cards, each one implementing different functionality of the computer. In total, the IBM 1401 computer contains more than 3400 SMS cards.

The IBM 1401 computer with gate 01B3 open, showing the timing and logic circuitry.

The IBM 1401 computer with gate 01B3 open, showing the SMS cards that implement timing and logic circuitry.

The picture below is a closeup of the SMS cards plugged into a gate in the IBM 1401. At the top of the gate, wiring harnesses connect this circuitry to other parts of the computer. On the back of each gate, the SMS cards are connected together by wirewrapping.

SMS cards installed in the IBM 1401 computer

SMS cards installed in the IBM 1401 computer

Although the original idea of SMS cards was to standardize on a few types, the number of different cards exploded as time went on, resulting in thousands of different SMS card types. As well as logic gates, SMS cards can have an amazing variety of functions such as an oscillator, voltage regulator, core memory, fuses, printer hammer driver, disk speed detector or temperature switch. The 1401 computer alone uses 162 different types of SMS cards.

This DJB double-width SMS card provides core memory storage in the IBM 1443 printer

This double-width SMS card provides core memory storage in the IBM 1443 printer. Photo courtesy of maisorbus.

Information on particular SMS cards is surprisingly hard to find, so I made a database of SMS cards, collecting information on 900 different cards. Given the historical importance of SMS cards, I think information on this technology should be preserved. I pulled together information from scans of old IBM documents and a bunch of other sources (which was more work than I expected). You can access the database at righto.com/sms and see the wide variety of cards along with photos, descriptions, schematics, and the devices that use them.

Database of IBM SMS cards, click to access

In the 1960s, IBM made SMS cards by the millions, but most of them have been scrapped for the gold in their contacts.[5] You can still find a few SMS cards on places such as eBay[6], though, where they are collectables for about $15 each. If you want to see SMS cards in operation, the Computer History Museum in Mountain View has live demonstrations of the 1401 computer on Wednesdays and Saturdays (schedule). Check it out if you're in the area.[7]

In conclusion, SMS cards are an important part of 1960s computer history. It's hard to imagine computers before silicon, but SMS cards provide a window into that time. While the technology of today's computers is hidden away in microchips, the whole circuitry of older computers is exposed, letting you see how individual components form simple logic circuits, how these boards combine into functional units, and how the computer is built from these units. And if technology had progressed slightly differently, this area might be known as Germanium Valley instead of Silicon Valley.

Notes and references

[1] For the first few years, transistors were made of germanium, not silicon. In 1954, silicon transistors were introduced and rapidly took over because they were more stable and had better operating characteristics. Germanium transistors are almost entirely obsolete now, but they still remain popular in fuzz and distortion pedals, where they are said to give a better sound than silicon transistors.

[2] The IBM 7030 Stretch supercomputer is described in detail in The engineering design of the Stretch Computer, 1959. The Stretch used 22,000 SMS cards, of which 4,000 were double cards. Stretch used just 42 different types of SMS cards (24 single-width and 18 double-width), with two types of cards making up more than half of the computer. In comparison, the IBM 1401 is a much smaller system with just 3047 total cards, but it has more than 120 different card types. Some cards can be modified by jumpers; counting the variants the 1401 has 162 different card types. (The 3047 figure is for the Computer History Museum's IBM 1401; the figure will vary for models with different optional features.)

[3] The JGVW SMS card has three two-input AND gates, running on -12V. Inputs are +6V or -6V, outputs are -12V or 0V. (SMS cards use a variety of voltages for their inputs and outputs.) Details on this SMS card are here.

[4] You don't see inductors in logic circuits very often these days, but inductors were commonly used on SMS cards to improve performance. The inductors filtered the output signals to make the signal transitions faster.

[5] To figure out the value of the gold in an SMS card, I measured the gold contacts as 2.26mm by 10.74mm with 1.62mm by 1.13 mm traces, which works out to 24.27mm^2 per contact. With 16 contacts and a thickness of 2.54 microns, that works out to just over 1 cubic mm of gold per SMS card. At the current gold price of $1200 per ounce, that's 79 cents of gold per board. In total, the 3047 SMS cards in an IBM 1401 contain $2400 worth of gold.

[6] On eBay, two sellers of SMS cards that I have found to be very helpful are rolyath and maisorbus.

[7] Thanks to the members of the 1401 restoration team, especially Randall Neff, Ed Thelen, Jay Jaeger and Tim Coslet for their help with my SMS card exploration.

How to display the Bitcoin symbol using a webfont

Bitcoin Wiki describes some alternative ways to display the symbol that are easier than my approach. You're probably better off reading that page than this article.

I couldn't find an easy way to display a Bitcoin symbol in text on a web page, so I created a small webfont with the Bitcoin symbol ฿. (Edit: I found out that Font Awesome already has a BTC font, so use that instead of mine.) By adding this webfont to a page, you can put Bitcoin symbols into your text. The following is an example of use with different text fonts:

This demonstrates the Bitcoin symbol ฿ used in text ฿123.
The Bitcoin symbol ฿ scales with the font like this ฿123.
Large text: ฿0.456.

Note that the symbol above is not an image, but an actual font character in the text. You can zoom the page or print the page, and the symbol will remain smooth. (If you see ฿ or a box instead of the Bitcoin symbol above, something went wrong.)

How to use it

  1. Download the font file here, unzip it, and put it on your web server.
  2. Insert the following CSS into your web page:
    <style>
    @font-face {
        font-family: 'bitcoinregular';
        src: url('bitcoin-webfont.eot');
        src: url('bitcoin-webfont.eot?#iefix') format('embedded-opentype'),
             url('bitcoin-webfont.woff2') format('woff2'),
             url('bitcoin-webfont.woff') format('woff'),
             url('bitcoin-webfont.ttf') format('truetype');
        font-weight: normal;
        font-style: normal;
    }
    </style>
    
  3. Use style="font-family: bitcoinregular, arial, sans-serif" on your text.
  4. Insert the Bitcoin symbol in your text. You can use HTML entity &#x0e3f; or you can use the UTF-8 character ฿ directly.

How it works

The webfont defines two characters: Bitcoin symbol without serifs and Bitcoin symbol with serifs. These are mapped to the Unicode characters U+0243 and U+0e3f. So when you use the character Ƀ the font displays Ƀ and when you use ฿ the font displays ฿. The Bitcoin symbols could be assigned to any characters; I used these since many people already use these characters as a stand-in for the Bitcoin symbol.

Some browsers still don't support webfonts. If you see square boxes or the wrong characters on this page, your browser probably doesn't support webfonts and this page will make no sense. Here's a screenshot of what you should see at the top of the page:


For an explanation of webfonts, see here or here.

Why do this?

Without an easy way to use the standard symbol for Bitcoin, people end up using substitutes such as Ƀ and ฿. Text would look nicer with the standard Bitcoin symbol ฿. And once the Bitcoin symbol is in common use in text, it will be much easier to get it added to Unicode and available automatically.

Technical notes

The page has been tested on Chrome (Windows/Mac), IE (Windows), Safari (Windows/Mac), and Firefox (Windows). If it's broken for you, let me know your browser and system. The font was generated from the Bitcoin logo with Inkscape, Font Forge, and Font Squirrel, based on the icon webfont process here. Undoubtedly someone with font design skills could do much better. My webfonts originally failed to display with a "Missing Cross-Origin Resource Sharing (CORS) Response Header" error because my webpage is at righto.com and the fonts are at static.righto.com (a different domain). I added the Cross-Origin header to fix this. If you want to view-source and see how it works, a simpler version of the page is at //righto.com/bitcoinfont.

Inside the Intel 1405: die photos of a shift register memory from 1970

In 1970, MOS memory chips were just becoming popular, but were still very expensive. Intel had released their first product the previous year, the 3101 RAM chip with 64 bits of storage.[1] For this chip (with enough storage to hold the word "aardvark") you'd pay $99.50.[2] To avoid these astronomical prices, some computers used the cheaper alternative of shift register memory. Intel's 1405 shift register provided 512 bits of storage — 8 times as much as their RAM chip — at a significantly lower price.[3][4] In a shift register memory, the bits go around and around in a circle, with one bit available at each step. The big disadvantage is that you need to wait for the bit you want to come around, which can take half a millisecond.

One computer that used shift register memory is the Datapoint 2200 computer. (This is a very interesting computer — the 8008 was created for it following the architecture specified by Datapoint — but that's a topic for another blog post.) In the Datapoint 2200, each memory board had 32 shift registers, providing 2K of storage. The processor board used a counter to keep track of the shift register position, and would stop processing until the right bits were available. (Kind of like a cache miss in modern processors.)
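
A toy Python model of this kind of memory (my illustration) shows why access time varies: a read has to stall until the wanted bit rotates past the output.

    # Toy model of circulating shift-register memory: one bit position is
    # visible per clock tick, and a read stalls until the wanted address
    # comes around, like the Datapoint 2200's counter scheme.
    class ShiftRegisterMemory:
        def __init__(self, bits):
            self.bits = list(bits)
            self.position = 0                     # counter tracking rotation

        def read(self, address):
            ticks = 0
            while self.position != address:       # stall, like a cache miss
                self.position = (self.position + 1) % len(self.bits)
                ticks += 1
            return self.bits[self.position], ticks

    mem = ShiftRegisterMemory([0, 1] * 256)       # 512 bits, like the 1405
    print(mem.read(511))                          # (1, 511): a long wait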

I got a display board from a Datapoint 2200[5], which uses Intel 1405 shift registers for the display storage. This board uses 14 shift registers and holds 896 bytes.[6] Shift-register memory was convenient for a video display board, since the circuitry needed to access each character in sequence to display it.

Intel 1405 shift registers provide memory storage for a Datapoint 2200 display.

Intel 1405 shift registers provide memory storage for a Datapoint 2200 display.

I opened up one of the shift register chips with a hacksaw and looked at it under a metallurgical microscope to get some die photos. Since the shift registers are in metal cans, they are easy to open up, unlike the plastic packages used by most chips. The following photo shows the die. The chip is fairly simple, with most of the chip taken up by the shift register cells. Around the outside of the chip you can see the nine pads with black wires connected.

The die shows some of the reasons that shift registers were cheaper than RAM chips. Unlike a RAM chip, the chip does not need to form a regular grid — the rows in the middle are shorter than the others because of the pin on the right. In addition, the chip doesn't need any address decoding logic. Thus, more bits can be fit onto a chip. Because there are no address lines, the chip has fewer pins than a RAM chip and can fit into a smaller package.

Die shot of the Intel 1405 MOS 512-bit shift register memory.

Die shot of the Intel 1405 MOS 512-bit shift register memory.

The diagram below shows the flow of bits through the shift register, in yellow. Bits enter through the input pin at the bottom. They zig-zag through the 20 rows of the shift register and exit at the top through the output pin. Bits recirculate back to the input along the left. The clock lines are at the right and are connected to each cell of the shift register.

Labeled die shot of the Intel 1405 MOS 512-bit shift register memory.

Labeled die shot of the Intel 1405 MOS 512-bit shift register memory.

In the lower left is the circuit to control input to the shift register, which consists of a few gates. Either a new bit can be written to the shift register each cycle, or the exiting bit can recirculate and re-enter the shift register. The photo below zooms in on this circuit. The four vertical wires at the left are the chip select 2, chip select 1, recirculated bit, and Vdd.

Input circuit of the 1405 shift register.

Input circuit of the 1405 shift register.

The image below shows the circuit to control the output from the shift register, which is in the upper left of the chip. The chip has two chip select inputs, which makes it convenient to arrange the shift registers in a grid with one set of lines enabling a row and a perpendicular set of lines enabling a column.

Output circuit of the 1405 shift register.

Output circuit of the 1405 shift register.

The image below shows the shift register cells at high magnification. On the left is the actual die photo, while the right labels the components of the die. Bits flow to the right through the bottom half of the picture, and then back to the left in the top half.

The large U shapes at the bottom are transistors (red T's) that form inverters (drawn in yellow). Between each inverter is a pass transistor that controls the flow of bits from inverter to inverter. The first T is connected to clock 1, allowing the bit to flow from the first inverter to the second when clock 1 is activated. The next T is connected to clock 2, passing the bit along another step on clock 2. As the clock lines are triggered in sequence, the bits pass step-by-step through the shift register.
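
Here's a toy Python model of the two-phase action (a behavioral sketch, not the transistor-level circuit): on one clock phase each cell's first inverter samples the previous stage, and on the other phase the second inverter re-inverts it into the cell, so the double inversion restores the bit while shifting it one cell per full clock cycle.

    # Behavioral sketch of a two-phase inverter chain. Phase 1: each
    # cell's first inverter samples the previous stage. Phase 2: the
    # second inverter re-inverts into the cell. Two inversions per cell
    # leave the bit upright, shifted one cell per full clock cycle.
    def clock_cycle(cells, input_bit):
        masters = [1 - input_bit] + [1 - c for c in cells[:-1]]   # phase 1
        return [1 - m for m in masters]                           # phase 2

    cells = [0, 0, 0]
    for bit in [1, 0, 1, 1]:
        cells = clock_cycle(cells, bit)
        print(cells)      # the input stream marches through the cells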

The chip uses silicon-gate technology. This was an important innovation in chip design that was developed in 1968 at Fairchild by Federico Faggin (who later developed the Z80), and it became a core technology at Intel. With this technology, polysilicon is used as the gates for transistors instead of the aluminum metal used in previous MOS integrated circuits. Among other advantages, the self-aligned polysilicon gate reduced stray capacitance, making chips much faster and easier to manufacture.

In the picture below, polysilicon is indicated in blue. Where it overlaps the underlying doped silicon, a transistor is formed (red T). The horizontal gray lines are the metal layer, with the voltage supplies and the clocks. The circles show connections between the different layers.[7]

Close up of the cells in an Intel 1405 512-bit shift register memory. The actual photo is on the left, and the circuit is drawn on the right.

Close up of the cells in an Intel 1405 512-bit shift register memory. The actual photo is on the left, and the circuit is drawn on the right.

The clock driver

The display circuit board below has 14 shift registers in round metal cans. But there's a huge metal can at the right — what is this IC? That turns out to be the driver chip that provides the clock signals for the shift registers, and it's pretty interesting inside.

The shift registers require two alternating clock signals to shift. These signals must not overlap, or else the data will get messed up. In addition, the shift registers require up to 30 volts in the clock, due to their old technology. Finally, a lot of current (500mA) is needed in the clock signals to drive all the chips. To meet these requirements, a special clock driver chip is used to generate the clock signals. This is the Fairchild SH0013-C "Two phase MOS clock driver".[8]

1405 shift registers provide 896 bytes of storage on a Datapoint 2200 display card.

1405 shift registers provide 896 bytes of storage on a Datapoint 2200 display card.

I expected to find an IC with big transistors inside the clock driver chip, but opening it up revealed something entirely different. Inside is a hybrid integrated circuit made up of eight separate silicon dies mounted on a tiny circuit board and connected with gold traces and gold wires. In addition, there are thick film resistors printed onto the board — these are the black "E" shapes in the picture below.

Interactive viewer

The image and schematic[8] below form an interactive exploration of the SH0013 clock driver: click a component to highlight its location on the board and in the schematic, along with an explanation of the component.

Conclusion

While using shift registers as memory seems bizarre now, it was a cost-effective way to implement storage in 1970. Looking inside the shift register chips shows how they work and how they could be implemented more cheaply than RAM. Providing the high-power clock signals required a special driver chip, which turns out to be a hybrid circuit with tiny semiconductors and resistors on a circuit board in a large metal IC package.

Notes and references

[1] Intel didn't invent the memory chip, of course. There were many companies making memory chips in the 1960s. For instance, Texas Instruments announced the SN5481 bipolar memory chip in 1966 (Electronics, V39 #1, p151) and Transitron had the TMC 3162 and 3164 16-bit RAM (Electrical Design News, Volume 11, p14). In 1968, RCA made 72-bit CMOS memories for the Air Force (document, photo). Lee Boysel built 256-bit dynamic RAMs at Fairchild in 1968 and 1K dynamic RAMs at Four Phase Systems in 1969 (1970 — MOS Dynamic RAM Competes with Magnetic Core Memory on Price and Boysel presentation). For more information on the history of memory technology, see 1966 — Semiconductor RAMs Serve High-speed Storage Needs and History of Semiconductor Engineering, p215. Another source for memory history is To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS Technology, p193.

[2] Memory chips started out very expensive, but prices rapidly dropped. Computer Design Volume 9 page 28, 1970, announced a price drop of the 3101 from $99.50 to $40 in small volumes. Electrical Design News Volume 15, 1970 gave the initial price of the 1405 as $13.30 in quantities of 100. Ironically, the Intel 3101 is now a collector's item and costs much more than the original price on eBay — hundreds of dollars for the right package.

[3] The datasheet for the 1405A shift register is available at Intel-vintage.info or Intel's data catalog 1976 (at archive.org).

[4] Many companies made shift register memories. For instance, in 1969 Philco (an electronics manufacturer owned by Ford Motor Company) claimed to have the longest commercially available shift register at 256 bits (Electronic Design, Volume 17, p251). For lots more information on shift register memory, see Don Lancaster's December 1974 Radio-Electronics article, "How it works: IC MOS shift registers".

[5] I obtained the Datapoint display board on eBay from Zuigadrummer, who currently has other Datapoint boards for sale. She was very helpful to me and I recommend her.

[6] The Datapoint 2200's display provided 12 lines of 80 characters. The display memory held 1024 7-bit ASCII characters. A pair of shift registers provided 1024 bits of storage, with 7 pairs in total.

[7] For those who want to know more details of the layout... The resistor symbols are not actually resistors, but clocked precharge transistors that pull the inverter outputs high. A few years later, MOS chips would use depletion transistors instead.

The metal rectangles form connections between the silicon layer and the polysilicon layer. This technique was soon obsoleted by buried contacts which connected the two layers directly without using the metal layer. This made chip layout easier, since the metal layer could be used for interconnections without being interrupted by these connections.

The gray blobs show the undoped silicon, which can be considered non-conductive. The doped silicon is conductive, except where the polysilicon crosses it and forms a transistor. Doped and undoped silicon are hard to distinguish in the die photo, but the boundary between them is visible as a faint black line. The polysilicon is much more visible in the die photo; it is orange, or red when it forms a transistor. The colors are due to the thicknesses of the layers.

[8] A datasheet for the SH0013 clock driver is in the 1973 Fairchild Linear Integrated Circuits Data Catalog, page 6-126. A datasheet for the equivalent MH0013 is in the 1972 National MOS Integrated Circuits databook, page 123.