Reverse-engineering an electromechanical Central Air Data Computer

Determining the airspeed and altitude of a fighter plane is harder than you'd expect. At slower speeds, pressure measurements can give the altitude, air speed, and other "air data". But as planes approach the speed of sound, complicated equations are needed to accurately compute these values. The Bendix Central Air Data Computer (CADC) solved this problem for military planes such as the F-101 and the F-111 fighters, and the B-58 bomber.1 This electromechanical marvel was crammed full of 1955 technology: gears, cams, synchros, and magnetic amplifiers. In this blog post I look inside the CADC, describe the calculations it performed, and explain how it performed these calculations mechanically.

The Bendix MG-1A Central Air Data Computer with the case removed, showing the complex mechanisms inside. Click this image (or any other) for a larger version.

This analog computer performs calculations using rotating shafts and gears, where the angle of rotation indicates a numeric value. Differential gears perform addition and subtraction, while cams implement functions. The CADC is electromechanical, with magnetic amplifiers providing feedback signals and three-phase synchros providing electrical outputs. It is said to contain 46 synchros, 511 gears, 820 ball bearings, and a total of 2,781 major parts. The photo below shows a closeup of the gears.

A closeup of the complex gears inside the CADC.

What it does

For over a century, aircraft have determined airspeed from air pressure. A port in the side of the plane provides the static air pressure,2 which is the air pressure outside the aircraft. A pitot tube points forward and receives the "total" air pressure, a higher pressure due to the speed of the airplane forcing air into the tube. (In the photo below, you can see the long pitot tube sticking out from the nose of an F-101.) The airspeed can be determined from the ratio of these two pressures, while the altitude can be determined from the static pressure.

The F-101 "Voodoo", USAF photo.

The F-101 "Voodoo", USAF photo.

But as you approach the speed of sound, the fluid dynamics of air change and the calculations become very complicated. With the development of supersonic fighter planes in the 1950s, simple mechanical instruments were no longer sufficient. Instead, an analog computer was needed to calculate the "air data" (airspeed, altitude, and so forth) from the pressure measurements. One option would be for each subsystem (instruments, weapons control, engine control, etc.) to compute the air data separately. However, it was more efficient to have one central system perform the computation and provide the data electrically to all the subsystems that need it. This system was called a Central Air Data Computer or CADC.

The Bendix CADC has two pneumatic inputs through tubes: the static pressure3 and the total pressure. It also receives the total temperature from a platinum temperature probe. From these, it computes many outputs: true air speed, Mach number, log static pressure, differential pressure, air density, air density × the speed of sound, total temperature, and log true free air temperature.

The CADC implemented a surprisingly complex set of functions derived from fluid dynamics equations describing the behavior of air at various speeds and conditions. First, the Mach number is computed from the ratio of total pressure to static pressure. Different equations are required for subsonic and supersonic flight. Although these equations look difficult to solve mathematically, fundamentally M is a function of one variable ($P_t / P_s$), and this function is encoded in the shape of a cam. (You are not expected to understand the equations below. They are just to illustrate the complexity of what the CADC does.)

\[M < 1:\]

\[~~~\frac{P_t}{P_s} = (1+.2M^2)^{3.5}\]

\[M > 1:\]

\[~~~\frac{P_t}{P_s} = \frac{166.9215M^7}{(7M^2-1)^{2.5}}\]

Next, the temperature is determined from the Mach number and the temperature indicated by a temperature probe.

\[T = \frac{T_{ti}}{1 + .2 M^2} \]

The indicated airspeed and other outputs are computed in turn, but I won't go through all the equations. Although these equations may seem ad hoc, they can be derived from fluid dynamics principles. These equations were standardized in the 1950s by various government organizations including the National Bureau of Standards and NACA (the precursor of NASA). While the equations are complicated, they can be computed with mechanical means.
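
To get a feel for these relationships, here is a small Python sketch (a numerical shortcut of my own, not how the CADC works; the CADC encodes these functions in the shape of cams). It inverts the subsonic formula directly, solves the supersonic formula by bisection, and then applies the temperature equation:

  def mach_from_pressure_ratio(pt_over_ps):
      """Mach number from the total/static pressure ratio."""
      if pt_over_ps <= 1.8929:                       # ratio at exactly M = 1
          return (5.0 * (pt_over_ps ** (1 / 3.5) - 1.0)) ** 0.5
      lo, hi = 1.0, 10.0                             # supersonic: solve numerically
      for _ in range(60):
          m = (lo + hi) / 2
          ratio = 166.9215 * m**7 / (7 * m**2 - 1) ** 2.5
          lo, hi = (m, hi) if ratio < pt_over_ps else (lo, m)
      return (lo + hi) / 2

  def true_free_air_temperature(t_indicated, mach):
      return t_indicated / (1.0 + 0.2 * mach**2)     # T = Tti / (1 + .2 M^2)

  print(round(mach_from_pressure_ratio(1.5), 2))               # ~0.78 (subsonic)
  print(round(mach_from_pressure_ratio(4.0), 2))               # ~1.65 (supersonic)
  print(round(true_free_air_temperature(330.0, 1.65), 1))      # ~213.7 kelvin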

How it is implemented

The Air Data Computer is an analog computer that determines various functions of the static pressure, total pressure and temperature. An analog computer was selected for this application because the inputs are analog and the outputs are analog, so it seemed simplest to keep the computations analog and avoid conversions. The computer performs its computations mechanically, using the rotation angle of shafts to indicate values. For the most part, values are represented logarithmically, which allows multiplication and division to be implemented by adding and subtracting rotations. A differential gear mechanism provides the underlying implementation of addition and subtraction. Specially-shaped cams provide the logarithmic and exponential conversions as necessary. Other cams implement various arbitrary functions.
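
As a toy numeric illustration of why the logarithmic representation helps (my own example with made-up pressure values, not numbers from the CADC): if each shaft angle represents the logarithm of a quantity, then subtracting two rotations, which is what a differential does, yields the logarithm of the ratio of the quantities.

  import math

  log_ps = math.log10(2116.0)        # hypothetical static pressure
  log_pt = math.log10(3200.0)        # hypothetical total pressure
  log_ratio = log_pt - log_ps        # a differential subtracts the two "rotations"
  print(round(10 ** log_ratio, 3))   # recovers Pt/Ps ≈ 1.512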

The diagram below, from patent 2,969,910, shows some of the operations. At the left, transducers convert the pressure and temperature inputs from physical quantities into shaft rotations, applying a log function in the process. Subtracting the two pressures with a differential gear mechanism (X-in-circle symbol) produces the log of the pressure ratio. Cam "CCD 12" generates the Mach number from this log pressure ratio, still expressed as a shaft rotation. A synchro transmitter converts the shaft rotation into a three-phase electrical output from the CADC. The remainder of the diagram uses more cams and differentials to produce the other outputs. Next, I'll discuss how these steps are implemented.

A diagram showing how values are computed by the CADC. Source: Patent 2969910.

The pressure transducer

The CADC receives the static and total pressure through tubes connected to the front of the CADC. (At the lower right, one of these tubes is visible.) Inside the CADC, two pressure transducers convert the pressures into rotational signals. The pressure transducers are the black domed cylinders at the top of the CADC.

The pressure transducers are the two black domes at the top. The circuit boards next to each pressure transducer are the amplifiers. The yellowish transformer-like devices with three windings are the magnetic amplifiers.

Each pressure transducer contains a pair of bellows that expand and contract as the applied pressure changes. They are connected to opposite sides of a shaft so they cause small rotations of the shaft.

Inside the pressure transducer. The two disc-shaped bellows are connected to opposite sides of a shaft so the shaft rotates as the bellows expand or contract.

The pressure transducer has a tricky job: it must measure tiny pressure changes, but it must also provide a rotational signal that has enough torque to rotate all the gears in the CADC. To accomplish this, the pressure transducer uses a servo loop. The bellows produce a small shaft motion that is detected by an inductive pickup. This signal is amplified and drives a motor with enough power to move all the gears. The motor is also geared to counteract the movement of the bellows. This creates a feedback loop so the motor's rotation tracks the air pressure, but provides much more force. A cam is used so the output corresponds to the log of the input pressure.
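
The toy loop below (a simplification of mine, not the actual control law) shows the idea: the amplified error between the bellows position and the geared-back motor position drives the motor until the two match, so the high-torque output shaft tracks the low-force bellows.

  def servo_track(bellows_position, motor_angle=0.0, gain=0.3, steps=50):
      for _ in range(steps):
          error = bellows_position - motor_angle   # sensed by the inductive pickup
          motor_angle += gain * error              # amplifier drives the motor
      return motor_angle                           # tracks the input, with far more torque

  print(round(servo_track(1.25), 4))               # converges to ~1.25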

This diagram shows the structure of the transducer. From "Air Data Computer Mechanization."

Each transducer signal is amplified by three circuit boards centered around a magnetic amplifier, a transformer-like amplifier circuit that was popular before high-power transistors came along. The photo below shows how the amplifier boards are packed next to the transducers. The boards are complex, filled with resistors, capacitors, germanium transistors, diodes, relays, and other components.

This end-on view of the CADC shows the pressure transducers, the black cylinders. Next to each pressure transducer is a complex amplifier consisting of multiple boards with transistors and other components. The magnetic amplifiers are the yellowish transformer-like components.

Temperature

The external temperature is an important input to the CADC since it affects the air density. A platinum temperature probe provides a resistance4 that varies with temperature. The resistance is converted to rotation by an electromechanical transducer mechanism. Like the pressure transducer, the temperature transducer uses a servo mechanism with an amplifier and feedback loop. For the temperature transducer, though, the feedback signal is generated by a resistance bridge using a potentiometer driven by the motor. By balancing the potentiometer's resistance with the platinum probe's resistance, a shaft rotation is produced that corresponds to the temperature. The cam is configured to produce the log of the temperature as output.

This diagram shows the structure of the temperature transducer. From "Air Data Computer Mechanization."

The temperature transducer section of the CADC is shown below. The feedback potentiometer is the red cylinder at the lower right. Above it is a metal-plate adjustment cam, which will be discussed below. The CADC is designed in a somewhat modular way, with the temperature section implemented as a removable wedge-shaped unit, the lower two-thirds of the photo. The temperature transducer, like the pressure transducer, has three boards of electronics to implement the feedback amplifier and drive the motor.

The temperature transducer section of the CADC.

The differential

The differential gear assembly is a key component of the CADC's calculations, as it performs addition or subtraction of rotations: the rotation of the output shaft is the sum or difference of the input shafts, depending on the direction of rotation.5 When rotations are expressed logarithmically, addition and subtraction correspond to multiplication and division. This differential is constructed as a spur-gear differential. It has inputs at the top and bottom, while the body of the differential rotates to produce the sum. The two visible gears in the body mesh with the internal input gears, which are not visible. The output is driven by the body through a concentric shaft.

A closeup of a differential mechanism.

The cams

The CADC uses cams to implement various functions. Most importantly, cams perform logarithms and exponentials. Cams also implement more complex functions of one variable such as ${M}/{\sqrt{1 + .2 M^2}}$. The photo below shows a cam (I think exponential) with the follower arm in front. As the cam rotates, the follower moves in and out according to the cam's radius, providing the function value.

A cam inside the CADC implements a function.

The cams are combined with a differential in a clever way to make the cam shape more practical, as shown below.6 The input (23) drives the cam (30) and the differential (37-41). The follower (32) tracks the cam and provides a second input (35) to the differential. The sum from the differential produces the output (26).

This diagram, from Patent 2969910, shows how the cam and follower are connected to a differential.
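
Numerically, the scheme works as sketched below (my own illustration of the approach described in footnote 6, using an exponential as the example function): the cam stores only the difference between the desired function and a scaled copy of the input, and the differential adds the scaled input back.

  import math

  def f(x):                                    # example function to implement
      return math.exp(x)

  x0, x1 = 0.0, 1.0                            # input range of the cam
  scale = (f(x1) - f(x0)) / (x1 - x0)          # match input to output at both ends

  def cam(x):                                  # small correction stored in the cam profile
      return f(x) - (f(x0) + scale * (x - x0))

  def output(x):                               # differential: scaled input + cam correction
      return (f(x0) + scale * (x - x0)) + cam(x)

  print(round(cam(x0), 6), round(cam(x1), 6))      # both 0: the profile wraps around smoothly
  print(round(output(0.5), 4), round(f(0.5), 4))   # reconstructed value matches f(0.5)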

The warped plate cam

Some functions are implemented by warped metal plates acting as cams. This type of cam can be adjusted by turning the 20 setscrews to change the shape of the plate. A follower rides on the surface of the cam and provides an input to a differential underneath the plate. The differential adds the cam position to the input rotation, producing a modified rotation, as with the solid cam. The pressure transducer, for instance, uses a cam to generate the desired output function from the bellows deflection. By using a cam, the bellows can be designed for good performance without worrying about its deflection function.

A closeup of a warped-plate cam.

The synchro outputs

Most of the outputs from the CADC are synchro signals.7 A synchro is an interesting device that can transmit a rotational position electrically over three wires. In appearance, a synchro is similar to an electric motor, but its internal construction is different, as shown below. In use, two synchros have their stator windings connected together, while the rotor windings are driven with AC. Rotating the shaft of one synchro causes the other to rotate to the same position. I have a video showing synchros in action here.

Cross-section diagram of a synchro showing the rotor and stators.

Internally, a synchro has a moving rotor winding and three fixed stator windings. When AC is applied to the rotor, voltages are developed on the stator windings depending on the position of the rotor. These voltages produce a torque that rotates the synchros to the same position. In other words, the rotor receives power (26 V, 400 Hz in this case), while the three stator wires transmit the position. The diagram below shows how a synchro is represented schematically, with rotor and stator coils.

The schematic symbol for a synchro.
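
As a rough model (my simplification of standard synchro behavior, not taken from the CADC documents), the amplitude of the AC signal coupled into each stator winding varies with the shaft angle, so the three signals together encode the position:

  import math

  def stator_amplitudes(shaft_deg, excitation=26.0):
      # amplitudes on the three stator windings, spaced 120° apart (simplified)
      return [round(excitation * math.cos(math.radians(shaft_deg - k * 120)), 1)
              for k in range(3)]

  print(stator_amplitudes(30.0))   # [22.5, 0.0, -22.5] encodes a 30° shaft position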

Before digital systems, synchros were very popular for transmitting signals electrically through an aircraft. For instance, a synchro could transmit an altitude reading to a cockpit display or a targeting system. For the CADC, most of the outputs are synchro signals, which convert the rotational values of the CADC to electrical signals. The three stator windings from the synchro inside the CADC are wired to an external synchro that receives the rotation. For improved resolution, many of these outputs use two synchros: a coarse synchro and a fine synchro. The two synchros are typically geared in an 11:1 ratio, so the fine synchro rotates 11 times as fast as the coarse synchro. Over the output range, the coarse synchro may turn 180°, providing the approximate output, while the fine synchro spins multiple times to provide more accuracy.
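
Here's a sketch (my own, assuming the 11:1 gearing mentioned above) of how a coarse/fine pair improves resolution: the fine synchro supplies the precision, and the coarse synchro only has to be accurate enough to pick the correct fine revolution.

  RATIO = 11   # the fine synchro turns 11 times as fast as the coarse synchro

  def encode(value_deg):
      return value_deg % 360, (value_deg * RATIO) % 360   # coarse angle, fine angle

  def decode(coarse, fine):
      turns = round((coarse * RATIO - fine) / 360)        # pick the fine revolution
      return (turns * 360 + fine) / RATIO

  coarse, fine = encode(123.4)
  print(round(decode(coarse + 2.0, fine), 1))   # 123.4: a 2° coarse error doesn't matter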

The front of the CADC has multiple output synchros with anti-backlash springs.

The air data system

The CADC is one of several units in the system, as shown in the block diagram below.8 The outputs of the CADC go to another box called the Air Data Converter, which is the interface between the CADC and the aircraft systems that require the air data values: fire control, engine control, navigation system, cockpit display instruments, and so forth. The motivation for this separation is that different aircraft types have different signal requirements: the CADC could remain the same, and only the converter needed to be customized. Some aircraft required "up to 43 outputs including potentiometers, synchros, digitizers, and switches."

This block diagram shows how the Air Data Computer integrates with sensors and other systems. The unlabeled box on the right is the converter. From MIL-C-25653C(USAF).

The CADC was also connected to a cylindrical unit called the "Static pressure and angle of attack compensator." This unit compensates for errors in static pressure measurements due to the shape of the aircraft by producing the "position error correction". Since the compensation factor depended on the specific aircraft type, the compensation was computed outside the Central Air Data Computer, again keeping the CADC generic. This correction factor depends on the Mach number and angle of attack, and was implemented as a three-dimensional cam. The cam's shape (and thus the correction function) was determined empirically, rather than from fundamental equations.

The CADC was wired to other components through five electrical connectors as shown in the photo below.9 At the bottom are the pneumatic connections for static pressure and total pressure. At the upper right is a small elapsed time meter.

The front of the CADC has many mil-spec round connectors.

Conclusions

The Bendix MG-1A Central Air Data Computer is an amazingly complex piece of electromechanical hardware. It's hard to believe that this system of tiny gears was able to perform reliable computations in the hostile environment of a jet plane, subjected to jolts, accelerations, and vibrations. But it was the best way to solve the problem at the time,10 showing the ingenuity of the engineers who developed it.

The CADC inside its case. From the outside, its mechanical marvels are hidden.

I plan to continue reverse-engineering the Bendix CADC and hope to get it operational,11 so follow me on Twitter @kenshirriff or RSS for updates. I've also started experimenting with Mastodon recently as @kenshirriff@oldbytes.space. Until then, you can check out CuriousMarc's video below to see more of the CADC. Thanks to Joe for providing the CADC. Thanks to Nancy Chen for obtaining a hard-to-find document for me.

Notes and references

  1. I haven't found a definitive list of which planes used this CADC. Based on various sources, I believe it was used in the F-86, F-101, F-104, F-105, F-106, and F-111, and the B-58 bomber. 

  2. The static air pressure can also be provided by holes in the side of the pitot tube. I couldn't find information indicating exactly how these planes received static pressure. 

  3. The CADC also has an input for the "position error correction". This provides a correction factor because the measured static pressure may not exactly match the real static pressure. The problem is that the static pressure is measured from a port on the aircraft. Distortions in the airflow may cause errors in this measurement. A separate box, the "compensator", determines the correction factor based on the angle of attack. 

  4. The platinum temperature probe is type MA-1, defined by specification MIL-P-25726. It apparently has a resistance of 50 Ω at 0 °C. 

  5. Strictly speaking, the output of the differential is the sum of the inputs divided by two. I'm ignoring the factor of 2 because the gear ratios can easily cancel it out. 

  6. Cams are extensively used in the CADC to implement functions of one variable, including exponentiation and logarithms. The straightforward way to use a cam is to read the value of the function off the cam directly, with the radius of the cam at each angle representing the value. This approach encounters a problem when the cam wraps around, since the cam's profile will suddenly jump from one value to another. This poses a problem for the cam follower, which may get stuck on this part of the cam unless there is a smooth transition zone. Another problem is that the cam may have a large range between the minimum and maximum outputs. (Consider an exponential output, for instance.) Scaling the cam to a reasonable size will lose accuracy in the small values. The cam will also have a steep slope for the large values, making it harder to track the profile.

    The solution is to record the difference between the input and the output in the cam. A differential then adds the input value to the cam value to produce the desired value. The clever part is that by scaling the input so it matches the output at the start and end of the range, the difference function drops to zero at both ends. Thus, the cam profile matches when the angle wraps around, avoiding the sudden transition. Moreover, the difference between the input and the output is much smaller than the raw output, so the cam values can be more accurate. (This only works because the output functions are increasing functions; this approach wouldn't work for a sine function, for instance.)

    This diagram, from Patent 2969910, shows how a cam implements a complex function.

    The diagram above shows how this works in practice. The input is \(log~ dP/P_s\) and the output is \(log~M / \sqrt{1+.2KM^2}\). (This is a function of Mach number used for the temperature computation; K is 1.) The small humped curve at the bottom is the cam correction. Although the input and output functions cover a wide range, the difference that is encoded in the cam is much smaller and drops to zero at both ends. 

  7. The US Navy made heavy use of synchros for transmitting signals throughout ships. The synchro diagrams are from two US Navy publications: US Navy Synchros (1944) and Principles of Synchros, Servos, and Gyros (2012). These are good documents if you want to learn more about synchros. The diagram below shows how synchros could be used on a ship.

    A Navy diagram illustrating synchros controlling a gun on a battleship.

  8. To summarize the symbols, the outputs are: log TFAT: true free air temperature (the ambient temperature without friction and compression); log Ps: static pressure; M: Mach number; Qc: differential pressure; ρ: air density; ρa: air density times the speed of sound; Vt: true airspeed; Tt: total temperature (higher due to compression of the air). Inputs are: TT: total temperature (higher due to compression of the air); Pti: indicated total pressure (higher due to velocity); Psi: indicated static pressure; log Psi/Ps: the position error correction from the compensator. The compensator uses input αi: angle of attack, and produces αT: true angle of attack, and aT: speed of sound. 

  9. The electrical connectors on the CADC have the following functions: J614: outputs to the converter, J601: outputs to the converter, J603: AC power (115 V, 400 Hz), J602: to/from the compensator, and J604: input from the temperature probe. 

  10. An interesting manual way to calculate air data was with a circular slide rule, designed for navigation and air data calculation. It gave answers for various combinations of pressure, temperature, Mach number, true airspeed, and so forth. See the MB-2A Air Navigation Computer instructions for details. Also see patent 2528518. I'll also point out that from the late 1800s through the 1940s and on, the term "computer" was used for any sort of device that computed a value, from an adding machine to a slide rule (or even a person). The meaning is very different from the modern usage of "computer". 

  11. It was very difficult to find information about the CADC. The official military specification is MIL-C-25653C(USAF). After searching everywhere, I was finally able to get a copy from the Technical Reports & Standards unit of the Library of Congress. The other useful document was in an obscure conference proceedings from 1958: "Air Data Computer Mechanization" (Hazen), Symposium on the USAF Flight Control Data Integration Program, Wright Air Dev Center US Air Force, Feb 3-4, 1958, pp 171-194. 

Silicon reverse-engineering: the Intel 8086 processor's flag circuitry

Status flags are a key part of most processors, indicating if an arithmetic result is negative, zero, or has a carry, for instance. In this post, I take a close look at the flag circuitry in the Intel 8086 processor (1978), the chip that launched the PC revolution.1 Looking at the silicon die of the 8086 reveals how its flags are implemented. The 8086's flag circuitry is surprisingly complicated, full of corner cases and special handling. Moreover, I found an undocumented zero register that is used by the microcode.

The die photo below shows the 8086 microprocessor under a microscope. The metal layer on top of the chip is visible, with the silicon and polysilicon mostly hidden underneath. Around the edges of the die, bond wires connect pads to the chip's 40 external pins. I've labeled the key functional blocks; the ones that are important to this discussion are darker and will be discussed in detail below. The Arithmetic/Logic Unit (ALU, lower left) is split in two. The circuitry for the flags is in the middle, giving it access to the ALU's results for the low byte and the high byte. I've marked each flag latch in red in the diagram below. They appear to be randomly scattered, but there are reasons for this layout.

The 8086 die under a microscope, with main functional blocks labeled. This photo shows the chip's single metal layer; the polysilicon and silicon are underneath. Click on this image (or any other) for a larger version.

Flags and arithmetic operations

The 8086 supports three types of arithmetic: unsigned arithmetic, signed arithmetic, and BCD (Binary-Coded Decimal), and this is key to understanding the flags. Unsigned arithmetic uses standard binary values: a byte holds an integer value from 0 to 255, while a 16-bit word holds a value from 0 to 65535. When adding, a carry indicates that the result is too big to fit in a byte or word. (I'll use byte operations to keep the examples small; operations on words are similar.) For instance, suppose you add hex 0x60 + 0x30. The result, 0x90, fits in a byte so there is no carry. But adding 0x90 + 0x90 yields 0x120. This result doesn't fit in a byte, so the result is 0x20 with the carry flag set to 1. The carry allows additions to be chained together, like doing long decimal addition on paper. For subtraction, the carry bit indicates a borrow.

The second type of arithmetic is 2's complement, which supports negative numbers. In a signed byte, 0x00 to 0x7f represent 0 to 127, while 0x80 to 0xff represent -128 to -1. If the top bit of a signed value is set, the value is negative; this is what the sign flag indicates. The clever thing about 2's complement arithmetic is that the same instructions are used for unsigned arithmetic and 2's complement arithmetic. The only thing that changes is the interpretation. As an example of signed arithmetic, 0xff + 0x05 = 0x04 corresponds to -1 + 5 = 4. Signed arithmetic can result in overflow, though. For example, suppose you add 112 + 112: 0x70 + 0x70 = 0xe0. Although that is fine in unsigned arithmetic, in signed arithmetic that result is unexpectedly -32. The problem is that the result doesn't fit in a single signed byte. In this case, the overflow flag is set to indicate that the result overflowed. In other words, the carry flag indicates that an unsigned result doesn't fit in a byte or word, while the overflow flag indicates that a signed result doesn't fit.

The third type of arithmetic is BCD (Binary-Coded Decimal), which stores a decimal digit as a 4-bit binary value. Thus, two digits can be packed into a byte. For instance, adding 12 + 34 = 46 corresponds to 0x12 + 0x34 = 0x46 with BCD. After adding or subtracting BCD values, a special instruction is needed to perform any necessary adjustment.2 This instruction needs to know if there was a carry from the lower digit to the upper digit, i.e. a carry out of bit 3 into bit 4. Many systems call this a half-carry, since it is the carry out of a half-byte, but Intel calls it the auxiliary carry flag.

The diagram below summarizes the 8086's flags. The overflow, sign, auxiliary carry, and carry flags were discussed above. The zero flag simply indicates that the result of an operation was zero. The parity flag counts the number of 1 bits in a result byte and the flag is set if the number of 1 bits is even. At the left are the three control flags. The trap flag turns on single-stepping mode. The direction flag controls the direction of string operations. Finally, the interrupt flag enables interrupts.

The control and status flags in the 8086. Diagram from iAPX 86/88 Users Manual fig 2.9.

The status flags are often used with the CMP (Compare) instruction, which performs a subtraction without storing the result. Although this may seem pointless, the status flags show the relationship between the values. For instance, the zero flag will be set if the two values are equal. Other flag combinations indicate "less than", "greater than", and other useful conditions. Loops and if statements use conditional jump instructions that test these flags. (I wrote more about 8086 conditional jumps here.)
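
To make the flag definitions concrete, here is a small Python model (mine, not Intel's circuitry) that computes the six status flags for an 8-bit addition; word operations work the same way on 16 bits.

  def add8_flags(a, b):
      full = a + b
      result = full & 0xFF
      carry_out = full > 0xFF                                  # carry out of bit 7
      carry_in_top = bool(((a & 0x7F) + (b & 0x7F)) & 0x80)    # carry into bit 7
      flags = {
          "CF": carry_out,                          # unsigned result didn't fit
          "PF": bin(result).count("1") % 2 == 0,    # even number of 1 bits in the byte
          "AF": ((a & 0xF) + (b & 0xF)) > 0xF,      # half-carry out of bit 3
          "ZF": result == 0,
          "SF": bool(result & 0x80),                # top bit is the sign
          "OF": carry_in_top != carry_out,          # signed result didn't fit
      }
      return result, flags

  print(add8_flags(0x90, 0x90))   # 0x20, CF=1; also OF=1 since -112 + -112 overflows
  print(add8_flags(0x70, 0x70))   # 0xE0, CF=0 but OF=1 since 112 + 112 overflows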

Microcode and flags

Most people think of machine instructions as the basic steps that a computer performs. However, many processors (including the 8086) have another layer of software underneath: microcode. Instead of building the processor's control logic out of flip-flops and gates, microcode replaces much of the control logic with code. To execute a machine instruction, the computer internally executes several simpler micro-instructions, specified by the microcode. The main advantage of microcode is that it turns the design of control circuitry into a programming task instead of a difficult logic design task.

An 8086 micro-instruction is encoded into 21 bits as shown below. Every micro-instruction contains a move from a source register to a destination register, each specified with 5 bits. The meaning of the remaining bits depends on the type field, which is two or three bits long. For the current discussion, the most relevant part of the microcode is the Flag bit F at the end, which indicates that the micro-instruction will update the flags.3

The encoding of a micro-instruction into 21 bits. Based on NEC v. Intel: Will Hardware Be Drawn into the Black Hole of Copyright?

As an example, the microcode below implements the INC (increment) and DEC (decrement) instructions. The first micro-instruction moves a word from the register specified by the instruction (indicated by M) to the ALU's temporary B register. It sets up the ALU to perform the operation specified by the instruction (indicated by XI), and indicates that the next micro-instruction (NX) is the last for this machine instruction. The second micro-instruction moves the ALU result (Σ) to the specified register (M), tells the system to run the next instruction RNI, and causes the flags (F) to be updated from the ALU result. Thus, the flags are updated with the results of the increment or decrement.

   move       action
1 M→tmpb   XI    tmpb, NX
2 Σ→M      RNI   F

This microcode is rather generic: it doesn't explicitly specify the register or the ALU operation. Instead, the gate logic determines them from the machine instruction. This illustrates the 8086's hybrid approach: although the 8086 uses microcode, the microcode is parameterized and much of the instruction functionality is implemented with gate logic. When the microcode specifies a generic ALU operation, the gate logic determines from the instruction which ALU operation to perform (in this case, increment or decrement). The gate logic also determines from the instruction bits which register to modify. Finally, the microcode says to update the flags, but the ALU determines how to update the flags. This hybrid approach kept the microcode small enough for 1978 technology; the microcode above supports 16 different increment and decrement instructions.

Microcode can also read or write the flags as a whole, treating the flags as a register. The first micro-instruction below stores the flags to memory (via the OPerand Register), while the second micro-instruction below loads the flags from memory. The first micro-instruction is part of the microcode for PUSHF (push flags to the stack) and interrupt handling. The second micro-instruction is used for POPF (pop flags from the stack), the interrupt return code, and the reset code. Similar micro-instructions are used for LAHF (Load AH from Flags) and SAHF (Store AH to Flags).

  F→OPR
  OPR→F

Microcode can also modify some flags directly with the micro-operations CCOF (Clear Carry and Overflow Flags), SCOF (Set Carry and Overflow Flags), and CITF (Clear Interrupt and Trap Flags). The first two are used in the microcode for multiplication and division, while the third is used in the interrupt handler.

Finally, some machine instructions are implemented directly in logic and do not use microcode at all. The CMC (Complement Carry), CLC (Clear Carry), STC (Set Carry), CLI (Clear Interrupt), STI (Set Interrupt), CLD (Clear Direction), and STD (Set Direction) instructions modify the flags directly without running any microcode. (During instruction decoding, the Group Decode ROM indicates that these instructions are implemented with logic, not microcode.)

The latch circuit that stores flags

Each flag is stored in a latch circuit that holds the flag's value until it is updated. A typical flag latch has two inputs for updates: the flag value generated by the ALU, and a value from the bus when storing to all the flags. The latch also has a "hold" input to keep the existing value. (Some flags, such as carry, have more inputs, as will be described below.) A multiplexer (built from pass transistors) selects one of the inputs for the latch.

A typical latch to hold a flag. The latch is constructed from NMOS transistors and inverters. A "1" input turns on a transistor, letting its input pass through it.

The latch is based on pass transistors and two inverters forming a loop. To see how it works, suppose select 1 is high. This turns on the transistor, letting the in 1 value flow through the transistor and the first inverter. When clk' is high, the signal will flow through the second inverter to the output. While hold is high, the output is fed back to the input, causing the latch to "remember" its value. The latch is controlled by the CPU's clock and it will only update the output when clk' is high. While clk' is low, the output will remain unchanged; the capacitance of the wire is enough to provide an input to the second inverter, a bit like dynamic RAM.4

The diagram below shows how one of these latches looks on the die. The pinkish regions are doped silicon, while the brownish lines are polysilicon. A transistor gate is formed where polysilicon crosses over doped silicon. Each inverter consists of two transistors. The signal flows through the latch in roughly a counterclockwise circle, starting with one of the inputs on the right.

The latch for the Sign Flag. The metal layer was removed for this image.

Implementation of the flags

In this section, I'll discuss each flag in detail. But first, I'll explain the circuitry common to all the flags. As explained above, microcode can treat the flags as a register, reading or writing all the flags in parallel. When the microcode specifies flags as the destination for a move, a signal is generated that I call flags-load. This signal enables the multiplexer inputs (described above) that connect the ALU bus to the flag latches, loading the bits into the latches. Conversely, when microcode specifies the flags as the source for a move, a signal is generated that I call flags-read.5 This signal connects the outputs of the flag latches to the ALU bus through pass transistors, loading the value of the flags onto the bus.

Sign flag

The sign flag is pretty simple: it stores the top bit of the ALU result, indicating a negative result. For a byte operation, this is bit 7 and for a word operation, bit 15, so some logic selects the right bit based on the instruction. (This is another example of how logic circuitry looks after the details that microcode ignores.) The output from the sign flag goes to the condition evaluation circuitry to support conditional jumps, as do the other arithmetic flags. I wrote about that recently, so I won't go into details here.

The six arithmetic status flags are updated by arithmetic operations when the microcode F bit is set. This bit generates a signal that I call arith-flag-load, indicating that the flags should be updated based on the ALU result. This signal enables the multiplexer inputs between the ALU circuitry and the flag latches. There is an inconvenient special case: rotate instructions only update the overflow and carry flags for compatibility with the 8080 processor.6 To support this, a rotate instruction blocks the arith-flag-load signal for the sign, parity, zero, and auxiliary carry flags. Again, this is handled by gates, rather than microcode.

Zero flag

The zero flag is also straightforward. It indicates that the result byte or word is all zeros, for a byte or word operation respectively. An 8-input NOR gate at the top of the flags circuitry determines if the lower byte is all zeros, while an 8-input NOR gate at the bottom of the flags circuitry tests the upper byte. These NOR gates are spread out and span the width of the ALU, essentially a wire that is pulled low by any result bits that are high. The zero flag is set based on the low byte or the whole word, for a byte instruction or word instruction respectively.

There is a second zero flag, hidden from the programmer. This zero flag always tests the full 16-bit result, so I'll call it Z16. The other key difference is that the Z16 flag is updated on every ALU micro-operation, rather than under the control of the F bit. Thus, the Z16 flag can be updated without interfering with the programmer-visible zero flag. This makes it useful for internal microcode operations, such as loops.

Parity flag

The parity flag is conceptually simple, but it is fairly expensive to implement in hardware as it requires exclusive-oring the eight bits of the result byte together. This is implemented with seven XOR circuits.7 Since each XOR circuit is implemented with two logic gates, the raw parity calculation requires 14 gates. Only 8-bit parity is supported, even if a word operation is performed.8
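
In software terms, the flag is simply the XOR of the eight result bits; a quick sketch of mine (XOR is associative, so this sequential loop gives the same value as the pairing order described in footnote 7):

  def parity_flag(byte):
      p = 0
      for i in range(8):          # XOR the eight result bits together
          p ^= (byte >> i) & 1
      return p == 0               # flag is set when the count of 1 bits is even

  print(parity_flag(0x20))   # one 1 bit   -> parity flag clear
  print(parity_flag(0x33))   # four 1 bits -> parity flag set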

The schematic below shows how an XOR circuit is implemented. It uses two gates; due to the properties of NMOS transistors, the AND-NOR gate is implemented as a single gate. To see how it works, suppose A and B are 0. The first NOR gate will output 1, forcing the output to 0. If A and B are both 1, the AND gate will force the output to 0. Otherwise the output is 1, providing the XOR function. The key point is that XOR is fairly costly compared to other logic functions.

Schematic of an XOR circuit.

Auxiliary carry flag

The auxiliary carry starts off simple, but is complicated by the decimal adjust instructions. In most cases, the auxiliary carry is carry-out from bit 3 of the ALU (i.e. the half-carry). For subtraction, the flag must be inverted to indicate a borrow, so the half-carry is exclusive-or'd with a subtraction signal.

However, the decimal adjust instructions (DAA, DAS, AAA, AAS) use the auxiliary carry and also modify the auxiliary carry when performing a decimal adjust. After an addition or subtraction, the decimal adjust instructions produce a correction value if necessary. If the lower digit is more than 9 or the auxiliary carry is set, the value 6 is added to (or subtracted from) the accumulator.9 The DAA and AAA instructions also test if a correction of 0x60 is needed for the upper digit. The correction signals are wired to the ALU bus to generate the correction factor of 0x06, 0x60, or 0x66 for an adjustment ALU operation. The correction signal for the low digit is stored as the auxiliary carry flag.
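
The behavioral sketch below (my reading of the description above, simplified and ignoring the edge cases covered in footnote 9) shows how the correction factor of 0x06, 0x60, or 0x66 would be chosen after an addition:

  def daa_correction(result, aux_carry, carry):
      fix_low = aux_carry or (result & 0x0F) > 9     # low digit needs +0x06
      fix_high = carry or (result & 0xF0) > 0x90     # high digit needs +0x60
      return (0x06 if fix_low else 0) | (0x60 if fix_high else 0)

  # 0x19 + 0x18 = 0x31 in binary, with a half-carry; adding the 0x06 correction
  # gives the proper BCD result 0x37 (see footnote 2).
  print(hex((0x31 + daa_correction(0x31, aux_carry=True, carry=False)) & 0xFF))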

Carry flag

The carry flag is surprisingly complex, with five inputs to the carry flag input multiplexer. The first input is the carry value for an ALU operation:10 the top bit of the ALU result (bit 7 or 15 for a byte or word operation). However, for a subtraction the carry is inverted to form the borrow. But for a DAA or DAS decimal adjust operation, the carry comes from the high-digit correction signal. And for an AAA or AAS ASCII adjust operation, the carry comes from the low-digit correction signal. These cases are determined with logic gates and fed into a single multiplexer input.

Another multiplexer input supports the CMC (Complement Carry) instruction by feeding in the current flag value but inverted. The STC and CLC (Set Carry and Clear Carry) instructions are implemented by feeding the low bit of the instruction into a different multiplexer input. This input also supports the micro-instructions SCOF (Set Carry, Overflow Flags), CCOF (Clear Carry, Overflow Flags), and RCY (Reset Carry).

The rotate and shift instructions have complex interactions with the carry flag, since bits are shifted in and out of the carry flag. For a shift or rotate, a separate multiplexer input provides the bit for the carry flag latch. For a right shift or rotate, the lowest bit of the ALU argument is fed into the carry flag. For a left shift or rotate, the carry out of bit 15 or bit 7 is fed into the carry flag; this was the highest bit for a word or byte operation respectively.

The output from the carry flag is fed into the ALU's carry-in for the ADC (Add with Carry), SBB (Subtract with Borrow), and RCL (Rotate through Carry, Left) instructions; the carry is inverted for SBB to form the borrow. For an RCR (Rotate through Carry, Right), the carry is fed into the ALU's output bit 7 or 15 (for a byte or word operation respectively).
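
For example, rotate-through-carry-left on a byte is effectively a 9-bit rotation: the old carry becomes bit 0 and the old bit 7 becomes the new carry. (This sketch of mine models the instruction's behavior, not the ALU circuitry.)

  def rcl8(value, carry):
      new_carry = (value >> 7) & 1                     # old bit 7 moves into the carry flag
      return ((value << 1) & 0xFF) | carry, new_carry  # old carry moves into bit 0

  print(rcl8(0b1000_0001, 0))   # -> (0b0000_0010, 1)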

Overflow flag

The circuitry for the overflow flag is fairly complicated, as there are multiple cases. For an arithmetic operation, the overflow flag indicates a signed overflow. The overflow is computed as the exclusive-or of the carry-in to the top bit and the carry-out from the top bit, selected for a byte or word operation. (I explained the mathematics behind this earlier.)

For a shift or rotate, however, the overflow flag indicates that the shifted value changed sign. The ALU implements left shifts and rotates by passing bits as carries so the old sign bit is the carry-out from the top bit, while the new sign bit is the carry-in to the top bit. Thus, the standard arithmetic overflow circuit also handles left shifts and rotates. On the other hand, for a shift or rotate right, the top two bits of the result are exclusive-or'd together to see if they are different: bits 6 and 7 for a byte shift and bits 14 and 15 for a word shift. (The second-from-the-top bit was the sign bit before the shift.)

Finally, two micro-instructions affect the flag: CCOF (Clear Carry and Overflow Flags) and SCOF (Set Carry and Overflow Flags). All these different sources for the overflow flag are combined in logic gates, rather than a complex multiplexer like the carry flag used.

Direction flag

The three remaining flags are "control" flags: rather than storing the status of an ALU operation, these flags control the CPU's behavior. The direction flag controls the direction of string operations that scan through memory: auto-incrementing or auto-decrementing. This is implemented by feeding the direction flag into the Constant ROM to determine the increment value applied to the SI and DI registers. The direction flag is set or cleared through the STD and CLD instructions (Set Direction and Clear Direction). For these instructions, the low bit of the instruction is passed into the flag to set or clear it as appropriate.

Interrupt flag

The output from the interrupt flag goes to the interrupt handling circuitry to enable or disable interrupts. This flag is set or cleared by a programmer through the STI and CLI instructions. For the STI and CLI instructions, the low bit of the instruction is passed into the flag to set or clear it as appropriate. Microcode can clear the interrupt flag and the trap flag (discussed below) with the CITF (Clear Interrupt and Trap Flag) micro-instruction. This is used in the interrupt handler to disable subsequent interrupts and traps. The CITF micro-instruction is implemented with a separate input to the flag latch.

Trap flag

The trap flag turns on single-stepping for debugging. With the trap flag on, every instruction generates an interrupt. This flag doesn't have machine instructions to modify it directly. Instead, the programmer must mess around with the PUSHF and POPF instructions to put all the flags on the stack and modify the flag bit there (details). Like the interrupt flag, the trap flag has an input to clear it if the CITF micro-instruction is active.

Layout of the flag circuitry

The diagram below shows the circuitry for the flags on the die, with the approximate location of each flag indicated. ALU bits 7 through 0 are above this circuitry and ALU bits 15 through 8 are below. The zero gates stretch the length of the ALU at the top and bottom, while the parity gates are near the low byte of the ALU. The flag circuitry appears highly irregular on the die because each flag has different circuitry. However, the circuitry for a flag is generally near the appropriate bit that receives the flag, so the layout is not as arbitrary as it may seem. For instance, the sign flag is affected by bit 7 or 15 of the ALU result and is loaded or stored to bit 7, so it is at the left. The trap and interrupt flags are outside the ALU, to the right of this image.

Closeup of the circuitry on the die that implements the flags. The metal layer has been removed to show the polysilicon and silicon underneath.

The history behind the 8086 flags11

The Datapoint 2200 (1970) is a desktop computer that was sold as a "programmable terminal". Although mostly forgotten now, the Datapoint 2200 is one of the most influential computers ever, as it led to the 8086 processor and thus the modern x86 architecture. For flags, the Datapoint 2200 had four "control flip-flops": carry/borrow,12 zero, sign, and parity. These were not bits in a register and could not be accessed directly. Instead, conditional jumps, subroutine calls, or subroutine returns could be performed based on the status of one of these flip-flops. Because the Datapoint 2200 was used as a terminal, and terminal protocols often used parity, implementing parity in hardware was a useful feature.

But how did the Datapoint 2200 lead to the 8086? The Datapoint 2200 was created before the microprocessor, so its processor was a large board of TTL chips. Datapoint asked Intel and Texas Instruments if they could replace this TTL processor with a single chip. Texas Instruments created the TMX 1795, the first 8-bit microprocessor. Intel created the 8008 shortly after. Both chips copied the instruction set and architecture of the 2200. Datapoint didn't like either chip and stuck with TTL. Texas Instruments couldn't find a customer for the TMX 1795 and abandoned it. Intel, on the other hand, marketed the 8008 as a general-purpose microprocessor, essentially creating the microprocessor industry. Since the 8008 copied the Datapoint 2200, it kept the four status flip-flops.

In 1974, Intel created the 8080 microprocessor, an improvement of the 8008. The 8080 kept the flags from the 8008 and added the auxiliary carry. Moreover, the flags could be accessed as a byte, making the flags appear like a register. The 8080 defined specific values for the unused flag bits. These decisions have persisted into the modern x86 architecture.13

Structure of the 8080 flags when saved on the stack. From 8080 Assembly Language Programming Manual.

The 8086 was designed to be backward compatible with the 8080, at least at the assembly language level.14 To support this, the 8086 kept the 8080's flag byte unchanged, putting additional flags in the high byte, as shown below. Thus, the selection, layout, and behavior of the 8086 flags (and thus x86) are largely historical accidents going back to the 8080, 8008, and Datapoint 2200 processors.

Arrangement of the 8086 flags in the word. The shaded flags match the 8080/8085 flags. Diagram from iAPX 86/88 Users Manual fig 2.10.

Conclusions

You might expect flags to be a simple part of a CPU, but the 8086's flags are surprisingly complex. About 1/3 of the ALU is devoted to flag computation and storage. Each flag is implemented with completely different circuitry. The 8086 is a CISC processor (Complex Instruction Set Computer), where the instruction set is designed to be powerful and to minimize the gap between machine language and high-level languages.15 This can be seen in the implementation of the flags, which are full of special cases to increase their utility with different instructions.16 In contrast, a RISC (Reduced Instruction Set Computer) simplifies the instruction set to make each instruction faster. This philosophy also affects the flags: for example, the ARM-1 processor (1985) has four arithmetic flags compared to the 8086's six flags. The behavior of the ARM flags is simpler, and the ARM doesn't deal with byte versus word operations. It also doesn't have instructions like decimal adjust that have complex flag behavior. This simplicity is reflected in the simpler and more regular circuitry of the ARM-1 flags, which I reverse-engineered here.

I've written multiple posts on the 8086 so far and plan to continue reverse-engineering the 8086 die, so follow me on Twitter @kenshirriff or RSS for updates. I've also started experimenting with Mastodon recently as @kenshirriff@oldbytes.space.

Notes and references

  1. Strictly speaking, the Intel 8088 launched the PC revolution as it was the processor in the first IBM PC. But internally the 8086 and 8088 are almost identical, so everything in this post applies to the 8088 as well. (The 8088 has an 8-bit bus compared to the 8086's 16-bit bus. As a result, the bus interface circuitry is different. The 8088 has a 4-byte prefetch queue compared to the 8086's 6-byte prefetch queue. And there are a few microcode changes. Apart from these changes, the dies are essentially identical.) 

  2. Since BCD arithmetic is performed using the binary addition and subtraction instructions, an adjustment may be required. For instance, consider adding 19 + 18 = 37 using BCD: 0x19 + 0x18 = 0x31 rather than the desired 0x37. Adding an adjustment factor of 6 yields the desired answer, taking into account the carry from the low digit. The BCD adjustment instructions are DAA (Decimal Adjust after Addition), DAS (Decimal Adjust after Subtraction), AAA (ASCII Adjust after Addition), and AAS (ASCII Adjust after Subtraction). (I wrote about the DAA instruction in detail here.) 

  3. Unlike other arithmetic and logic instructions, the NOT instruction does not change any of the flags. The designer of the 8086 states that this was an oversight. (See page 98 in "The 8086/8088 Primer".) Looking at the microcode shows that the microcode F bit was omitted in the implementation of NOT. I think that this "goof" also prevented the NOT and NEG microcode from being merged, wasting four micro-instructions. 

  4. Most of the latches in the 8086 have two pass transistors: one driven by clk and one driven by clk'. This makes the circuit function like an edge-triggered flip-flop, only transitioning on the edge of the clock. The flag latches, on the other hand, gate the multiplexer input controls so they are only active when clk is high. Thus, the two inverters are connected alternately during clk and clk'

  5. The connection from flag outputs to the ALU bus is more complex than simple pass transistors. For performance reasons, the ALU bus is charged high during the clock' phase of the clock. Then, any bits that should be 0 are pulled low during the high clock phase. (The motivation is that NMOS transistors can pull a line low faster than they can pull it high.) To support this, each inverted flag output drives a transistor connected to ground, and the output from this transistor is connected to the ALU bus through a pass transistor. 

  6. The 8080 processor has four rotate instructions, while the 8086 adds three shift instructions. The new shift instructions update the arithmetic flags according to the result. However, the 8080's rotate instructions only updated the carry flag, leaving the other flags unchanged. For backward compatibility, the 8086 preserves this behavior for the rotate instructions, not modifying the other flags inherited from the 8080. Since the 8086's overflow flag didn't exist in the 8080, the 8086 can update the overflow flag for rotate instructions without breaking compatibility, even though it's not obvious what "overflow" means in the case of a rotate. (The 8080's behavior of only updating the carry flag for shifts dates back to the Datapoint 2200.)

    Curiously, The 8086 Family User's Manual shows SHR and SAL/SHL as updating just the overflow and carry flags (pages 2-265 and 2-66), contradicting the text (page 2-39). 

  7. The 8086 implements the parity computation by XORing pairs of bits. The pairs are then combined in sequence: (((bit0⊕bit1)⊕(bit2⊕bit3))⊕(bit4⊕bit5))⊕(bit6⊕bit7). Combining the terms in a tree-like arrangement would have saved gate delays, but apparently wasn't necessary. 

  8. The parity flag only examines the low byte of the result, even for a 16-bit operation, making it unusual compared to the other flags. The motivation is probably that the parity flag was only supported for backward compatibility and not considered particularly useful. Even in modern 64-bit Intel processors, the parity flag only examines the least-significant byte. 

  9. The decimal adjust circuitry uses a gate circuit to test if the lower digit is greater than nine. Specifically, it uses the expression: bit3•(bit2+bit1). In other words, if the ALU input has 8 and either 4 or 2 or both.

    The logic to determine if the upper digit needs a correction is more complex: carry+bit7•(bit6+bit5+bit4•af9), where af9 indicates that AF is not set and the lower digit is more than 9. This tests if the upper digit is greater than nine, but also handles the case where the upper digit is 9 and adjusting the lower digit will increase it.

    The DAA instruction on the 8086 has slightly different behavior from the DAA instruction on x86 in a few cases. For example, 0x9a + 0x02 = 0x9c; DAA converts this to 0xa2 on the 8086, but 0x02 on x86. Since 0x9a is not a valid BCD value, this is technically an undefined case, but it is interesting that there is a difference. Perhaps this behavior was inherited from the 8080; if anyone has an 8080 available, perhaps they can test this case. (I wrote about the x86 DAA behavior in detail here.) 

  10. One special case is that the increment and decrement instructions affect all the arithmetic flags except for carry. This is implemented by blocking the carry-flag update for an increment or decrement instruction. The motivation is to allow a loop counter to be updated without disturbing the carry flag. This behavior was first implemented in the 8008 processor. 

  11. The book "Computer Architecture", Blaauw and Brooks, contains a detailed discussion of different approaches for condition flags, pages 353-358. Some processors, such as the IBM 704 (1954), don't explicitly store flags, but test and branch in a single instruction. Storing conditions as 1-bit values (as in the 8086) is called an "indicator". An alternative is the "condition code", which encodes mutually-exclusive condition values into a smaller number of bits, as in System/360 (1964). For example, addition stores four conditions (zero, negative, positive, or overflow) encoded into two bits, rather than separate zero, sign, and overflow flags. Other alternatives are where to store the conditions: in "working store" (i.e. a regular register), in memory, in a unique indicator (i.e. a flags register), or in a shared condition register (e.g. System/360). The point is that while the typical microprocessor approach of storing flags in a flag register may seem natural, many alternatives have been tried in different systems. 

  12. For subtraction, a borrow flag can be defined in different ways. The Datapoint 2200 and descendants store the borrow bit in the carry flag. This approach was also used by the 6800 and 68000 processors. The alternative is to store the complement of the borrow bit in the carry flag, since this maps more naturally onto twos-complement arithmetic. This approach was used by the IBM System/360 mainframe and the 6502 and ARM processors. 

  13. The positions of the 8080's flags in the byte are not arbitrary but have some logic. When performing multi-byte additions, the carry flag gets added into the low bit of the next byte, so it makes sense to put the carry flag in bit 0. Likewise, the auxiliary carry flag is in bit 4, since that is the bit it is added into. The sign bit is bit 7 of the result, so it makes sense to put the sign bit in bit 7 of the flags. As for the zero and parity flags, and the values of the unused flag bits, I don't have an explanation for those. 

  14. The 8086 was designed to provide an upgrade path from the 8080, so it inherited many instructions and architectural features along with the change from 8 bits to 16 bits. The two processors were not binary compatible or even directly compatible at the assembly code level. Instead, assembly code for the 8080 could be converted to 8086 assembly via a program called CONV-86, which would usually require manual cleanup afterward. Many of the early programs for the 8086 were conversions of 8080 programs. 

  15. The terms RISC and CISC are vague, and there are many different definitions. I'm not looking to debate definitions. 

  16. The motivation behind how 8086 instructions affect the flags is given in The 8086/8088 Primer, by Stephen Morse, the creator of the 8086 instruction set. It turns out that there are good reasons for the flags to have special-case behavior for various instructions. 

Understanding the x86's Decimal Adjust after Addition (DAA) instruction

I've been looking at the DAA machine instruction on x86 processors, a special instruction for binary-coded decimal arithmetic. Intel's manuals document each instruction in detail, but the DAA description doesn't make much sense. I ran an extensive assembly-language test of DAA on a real machine to determine exactly how the instruction behaves. In this blog post, I explain how the instruction works, in case anyone wants a better understanding.

The DAA instruction

The DAA (Decimal Adjust AL1 after Addition) instruction is designed for use with packed BCD (Binary-Coded Decimal) numbers. The idea behind BCD is to store decimal numbers in groups of four bits, with each group encoding a digit 0-9 in binary. You can fit two decimal digits in a byte; this format is called packed BCD. For instance, the decimal number 23 would be stored as hex 0x23 (which turns out to be decimal 35).

The 8086 doesn't implement BCD addition directly. Instead, you use regular binary addition and then DAA fixes the result. For instance, suppose you're adding decimal 23 and 45. In BCD these are 0x23 and 0x45 with the binary sum 0x68, so everything seems straightforward. But there's a problem with carries. Suppose instead you add decimal 26 and 45 in BCD. Now, 0x26 + 0x45 = 0x6b, which doesn't match the desired answer of 0x71. The problem is that a 4-bit value has a carry at 16, while a decimal digit has a carry at 10. The solution is to add a correction factor of the difference, 6, to get the correct BCD result: 0x6b + 6 = 0x71.

Thus, if a sum has a digit greater than 9, it needs to be corrected by adding 6. However, there's another problem. Consider adding decimal 28 and decimal 49 in BCD: 0x28 + 0x49 = 0x71. Although this looks like a valid BCD result, it is 6 short of the correct answer, 77, and needs a correction factor. The problem is that the carry out of the low digit caused the value to wrap around. The solution is for the processor to track the carry out of the low digit, and add a correction if a carry happens. This flag is usually called a half-carry, although Intel calls it the Auxiliary Carry Flag.2

For a packed BCD value, a similar correction must be done for the upper digit. This is accomplished by the DAA (Decimal Adjust AL after Addition) instruction. Thus, to add a packed BCD value, you perform an ADD instruction followed by a DAA instruction.
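
To make the add-then-correct sequence concrete, here is a small Python sketch of the idea (my own illustration, not the processor's exact algorithm), reproducing the examples above:

# Add two packed BCD bytes with ordinary binary addition, then apply the
# corrections described above: add 6 if the low digit overflowed or wrapped,
# and add 0x60 if the high digit needs the same fix.
def bcd_add(a, b):
    raw = a + b                                     # ordinary binary addition
    half_carry = ((a & 0x0f) + (b & 0x0f)) > 0x0f   # carry out of the low digit
    result = raw
    if (raw & 0x0f) > 9 or half_carry:
        result += 0x06
    if raw > 0x99:                                  # high digit too big, or carry out of the byte
        result += 0x60
    return result & 0xff

assert bcd_add(0x23, 0x45) == 0x68   # no correction needed
assert bcd_add(0x26, 0x45) == 0x71   # low digit exceeded 9
assert bcd_add(0x28, 0x49) == 0x77   # half-carry wrapped the low digit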

Intel's explanation

The Intel Software Developer's Manuals. These are from 2004, back when Intel would send out manuals on request.

The Intel 64 and IA-32 Architectures Software Developer Manuals provide detailed pseudocode specifying exactly what each machine instruction does. However, in the case of DAA, the pseudocode is confusing and the description is ambiguous. To verify the operation of the DAA instruction on actual hardware, I wrote a short assembly program to perform DAA on all input values (0-255) and all four combinations of the carry and auxiliary flags.3 I tested the pseudocode against this test output. I determined that Intel's description is technically correct, but can be significantly simplified.

The manual gives the following pseudocode; I have added explanatory comments at the end of several lines.

IF 64-Bit Mode
  THEN
    #UD;  Undefined opcode in 64-bit mode
  ELSE
    old_AL := AL; AL holds input value
    old_CF := CF; CF is the carry flag
    CF := 0;
    IF (((AL AND 0FH) > 9) or AF = 1) AF is the auxiliary flag
      THEN
        AL := AL + 6;
        CF := old_CF or (Carry from AL := AL + 6); dead code
        AF := 1;
      ELSE
        AF := 0;
      FI;
    IF ((old_AL > 99H) or (old_CF = 1))
      THEN
        AL := AL + 60H;
        CF := 1;
      ELSE
        CF := 0;
    FI;
FI;

Removing the unnecessary code yields the version below, which makes it much clearer what is going on. The low digit is corrected if it exceeds 9 or if the auxiliary flag is set on entry. The high digit is corrected if it exceeds 9 or if the carry flag is set on entry.4 At completion, the auxiliary and carry flags are set if an adjustment happened to the corresponding digit.5 (Because these flags force a correction, the operation never clears them if they were set at entry.)

IF 64-Bit Mode
  THEN
    #UD;
  ELSE
    old_AL := AL;
    IF (((AL AND 0FH) > 9) or AF = 1)
      THEN
        AL := AL + 6;
        AF := 1;
      FI;
    IF ((old_AL > 99H) or CF = 1)
      THEN
        AL := AL + 60H;
        CF := 1;
    FI;
FI;
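
For anyone who wants to experiment, here is a direct Python transcription of the simplified pseudocode (a sketch for exploration, not a replacement for testing on real hardware):

# Direct transcription of the simplified DAA pseudocode above. Takes AL and
# the incoming AF and CF flags, and returns the adjusted AL and updated flags.
def daa(al, af, cf):
    old_al = al
    if (al & 0x0f) > 9 or af:
        al = (al + 0x06) & 0xff   # correct the low digit
        af = 1
    if old_al > 0x99 or cf:
        al = (al + 0x60) & 0xff   # correct the high digit
        cf = 1
    return al, af, cf

# The worked examples from earlier: 0x26 + 0x45 and 0x28 + 0x49.
assert daa(0x6b, af=0, cf=0) == (0x71, 1, 0)
assert daa(0x71, af=1, cf=0) == (0x77, 1, 0)

# Multi-byte packed BCD addition chains the carry from one byte into the next,
# simulating the AF and CF that ADD/ADC would produce: 2875 + 4966 = 7841.
lo_sum = 0x75 + 0x66
lo_af = int(((0x75 & 0x0f) + (0x66 & 0x0f)) > 0x0f)
lo, _, carry = daa(lo_sum & 0xff, lo_af, int(lo_sum > 0xff))
hi_sum = 0x28 + 0x49 + carry
hi_af = int(((0x28 & 0x0f) + (0x49 & 0x0f) + carry) > 0x0f)
hi, _, _ = daa(hi_sum & 0xff, hi_af, int(hi_sum > 0xff))
assert (hi, lo) == (0x78, 0x41)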

History of BCD

The use of binary-coded decimal may seem strange from the modern perspective, but it makes more sense looking at some history. In 1928, IBM introduced the 80-column punch card, which became very popular for business data processing. These cards store one decimal digit per column, with each digit indicated by a single hole in one of rows 0 through 9.6 Even before digital computers, businesses could perform fairly complex operations on punch-card data using electromechanical equipment such as sorters and collators. Tabulators, programmed by wiring panels, performed arithmetic on punch cards using electromechanical counting wheels and printed business reports.

Example card, from IBM 29 Card Punch Reference Manual.

These calculations were performed in decimal. Decimal fields were read off punch cards, added with decimal counting wheels, and printed as decimal digits. Numbers were not represented in binary, or even binary-coded decimal. Instead, digits were represented by the position of the hole in the card, which controlled the timing of pulses inside the machinery. These pulses rotated counting wheels, which stored their totals as angular rotations, a bit like an odometer.

A counter unit from an IBM accounting machine (tabulator). The two wheels held two digits. The electromagnets (white) engaged and disengaged the clutch so the wheel would advance the desired number of positions.

With the rise of electronic digital computers in the 1950s, you might expect binary to take over. Scientific computers, such as the IBM 701 (1952), used binary for their calculations. However, business computers such as the IBM 702 (1955) and the IBM 1401 (1959) operated on decimal digits, typically stored as binary-coded decimal in 6-bit characters. Unlike the scientific computers, these business computers performed arithmetic in decimal.

Decimal arithmetic had several advantages. First, it was compatible with the decimal fields stored in punch cards. Second, it avoided time-consuming conversions between binary and decimal, a benefit for applications that were primarily input and output rather than computation. Finally, it avoided the rounding and truncation problems that can happen if you use floating-point numbers for accounting calculations.

The importance of decimal arithmetic to business can be seen in its influence on the COBOL programming language, which was very popular for business applications. A data field was defined with the PICTURE clause, which specified exactly how many decimal digits the field contained. For instance, PICTURE S999V99 specified a five-digit number (five 9's) with a sign (S) and an implied decimal point (V). (Binary fields were an optional feature.)

In 1964, IBM introduced the System/360 line of computers, designed for both scientific and business use, the whole 360° of applications. The System/360 architecture was based on 32-bit binary words. But to support business applications, it also provided decimal data formats. Packed decimal gave variable-length decimal fields by putting two binary-coded decimal digits in each byte. A special set of arithmetic instructions supported addition, subtraction, multiplication, and division of decimal values.

The System/360 Model 50 in a datacenter. The console and processor are at the left. An IBM 1442 card reader/punch is behind the IBM 1052 printer-keyboard that the operator is using. At the back, another operator is loading a tape onto an IBM 2401 tape drive. Photo from IBM.

With the introduction of microprocessors, binary-coded decimal remained important. The Intel 4004 microprocessor (1971) was designed for a calculator, so it needed decimal arithmetic, provided by a Decimal Adjust Accumulator (DAA) instruction. Intel carried BCD support into the Intel 8080 (1974),7 which provided an Auxiliary Carry (or half-carry) flag and a DAA instruction. This was the source of the 8086's DAA instruction, since the 8086 was designed to be somewhat compatible with the 8080.8 The Motorola 6800 (1974) had a similar DAA instruction, while the 68000 had several BCD instructions. The MOS 6502 (1975), however, took a more convenient approach: its decimal mode flag automatically performed BCD corrections. This on-the-fly correction approach was patented, which may explain why it didn't appear in other processors.9

The use of BCD in microprocessors was probably motivated by applications that interacted with the user in decimal, from scales to video games. These motivations also applied to microcontrollers. The popular Texas Instruments TMS-1000 (1974) didn't support BCD directly, but it had special-case instructions like A6AAC (Add 6 to accumulator) to make BCD arithmetic easier. The Intel 8051 microcontroller (1980) has a DAA instruction. The Atmel AVR (1997, used in Arduinos) has a half-carry flag to assist with BCD.

Binary-coded decimal has lost popularity in newer microprocessors, probably because the conversion time between binary and decimal is now insignificant. The ill-fated Itanium, for instance, didn't support decimal arithmetic. RISC processors, with their reduced instruction sets, cast aside less-important instructions such as decimal arithmetic; examples are ARM (1985), MIPS (1985), SPARC (1987), PowerPC (1992), and RISC-V (2010). Even Intel's x86 processors are moving away from the DAA instruction; it generates an invalid opcode exception in x86-64 mode. Rather than BCD, IBM's POWER6 processor (2007) supports decimal floating point for business applications that use decimal arithmetic.

Conclusions

The DAA instruction is complicated and confusing as described in Intel documentation. Hopefully the simplified code and explanation in this post make the instruction a bit easier to understand.

Follow me on Twitter @kenshirriff or RSS for updates. I've also started experimenting with Mastodon recently as @kenshirriff@oldbytes.space. I wrote about the 8085's decimal adjust circuitry in this blog post.

Notes and references

  1. The AL register is the low byte of the processor's AX register. The DAA instruction only operates on a byte; there are no 16-bit or 32-bit versions. 

  2. The AAA (ASCII Adjust after Addition) and AAS (ASCII Adjust after Subtraction) instructions perform corrections for unpacked BCD: a single digit per byte. Dealing with a single digit, these instructions are considerably simpler. These operations don't have much to do with ASCII except that they ignore and clear the upper 4 bits. Since ASCII represents the characters 0 through 9 with the values 0x30 through 0x39, ASCII characters can be used as input and the result will be a BCD digit.

    The DAS (Decimal Adjust AL after Subtraction) instruction is similar to DAA except that it applies the correction after subtraction, subtracting the correction. I'm going to focus on DAA in this article since the other instructions are similar. 

  3. My test code and results are on GitHub. The results should be the same on any x86 processor, but I did the test on a Pentium Dual-Core E5300 CPU.

    My DAA test cases include values that couldn't result from a "real" BCD addition. For example, the input 0x04 with AF set can't be generated by adding two BCD numbers because even 9+9 doesn't get the result up to carry + 4. Not surprisingly, DAA doesn't return a valid BCD result in this case, yielding 0x0a. 

  4. You might wonder why the code tests old_AL > 99H, rather than simply checking whether the upper digit exceeds 9. The reason is that correcting the low digit can produce a half-carry that bumps the upper digit over 9. This half-carry can only happen if the lower digit is greater than nine, and it only pushes the upper digit too high if that digit is already 9. Together with the plain case of an upper digit greater than 9, both situations are captured by the single test old_AL > 0x99. For example, adding 0x56 and 0x45 gives a raw sum of 0x9b; correcting the lower digit yields 0xa1, so the upper digit now needs correcting as well, even though the upper digit of 0x9b was only 9.

  5. The carry flag value produced by DAA may seem arbitrary, but it is the value necessary for performing multi-byte additions, where the carry from one addition is added to the next addition. (This is just like handling carries when performing long addition by hand.) Specifically, you want the carry set if the result has a carry-out (result > 99). This happens if the original addition produces a carry, or if the DAA operation generates a result > 99. The latter case corresponds to an adjustment of the upper digit. 

  6. Punch cards were introduced in the late 1800s for the US Census and went through various formats until most companies standardized on the 80-column card. Support for alphanumeric values was added around 1932, but I'm not going to go into that. 

  7. The earlier Intel 8008 microprocessor didn't have decimal arithmetic support because its instruction set and architecture copied the Datapoint 2200 desktop computer (1971), which did not provide decimal arithmetic. Since the Datapoint 2200 was designed as a "programmable terminal", it primarily dealt with characters and BCD was irrelevant to it. 

  8. The 8086 was designed to provide an upgrade path from the 8080, so it inherited many instructions and architectural features along with the change from 8 bits to 16 bits. The two processors were not binary compatible or even directly compatible at the assembly code level. Instead, assembly code for the 8080 could be converted to 8086 assembly via a program called CONV-86, which would usually require manual cleanup afterward. Many of the early programs for the 8086 were conversions of 8080 programs.

  9. The Ricoh 2A03 (1983) was a microprocessor created for the NES video game system. It was a clone of the 6502 except that it omitted the decimal adjust feature, presumably to avoid patent infringement.