# EDN Access, 12.05.96: A minus B = A + NOT(B) + 1 (Part 1)

Design Features | December 5, 1996

## A minus B = A + NOT(B) + 1 (Part 1)

### Clive "Max" Maxfield, Intergraph Computer Systems

**Upon examining an ALU, it is surprising to find a 1's complementor glaring balefully at you, instead of the 2's complementor you expected to be there.**

Following the joys of the "Microprocessor 101" courses you endured in college, most of you are reasonably confident that you understand the way in which computers add and subtract binary numbers. Thereafter, you bask in the rosy glow of knowing that assembly-level instructions, such as add without carry (ADD), add with carry (ADDC), subtract without carry/borrow (SUB), and subtract with carry/borrow (SUBC), no longer scare you.

The years race by as if they are in a desperate hurry to get somewhere, until you arrive at that grim and fateful day when someone asks you to replicate an adder/subtractor function, either as a chunk of logic for a design or possibly using a hardware description language to create a model for simulation and synthesis. The first port of call may well be to blow the cobwebs off your microprocessor course notes, only to find that there's more to this than you seem to recall. The description of the ADD instruction, which doesn't require a carry-in, looks simple enough and says something such as: a[7:0] plus b[7:0]=a[7:0]+b[7:0] (assuming an 8-bit datapath). It's when you turn your attention to the SUB, whose definition may be along the lines of a[7:0] minus b[7:0]=a[7:0]+NOT(b[7:0])+1, that you realize that perhaps you should have paid just a tad more attention in Professor Gonzo Dribbler's Monday morning lectures—possibly to the extent of actually staying awake.

Light begins to dawn after a few moments, as you recall that you can perform these calculations using 2's complement arithmetic. So, it comes as something of a shock when you peer at the block diagram of the ALU looking for a 2's complementor but instead find a humble 1's complementor glaring back at you as though it has every right to be there. "Holy socks, Batman, how can this be?" Obviously, you need to go back to first principles in order to figure this out…

#### Complement techniques

There are two complement forms associated with every number system: the "radix complement" and the "diminished radix complement," where the term "radix" refers to the base of the number system. Under the decimal (base-10) system, the radix complement is also known as the "10's complement," and the diminished radix complement is known as the "9's complement." First, consider a decimal subtraction you can perform using the 9's complement technique—a process also known as "casting out the nines" (Figure 1).

The standard way to perform the operation is to subtract the subtrahend (283) from the minuend (647), which, as in Figure 1, may require the use of one or more borrow operations. To perform the equivalent operation using a 9's complement technique, you first subtract each of the digits of the subtrahend from 9. You add the resulting 9's complement value to the minuend and then perform an "end-around-carry" operation. The advantage of the 9's complement technique is that it is never necessary to perform a borrow operation (hence, its attraction to those of limited numerical ability in the days of yore).
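The casting-out-nines recipe above can be sketched in a few lines of Python (the helper names are mine, not from the article; a fixed three-digit width is assumed):

```python
def nines_complement(n, digits=3):
    """Subtract each decimal digit of n from 9 (no borrows ever needed)."""
    return int("".join(str(9 - int(d)) for d in str(n).zfill(digits)))

def subtract_via_nines(minuend, subtrahend, digits=3):
    """Add the 9's complement of the subtrahend, then end-around carry."""
    total = minuend + nines_complement(subtrahend, digits)
    carry, remainder = divmod(total, 10 ** digits)
    return remainder + carry  # end-around carry folds the carry-out back in

print(subtract_via_nines(647, 283))  # 364, matching 647 - 283
```

Tracing the worked example: the 9's complement of 283 is 716, the sum 647 + 716 = 1363, and the end-around carry turns 363 into 364.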

Now, consider the same subtraction performed using the 10's complement technique (Figure 2). The advantage of the 10's complement is that it is not necessary to perform an end-around carry, because you simply drop any carry-out resulting from the addition of the most-significant digits from the final result. The disadvantage is that, during creation of the 10's complement, it is necessary to perform a borrow operation for every nonzero digit in the subtrahend. (You can overcome this problem by first taking the 9's complement of the subtrahend, adding one to the result, and then performing the remaining operations, as in the example for the 10's complement.)
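The 10's complement procedure can be sketched the same way (again, illustrative helper names and an assumed three-digit width):

```python
def tens_complement(n, digits=3):
    """The radix complement: subtract n from 10^digits."""
    return (10 ** digits) - n

def subtract_via_tens(minuend, subtrahend, digits=3):
    """Add the 10's complement and simply drop any carry-out."""
    total = minuend + tens_complement(subtrahend, digits)
    return total % (10 ** digits)  # dropping the carry == taking mod 10^digits

print(subtract_via_tens(647, 283))  # 364, with no end-around carry needed
```

Here the 10's complement of 283 is 717, and dropping the carry from 647 + 717 = 1364 leaves 364.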

You can employ similar techniques with any number system, including binary (base-2), in which the radix complement is known as the 2's complement and the diminished radix complement is known as the 1's complement. First, consider a binary subtraction that you perform using the 1's complement technique on 8-bit unsigned binary values, where you can use such values to represent positive integers in the range 0₁₀ to 255₁₀ (Figure 3).

The traditional way to perform the operation is to subtract the subtrahend (00011110₂) from the minuend (00111001₂), which may require the use of one or more borrow operations. (Don't beat your head against a wall trying to understand the standard binary subtraction—simply take my word as to the result.) To perform the equivalent operation in 1's complement, first subtract each of the digits of the subtrahend from a 1. Add the resulting 1's complement value to the minuend and then perform an end-around-carry operation. As for the 9's complement process, the advantage of the 1's complement technique is that it is never necessary to perform a borrow operation. In fact, it isn't even necessary to perform a subtraction operation because you can generate the 1's complement of a binary number simply by inverting all of its bits, that is, by exchanging all the 0s with 1s and vice versa. This means that, even if you stopped here, you already know how to perform a simple binary subtraction using only inversion and addition without any actual subtraction. Now, you can perform the same binary subtraction using the 2's complement technique (Figure 4).
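Because the 1's complement is just a bitwise inversion, the whole procedure reduces to an XOR, an add, and an end-around carry. A minimal Python sketch using the 8-bit values from Figure 3 (helper names are mine):

```python
def ones_complement(value, bits=8):
    """Invert every bit of an unsigned value: no subtraction needed."""
    return value ^ ((1 << bits) - 1)

def subtract_via_ones(minuend, subtrahend, bits=8):
    """Add the 1's complement of the subtrahend, then end-around carry."""
    total = minuend + ones_complement(subtrahend, bits)
    carry = total >> bits                        # carry-out of the MSB...
    return (total & ((1 << bits) - 1)) + carry   # ...is added back in

a, b = 0b00111001, 0b00011110                    # 57 and 30, as in Figure 3
print(format(subtract_via_ones(a, b), "08b"))    # 00011011, i.e. 27
```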

As with the 10's complement technique, the advantage of the 2's complement is that it is not necessary to perform an end-around carry, because you simply drop any carry-out resulting from the addition of the two MSBs from the final result. The disadvantage is that, during the creation of the 2's complement, it is necessary to perform a borrow operation for every nonzero digit in the subtrahend. You can overcome this problem by first taking the 1's complement of the subtrahend, adding one to the result, and then performing the remaining operations, as in the example for the 2's complement.

As fate would have it, there is also a short-cut available to generate the 2's complement of a binary number. Starting with the LSB of the value to be complemented, directly copy each bit up to and including the first 1, then invert the remaining bits (Figure 5).
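Both the invert-and-add-one method and the copy-up-to-the-first-1 shortcut are easy to check mechanically. A sketch under an assumed 8-bit width (function names are illustrative):

```python
def twos_complement(value, bits=8):
    """The 1's complement plus one, masked back to the word width."""
    mask = (1 << bits) - 1
    return ((value ^ mask) + 1) & mask

def twos_complement_shortcut(value, bits=8):
    """Copy bits from the LSB up to and including the first 1; invert the rest."""
    result, seen_one = 0, False
    for i in range(bits):
        bit = (value >> i) & 1
        result |= (bit ^ 1 if seen_one else bit) << i
        if bit:
            seen_one = True
    return result

def subtract_via_twos(minuend, subtrahend, bits=8):
    """Add the 2's complement and drop any carry-out."""
    return (minuend + twos_complement(subtrahend, bits)) & ((1 << bits) - 1)

a, b = 0b00111001, 0b00011110                             # 57 and 30 again
print(format(subtract_via_twos(a, b), "08b"))             # 00011011, i.e. 27
print(twos_complement(b) == twos_complement_shortcut(b))  # True
```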

Unfortunately, both the 1's and 2's complement techniques return incorrect results if you use unsigned binary representations and if you subtract a larger from a smaller value; that is, for these techniques to work, the final result must be greater than or equal to zero. The reason for this is that subtracting a larger number from a smaller number results in a negative value, but you are using unsigned binary numbers, which, by definition, can only be used to represent positive values. It is impractical to only ever perform calculations that have positive results, so you must have some way of representing negative values. One solution is to use signed binary numbers.
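To see the failure mode just described, try subtracting the larger value from the smaller one under an unsigned reading (a small illustrative sketch, not from the article):

```python
def subtract_unsigned(minuend, subtrahend, bits=8):
    """2's-complement subtraction, with the result read back as UNSIGNED."""
    mask = (1 << bits) - 1
    return (minuend + ((subtrahend ^ mask) + 1)) & mask

print(subtract_unsigned(57, 30))  # 27  -- correct
print(subtract_unsigned(30, 57))  # 229 -- nonsense as an unsigned value
```

The bit pattern behind that 229 is 11100101, which is in fact the signed binary form of -27, which is exactly where the next section picks up.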

#### Signed binary numbers

In standard decimal arithmetic, negative numbers are typically represented in "sign-magnitude form" (using a minus sign as a prefix). For example, you represent a value of plus twenty-seven as +27 (or just 27 for short), and a value of minus twenty-seven is indicated as –27 (where the "+" or "–" is the sign and the "27" is the magnitude, hence, the "sign-magnitude" designation).

You can replicate the sign-magnitude form in binary by simply using the MSB to represent the sign of the number (0=positive, 1=negative). However, computers rarely employ the sign-magnitude form but instead use a format known as signed binary. You can use signed binary numbers to represent both positive and negative values, and they do this in a rather cunning way. To illustrate the differences between the sign-magnitude and signed binary formats, consider the decimal sign-magnitude representations of plus and minus twenty-seven, along with the same values represented as signed binary numbers (Figure 6).

Unlike the decimal values, the bit patterns of the two binary numbers are very different because you form the signed binary equivalent of –27₁₀ by combining –128₁₀ with +101₁₀. That is, the LSBs continue to represent the same positive quantities as for unsigned binary numbers, and you use the MSB to represent an actual negative quantity rather than a simple plus or minus. In the case of a signed 8-bit number, a "1" in the sign bit represents –2⁷ (= –128₁₀), and you use the remaining bits to represent positive values in the range 0₁₀ through +127₁₀. Thus, you can use an 8-bit signed binary number to represent values in the range –128₁₀ through +127₁₀.
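The claim that the MSB carries an actual weight of -128 (rather than acting as a mere sign flag) is easy to verify (an illustrative sketch; the function name is mine):

```python
def signed_value(pattern, bits=8):
    """Interpret a bit pattern with the MSB weighted at -(2**(bits-1))
    and the remaining bits at their usual positive weights."""
    msb = (pattern >> (bits - 1)) & 1
    rest = pattern & ((1 << (bits - 1)) - 1)
    return -msb * (1 << (bits - 1)) + rest

print(signed_value(0b11100101))  # -27, i.e. -128 + 101
print(signed_value(0b00011011))  # 27
```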

At first glance, signed binary numbers appear to be an outrageously complex solution to a fairly simple problem. In addition to representing an asymmetrical range of negative and positive numbers (–128₁₀ through +127₁₀, in the case of an 8-bit value), the way in which you form these values is, to put it mildly, alien to the way you usually think of numbers. Why then, you may ask, don't you use the MSB to represent the sign of the number and leave it at that?

Well, as you may suspect, there's reason behind the madness. First, if you use the MSB to represent only the sign of the number, then such numbers accommodate both +0 and –0 values. Although this may not seem like a particularly hairy stumbling block, computers are essentially dumb, and having positive and negative representations of zero introduces complications in recognizing whether a given value is less than zero or equal to zero (or whether +0 is greater than or equal to –0). But, there's a lot more to signed binary numbers than this. (Now, pay attention, because this is the clever part.) Closer investigation of the two binary values in Figure 6 reveals that each bit pattern is, in fact, the 2's complement of the other. To put it another way, taking the 2's complement of a positive-signed binary value returns its negative equivalent and vice versa (the only problem being that, because of the asymmetrical range, you cannot negate the largest negative number; for example, in an 8-bit number, you cannot negate –128₁₀ to get +128₁₀ because the maximum positive value supported is +127₁₀).
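This negation-by-2's-complement property, including the one value it cannot negate, can be demonstrated directly (a sketch under an assumed 8-bit width):

```python
def negate(value, bits=8):
    """Negate a signed binary value by taking its 2's complement."""
    mask = (1 << bits) - 1
    return ((value ^ mask) + 1) & mask

plus_27 = 0b00011011
minus_27 = negate(plus_27)
print(format(minus_27, "08b"))            # 11100101
print(format(negate(minus_27), "08b"))    # 00011011 -- negation undoes itself
print(format(negate(0b10000000), "08b"))  # 10000000 -- -128 cannot be negated
```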

The result of this rigmarole is that using signed binary numbers (which are also commonly referred to as "2's-complement numbers") greatly reduces the complexity of the operations within a computer. To illustrate why this is so, consider one of the simplest operations—addition. You can compare the additions of positive and negative decimal values in sign-magnitude form with their signed binary counterparts (Figure 7).

First, examine the standard decimal calculations. The calculation at the top of Figure 7 is easy to understand because it's a straightforward addition of two positive values. However, even though we are familiar with decimal addition, the other three problems aren't quite as simple because you must decide exactly what to do with the negative values. In comparison, the signed binary calculations on the right side of Figure 7 are all simple additions, irrespective of whether the individual values are positive or negative.

If you force a computer to use a binary version of the sign-magnitude form to perform additions, then, instead of performing its calculations effortlessly and quickly, it has to perform a painful sequence of operations. First, the computer has to compare the signs of the two numbers. If the signs are the same, then the computer simply adds the two values (excluding the sign bits themselves) because, in this case, the result always has the same sign as the original numbers. However, if the signs are different, the computer has to subtract the smaller value from the larger value and then ensure that the correct sign is appended to the result.

As well as being time-consuming, performing these operations requires a substantial number of logic gates. Thus, the advantage of the signed binary format for addition operations is apparent: You can always directly add together signed binary operations to provide the correct result in a single operation, irrespective of whether they represent positive or negative values. That is, you perform the operations a+b, a+(–b), (–a)+b, and (–a)+(–b) in exactly the same way, by simply adding the two values together. This results in adders that are fast because you can construct them using a minimum number of logic gates.

Now consider subtraction. You know that 10–3=7 in decimal arithmetic and that you can obtain the same result by negating the right-hand value and inverting the operation; that is, 10+(–3)=7. This technique is also true for signed binary arithmetic, although you perform the negation of the right-hand value by taking its 2's complement rather than by changing its sign. For example, consider a generic signed binary subtraction represented by a–b. Generating the 2's complement of b results in –b, allowing you to perform the required operation: a+(–b). Similarly, you perform the equivalent operations a–(–b), (–a)–b, and (–a)–(–b) in exactly the same way, by simply taking the 2's complement of b and adding the result to a, irrespective of whether a or b represents a positive or negative value. This means that computers do not require two different blocks of logic (one to add numbers and another to subtract them); instead, they only require an adder and some logic to generate the 2's complement of a number, which tends to make life a lot easier.
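The arrangement just described, one shared adder plus a block that generates the 2's complement, might be sketched as follows (illustrative only; values are 8-bit patterns, so negative operands appear in their signed binary form):

```python
MASK = 0xFF  # an assumed 8-bit datapath

def twos_comp(value):
    """The 2's complementor block: invert and add one."""
    return ((value ^ MASK) + 1) & MASK

def add(a, b):
    """The one and only adder; any carry-out is dropped."""
    return (a + b) & MASK

def sub(a, b):
    """a - b performed as a + (-b): the shared adder does all the work."""
    return add(a, twos_comp(b))

print(format(sub(0b00001010, 0b00000011), "08b"))  # 00000111:  10 - 3 = 7
print(format(sub(0b00000111, 0b11111101), "08b"))  # 00001010:  7 - (-3) = 10
```

The same `sub` routine handles every sign combination, which is exactly the hardware saving the paragraph describes.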

#### So, where's the 2's complementor?

Early digital computers were often based on 1's complement arithmetic for a variety of reasons, including the fact that 2's complement techniques were not well understood. However, designers quickly migrated to the 2's complement approach because of the inherent advantages it provides.

Unfortunately, the problem mentioned at the beginning of this article remains: When you examine a computer's ALU, there isn't a 2's complementor in sight; instead, a humble 1's complementor glares balefully at you from its nest of logic gates. So, where is the 2's complementor? Is this part of some nefarious scheme to deprive you of the benefits of 2's complement arithmetic? Fear not my braves, because all will be revealed in the next exciting installment…

**Author's biography:** Clive "Max" Maxfield is a member of the technical staff at Intergraph Computer Systems (Huntsville, AL), (800) 763-0242, where he gets to play with the company's high-performance graphics workstations. In addition to numerous technical articles and papers, Maxfield is also the author of *Bebop to the Boolean Boogie: An Unconventional Guide to Electronics* (ISBN 1-878707-22-1). To order, phone (800) 247-6553. You can reach Maxfield via e-mail at crmaxfie@ingr.com.
