Chapter 11

Booth’s Algorithm

In this chapter, we look not at a hardware component or technique, as we have previously, but instead at an algorithm that has been implemented in hardware to provide a basic function of computers: multiplication. An algorithm is a set of instructions to solve a problem.¹

There is nothing intrinsically computer-related about algorithms. You have followed many algorithms in your lifetime. If you cook, you almost certainly have followed a recipe, which is a kind of algorithm for solving a cooking problem; if you have put furniture or toys together, the instructions were a kind of algorithm. When you learned to multiply multi-digit numbers, when you learned to do long division, and, in large part, when you learned algebraic manipulations, you were learning algorithms.

While algorithms for humans are expressed in natural language (e.g., English, Chinese, French, etc.), algorithms for computers must ultimately be specified in a language that they can understand. In the next chapter, for instance, we will see one kind of language for expressing computer-understandable algorithms, machine language. Algorithms can also be implemented directly by combinational and sequential circuits.

Regardless of their ultimate form, however, algorithms are generally first specified in a natural language or in pseudocode—a specification that is something between a high-level programming language and natural language. This will be the case when discussing the algorithm for multiplication in this chapter.

The multiplication algorithm we will look at here is called Booth's Algorithm, named after Andrew Booth, who created it in 1951. We look at it here for three reasons. First, this will give you an example of an algorithm. Second, it will show you one way a computer can do multiplication, given only the kinds of functional units we have so far seen. And third, Booth's Algorithm is an example of how insights from mathematics can lead to efficiency, albeit at the expense of some increase in complexity.

¹Yes, there's more to it than that—isn't there always?—but that general definition will do for now.

Before discussing Booth's Algorithm, however, we must first digress for a moment to talk about how numbers are represented in computers. The subject is considerably more complex than is presented here; for instance, we do not talk about representing floating point numbers. However, this should be a good introduction to the subject.

11.1 Number Representation

So far, we have seen how to represent numbers in binary, that is, in base 2. However, we have not yet talked in any detail about how to represent numbers in the computer so that they can be stored and operated upon. This may at first seem trivial: just decide how many bits to use to represent the number, and then directly record the binary number in those bits, as shown in Figure 11.1. In essence, this is how it is done for non-negative integers. Note that any extra space in the high-order (most-significant) bits is padded with 0s.

62 (base 10) = 111110 (base 2), stored in 8 bits as:

0 0 1 1 1 1 1 0
(most-significant bit on the left, least-significant bit on the right)

Figure 11.1: Representing a non-negative binary integer as an 8-bit number. The size of the number representation is generally a power of two (big surprise!).

However, the real problem begins when we realize that we also have to represent negative integers. (The even larger problem of how to represent real numbers is beyond the scope of this course.) There are two ways this is done in general: sign-magnitude representation and two's complement representation.

11.1.1 Sign-magnitude representation

Sign-magnitude representation is the simplest way to represent numbers in the computer. As with all representations, we first decide how many bits to use to represent numbers. Typically, a given computer can represent several different sizes, and programming languages may allow even more.²

Numbers that are 8, 16, or 32 bits in length are common.

[ sign bit | magnitude ]

Figure 11.2: Sign-magnitude number representation. The example shown is for an 8-bit number.

Once that is decided, we divide the representation into two fields, as shown in Figure 11.2. The first field is a single bit in width, the high-order (leftmost) bit. This is called the sign bit. The second field is the magnitude, and it consists of the remainder of the bits in the number representation. The magnitude is the absolute value of the number to be represented, in binary. The sign bit is 0 if the number is positive and 1 if it is negative.

For example, suppose that we wish to represent the number 117₁₀ as a sign-magnitude binary number. The binary form of this number is 1110101₂—this will be the magnitude portion of the number. Since it is a positive number, the sign bit will be 0, so the resulting sign-magnitude form will be 01110101.

If we wish to represent -117₁₀, the sign bit will be 1, and so the result is 11110101. Notice that it is important to know that this is a sign-magnitude number, not an unsigned binary number! The unsigned binary number 11110101₂ = 245₁₀.
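To make the encoding concrete, here is a small sketch in Python (our own illustration; the function names are invented, not from the text) that packs and unpacks 8-bit sign-magnitude values:

```python
def to_sign_magnitude(value, bits=8):
    """Encode an integer as a sign-magnitude bit string."""
    magnitude = abs(value)
    assert magnitude < (1 << (bits - 1)), "magnitude does not fit"
    sign = '1' if value < 0 else '0'
    # sign bit, then the magnitude zero-padded to bits-1 places
    return sign + format(magnitude, '0{}b'.format(bits - 1))

def from_sign_magnitude(s):
    """Decode a sign-magnitude bit string back to an integer."""
    magnitude = int(s[1:], 2)
    return -magnitude if s[0] == '1' else magnitude

print(to_sign_magnitude(117))           # 01110101
print(to_sign_magnitude(-117))          # 11110101
print(from_sign_magnitude('11110101'))  # -117
```

Note that the decoder must be told (by convention) that the string is sign-magnitude; as the text observes, the same bits read as an unsigned binary number would be 245.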

Since not all number representations in a computer are the same size, we need to know how to convert a number from one size to another. To convert from a longer number representation to a shorter one, the number that is represented must obviously be representable in the shorter form; otherwise information will be lost. For example, we cannot store the number 65₁₀ (1000001₂) in fewer than 7 magnitude bits.

Assuming that we have enough bits in the new magnitude field to represent the number, the magnitude field can simply be copied, with any extra leading 0s discarded. The sign bit is similarly copied. For example, suppose we have -117₁₀ stored in a 16-bit number and want to copy it into an 8-bit number. This would happen as follows:

1000000001110101 ⇒ 11110101

²As an example of the latter, Lisp allows arbitrarily-large integers.

Going the other way is easy, as well: we simply copy the magnitude field to the larger representation, then pad the field to the left with 0s. Then we copy the sign bit. So 63₁₀, copied from an 8-bit to a 16-bit number, would be:

00111111 ⇒ 0000000000111111

Unfortunately, though it is a simple representation scheme, there are some serious problems with sign-magnitude representation. One is odd: there are two ways to represent 0. For an 8-bit representation, both 00000000 and 10000000 represent 0. This can introduce some complications (and confusion) in arithmetic.

A second problem is that we have to treat the sign bit differently from the magnitude bits, which again complicates things for the computer's arithmetic unit. Or, more generally, the computer has to be aware of whether the number is positive or negative in order to do arithmetic. For example, to add 20₁₀ and -5₁₀, the computer will have to first determine that the second number is negative by looking at its sign bit, then subtract rather than add.

11.1.2 Two’s complement representation

A number representation scheme called two's complement representation takes care of both of these problems with sign-magnitude representation, at the cost of being a bit harder for humans to understand and deal with.

In two's complement representation, a positive number looks exactly like its sign-magnitude form. Negative numbers, however, are represented in what is known as the two's complement of the number's absolute value.

The two's complement of a number is formed by first taking the one's complement. The complement of a binary digit is the negation of that digit. The one's complement of a binary number, then, is formed by taking the complement of each bit in the number. The one's complement of a number is the same length as the number. So, for example, the one's complement of the 8-bit binary number 00111111 (63₁₀) is 11000000.

The two’s complement of the number is formed by adding 1 to the one’scomplement. So the two’s complement of 00111111 would be 1+11000000 =11000001. One way of thinking about two’s complement is that a numberplus its two’s complement add up to 0; so in that sense, the two numbersare complements of each other. For example, the two’s complement of 0110

160

is 1010; adding them gives 10000, which if we truncate to the original 4 bitsis 0000.
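The two steps (complement every bit, then add 1, truncating to the original width) can be sketched in Python as an illustrative helper of our own, working on bit strings:

```python
def twos_complement(bit_string):
    """Form the two's complement: invert every bit, then add 1.
    The result is truncated to the original width."""
    bits = len(bit_string)
    # one's complement: flip each bit
    ones = ''.join('1' if b == '0' else '0' for b in bit_string)
    # add 1 and discard any carry out of the top bit
    value = (int(ones, 2) + 1) % (1 << bits)
    return format(value, '0{}b'.format(bits))

print(twos_complement('00111111'))  # 11000001
print(twos_complement('0110'))      # 1010
```

Applying the operation twice gets the original string back, which is the "complements add to 0" property in another guise.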

So a positive number, such as 37₁₀, would be represented in two's complement representation as 00100101, while a negative number, such as -37₁₀, would be represented as the two's complement of its absolute value, or 11011010 + 1 = 11011011. Note that the sign bit has the same meaning as in sign-magnitude: if it is a 1, then the number is negative, and if it is 0, then positive.

What about 0? In sign-magnitude, there were two representations, essentially for ±0. In two's complement representation, there is only one representation: 00000000 (for an 8-bit number). To see this, suppose we wanted to represent the nonsensical value -0 that sign-magnitude allows. We would first complement 0, to give 11111111, then add 1, which gives us 100000000. However, this is 9 bits, and our numbers (in this example) are 8-bit numbers, so the high-order bit is lost (truncated), giving us 00000000 as the representation. This makes sense if we think about complements as adding to 0, since the only number that can be added to 0 to yield 0 is in fact 0.

Suppose we want to know what number a two's complement representation actually represents—how do we do that? If the high-order bit is 0, then the representation is just the binary form of the number. If the high-order bit is 1, however, we must take the two's complement to find the absolute value, then negate that. In other words, the two's complement of a number's two's complement is the number itself.

Let’s see if this works. Suppose we are given the two’s complement rep-resentation of the number �2910, that is, 11100011. The two’s complementof this is 00011100 + 1 = 00011101, which is the binary form of 2910. Sincethis is the absolute value, we negate it to get �2910.

Extending a two’s complement number to a longer representation issimple: just pad the new leftmost bits with the sign bit. So the 8-bittwo’s complement number 00011100 (2810) would become the 16-bit number0000000000011100, and the 8-bit number 11100100 (�2810) would become1111111111100100. It’s obvious that the former is valid, but what about thelatter? Well, let’s see. That number is negative, so we take the two’s comple-ment to find its absolute value: the one’s complement is 0000000000011011,and adding 1, we get the absolute value of 0000000000011100; so the numberwas �2810, as promised.

Shrinking the representation is trickier, since we have to make sure the new representation size is big enough to store the number. It turns out that we can figure this out pretty easily.

The largest positive number that can be represented in a two's complement representation has all 1s except for the leading bit. For n bits, this number is 2^(n-1) - 1. To see this, ignoring two's complement for a moment, consider that if we add 1 to this number, it will become a 1 followed by all 0s, that is, it will be 2^(n-1); so the largest number is one less, or 2^(n-1) - 1.

The smallest negative number, however, is not the number represented with all 1s, which is what we might first expect. For example, consider the 8-bit two's complement number 11111111. To find out what number that is, take the two's complement: 00000000 + 1 = 00000001, leaving 1 as the absolute value. Since this is a negative number, the actual number represented is -1—that is, it is the largest negative number!

So what is the smallest? It turns out that the smallest is the number represented by a 1 in the high-order bit and all 0s elsewhere. So for an 8-bit representation, the most negative number that can be represented is 10000000. The two's complement of this is 01111111 + 1 = 10000000, or 2^7 = 128₁₀. In general, the smallest negative number that can be represented in an n-bit two's complement scheme is -2^(n-1).

In general, then, an n-bit two's complement representation can represent numbers from -2^(n-1) to +2^(n-1) - 1. So if we have a large representation and want to shrink it to n bits, all we have to do is figure out if the number is within the range that can be represented by the smaller representation.

There is a simpler way, too. If the number is positive, then we are safe shrinking the number to any size by removing leading 0s, as long as we leave one of them. So 0001010, for example, can safely be stored as 001010 or 01010. (What would happen if we truncated it by one more bit?) Similarly, for a negative number, we can remove leading 1s as long as we leave one of them. So 1110110 can be shrunk to 110110 or 10110.
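This leading-bit rule can be sketched as a small Python check (our own illustration): truncation is safe exactly when every dropped bit, and the bit that becomes the new sign bit, all match the original sign bit.

```python
def shrink(bit_string, new_bits):
    """Truncate a two's complement value, checking that it still fits.

    Safe iff every removed bit, and the new leading bit, equal the
    original sign bit."""
    drop = len(bit_string) - new_bits
    sign = bit_string[0]
    if any(b != sign for b in bit_string[:drop + 1]):
        raise OverflowError("value does not fit in {} bits".format(new_bits))
    return bit_string[drop:]

print(shrink('0001010', 5))   # 01010
print(shrink('1110110', 5))   # 10110
```

Trying `shrink('0001010', 4)` raises an error: the new leading bit would be 1, turning the positive value 0001010 into a negative one. That is the answer to the question posed above.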

The real beauty of two's complement representation has to do with how addition is done with it. The computer now does not have to be concerned with the signs of either of the numbers. Furthermore, with two's complement representation, subtraction is not needed at all. All we have to do is to take the two's complement of the number to be subtracted (the subtrahend) and add it to the other number (the minuend).

Let’s look at some examples. Suppose we are using 8-bit representationsand want to compute:

25 + 100


This is done directly by adding the two's complement representations of these numbers:

00011001

+01100100

01111101

which is 125₁₀.

What about negative numbers? Suppose we want to compute:

25 + (-100)

This is also done by adding the two's complement representations of the numbers:

00011001

+10011100

10110101

Is that correct? Well, to see what 10110101 "really" is, first take the two's complement: 01001010 + 1 = 01001011, which gives 75₁₀ as the absolute value of the answer; since the high-order bit is 1, the number is -75₁₀.

Subtraction is, as we have said, just adding the negative of the number to be subtracted. So if our problem is:

73 - 50

we would convert this into the equivalent form:

73 + (-50)

which gives:

01001001

+11001110

00010111


Note that we lost a carry out of the high-order bit; we'll have more to say about that in a moment. What matters here is that the sum (ignoring what would have been the 9th bit) is correct, 23₁₀.
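This is easy to model in Python (a sketch of ours: ordinary binary addition, with any carry out of the top bit discarded by masking):

```python
def add_n_bits(a, b, bits=8):
    """Two's complement addition: plain binary add, carry out discarded."""
    mask = (1 << bits) - 1
    return (a + b) & mask

def encode(value, bits=8):
    """Two's complement bit pattern of a (possibly negative) integer."""
    return value & ((1 << bits) - 1)

def decode(raw, bits=8):
    """Signed value of an n-bit two's complement pattern."""
    return raw - (1 << bits) if raw >> (bits - 1) else raw

# The three examples from the text:
assert decode(add_n_bits(encode(25), encode(100))) == 125
assert decode(add_n_bits(encode(25), encode(-100))) == -75
assert decode(add_n_bits(encode(73), encode(-50))) == 23   # 73 - 50
```

Note that the same addition routine serves regardless of the operands' signs; only the final interpretation of the bits differs, which is exactly the point of the representation.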

It is possible with arithmetic using any fixed-size representation of numbers to have overflow: that is, for the result of the arithmetic operation to be larger than the representation can hold. For sign-magnitude representation, the largest positive number that can be stored in an n-bit representation is 2^(n-1) - 1, and the smallest negative number is -(2^(n-1) - 1). So a 16-bit sign-magnitude representation can hold numbers from -32,767 to +32,767. Any addition or subtraction yielding a number outside this range is an arithmetic overflow.

As we have seen, an n-bit two's complement representation can represent any integer x as long as -2^(n-1) ≤ x ≤ 2^(n-1) - 1. We could detect overflow by comparing the result of any operation with the maximum and minimum numbers that the representation can hold. If the result is larger than the largest, or smaller than the smallest, then there has been an overflow.

However, we can make use of our knowledge of numbers to do this more easily than requiring two comparisons per operation, which would be very costly in terms of time.

With sign-magnitude representation, overflow can be detected easily when there is a carry-out from the highest-order non-sign bit of the operation. For example, suppose we have an 8-bit sign-magnitude representation and are adding the two numbers 73₁₀ and 78₁₀, which should yield an overflow (since 151₁₀ > 2^7 - 1 = 127):

01001001

+01001110

10010111

We can detect an overflow here, since there is a carry-out from bit 7 into bit 8, yielding the nonsensical result that the addition of two positive numbers yields a negative number. This is also an overflow for adding the same two numbers if they are represented in 8-bit two's complement form.

For two negative numbers, it is similar. Consider (-73₁₀) + (-78₁₀). In sign-magnitude form, this would be:

11001001

+11001110

00010111

with a carry out of the high-order bit. Again, there has been a carry into the sign bit, yielding a positive number from adding two negative numbers, so this is obviously an overflow.

Two’s complement is similar:

10110111

+10110010

01101001

with a carry out of the high-order bit. Here, too, adding two negative numbers gives rise to a positive number, so there was overflow.

In general, then, overflow can be detected whenever adding two positive numbers gives a negative result or adding two negative numbers gives a positive number.

What about the case where the inputs are neither both positive nor both negative? In this case, no overflow is possible, since it is the case of adding a positive and a negative number.
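This sign rule can be sketched in Python (our own illustration, operating on 8-bit two's complement patterns):

```python
def add_detect_overflow(a, b, bits=8):
    """Add two two's complement bit patterns; flag overflow when two
    operands of the same sign produce a result of the opposite sign."""
    mask = (1 << bits) - 1
    result = (a + b) & mask          # carry out of the top bit discarded
    sign = 1 << (bits - 1)           # mask selecting the sign bit
    overflow = (a & sign) == (b & sign) and (result & sign) != (a & sign)
    return result, overflow

# 73 + 78 = 151 does not fit in 8 signed bits; 73 + (-50) does.
print(add_detect_overflow(73, 78))            # (151, True)
print(add_detect_overflow(73, (-50) & 0xFF))  # (23, False)
```

Only one sign comparison is needed per addition, instead of the two range comparisons considered above.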

We should note here that we must be careful with the term "two's complement". When we refer to a two's complement representation, we mean a number that is represented in two's complement form. However, when we talk about the two's complement of a number, or taking the two's complement of a number, we are talking about the result of performing the two's complement operation, or the operation itself, not the representation of the result.

11.2 Multiplication

Sidebar 20: Another way of looking at two's complement.

Another way of looking at two's complement representation is that we are really representing numbers by something very much like their distance from 0. The natural numbers (1, 2, 3, ...) are represented in their normal binary form (1, 10, 11, ...). Negative numbers, however, need to be represented some way that allows their "distance" from 0 to be indicated. Think of an odometer (mileage gauge) on a car: what's the number that's just before 0? The answer is all 9's. Same here: the number just prior to 0 is all 1's. Let's let that represent -1. The number just before that, i.e., 2 prior to 0, is (for 8 bits) 11111110.

Now think about addition using a number line running from -5 to 5. We can think of each number as the length of a line segment on the number line, and the sum being computed by taking the 0-point of one line segment and putting it on the non-0-point of the other. Thus, to add 3 and 2, we'd move the zero point of the line segment representing 3 to 2, and then we can read out the sum from the non-zero point of that line segment, or 5.

If we think of negative numbers as the lengths of line segments going the other way from 0, it works the same way: here, we would take the 0-point of one of the numbers, in this case 3, and move it to the non-zero point of the other, in this case at -2. Reading the non-0 end now gives us 1, as expected. (It would work had we switched which one we moved, too.) For two negative numbers, it works the same way.

Well, this is pretty much what we're doing with two's complement representation. We represent the numbers, both positive and negative, in a form that tells us how far they are from 0; we just do this in a form that allows addition to work for either positive or negative numbers.

Sidebar 21: Another way of looking at two's complement (continued).

Another, and maybe better, way to think about it is as follows. For n bits, we divide the numbers that can be represented into three parts: 0, those numbers from 1 to 2^(n-1) - 1, and those numbers from 2^(n-1) to 2^n - 1. For four-bit numbers, this means we have 0, the numbers from 1–7, and the numbers from 8–15. We will do modulo arithmetic using these numbers—think again of the odometer on a car, or of a clock. When we add two numbers together and the result is larger than the number that can be represented using n bits (i.e., 2^n - 1), we want the bits to "roll over" or "wrap around". So if we were to add 0001 and 1111 (1 and 15), we'd get 0000 with a carry out. If we ignore the carry out, we are doing modulo arithmetic. The trick is now to assign numbers to the bit patterns so that negative or positive numbers added together will give us the right result using modulo arithmetic.

This is precisely what two's complement does. We use the bit patterns in the range 2^(n-1) to 2^n - 1 for negative numbers, and assign numbers to these bit patterns such that the smaller negative numbers are assigned the larger bit patterns. That is, for a positive number i, -i will be represented as 2^n - i.

Now, when we add two positive numbers, then if the sum is less than 2^(n-1), the result is still a positive number. When we add a b-bit positive number p and a b-bit negative number n, there are three cases:

1. |p| = |n|: Let the magnitudes of both numbers be i. Then the two's complement representation of p will just be i, and the two's complement representation of n will be 2^b - i. The addition will yield 2^b, which is not representable in b bits, so all the bits will be 0 with a carry out. In other words, the result is 0, which is what you'd expect.

2. |p| > |n|: Let the magnitude of n be i, and the magnitude of p be i + j. Then adding the representations of these would yield:

   p + n = (i + j) + (2^b - i) = 2^b + j

   This is just j. To convince yourself of that, think about what the carry out really means for a b-bit number: it means that there is 2^b extra beyond what is contained in the b bits. This is what we'd expect: the sum is the difference between the magnitudes of p and n.

3. |p| < |n|: Let the magnitude of p be i and the magnitude of n be i + j. Now:

   p + n = i + (2^b - (i + j)) = 2^b - j

   And 2^b - j is the two's complement representation of -j, as we'd expect.

Pretty cool, huh?

Before turning to computer-based multiplication, let's first look at how multiplication itself is done, at least how we learned to do long-hand multiplication in elementary school.

First, the terminology that we have all forgotten by now: in the multiplication n × m = p, the first number (n) is called the multiplicand, and the second (m) is called the multiplier. The result (p) is the product.

One way to do multiplication is simply to start with the product equal to 0, then add to it the multiplicand m times, where m is the multiplier. This is often how children initially learn to multiply. This obviously requires m additions.

Long-hand multiplication does something different. It assumes that we already know the multiplication table up through 9 × 9. We create partial products based on the digits in the multiplier. We will multiply each digit in the multiplier by the multiplicand to form these partial products, then add them. However, we do not simply add them, but rather, we shift them over one column (equivalent to multiplying by 10) each time we move to a different digit in the multiplier, starting at the least significant. This method requires only k additions, where k is the number of digits in the multiplier; note that almost always, k << m. However, we do have to know how to shift numbers, and we do have to know the multiplication table.

For binary multiplication, this is even easier: the multiplication table is trivial. So, multiplication of two binary numbers, say 183 × 178, would look like the following:

          10110111
        × 10110010
------------------
                 0
         10110111
               0
              0
      10110111
     10110111
           0
   10110111
------------------
  0111111100111110

or 32,574. Note that the representation needed for the product is twice as long as that needed for the multiplicand or multiplier.

This is a good algorithm, and one that is doable by repeated shifting and addition. The problem is, every operation is at a premium in a computer, especially if we are implementing such a common operation as multiplication. This algorithm requires in the worst case n - 1 additions and n - 1 shifts for each multiplication of two n-bit numbers. We would like to cut down the number of additions needed, if at all possible.
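The shift-and-add method just described can be sketched in Python (our own illustration; Python integers stand in for arbitrarily wide registers):

```python
def shift_add_multiply(multiplicand, multiplier):
    """Long-hand binary multiplication: one shifted partial product
    per 1 bit in the multiplier."""
    product = 0
    shift = 0
    while multiplier:
        if multiplier & 1:                    # this multiplier bit is 1
            product += multiplicand << shift  # add the shifted multiplicand
        multiplier >>= 1                      # examine the next bit
        shift += 1
    return product

print(shift_add_multiply(183, 178))  # 32574
```

Each 1 bit in the multiplier costs one addition; a multiplier of all 1s costs the worst-case number of additions, which is what Booth's insight will reduce.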

11.3 Booth’s Algorithm

We are finally in a position to discuss Booth's algorithm itself. This algorithm uses a small number of additions and shift operations to do the work of multiplication.


11.3.1 The basic insight

Booth noticed something about binary numbers that provided the basic insight behind his algorithm. A binary number is composed of 1s and 0s, of course, and often there will be blocks of adjacent 1s within the number. A given block of k 1s, starting at bit n of the number, is equal to:

2^n + 2^(n-1) + 2^(n-2) + ... + 2^(n-k+1)

This is true because of the definition of binary numbers: a 1 at bit j signifies that the number contains 2^j. For example, suppose we have the number 01110011₂. This number contains two blocks of 1s, one of size k = 3 beginning at bit n = 6, and another of size k = 2 at bit n = 1. The first block is equal to:

2^6 + 2^5 + 2^4 = 112

and the second:

2^1 + 2^0 = 3

and so the entire number is 112 + 3 = 115.

This way of computing the value of binary numbers itself does us little good. However, an important insight does: a block of k 1s starting at bit n is also equal to 2^(n+1) - 2^(n-k+1). Intuitively, you can see this: if you subtract 1 from a number composed of a 1 followed by a string of 0s, you get 1s in all the positions below the leading bit. Here, the leading bit corresponds to 2^(n+1), and the quantity being subtracted corresponds to 2^(n-k+1).

To see it another way, consider the following sum:

00111100

+00000100

01000000

Here, the top number contains just a block of k = 4 1s starting at n = 5, for which we would like to find an equivalent expression. The bottom number is just 2^(n-k+1). If we let the top number be x, and the sum be y, then we see that:

y = x + 2^(n-k+1)
⇒ x = y - 2^(n-k+1)
⇒ x = 2^(n+1) - 2^(n-k+1)

since y = 2^(n+1).

Let's look again at our example, 01110011₂. The first set of 1s is, of course, equal to the binary number 01110000₂:

01110000₂ = 2^(6+1) - 2^(6-3+1) = 2^7 - 2^4 = 112

The second group of 1s, 11₂, is equal to 2^2 - 2^0 = 3, which when added to the value of the first gives us the correct answer, 115₁₀.

What this means is that if we wish to find the value of a binary number, all we have to do is to scan across the number, using this insight when going into or out of blocks of 1s.

So, what is the advantage of this way of determining the value of a binary number? Suppose we have the number 0111110011110000₂. The standard way of converting this to decimal, as we learned back in Chapter 6, would have us compute the sum:

value = 2^14 + 2^13 + 2^12 + 2^11 + 2^10 + 2^7 + 2^6 + 2^5 + 2^4
      = 16,384 + 8,192 + 4,096 + 2,048 + 1,024 + 128 + 64 + 32 + 16
      = 31,984

Using the insight behind Booth's algorithm, however, we have:

value = (2^15 - 2^10) + (2^8 - 2^4)
      = (32,768 - 1,024) + (256 - 16)
      = 31,984

There are three additions using the new way (if we consider a subtraction to be the addition of a negation), compared to eight in the standard way.
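This scanning procedure can be sketched in Python (our own illustration of the insight, not Booth's hardware algorithm itself): walk the bits from the most significant end, adding 2^(n+1) on entering a block of 1s whose highest bit is n, and subtracting 2^m on leaving a block whose lowest bit is m.

```python
def value_from_runs(bit_string):
    """Evaluate a binary number using Booth's insight: each block of 1s
    running from bit n down to bit m contributes 2^(n+1) - 2^m."""
    bits = len(bit_string)
    total = 0
    prev = '0'
    for i, b in enumerate(bit_string + '0'):  # sentinel 0 closes a final block
        pos = bits - 1 - i                    # bit position of this digit
        if prev == '0' and b == '1':          # entering a block at bit pos
            total += 1 << (pos + 1)
        elif prev == '1' and b == '0':        # left a block; its last 1 was pos+1
            total -= 1 << (pos + 1)
        prev = b
    return total

print(value_from_runs('01110011'))          # 115
print(value_from_runs('0111110011110000'))  # 31984
```

One add and one subtract per block of 1s, no matter how long the block is, which is exactly the saving claimed above.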

Your question at this point is likely: what on earth does this have to do with multiplication? For the answer, consider rewriting a multiplication so that the multiplier or multiplicand is in the form we just discussed. For example, 3 × 14 can be written as:

0011 × 1110 = 0011 × (2^4 - 2^1)

Now we distribute it:

0011 × (2^4 - 2^1) = (0011 × 2^4) - (0011 × 2^1)

Now we have the multiplication in a very convenient form. In order to multiply by a power of two, all we have to do is to shift the multiplicand to the left by a number of digits equal to the multiplier's exponent (to the right—i.e., divide—if the exponent is negative). So the above example becomes:

(0011 × 2^4) - (0011 × 2^1) = 00110000 - 00110
                            = 101010
                            = 42₁₀

So, to summarize: the basic insight gives us the ability to cast multiplication in terms of shifting and addition.

11.3.2 The algorithm

Booth’s algorithm uses the basic insight from the preceding section to domultiplication. We could directly implement the method of multiplicationsketched above as an algorithm such as shown in Figure 11.3. Here, we startfrom the high-order bit of the multiplier and scan toward the right. When weenter a group of 1s starting at bit n, we add 2n+1 times the multiplicand tothe product, and when we leave a group of 1s that stop at k, we subtract 2k

times the multiplicand. At the end, we have to check to see if the last bit (bit0) was a 1; if so, then we didn’t have a chance to subtract the correspondingpartial product, so we do that now.

There is nothing wrong with this algorithm, but it does require somebookkeeping (keeping track of the current bit position) and quite a few shiftsof the multiplicand. There are also more shifts required than that, and thanmight be apparent at first glance. In order to determine whether we areentering or leaving a string of 1s, the algorithm needs a way to examine asingle bit (or really, two: the one we just looked at, and the current one).


Variables:
    MP:   multiplier
    MC:   multiplicand
    Prod: product
    Pos:  position of the bit we are looking at
    N:    size of MP or MC

Begin:
    For every bit of the multiplier, starting at bit N:
        Pos = current bit position.
        If this is the first 1 in a block of 1s, then
            Add 2^(Pos+1) × MC to Prod.
        Else, if this is the last 1 in a block of 1s, then
            Subtract 2^Pos × MC from Prod.
    If bit 0 was a 1, then
        Subtract MC (i.e., 2^0 × MC) from Prod.
    Return Prod.
end.

Figure 11.3: A multiplication algorithm.

Unless we are extremely lucky and the processor we are using provides an instruction to do this, we will have to do this by masking the other bits using an AND. This mask will need to be shifted each time we look at a different bit.
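The algorithm of Figure 11.3 can be rendered in Python roughly as follows (a sketch of our own; for simplicity it treats the multiplier as a non-negative bit pattern, and detects "leaving a block" by the 1-to-0 transition one position below the block's last 1):

```python
def booth_multiply_scan(multiplicand, multiplier, bits=8):
    """Multiply by scanning the multiplier for blocks of 1s, adding
    2^(n+1) * MC on entering a block and subtracting 2^k * MC on
    leaving one, per Figure 11.3."""
    product = 0
    prev = 0
    for pos in range(bits - 1, -1, -1):    # high-order bit first
        bit = (multiplier >> pos) & 1
        if prev == 0 and bit == 1:         # first 1 in a block at bit pos
            product += multiplicand << (pos + 1)
        elif prev == 1 and bit == 0:       # just left a block; last 1 was pos+1
            product -= multiplicand << (pos + 1)
        prev = bit
    if multiplier & 1:                     # bit 0 was a 1: block never closed
        product -= multiplicand
    return product

print(booth_multiply_scan(3, 14, bits=4))  # 42
print(booth_multiply_scan(183, 178))       # 32574
```

Note how the bookkeeping the text mentions shows up directly: a position counter, a shift of the multiplicand for every add or subtract, and the special case for bit 0.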

Booth had a better way, one that makes ingenious use of registers to avoid so many shifts. Let's see in general what the algorithm requires and how it works; then we will present the more formal statement of it.

The overall idea is that instead of shifting the multiplicand to the left and a bit mask to the right multiple times, we will instead store the multiplier and accumulating product in such a way that we can shift both together. Instead of shifting the multiplicand to the left to add (or subtract) it, we will instead shift the product to the right. To do this, we need a 2n-bit register to store the product, and we will start with the product (initially 0) being in the topmost n bits. We will always add or subtract from these topmost n bits; as we shift the product to the right, this has the same effect as if we had shifted the multiplicand to the left. The product will finally occupy the entire 2n-bit register. The topmost n bits of this register are called the A register, and the bottommost n bits, the Q register.

Note that initially the bottommost n bits of this register are unused. So why not put them to use by storing the multiplier there? Now, instead of looking at each of the bits in the multiplier from the left, we can look at them from the right, as they are shifted out of the combined register. This means that we will subtract when entering a group of 1s and add when leaving. By storing the multiplier here, we can shift both it and the product with a single shift.

Register   Size        Description
A          n           High-order n bits of product.
Q          n           Initially holds multiplier; ultimately holds low-order n bits of product.
Q-1        1           Holds previous bit 0 of Q.
M          n           Multiplicand.
-M         n           Holds the two's complement of M.
Count      ≥ log₂ n    Number of bits in the multiplicand (or multiplier).

Figure 11.4: Registers used by Booth's algorithm. n is the size of the items being multiplied.

We need some way to determine when we are entering or leaving a blockof 1s. For this, we use a 1-bit register, called Q-1, that is part of the shiftof the A and Q registers: the low-order bit of the large register is shiftedinto Q-1. Now, we can compare this bit with bit 0 of the combined register.When Q-1= 0 and bit 0 = 1, then we are entering a block of 1s; when theopposite is true, we are leaving a block. If the two bits are the same, we areeither in a block of 0s or a block of 1s.

Arithmetic shift: Fast machine operation that moves all bits over one position, repeating the sign bit if shifting to the right.

Compare: Fast machine instruction to see if two bits, bytes, words, etc., are the same.

Add: Two's complement addition.

Complement: Fast machine operation to complement each bit in a set of bits. Some machines may provide two's complement directly, in which case, use that. Otherwise, the result of the complement will need to be incremented by one (either using an increment operation or by adding 1).

Figure 11.5: Operations needed by Booth's algorithm.

An additional optimization is to avoid the use of subtraction by storing the two's complement of the multiplicand. This trades storage for time, since otherwise we would have to compute the two's complement each time we need to subtract the multiplicand. This is a very common trade-off in computer science.

Finally, we need a register to keep track of how many shifts in total we need. We will call this Count, and initially it will be set to the length of the multiplicand or multiplier.

The algorithm needs only a few operations, all of which should be available as fast hardware instructions. It needs a way to compute two's complements, which may be directly available on some machines, or it may have to be done with a complement instruction followed by an increment. Second, we need the ability to compare two things to see if they are the same. Third, we need an add instruction.
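As a sketch (assuming a machine that lacks a direct negate instruction), the complement-then-increment sequence looks like this in Python, working within an n-bit word:

```python
def twos_complement(x, n):
    mask = (1 << n) - 1
    ones = (~x) & mask         # complement each bit (the 1's complement)
    return (ones + 1) & mask   # then add 1, staying within n bits

print(bin(twos_complement(0b00111111, 8)))   # → 0b11000001, i.e. -63
```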

Finally, we need a way to shift bit strings. What we need here is an arithmetic shift, which preserves the sign bit when shifting to the right. For example, if we have the two's complement number 10110110 (= -74₁₀), an arithmetic shift to the right would yield 11011011 (= -37₁₀). This shift, as discussed above, will treat the A, Q, and Q-1 registers as one big register that is shifted together.
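Hardware provides this operation directly; in a language like Python, which has arbitrary-precision integers rather than fixed-width words, it can be simulated for an n-bit value (a sketch, with a function name of our own choosing):

```python
def asr(x, n):
    # Arithmetic shift right: shift one position, repeating the sign bit
    # of an n-bit two's complement value.
    sign = x & (1 << (n - 1))
    return (x >> 1) | sign

print(bin(asr(0b10110110, 8)))   # → 0b11011011 (-74 becomes -37)
```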

1. Initialize registers. Count = number of bits in the multiplier or multiplicand; A = 0, Q = the multiplier, and Q-1 = 0.

2. Compare bit 0 of Q with Q-1 to see if entering/leaving a block of 1s:

   (a) If bit 0 = 1 and Q-1 = 0, then we are entering a block. Subtract the multiplicand from A by adding its two's complement.

   (b) If bit 0 = 0 and Q-1 = 1, then we are leaving a block. Add the multiplicand to A.

3. Prepare for the next bit.

   (a) Arithmetic shift right A, Q, and Q-1 as a single (2n + 1)-bit register.

   (b) Reduce Count by 1.

4. If Count is not 0, then go to step 2. Otherwise, we are done, and the result is in the combined AQ register.

Figure 11.6: Booth's algorithm.

Figure 11.4 summarizes the registers needed. Figure 11.5 summarizes the operations needed. Booth's algorithm is shown in Figure 11.6.
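Putting the registers of Figure 11.4 and the steps of Figure 11.6 together, the whole algorithm can be sketched in Python (variable names follow the register names; the function itself is our own):

```python
def booth_multiply(multiplier, multiplicand, n):
    mask = (1 << n) - 1
    M = multiplicand & mask
    neg_M = (-multiplicand) & mask        # -M, precomputed once
    A, Q, Q_1 = 0, multiplier & mask, 0   # step 1: initialize
    for _ in range(n):                    # Count
        if (Q & 1, Q_1) == (1, 0):        # step 2a: entering a block of 1s
            A = (A + neg_M) & mask
        elif (Q & 1, Q_1) == (0, 1):      # step 2b: leaving a block
            A = (A + M) & mask
        # Step 3: arithmetic shift right of A, Q, Q-1 as one register.
        Q_1 = Q & 1
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (A & (1 << (n - 1)))   # repeat the sign bit
    product = (A << n) | Q                # result is in the combined AQ
    if product & (1 << (2 * n - 1)):      # reinterpret as signed 2n bits
        product -= 1 << (2 * n)
    return product

print(booth_multiply(110, 63, 8))    # → 6930
print(booth_multiply(110, -63, 8))   # → -6930
```

Note the order of operations in the shift: A's low bit must be captured into Q, and Q's low bit into Q-1, before each register is shifted, so that the three registers behave as one (2n + 1)-bit register.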

Let's see how the algorithm works by looking at an example, shown in Figure 11.7: multiplying 63₁₀ × 110₁₀. We start by storing the multiplicand (00111111) in the register M and the multiplier (01101110) in the Q register. We zero the A register and the Q-1 register. We set a new register, -M, to be the two's complement of the multiplicand (11000001), and we set the Count register to 8, the width of the operands.

We start by comparing bit 0 of Q with Q-1; they are the same, so we do not have to add anything to the product. We shift the combined A/Q/Q-1 register and decrement Count.


Multiply: 63 × 110
M = 00111111    -M = 11000001

A          Q          Q-1   Count
00000000   01101110   0     1000    Initial; just shift
00000000   00110111   0     0111    Entering block; add -M
11000001   00110111   0     0111    Shift
11100000   10011011   1     0110    In block; just shift
11110000   01001101   1     0101    In block; just shift
11111000   00100110   1     0100    Exiting block; add M
00110111   00100110   1     0100    Shift
00011011   10010011   0     0011    Entering block; add -M
11011100   10010011   0     0011    Shift
11101110   01001001   1     0010    In block; just shift
11110111   00100100   1     0001    Exiting block; add M
00110110   00100100   1     0001    Shift
00011011   00010010   0     0000    Done

Figure 11.7: Using Booth's algorithm to multiply 63₁₀ × 110₁₀.

Next, we see that we are about to enter a block of 1s, since bit 0 of Q is 1 and Q-1 is 0. We subtract M from A by adding -M, and then shift. The next two steps just shift everything, since we are within the 1s block. However, the following step requires us to add M to A, since we are leaving the block of 1s. We then shift. Immediately, we discover that we are about to enter another block, and so we add -M to A and shift. The next two steps require us to shift, then add M and shift. At this point, the Count register is 0, signaling that we are done. The result is contained in the combined A/Q registers: 0001101100010010, which is 6930₁₀, the product of 63 × 110.
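The final A/Q contents can be checked with a one-liner (in Python, interpreting the combined register as an unsigned binary number, which is fine here since the product is positive):

```python
# The combined A/Q register at the end of Figure 11.7.
print(int("0001101100010010", 2))   # → 6930
```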

To see how this works when one of the numbers is negative, Figure 11.8 shows the algorithm computing -63₁₀ × 110₁₀.


Multiply: -63 × 110
M = 11000001    -M = 00111111

A          Q          Q-1   Count
00000000   01101110   0     1000    Initial; just shift
00000000   00110111   0     0111    Entering block; add -M
00111111   00110111   0     0111    Shift
00011111   10011011   1     0110    In block; just shift
00001111   11001101   1     0101    In block; just shift
00000111   11100110   1     0100    Exiting block; add M
11001000   11100110   1     0100    Shift
11100100   01110011   0     0011    Entering block; add -M
00100011   01110011   0     0011    Shift
00010001   10111001   1     0010    In block; just shift
00001000   11011100   1     0001    Exiting block; add M
11001001   11011100   1     0001    Shift
11100100   11101110   0     0000    Done

Figure 11.8: Using Booth's algorithm to multiply -63₁₀ × 110₁₀.
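This time the combined A/Q register holds a negative 2n-bit two's complement value, so converting it to decimal takes one extra step (a Python sketch):

```python
# The combined A/Q register at the end of Figure 11.8: the high bit is 1,
# so subtract 2^16 to recover the signed value.
raw = int("1110010011101110", 2)
print(raw - (1 << 16))   # → -6930
```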

11.4 Conclusion

11.5 Further Reading

Sources:

11.6 Exercises

1. Give the two's complement representation of the following numbers (use 8 bits):

   (a) 9
   (b) -15
   (c) 68
   (d) -79
   (e) -152

2. Perform the following calculations, using two's complement representation. To maximize your chance for partial credit, (neatly) show your work. (Use 8-bit numbers.)

   (a) 9 + 68
   (b) 125 - 79
   (c) 9 - 4
   (d) 68 + 4
   (e) 125 - 127

3. The overflow rule for addition in two's complement is only applicable when both numbers have the same sign. Why don't you have to worry about cases where the signs of the numbers are different?

4. For the following, perform Booth's algorithm with 4-bit numbers. Show your work. To check your work, convert the results to decimal and see if they are correct.

   (a) 2 × 3
   (b) -2 × 3
   (c) 4 × 2
   (d) 4 × 7
   (e) 5 × -4

5. 2's-complement representation is a specific case of what we might call n's-complement representation. This works for our usual decimal notation, where what we have is 10's-complement representation. Here, though, instead of taking a 1's-complement, we'll first take what might be considered the 9's-complement, then add 1. So positive 12, in four-digit 10's-complement representation, would just be 0012. But to get the 10's-complement of -12, we would first take 0012 and determine what we would need to add to it to produce 9999. This number, 9987, is the 9's-complement of 12. Adding 1 to this, we get 9988, or -12.

   (a) What is the 4-digit 10's-complement of -23?

   (b) Does addition work for 10's-complement as it does for 2's-complement? I.e., can we just add numbers without worrying about their sign, and it all works out? Try it by adding 23 and -12.

   (c) Can you figure out why this works? HINT: Look in your book; it talks about this sort of thing by discussing odometers.
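The 9's-complement-then-add-1 procedure described in exercise 5 can be sketched in Python (the function name is our own; the example reproduces only the worked negation of 12 from the exercise text):

```python
def tens_complement(x, digits):
    # Take the 9's complement (what must be added to reach all 9s),
    # then add 1, staying within the given number of digits.
    nines = int("9" * digits)            # e.g. 9999 for 4 digits
    return (nines - x + 1) % 10 ** digits

print(tens_complement(12, 4))   # → 9988, i.e. -12
```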
