
Chapter: Introduction to the Design and Analysis of Algorithms : Divide and Conquer



Multiplication of Large Integers and Strassen’s Matrix Multiplication

In this section, we examine two surprising algorithms for seemingly straightforward tasks: multiplying two integers and multiplying two square matrices. Both achieve a better asymptotic efficiency by ingenious application of the divide-and-conquer technique.



Multiplication of Large Integers

 

Some applications, notably modern cryptography, require manipulation of integers that are over 100 decimal digits long. Since such integers are too long to fit in a single word of a modern computer, they require special treatment. This practical need supports investigations of algorithms for efficient manipulation of large integers. In this section, we outline an interesting algorithm for multiplying such numbers. Obviously, if we use the conventional pen-and-pencil algorithm for multiplying two n-digit integers, each of the n digits of the first number is multiplied by each of the n digits of the second number for the total of n^2 digit multiplications. (If one of the numbers has fewer digits than the other, we can pad the shorter number with leading zeros to equalize their lengths.) Though it might appear that it would be impossible to design an algorithm with fewer than n^2 digit multiplications, this turns out not to be the case. The miracle of divide-and-conquer comes to the rescue to accomplish this feat.

 

To demonstrate the basic idea of the algorithm, let us start with a case of two-digit integers, say, 23 and 14. These numbers can be represented as follows:

 

23 = 2 · 10^1 + 3 · 10^0        and        14 = 1 · 10^1 + 4 · 10^0.

 

Now let us multiply them:

 

23 ∗ 14 = (2 · 10^1 + 3 · 10^0) ∗ (1 · 10^1 + 4 · 10^0)

 

= (2 ∗ 1)10^2 + (2 ∗ 4 + 3 ∗ 1)10^1 + (3 ∗ 4)10^0.

 

The last formula yields the correct answer of 322, of course, but it uses the same four digit multiplications as the pen-and-pencil algorithm. Fortunately, we can compute the middle term with just one digit multiplication by taking advantage of the products 2 ∗ 1 and 3 ∗ 4 that need to be computed anyway:

 

2 ∗ 4 + 3 ∗ 1 = (2 + 3) ∗ (1 + 4) − 2 ∗ 1 − 3 ∗ 4.

 

Of course, there is nothing special about the numbers we just multiplied. For any pair of two-digit numbers a = a1a0 and b = b1b0, their product c can be computed by the formula

 

c = a ∗ b = c2 · 10^2 + c1 · 10^1 + c0,

 

where

 

c2 = a1 ∗ b1 is the product of their first digits,

 

c0 = a0 ∗ b0 is the product of their second digits,

 

c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0) is the product of the sum of the a’s digits and the sum of the b’s digits minus the sum of c2 and c0.
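The three formulas above can be checked with a short sketch in Python (the function name is ours, chosen purely for illustration):

```python
def two_digit_product(a, b):
    """Multiply two-digit numbers with three digit multiplications,
    using the formulas for c2, c0, and c1 above."""
    a1, a0 = divmod(a, 10)    # a = a1 * 10 + a0
    b1, b0 = divmod(b, 10)
    c2 = a1 * b1                                # product of the first digits
    c0 = a0 * b0                                # product of the second digits
    c1 = (a1 + a0) * (b1 + b0) - (c2 + c0)      # middle term: one multiplication
    return c2 * 100 + c1 * 10 + c0

print(two_digit_product(23, 14))  # 322
```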

 

Now we apply this trick to multiplying two n-digit integers a and b where n is a positive even number. Let us divide both numbers in the middle—after all, we promised to take advantage of the divide-and-conquer technique. We denote the first half of the a’s digits by a1 and the second half by a0; for b, the notations are b1 and b0, respectively. In these notations, a = a1a0 implies that a = a1 · 10^(n/2) + a0, and b = b1b0 implies that b = b1 · 10^(n/2) + b0. Therefore, taking advantage of the same trick we used for two-digit numbers, we get

 

c = a ∗ b = (a1 · 10^(n/2) + a0) ∗ (b1 · 10^(n/2) + b0)

= (a1 ∗ b1)10^n + (a1 ∗ b0 + a0 ∗ b1)10^(n/2) + (a0 ∗ b0)

= c2 · 10^n + c1 · 10^(n/2) + c0,

 

where

 

c2 = a1 ∗ b1 is the product of their first halves,

 

c0 = a0 ∗ b0 is the product of their second halves,

c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0) is the product of the sum of the a’s halves and the sum of the b’s halves minus the sum of c2 and c0.

 

If n/2 is even, we can apply the same method for computing the products c2, c0, and c1. Thus, if n is a power of 2, we have a recursive algorithm for computing the product of two n-digit integers. In its pure form, the recursion is stopped when n becomes 1. It can also be stopped when we deem n small enough to multiply the numbers of that size directly.
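The recursive algorithm just described can be sketched in Python as follows (names are illustrative; in its pure form, the recursion bottoms out at single digits):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y with three recursive
    multiplications of half-size numbers instead of four."""
    n = max(len(str(x)), len(str(y)))
    if n == 1:                         # recursion stops at single digits
        return x * y
    half = n // 2
    p = 10 ** half
    a1, a0 = divmod(x, p)              # x = a1 * 10^(n/2) + a0
    b1, b0 = divmod(y, p)              # y = b1 * 10^(n/2) + b0
    c2 = karatsuba(a1, b1)
    c0 = karatsuba(a0, b0)
    c1 = karatsuba(a1 + a0, b1 + b0) - (c2 + c0)
    return c2 * p * p + c1 * p + c0

print(karatsuba(23, 14))  # 322
```

Note that the sketch also handles odd n and operands of unequal lengths, since splitting at half = n // 2 keeps the identity x = a1 · 10^half + a0 exact.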

 

How many digit multiplications does this algorithm make? Since multiplication of n-digit numbers requires three multiplications of n/2-digit numbers, the recurrence for the number of multiplications M(n) is

 

M(n) = 3M(n/2)        for n > 1,        M(1) = 1.

 

Solving it by backward substitutions for n = 2^k yields

 

M(2^k) = 3M(2^(k−1)) = 3[3M(2^(k−2))] = 3^2 M(2^(k−2)) = . . . = 3^i M(2^(k−i)) = . . . = 3^k M(2^(k−k)) = 3^k.

 

Since k = log2 n,

 

M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585.

 

(On the last step, we took advantage of the following property of logarithms: a^(logb c) = c^(logb a).)
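The recurrence and its closed-form solution can be checked numerically with a small sketch (the function name M is ours, mirroring the notation above):

```python
from math import isclose, log2

def M(n):
    """Number of digit multiplications: M(n) = 3*M(n/2) for n > 1, M(1) = 1."""
    return 1 if n == 1 else 3 * M(n // 2)

# For n = 2^k, the recurrence unwinds to 3^k = n^(log2 3).
for k in range(9):
    n = 2 ** k
    assert M(n) == 3 ** k                  # closed form: M(2^k) = 3^k
    assert isclose(M(n), n ** log2(3))     # i.e., M(n) = n^(log2 3)

print(M(1024))  # 59049, which is 3^10
```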

 

But what about additions and subtractions? Have we not decreased the number of multiplications by requiring more of those operations? Let A(n) be the number of digit additions and subtractions executed by the above algorithm in multiplying two n-digit decimal integers. Besides 3A(n/2) of these operations needed to compute the three products of n/2-digit numbers, the above formulas require five additions and one subtraction. Hence, we have the recurrence

 

A(n) = 3A(n/2) + cn        for n > 1,        A(1) = 1.

 

Applying the Master Theorem, which was stated in the beginning of the chapter, we obtain A(n) ∈ Θ(n^(log2 3)), which means that the total number of additions and subtractions has the same asymptotic order of growth as the number of multiplications.

 

The asymptotic advantage of this algorithm notwithstanding, how practical is it? The answer depends, of course, on the computer system and the quality of the program implementing the algorithm, which might explain the rather wide disparity of reported results. On some machines, the divide-and-conquer algorithm has been reported to outperform the conventional method on numbers only 8 decimal digits long and to run more than twice as fast on numbers over 300 decimal digits long—an area of particular importance for modern cryptography. Whatever this “crossover point” happens to be on a particular machine, it is worth switching to the conventional algorithm once the multiplicands become smaller than it. Finally, if you program in an object-oriented language such as Java, C++, or Smalltalk, you should be aware that these languages have special classes for dealing with large integers.
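Such crossover switching can be sketched as a variant of the recursive algorithm; the threshold value below is purely illustrative, since the actual crossover point is machine-dependent:

```python
CROSSOVER = 8   # illustrative threshold; the real value varies by machine

def hybrid_multiply(x, y):
    """Divide-and-conquer multiplication that falls back to the
    conventional method once the operands drop below the crossover point."""
    n = max(len(str(x)), len(str(y)))
    if n <= CROSSOVER:
        return x * y                   # conventional multiplication
    half = n // 2
    p = 10 ** half
    a1, a0 = divmod(x, p)
    b1, b0 = divmod(y, p)
    c2 = hybrid_multiply(a1, b1)
    c0 = hybrid_multiply(a0, b0)
    c1 = hybrid_multiply(a1 + a0, b1 + b0) - (c2 + c0)
    return c2 * p * p + c1 * p + c0
```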

 

Discovered by 23-year-old Russian mathematician Anatoly Karatsuba in 1960, the divide-and-conquer algorithm proved wrong the then-prevailing opinion that the time efficiency of any integer multiplication algorithm must be in Ω(n^2). The discovery encouraged researchers to look for even (asymptotically) faster algorithms for this and other algebraic problems. We will see such an algorithm in the next section.

