
**5.4 Multiplication of Large Integers and Strassen's Matrix Multiplication**

In this section, we examine two surprising algorithms for seemingly straightforward tasks: multiplying two integers and multiplying two square matrices. Both achieve a better asymptotic efficiency by ingenious application of the divide-and-conquer technique.

**Multiplication of Large Integers**

Some applications, notably modern cryptography, require manipulation of integers that are over 100 decimal digits long. Since such integers are too long to fit in a single word of a modern computer, they require special treatment. This practical need supports investigations of algorithms for efficient manipulation of large integers. In this section, we outline an interesting algorithm for multiplying such numbers. Obviously, if we use the conventional pen-and-pencil algorithm for multiplying two *n*-digit integers, each of the *n* digits of the first number is multiplied by each of the *n* digits of the second number, for a total of n^2 digit multiplications.

To demonstrate the basic idea of the algorithm, let us start with a case of two-digit integers, say, 23 and 14. These numbers can be represented as follows:

23 = 2 · 10^1 + 3 · 10^0 and 14 = 1 · 10^1 + 4 · 10^0.

Now let us multiply them:

23 ∗ 14 = (2 · 10^1 + 3 · 10^0) ∗ (1 · 10^1 + 4 · 10^0)

= (2 ∗ 1)10^2 + (2 ∗ 4 + 3 ∗ 1)10^1 + (3 ∗ 4)10^0.

The last formula yields the correct answer of 322, of course, but it uses the same four digit multiplications as the pen-and-pencil algorithm. Fortunately, we can compute the middle term with just one digit multiplication by taking advantage of the products 2 ∗ 1 and 3 ∗ 4 that need to be computed anyway:

2 ∗ 4 + 3 ∗ 1 = (2 + 3) ∗ (1 + 4) − 2 ∗ 1 − 3 ∗ 4.
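As a quick sanity check, this identity for the middle term can be verified directly in Python (the variable names below are ours, chosen for illustration):

```python
# Digits of 23 and 14 in the positional representation used above.
a1, a0 = 2, 3   # 23 = 2 * 10 + 3
b1, b0 = 1, 4   # 14 = 1 * 10 + 4

c2 = a1 * b1                              # product of the first digits
c0 = a0 * b0                              # product of the second digits
c1 = (a1 + a0) * (b1 + b0) - (c2 + c0)    # middle term: one extra multiplication

# Reassemble the product from the three partial results.
product = c2 * 100 + c1 * 10 + c0
print(product)  # 322
```

Only three digit multiplications occur: `a1 * b1`, `a0 * b0`, and `(a1 + a0) * (b1 + b0)`.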

Of course, there is nothing special about the numbers we just multiplied. For any pair of two-digit numbers a = a₁a₀ and b = b₁b₀, their product c can be computed by the formula

c = a ∗ b = c₂10^2 + c₁10^1 + c₀,

where

c₂ = a₁ ∗ b₁ is the product of their first digits,

c₀ = a₀ ∗ b₀ is the product of their second digits,

c₁ = (a₁ + a₀) ∗ (b₁ + b₀) − (c₂ + c₀) is the product of the sum of the a's digits and the sum of the b's digits minus the sum of c₂ and c₀.

Now we apply this trick to multiplying two *n*-digit integers a and b where n is a positive even number. We divide both numbers in the middle: a₁ and a₀ denote the first and second halves of the a's digits, and b₁ and b₀ those of the b's digits. In these notations, a = a₁a₀ implies that a = a₁10^(n/2) + a₀, and b = b₁b₀ implies that b = b₁10^(n/2) + b₀. Therefore, taking advantage of the same trick we used for two-digit numbers, we get

c = a ∗ b = (a₁10^(n/2) + a₀) ∗ (b₁10^(n/2) + b₀)

= (a₁ ∗ b₁)10^n + (a₁ ∗ b₀ + a₀ ∗ b₁)10^(n/2) + (a₀ ∗ b₀)

= c₂10^n + c₁10^(n/2) + c₀,

where

c₂ = a₁ ∗ b₁ is the product of their first halves,

c₀ = a₀ ∗ b₀ is the product of their second halves,

c₁ = (a₁ + a₀) ∗ (b₁ + b₀) − (c₂ + c₀) is the product of the sum of the a's halves and the sum of the b's halves minus the sum of c₂ and c₀.

If n/2 is even, we can apply the same method for computing the products c₂, c₀, and c₁. Thus, if n is a power of 2, we have a recursive algorithm for computing the product of two *n*-digit integers. In its pure form, the recursion is stopped when n becomes 1; it can also be stopped when we deem n small enough to multiply the numbers of that size directly.
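The recursive scheme just described can be sketched in Python. This is a teaching sketch, not an optimized big-integer routine; the function name and the use of Python's built-in arbitrary-precision integers are our choices for illustration:

```python
def karatsuba(a: int, b: int) -> int:
    """Multiply nonnegative integers by the divide-and-conquer trick above."""
    if a < 10 or b < 10:                          # recursion stopped at one digit
        return a * b
    half = max(len(str(a)), len(str(b))) // 2
    a1, a0 = divmod(a, 10 ** half)                # a = a1 * 10^half + a0
    b1, b0 = divmod(b, 10 ** half)                # b = b1 * 10^half + b0
    c2 = karatsuba(a1, b1)                        # product of the first halves
    c0 = karatsuba(a0, b0)                        # product of the second halves
    c1 = karatsuba(a1 + a0, b1 + b0) - c2 - c0    # one multiplication for the middle term
    return c2 * 10 ** (2 * half) + c1 * 10 ** half + c0

print(karatsuba(23, 14))  # 322
```

Each call makes three recursive multiplications instead of the four that the formula c = a ∗ b would naively require.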

How many digit multiplications does this algorithm make? Since multiplication of *n*-digit numbers requires three multiplications of n/2-digit numbers, the recurrence for the number of multiplications M(n) is

M(n) = 3M(n/2) for n > 1, M(1) = 1.

Solving it by backward substitutions for n = 2^k yields

M(2^k) = 3M(2^(k−1)) = 3[3M(2^(k−2))] = 3^2 M(2^(k−2)) = . . . = 3^i M(2^(k−i)) = . . . = 3^k M(2^(k−k)) = 3^k.

Since k = log₂ n,

M(n) = 3^(log₂ n) = n^(log₂ 3) ≈ n^1.585.

(On the last step, we took advantage of the following property of logarithms: a^(log_b c) = c^(log_b a).)
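The recurrence and its closed-form solution can be checked numerically; the function below simply replays the recurrence for powers of 2:

```python
import math

def M(n: int) -> int:
    # Number of digit multiplications: M(n) = 3 * M(n/2), M(1) = 1.
    return 1 if n == 1 else 3 * M(n // 2)

for k in range(6):
    n = 2 ** k
    # M(n) = 3^k = n^(log2 3): the expressions agree for every power of 2.
    assert M(n) == 3 ** k == round(n ** math.log2(3))

print(M(1024))  # 3**10 = 59049
```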

But what about additions and subtractions? Have we not decreased the number of multiplications by requiring more of those operations? Let A(n) be the number of digit additions and subtractions executed by the above algorithm in multiplying two *n*-digit decimal integers. Besides the 3A(n/2) of these operations needed to compute the three products of n/2-digit numbers, the above formulas require five additions and one subtraction. Hence, we have the recurrence

A(n) = 3A(n/2) + cn for n > 1, A(1) = 1.

Applying the Master Theorem, which was stated in the beginning of the chapter, we obtain A(n) ∈ Θ(n^(log₂ 3)), which means that the total number of additions and subtractions has the same asymptotic order of growth as the number of multiplications.
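One way to see the Master Theorem's verdict concretely is to replay the recurrence with a fixed constant (c = 2 below is our illustrative assumption, with A(1) = 0 to keep the closed form simple) and watch the ratio A(n)/n^(log₂ 3) level off:

```python
import math

def A(n: int) -> int:
    # A(n) = 3 * A(n/2) + c*n with c = 2 chosen arbitrarily for illustration.
    return 0 if n == 1 else 3 * A(n // 2) + 2 * n

# If A(n) is in Theta(n^(log2 3)), this ratio should approach a constant
# (4 for this particular choice of c and base case).
ratios = [A(2 ** k) / (2 ** k) ** math.log2(3) for k in range(1, 16)]
print(ratios[0], ratios[-1])
```

The ratio climbs monotonically toward its limit, confirming that the cn term does not change the Θ(n^(log₂ 3)) order of growth.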

The asymptotic advantage of this algorithm notwithstanding, how practical is it? The answer depends, of course, on the computer system and the quality of the program implementing the algorithm, which might explain the rather wide disparity of reported results. On some machines, the divide-and-conquer algorithm has been reported to outperform the conventional method on numbers only 8 decimal digits long and to run more than twice as fast on numbers over 300 decimal digits long, the range of particular importance for modern cryptography. Wherever this "crossover point" happens to be on a particular machine, it is worth switching to the conventional algorithm once the multiplicands become smaller than the crossover point. Finally, if you program in an object-oriented language such as Java, C++, or Smalltalk, you should also be aware that these languages have special classes for dealing with large integers.

Discovered by the 23-year-old Russian mathematician Anatoly Karatsuba in 1960, the divide-and-conquer algorithm proved wrong the then-prevailing opinion that the time efficiency of any integer multiplication algorithm must be in Ω(n^2). The discovery encouraged researchers to look for even (asymptotically) faster algorithms for this and other algebraic problems. We will see such an algorithm in the next section.
