
**Challenges of Numerical Algorithms**

**Numerical analysis** is usually described as the branch of computer science concerned with algorithms for solving mathematical problems. This description needs an important clarification: the problems in question are problems of “continuous” mathematics—solving equations and systems of equations, evaluating such functions as sin *x*, ln *x*, and *e*^{x}, computing integrals, and the like—as opposed to problems of discrete mathematics dealing with such structures as graphs, trees, permutations, and combinations.

We are not going to discuss the variety of difficulties posed by modeling, the task of describing a real-life phenomenon in mathematical terms. Assuming that this has already been done, what principal obstacles to solving a mathematical problem do we face? The first major obstacle is the fact that most numerical analysis problems cannot be solved exactly. They have to be solved approximately, and this is usually done by replacing an infinite object by a finite approximation. For example, the value of *e*^{x} at a given point *x* can be computed by approximating its infinite Taylor series by the *n*th-degree Taylor polynomial:

*e*^{x} ≈ 1 + *x* + *x*^{2}/2! + . . . + *x*^{n}/*n*!   (11.6)

To give another example, the definite integral of a function can be approximated by a finite weighted sum of its values, as in the **composite trapezoidal rule** that you might remember from your calculus class:

∫_{a}^{b} *f*(*x*) *dx* ≈ (*h*/2)[*f*(*x*_{0}) + 2*f*(*x*_{1}) + . . . + 2*f*(*x*_{n−1}) + *f*(*x*_{n})],   (11.7)

where *h* = (*b* − *a*)/*n* and *x*_{i} = *a* + *ih* for *i* = 0, 1, . . . , *n*.

The errors of such approximations are called **truncation errors**. One of the major tasks in numerical analysis is to estimate the magnitudes of truncation errors. This is typically done by using calculus tools, from elementary to quite advanced. For example, for approximation (11.6) we have

|*e*^{x} − (1 + *x* + *x*^{2}/2! + . . . + *x*^{n}/*n*!)| ≤ (*M*/(*n* + 1)!)|*x*|^{n+1},   (11.8)

where *M* = max *e*^{ξ} on the segment with the endpoints at 0 and *x*.

For example, if we want to compute *e*^{0.5} by formula (11.6) and guarantee that the truncation error is smaller than 10^{−4}, we can proceed as follows. First, we estimate the bound’s constant *M*: since the maximum of *e*^{ξ} for 0 ≤ ξ ≤ 0.5 is *e*^{0.5} < 2, we can take *M* = 2. This reduces the requirement on the truncation error to the inequality

(2/(*n* + 1)!) 0.5^{n+1} < 10^{−4}.

Computing the left-hand side for the first few values of *n*, we see that the smallest value of *n* for which this inequality holds is 5.
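This calculation is easy to check empirically. The following Python sketch (ours, not from the text) evaluates the Taylor polynomial (11.6) at *x* = 0.5 for increasing degrees and prints the actual truncation error:

```python
import math

def exp_taylor(x, n):
    """nth-degree Taylor polynomial of e**x about 0: sum of x**k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

for n in range(1, 7):
    error = abs(math.exp(0.5) - exp_taylor(0.5, n))
    print(f"n = {n}: truncation error = {error:.2e}")
```

In agreement with the estimate above, *n* = 5 is the first degree for which the error drops below 10^{−4} (the actual error at *n* = 5 is about 2.3 · 10^{−5}).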

Similarly, for approximation (11.7), the standard bound of the truncation error is given by the inequality

|∫_{a}^{b} *f*(*x*) *dx* − (*h*/2)[*f*(*x*_{0}) + 2*f*(*x*_{1}) + . . . + 2*f*(*x*_{n−1}) + *f*(*x*_{n})]| ≤ ((*b* − *a*)*h*^{2}/12) *M*_{2},   (11.9)

where *M*_{2} = max |*f*″(*x*)| on the interval *a* ≤ *x* ≤ *b*.
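To see the bound in action, here is a small sketch (ours, not from the text) that applies the composite trapezoidal rule to sin *x* on [0, π], where the exact integral is 2 and *M*_{2} = max |*f*″(*x*)| = 1:

```python
import math

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width h = (b - a)/n."""
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return h * s / 2

a, b = 0.0, math.pi
for n in (4, 16, 64):
    h = (b - a) / n
    approx = trapezoidal(math.sin, a, b, n)
    bound = (b - a) * h**2 / 12        # truncation-error bound with M2 = 1
    print(f"n = {n:3d}: actual error = {abs(2.0 - approx):.2e}, bound = {bound:.2e}")
```

The actual error stays below the bound and shrinks roughly by a factor of 16 each time *n* is quadrupled, as the *h*^{2} term predicts.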

The other type of errors, called **round-off errors**, are caused by the limited accuracy with which we can represent real numbers in a digital computer. These errors arise not only for all irrational numbers (which, by definition, require an infinite number of digits for their exact representation) but for many rational numbers as well. In the overwhelming majority of situations, real numbers are represented as floating-point numbers,

± 0.*d*_{1}*d*_{2} . . . *d*_{p} · *B*^{E},   (11.10)

where *B* is the number base, usually 2 or 16 (or, for unsophisticated calculators, 10); *d*_{1}, *d*_{2}, . . . , *d*_{p} are digits (0 ≤ *d*_{i} < *B* for every *i*, with *d*_{1} > 0 unless the number is 0) representing together the fractional part of the number, called its **mantissa**; and *E* is an integer exponent. The accuracy of the floating-point representation depends on the number of *significant digits* *p* in representation (11.10). Most computers permit two or even three levels of precision—single, double, and extended—which differ in the number of significant digits they carry.
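Python’s `float` is a double-precision number, but the standard `struct` module can round-trip a value through single precision, which makes the dependence of accuracy on *p* visible. A minimal sketch (the helper name `to_single` is ours):

```python
import struct

def to_single(x):
    """Round a Python float (IEEE double) to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 / 3.0                 # double precision: about 16 significant decimal digits
xs = to_single(x)             # single precision: only about 7 digits survive
print(f"double: {x:.17f}")
print(f"single: {xs:.17f}")
print(f"representation error introduced: {abs(x - xs):.2e}")
```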

As with an approximation of any kind, it is important to distinguish between the **absolute error** and the **relative error** of representing a number *α*^{∗} by its approximation *α*:

absolute error = |*α* − *α*^{∗}|,   (11.11)

relative error = |*α* − *α*^{∗}|/|*α*^{∗}|.   (11.12)

(The relative error is undefined if *α*^{∗} = 0.)

Very large and very small numbers cannot be represented in floating-point arithmetic because of the phenomena called overflow and underflow, respectively. An overflow happens when an arithmetic operation yields a result outside the range of the computer’s floating-point numbers. Typical examples of overflow arise from the multiplication of large numbers or division by a very small number. Sometimes we can eliminate this problem by making a simple change in the order in which an expression is evaluated.

Underflow occurs when the
result of an operation is a nonzero fraction of such a small magnitude that it
cannot be represented as a nonzero floating-point number. Usually, underflow
numbers are replaced by zero, but a special signal is generated by hardware to
indicate such an event has occurred.
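These phenomena are easy to reproduce. The Python sketch below (ours, not from the text) shows an overflow to infinity, an overflow reported as an exception, the reordering trick mentioned above, and an underflow silently replaced by zero:

```python
import math

# Overflow: the result exceeds the largest representable double (~1.8e308).
print(1e200 * 1e200)              # multiplication overflows to inf
try:
    math.exp(1000)                # math.exp signals overflow with an exception
except OverflowError as err:
    print("OverflowError:", err)

# Reordering an expression can avoid overflow on intermediate results:
print(1e200 * 1e200 / 1e201)      # inf: the product overflows before the division
print(1e200 / 1e201 * 1e200)      # ~1e199: same value, computed safely

# Underflow: a nonzero result too small to represent is replaced by zero.
print(1e-300 * 1e-300)            # 0.0
```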

It is important to remember
that, in addition to inaccurate representation of numbers, the arithmetic
operations performed in a computer are not always exact, either. In particular,
subtracting two nearly equal floating-point numbers may cause a large increase
in relative error. This phenomenon is called **subtractive cancellation**.

**EXAMPLE 1** Consider two irrational numbers

*α*^{∗} = *π* = 3.14159265 . . .  and  *β*^{∗} = *π* − 6 · 10^{−7} = 3.14159205 . . . ,

represented by the floating-point numbers *α* = 0.3141593 · 10^{1} and *β* = 0.3141592 · 10^{1}, respectively. The relative errors of these representations are both smaller than 10^{−6}. Yet the relative error of representing the difference *γ*^{∗} = *α*^{∗} − *β*^{∗} = 6 · 10^{−7} by the difference of the floating-point values *γ* = *α* − *β* = 10^{−6} is

|*γ* − *γ*^{∗}|/*γ*^{∗} = (10^{−6} − 6 · 10^{−7})/(6 · 10^{−7}) = 2/3,

which is very large for a relative error despite quite accurate approximations for both *α* and *β*.
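The effect is easy to reproduce in Python. The sketch below uses our reconstruction of the example’s values (π and π − 6 · 10^{−7}, each represented with seven significant digits), so the specific numbers are an assumption rather than a quotation of the text:

```python
import math

alpha_star = math.pi                 # "exact" values
beta_star = math.pi - 6e-7
alpha = 0.3141593e1                  # seven-significant-digit representations
beta = 0.3141592e1

# Each representation is very accurate in relative terms...
print(abs(alpha - alpha_star) / alpha_star)      # on the order of 1e-7
print(abs(beta - beta_star) / beta_star)         # even smaller
# ...but cancellation inflates the relative error of their difference.
diff, diff_star = alpha - beta, alpha_star - beta_star
print(abs(diff - diff_star) / diff_star)         # roughly 2/3
```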

Note that we may get a
significant magnification of round-off error if a low-accuracy difference is
used as a divisor. (We already encountered this problem in discussing Gaussian
elimination in Section 6.2. Our solution there was to use partial pivoting.)
Many numerical algorithms involve thousands or even millions of arithmetic
operations for typical inputs. For such algorithms, the propagation of
round-off errors becomes a major concern from both the practical and
theoretical standpoints. For some algorithms, round-off errors can propagate
through the algorithm’s operations with increasing effect. This highly
undesirable property of a numerical algorithm is called **instability**. Some problems exhibit such a high level of sensitivity to changes in their input that it is all but impossible to design a stable algorithm to solve them. Such problems are called **ill-conditioned**.

**EXAMPLE 2** Consider the following system of two linear equations in two unknowns:

1.001*x* + 0.999*y* = 2

0.999*x* + 1.001*y* = 2.

Its only solution is *x* = 1, *y* = 1. To see how sensitive this system is to small changes in its input, consider the system with the same coefficients but a slightly perturbed right-hand side:

1.001*x* + 0.999*y* = 2.002

0.999*x* + 1.001*y* = 1.998.

The only solution to this system is *x* = 2, *y* = 0, which differs dramatically from the solution of the original system even though the right-hand side changed by only 0.1%.
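Because a 2 × 2 system has a closed-form solution, a few lines of Python suffice to reproduce this sensitivity (the helper name `solve2x2` is ours):

```python
def solve2x2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 linear system [a11 a12; a21 a22] (x, y) = (b1, b2) by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - a12 * b2) / det
    y = (a11 * b2 - b1 * a21) / det
    return x, y

# Original right-hand side: solution (1, 1).
print(solve2x2(1.001, 0.999, 0.999, 1.001, 2.0, 2.0))
# Right-hand side perturbed by 0.1%: the solution jumps to (2, 0).
print(solve2x2(1.001, 0.999, 0.999, 1.001, 2.002, 1.998))
```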

We conclude with the well-known problem of finding real roots of the quadratic equation

*ax*^{2} + *bx* + *c* = 0   (11.13)

with real coefficients *a*, *b*, *c* (*a* ≠ 0), by the familiar formula

*x*_{1,2} = (−*b* ± √(*b*^{2} − 4*ac*))/(2*a*).   (11.14)

If *b*^{2} is much larger than |4*ac*|, then √(*b*^{2} − 4*ac*) is very close to |*b*|, and computing one of the two roots by (11.14) requires subtracting two nearly equal numbers: a recipe for subtractive cancellation. The remedy is to compute first the root that involves no cancellation, choosing the minus sign in (11.14) if *b* > 0 and the plus sign if *b* < 0, and then to obtain the other root from the product of the roots, *x*_{1}*x*_{2} = *c*/*a*. (The case of *b* = 0 can be considered with either of the other two cases.)

There are several other obstacles to applying formula (11.14), which are related to limitations of floating-point arithmetic: if *a* is very small, division by 2*a* can cause overflow; evaluating *b*^{2} − 4*ac* can itself overflow or underflow for very large or very small coefficients; and so on.
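A minimal sketch of a cancellation-avoiding solver along these lines; the function name `quadratic_roots` and the test coefficients are our own choices, not from the text:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 (a != 0), computed so that the
    quadratic formula never subtracts two nearly equal numbers."""
    d = b * b - 4 * a * c
    if d < 0:
        return ()                           # no real roots
    sqrt_d = math.sqrt(d)
    # Pick the sign that adds quantities of the same sign in the numerator.
    if b >= 0:
        x1 = (-b - sqrt_d) / (2 * a)
    else:
        x1 = (-b + sqrt_d) / (2 * a)
    # The other root follows from the product of the roots: x1 * x2 = c / a.
    x2 = c / (a * x1) if x1 != 0 else 0.0
    return (x1, x2)

# Hypothetical ill-behaved case: x**2 - 1e8*x + 1 = 0 has roots near 1e8 and 1e-8.
stable = quadratic_roots(1.0, -1e8, 1.0)
naive_small = (1e8 - math.sqrt(1e16 - 4.0)) / 2.0    # formula (11.14) verbatim
print(stable, naive_small)                            # the naive small root is ruined
```

The naive computation of the small root loses essentially all its significant digits to cancellation, while the stable version recovers it to full precision.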

Hopefully, this brief
overview has piqued your interest enough for you to seek more information in
the many books devoted exclusively to numerical algorithms. In this book, we
discuss one more topic in the next chapter: three classic methods for solving
equations in one unknown.

**Exercises 11.4**

Some textbooks define the number of significant digits in the approximation of a number *α*^{∗} by a number *α* as the largest nonnegative integer *n* for which |*α* − *α*^{∗}|/|*α*^{∗}| < 5 · 10^{−n}. According to this definition, how many significant digits are there in the approximation of *π* by

**a.** 3.1415?   **b.** 3.1417?

If *α* = 1.5 is known to approximate some number *α*^{∗} with the absolute error not exceeding 10^{−2}, find

**a.** the range of possible values of *α*^{∗}.

**b.** the range of the relative errors of these approximations.

**10.** Apply four iterations of Newton’s method to compute √3 and estimate the absolute and relative errors of this approximation.

**SUMMARY**

Given a class of algorithms
for solving a particular problem, a *lower
bound* indicates the best possible efficiency *any* algorithm from this class can have.

A *trivial lower bound* is based on counting the number of items in the
problem’s input that must be processed and the number of output items that need
to be produced.

An *information-theoretic lower bound* is usually obtained through a mechanism of *decision trees*. This technique is particularly useful for comparison-based algorithms for sorting and searching. Specifically:

Any general comparison-based sorting algorithm must perform at least ⌈log_{2} *n*!⌉ ≈ *n* log_{2} *n* key comparisons in the worst case.

Any general comparison-based algorithm for searching a sorted array must perform at least ⌈log_{2}(*n* + 1)⌉ key comparisons in the worst case.
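Both bounds are easy to tabulate with a short Python sketch (the helper names are our own):

```python
import math

def sort_lower_bound(n):
    """Minimum worst-case comparisons for comparison-based sorting of n keys."""
    return math.ceil(math.log2(math.factorial(n)))

def search_lower_bound(n):
    """Minimum worst-case comparisons for searching a sorted n-element array."""
    return math.ceil(math.log2(n + 1))

for n in (10, 100):
    print(f"n = {n}: sorting >= {sort_lower_bound(n)}, searching >= {search_lower_bound(n)}")
```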

The *adversary method* for establishing lower bounds is based on
following the logic of a malevolent adversary who forces the algorithm into the
most time-consuming path.

A lower bound can also be
established by *reduction*, i.e., by
reducing a problem with a known lower bound to the problem in question.

*Complexity theory* seeks to classify problems according to their computational complexity. The principal split is between *tractable* and *intractable* problems: problems that can and cannot be solved in polynomial time, respectively. For purely technical reasons, complexity theory concentrates on *decision problems*, which are problems with yes/no answers.

The *halting problem* is an example of an *undecidable* decision problem; i.e., it cannot be solved by any
algorithm.

**P** is the class of all decision problems that can be solved in polynomial time. *NP* is the class of all decision problems whose proposed solutions can be verified in polynomial time.

Many important problems in *NP* (such as the Hamiltonian circuit
problem) are known to be *NP*-complete:
all other problems in *NP* are
reducible to such a problem in polynomial time. The first proof of a problem’s *NP*-completeness was published by S. Cook
for the *CNF-satisfiability problem*.

It is not known whether **P** = *NP* or whether **P** is a proper subset of *NP*.

*Numerical analysis* is a branch of computer science dealing with solving continuous mathematical problems. Two types of errors occur in solving a majority of such problems: truncation errors and round-off errors. *Truncation errors* stem from replacing infinite objects by their finite approximations.

*Round-off errors* are due to inaccuracies in representing numbers in a digital computer.

*Subtractive cancellation* happens as a result of subtracting two nearly equal floating-point numbers. It may lead to a sharp increase in the relative round-off error and therefore should be avoided, either by changing the expression’s form or by using higher precision in computing such a difference.

Writing a general computer program for solving quadratic equations *ax*^{2} + *bx* + *c* = 0 is a surprisingly difficult task: it must deal correctly with special values of the coefficients as well as with the limitations of floating-point arithmetic such as overflow, underflow, and subtractive cancellation.


Introduction to the Design and Analysis of Algorithms : Limitations of Algorithm Power : Challenges of Numerical Algorithms
