
**Algorithms for Solving Nonlinear Equations**

In this section, we discuss several algorithms for solving nonlinear equations in one unknown,

*f (x)* = 0*.* (12.4)

There are several reasons for this choice among subareas of numerical analysis. First of all, this is an extremely important problem from both a practical and theoretical point of view. It arises as a mathematical model of numerous phenomena in the sciences and engineering, both directly and indirectly. (Recall, for example, that the standard calculus technique for finding extremum points of a function *f (x)* is based on finding its critical points, which are the roots of the equation *f ′(x)* = 0.)

Let us start by dispelling a misconception you might have about solving equations. Your experience with
equation solving from middle school to calculus courses might have led you to
believe that we can solve equations by “factoring” or by applying a readily
available formula. Sorry to break it to you, but you have been deceived (with
the best of educational intentions, of course): you were able to solve all
those equations only because they had been carefully selected to make it
possible. In general, we cannot solve equations exactly and need approximation
algorithms to do so.

This is true even for solving the quadratic equation

*ax*^{2} + *bx* + *c* = 0*,*

because the standard formula for its roots

*x*_{1,2} = (−*b* ± √(*b*^{2} − 4*ac*))*/*2*a*

requires computing the square root, which can be done only approximately for most positive numbers. In addition, as we discussed in Section 11.4, this canonical formula needs to be modified to avoid the possibility of low-accuracy solutions.
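As a concrete illustration of such a modification (a sketch of one standard cancellation-avoiding variant, not necessarily the exact formula of Section 11.4): when *b*^{2} is much larger than 4*ac*, computing the root from −*b* + √(*b*^{2} − 4*ac*) loses accuracy to subtractive cancellation, so one computes the "safe" root first and recovers the other from the product of the roots, *x*_{1}*x*_{2} = *c/a*.

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 (two real roots and a != 0 assumed).

    Uses the cancellation-avoiding form: compute the root in which -b and
    the square root have the same sign, then obtain the other root from
    the identity x1 * x2 = c / a.
    """
    disc = math.sqrt(b * b - 4 * a * c)
    # -(b + sign(b)*disc) adds quantities of equal sign: no cancellation
    q = -(b + math.copysign(disc, b)) / 2
    return q / a, c / q

roots = quadratic_roots(1, -5, 6)   # x^2 - 5x + 6 = 0 has roots 2 and 3
```

For a well-conditioned case like the one above both variants agree, but for coefficients such as `(1.0, -1e8, 1.0)` the naive formula would return a badly inaccurate small root, while this form keeps full relative accuracy.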

What about formulas for roots
of polynomials of degrees higher than two? Such formulas for third- and
fourth-degree polynomials exist, but they are too cumbersome to be of practical
value. For polynomials of degrees higher than four, there can be no general
formula for their roots that would involve only the polynomial’s coefficients,
arithmetical operations, and radicals (taking roots). This remarkable result
was published first by the Italian mathematician and physician Paolo Ruffini
(1765–1822) in 1799 and rediscovered a quarter century later by the Norwegian
mathematician Niels Abel (1802–1829); it was developed further by the French
mathematician Evariste Galois (1811–1832).^{4}

The impossibility of such a formula can hardly be considered a great disappointment. As the great German
mathematician Carl Friedrich Gauss (1777–1855) put it in his thesis of 1801,
the algebraic solution of an equation was no better than devising a symbol for
the root of the equation and then saying that the equation had a root equal to
the symbol [OCo98].

We can interpret solutions to equation (12.4) as points at which the graph of the function *f (x)* intersects the *x*-axis.

**Bisection Method**

This algorithm is based on the observation that the graph of a continuous function must intersect the *x*-axis between two points *a* and *b* at which the function has values of opposite signs.

The validity of this observation is proved as a theorem in calculus courses, and we take it for granted here. It serves as the basis of the following algorithm, called the **bisection method**, for solving equation (12.4). Starting with an interval [*a, b*] at whose endpoints *f (x)* has opposite signs, the algorithm computes the value of *f (x)* at the middle point

*x*_{mid} = (*a* + *b*)*/*2*.*

If *f (x*_{mid}*)* = 0, a root has been found and the algorithm stops. Otherwise, the search continues on [*a, x*_{mid}] or on [*x*_{mid}*, b*], depending on which of the two halves has endpoint values of *f (x)* of opposite signs.

Since we cannot expect the bisection algorithm to stumble on the exact value of the equation's root and stop, we need a different criterion for stopping the algorithm. We can stop the algorithm after the interval [*a*_{n}*, b*_{n}] bracketing some root *x*^{∗} becomes so small that its middle point *x*_{n} lies within a required distance of the root. Since *x*^{∗} lies in [*a*_{n}*, b*_{n}] and *x*_{n} is this interval's middle point, halving the initial interval [*a*_{1}*, b*_{1}] on each of *n* iterations yields

|*x*_{n} − *x*^{∗}| ≤ (*b*_{n} − *a*_{n})*/*2 = (*b*_{1} − *a*_{1})*/*2^{n}*.* (12.7)

This inequality implies that the sequence of approximations {*x*_{n}} can be made as close to root *x*^{∗} as we wish by choosing *n* large enough.

Here is pseudocode of the bisection method.

**ALGORITHM** *Bisection(f (x), a, b, eps, N)*

//Implements the bisection method for finding a root of *f (x)* = 0
//Input: Two real numbers *a* and *b*, *a < b*,
//a continuous function *f (x)* on [*a, b*] with *f (a)f (b) <* 0,
//an upper bound on the absolute error *eps >* 0,
//an upper bound on the number of iterations *N*
//Output: An approximate (or exact) value *x* of a root in (*a, b*),
//or an interval bracketing the root if the iteration number limit is reached
*n* ← 1 //iteration count
**while** *n* ≤ *N* **do**
  *x* ← (*a* + *b*)*/*2
  **if** *x* − *a < eps* **return** *x*
  *fval* ← *f (x)*
  **if** *fval* = 0 **return** *x*
  **if** *fval* ∗ *f (a) <* 0  *b* ← *x*
  **else** *a* ← *x*
  *n* ← *n* + 1
**return** "iteration limit", *a, b*
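The pseudocode translates directly into Python. The function name and the demonstration on equation (12.9), *x*^{3} − *x* − 1 = 0 over the interval [0, 2], are illustrative choices; at the iteration limit this sketch raises an exception rather than returning the bracketing interval.

```python
def bisection(f, a, b, eps=1e-6, max_iter=100):
    """Find an approximate root of f(x) = 0 on [a, b] by bisection.

    Assumes f is continuous on [a, b] with f(a) * f(b) < 0.
    """
    if f(a) * f(b) >= 0:
        raise ValueError("f must have values of opposite signs at a and b")
    for _ in range(max_iter):
        x = (a + b) / 2          # middle of the current bracketing interval
        if x - a < eps:          # half-length of [a, b] is below the error bound
            return x
        fval = f(x)
        if fval == 0:            # stumbled on the exact root
            return x
        if fval * f(a) < 0:      # opposite signs at a and x: root in left half
            b = x
        else:                    # otherwise the root is in the right half
            a = x
    # the pseudocode returns the interval here; we raise for simplicity
    raise RuntimeError(f"iteration limit reached; root bracketed in [{a}, {b}]")

# Equation (12.9): x^3 - x - 1 = 0, whose only real root is about 1.32472
root = bisection(lambda x: x**3 - x - 1, 0, 2, eps=1e-6)
```

The returned value is guaranteed to lie within `eps` of the root, by inequality (12.7).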

Note that we can use inequality (12.7) to find in advance the number of iterations that should suffice, at least in theory, to achieve a preselected accuracy level. Indeed, it suffices to choose the number of iterations *n* large enough to satisfy (*b*_{1} − *a*_{1})*/*2^{n} *< eps*, i.e.,

*n >* log_{2}((*b*_{1} − *a*_{1})*/eps*)*.*
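As an arithmetic illustration of this a priori bound (the interval [0, 2] and tolerance 10^{−6} are chosen only for concreteness):

```python
import math

def iterations_needed(a, b, eps):
    """Smallest n with (b - a) / 2**n <= eps, i.e. n >= log2((b - a) / eps)."""
    return math.ceil(math.log2((b - a) / eps))

# For the initial interval [0, 2] and eps = 10**-6, 21 iterations suffice.
n = iterations_needed(0, 2, 1e-6)
```

Note that the bound is pessimistic only by at most one halving; doubling the accuracy requirement costs exactly one extra iteration.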

**EXAMPLE 1** Let us consider equation (12.9):

*x*^{3} − *x* − 1 = 0*.* (12.9)

It has one real root. (See Figure 12.18 for the graph of *f (x)* = *x*^{3} − *x* − 1.)

Figure 12.19 contains a trace
of the first eight iterations of the bisection method applied to equation
(12.9).

Thus, we obtained *x*_{8} = 1*.*3203125 as an approximate value for the root *x*^{∗} of equation (12.9), and we can guarantee that

|1*.*3203125 − *x*^{∗}| *<* 10^{−2}*.*

Moreover, since the signs of the function *f (x)* at *a*_{8}, *b*_{8}, and *x*_{8} are known, we can bracket the root even more tightly by the half of [*a*_{8}*, b*_{8}] at whose endpoints *f (x)* has values of opposite signs.

The principal weakness of the
bisection method as a general algorithm for solving equations is its slow rate
of convergence compared with other known methods. It is for this reason that
the method is rarely used. Also, it cannot be extended to solving more general
equations and systems of equations. But it does have several strong points. It
always converges to a root whenever we start with an

interval whose properties are
very easy to check. And it does not use derivatives of the function ** f (x)** as some faster methods do.

What important algorithm does
the method of bisection remind you of? If you have found it to closely resemble
binary search, you are correct. Both of them solve variations of the searching
problem, and they are both divide-by-half algorithms. The principal difference
lies in the problem’s domain: discrete for binary search and continuous for the
bisection method. Also note that while binary search requires its input array
to be sorted, the bisection method does not require its function to be
nondecreasing or nonincreasing. Finally, whereas binary search is very fast,
the bisection method is relatively slow.

**Method of False Position**

The **method of false position** (also known by its Latin name, *regula falsi*) is to bisection what interpolation search is to binary search. Like the bisection method, it maintains an interval at whose endpoints the continuous function *f (x)* has values of opposite signs, but it computes the next approximation not as the middle of the interval but as the *x*-intercept of the straight line through the points (*a*_{n}*, f (a*_{n}*))* and (*b*_{n}*, f (b*_{n}*))*. You are asked in the exercises to show that the formula for this *x*-intercept can be written as

*x*_{n} = (*a*_{n}*f (b*_{n}*)* − *b*_{n}*f (a*_{n}*))/(f (b*_{n}*)* − *f (a*_{n}*)).* (12.10)

**EXAMPLE
2 **Figure 12.21 contains the results of the first eight iterations of
this** **method for solving equation (12.9).

Although for this example the
method of false position does not perform as well as the bisection method, for
many instances it yields a faster converging sequence.
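A sketch of the method in Python, using formula (12.10) for the *x*-intercept. The stopping rule on |*f (x)*| and the iteration cap are illustrative choices: unlike bisection, the bracketing interval need not shrink to zero, because one endpoint can stay fixed for many iterations.

```python
def false_position(f, a, b, eps=1e-6, max_iter=1000):
    """Method of false position (regula falsi) for f(x) = 0 on [a, b].

    Assumes f is continuous with f(a) * f(b) < 0.  Each step replaces
    bisection's midpoint by the x-intercept of the chord through
    (a, f(a)) and (b, f(b)) -- formula (12.10).
    """
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f must have values of opposite signs at a and b")
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)   # chord's x-intercept
        fx = f(x)
        if abs(fx) < eps:                   # |f(x)| small: accept x
            return x
        if fx * fa < 0:                     # root lies in [a, x]
            b, fb = x, fx
        else:                               # root lies in [x, b]
            a, fa = x, fx
    return x

# Equation (12.9) again: x^3 - x - 1 = 0 on [0, 2]
root = false_position(lambda x: x**3 - x - 1, 0, 2)
```

On this particular equation the chords approach the root from the left while the right endpoint stays at 2, which is exactly the one-sided behavior that can make regula falsi slower than bisection on some instances.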

**Newton’s Method**

**Newton’s method**, also called the *Newton–Raphson method*, is one of the oldest and most important numerical algorithms for solving equations. Starting from an initial approximation *x*_{0}, it obtains each next approximation as the *x*-intercept of the tangent line to the graph of *f (x)* at the current point. The analytical formula for the elements of the approximation sequence turns out to be

*x*_{n+1} = *x*_{n} − *f (x*_{n}*)/f ′(x*_{n}*),* *n* = 0*,* 1*, . . . .* (12.11)

In most cases, Newton’s
algorithm guarantees convergence of sequence (12.11) if an initial approximation
*x*_{0} is chosen “close enough” to
the root. (Precisely defined prescriptions for choosing *x*_{0} can be found in numerical analysis textbooks.)
It may converge for initial approximations far from the root as well, but this
is not always true.

**EXAMPLE 3** Let us apply Newton’s method to the equation *x*^{2} − *a* = 0, where *a >* 0, whose positive root is √*a*. In this case, formula (12.11) becomes

*x*_{n+1} = *x*_{n} − (*x*_{n}^{2} − *a*)*/*2*x*_{n} = (*x*_{n} + *a/x*_{n}*)/*2*,*

which is exactly the formula we used in Section 11.4 for computing approximate values of square roots.
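In code, this square-root recurrence looks as follows (a sketch; the starting point and tolerance are arbitrary choices):

```python
def newton_sqrt(a, x0=1.0, tol=1e-12):
    """Approximate sqrt(a) for a > 0 by Newton's method on f(x) = x**2 - a,
    which simplifies to the averaging rule x <- (x + a/x) / 2."""
    x = x0
    while abs(x * x - a) > tol:
        x = (x + a / x) / 2
    return x

s = newton_sqrt(2.0)   # converges to 1.41421356...
```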

**EXAMPLE 4** Let us apply Newton’s method to equation (12.9), which we previously solved with the bisection method and the method of false position. Formula (12.11) for this case becomes

*x*_{n+1} = *x*_{n} − (*x*_{n}^{3} − *x*_{n} − 1)*/(*3*x*_{n}^{2} − 1*).*

As an initial element of the approximation sequence, we take, say, *x*_{0} = 2*.* Figure 12.23 contains the results of the first five iterations of Newton’s method.

You cannot fail to notice how
much faster Newton’s approximation sequence converges to the root than the
approximation sequences of both the bisection method and the method of false
position. This very fast convergence is typical of Newton’s method if an
initial approximation is close to the equation’s root. Note, however, that on
each iteration of this method we need to evaluate new values of the function
and its derivative, whereas the previous two methods require only one new value
of the function itself. Also, Newton’s method does not bracket a root as these
two methods do. Moreover, for an arbitrary function and arbitrarily chosen
initial approximation, its approximation sequence may diverge. And, because
formula (12.11) has the function’s derivative in the denominator, the method
may break down if it is equal to zero. In fact, Newton’s method is most effective when *f ′(x)* is bounded away from zero near root *x*^{∗}. In particular, if

|*f ′(x)*| ≥ *m >* 0

on the interval between *x*_{n} and *x*^{∗}, we can estimate the distance between *x*_{n} and *x*^{∗} by using the Mean Value Theorem of calculus:

*f (x*_{n}*)* − *f (x*^{∗}*)* = *f ′(c)(x*_{n} − *x*^{∗}*),*

where *c* is some point between *x*_{n} and *x*^{∗}. Since *f (x*^{∗}*)* = 0 and |*f ′(c)*| ≥ *m*, we obtain

|*x*_{n} − *x*^{∗}| ≤ |*f (x*_{n}*)*|*/m.* (12.12)

Formula (12.12) can be used as a criterion for stopping Newton’s algorithm when its right-hand side becomes smaller than a preselected accuracy level *ε*. Other possible stopping criteria are

|*x*_{n} − *x*_{n−1}| ≤ *ε* and |*f (x*_{n}*)*| ≤ *ε,*

where *ε* is a small positive number. Since the last two criteria do not necessarily imply closeness of *x*_{n} to root *x*^{∗}, they should be considered inferior to the one based on (12.12).

The shortcomings of Newton’s
method should not overshadow its principal strengths: fast convergence for an
appropriately chosen initial approximation and applicability to much more
general types of equations and systems of equations.

**Exercises 12.4**

**1. a.** Find on the Internet or in your library a procedure for finding a real root of the general cubic equation *ax*^{3} + *bx*^{2} + *cx* + *d* = 0 with real coefficients.

**b.** What general algorithm design technique is it based on?

**2.** Indicate how many roots each of the following equations has:

**a.** *xe*^{x} − 1 = 0  **b.** *x* − ln *x* = 0  **c.** *x* sin *x* − 1 = 0

**3. a.** Prove that if *p(x)* is a polynomial of an odd degree, then it must have at least one real root.

**b.** Prove that if *x*_{0} is a root of an *n*-degree polynomial *p(x)*, the polynomial can be factored into

*p(x)* = (*x* − *x*_{0}*)q(x),*

where *q(x)* is a polynomial of degree *n* − 1.

**c.** Prove that if *x*_{0} is a root of an *n*-degree polynomial *p(x)*, then

*p′(x*_{0}*)* = *q(x*_{0}*),*

where *q(x)* is the quotient of the division of *p(x)* by *x* − *x*_{0}.

**4.** Prove inequality (12.7).

**5.** Apply the bisection method to find the root of the equation

*x*^{3} + *. . .* = 0

with an absolute error smaller than 10^{−2}*.*

**6.** Derive formula (12.10) underlying the method of false position.

**7.** Apply the method of false position to find the root of the equation

*x*^{3} + *. . .* = 0

with an absolute error smaller than 10^{−2}*.*

**8.** Derive formula (12.11) underlying Newton’s method.

**9.** Apply Newton’s method to find the root of the equation

*x*^{3} + *. . .* = 0

with an absolute error smaller than 10^{−2}*.*

**10.** Give an example that shows that the approximation sequence of Newton’s method may diverge.

**11.** *Gobbling goat* There is a grassy field in the shape of a circle with a radius of 100 feet. A goat is attached by a rope to a hook at a fixed point on the field’s border. How long should the rope be to let the goat reach only half of the grass in the field?

**SUMMARY**

*Backtracking* and *branch-and-bound* are two algorithm design techniques for solving problems in which the number of choices grows at least exponentially with their instance size. Both techniques construct a solution one component at a time, trying to terminate the process as soon as one can ascertain that no solution can be obtained as a result of the choices already made. This approach makes it possible to solve many large instances of *NP*-hard problems in an acceptable amount of time.

Both backtracking and branch-and-bound employ, as their principal mechanism, a *state-space tree*—a rooted tree whose nodes represent partially constructed solutions to the problem in question. Both techniques terminate a node as soon as it can be guaranteed that no solution to the problem can be obtained by considering choices that correspond to the node’s descendants.

*Backtracking* constructs its state-space tree in the depth-first-search fashion in the majority of its applications. If the sequence of choices represented by a current node of the state-space tree can be developed further without violating the problem’s constraints, it is done by considering the first remaining legitimate option for the next component. Otherwise, the method backtracks by undoing the last component of the partially built solution and replacing it with the next alternative.

*Branch-and-bound* is an algorithm design technique that enhances the idea of generating a state-space tree with the idea of estimating the best value obtainable from a current node of the decision tree: if such an estimate is not superior to the best solution seen up to that point in the processing, the node is eliminated from further consideration.

Approximation algorithms are
often used to find approximate solutions to difficult problems of combinatorial
optimization. The *performance ratio*
is the principal metric for measuring the accuracy of such approximation
algorithms.

The *nearest-neighbor* and *multifragment
heuristic* are two simple greedy algorithms for approximating a solution to
the traveling salesman problem. The performance ratios of these algorithms are
unbounded above, even for the important subset of *Euclidean graphs*.

The *twice-around-the-tree* and *Christofides*
algorithms exploit the graph’s minimum spanning tree to construct an Eulerian
circuit and then transform it into a Hamiltonian circuit (an approximate
solution to the TSP) by shortcuts. For Euclidean graphs, the performance ratios
of these algorithms are 2 and 1.5, respectively.

*Local search heuristics*—the *2-opt*, *3-opt*, and *Lin-Kernighan* algorithms—work by replacing a few edges in the current tour to find a shorter one until no such replacement can be found. These algorithms are capable of finding in seconds a tour that is within a few percent of optimum for large Euclidean instances of the traveling salesman problem.

A sensible greedy algorithm
for the knapsack problem is based on processing an input’s items in descending
order of their value-to-weight ratios. For the continuous version of the
problem, this algorithm always yields an exact optimal solution.

*Polynomial approximation schemes *for the knapsack problem are polynomial-time
parametric algorithms that approximate solutions with any predefined accuracy
level.

Solving nonlinear equations
is one of the most important areas of numerical analysis. Although there are no
formulas for roots of nonlinear equations (with a few exceptions), several
algorithms can solve them approximately.

The *bisection method* and the *method
of false position* are continuous analogues of binary search and
interpolation search, respectively. Their principal advantage lies in
bracketing a root on each iteration of the algorithm.
