1. LU Decomposition
2. Computing a Matrix Inverse
3. Computing a Determinant

You are certainly familiar with systems of two
linear equations in two unknowns:

*a*_{11}*x* + *a*_{12}*y* = *b*_{1}

*a*_{21}*x* + *a*_{22}*y* = *b*_{2}

Recall that unless the coefficients of one
equation are proportional to the coefficients of the other, the system has a
unique solution. The standard method for finding this solution is to use either
equation to express one of the variables as a function of the other and then
substitute the result into the other equation, yielding a linear equation
whose solution is then used to find the value of the second variable.

In many applications, we need to solve a system
of *n* equations in *n* unknowns:

*a*_{11}*x*_{1} + *a*_{12}*x*_{2} + . . . + *a*_{1*n*}*x*_{*n*} = *b*_{1}

*a*_{21}*x*_{1} + *a*_{22}*x*_{2} + . . . + *a*_{2*n*}*x*_{*n*} = *b*_{2}

. . .

*a*_{*n*1}*x*_{1} + *a*_{*n*2}*x*_{2} + . . . + *a*_{*nn*}*x*_{*n*} = *b*_{*n*}

where *n* is a large number. Theoretically, we can solve
such a system by generalizing the substitution method for solving systems of
two linear equations (what general design technique would such a method be
based upon?); however, the resulting algorithm would be extremely cumbersome.

Fortunately, there is a much more elegant
algorithm for solving systems of linear equations called *Gaussian elimination.*
The idea of Gaussian elimination is to transform a system of *n* linear equations in
*n* unknowns into an equivalent system (i.e., a system with the same solution) whose
coefficient matrix is upper-triangular, that is, has all zeros below its main diagonal:

*a*′_{11}*x*_{1} + *a*′_{12}*x*_{2} + . . . + *a*′_{1*n*}*x*_{*n*} = *b*′_{1}

*a*′_{22}*x*_{2} + . . . + *a*′_{2*n*}*x*_{*n*} = *b*′_{2}

. . .

*a*′_{*nn*}*x*_{*n*} = *b*′_{*n*}

In matrix notation, this transformation takes *Ax* = *b* to *A*′*x* = *b*′, where *A*′ is upper-triangular.

(We added primes to the matrix elements and
right-hand sides of the new system to stress the point that their values differ
from their counterparts in the original system.)

Why is the system with the upper-triangular
coefficient matrix better than a system with an arbitrary coefficient matrix?
Because we can easily solve the system with an upper-triangular coefficient
matrix by back substitutions as follows. First, we can immediately find the
value of *x*_{*n*} from the last equation as *x*_{*n*} = *b*′_{*n*}/*a*′_{*nn*}; then we can substitute
this value into the next to last equation to get *x*_{*n*−1}, and so on, until we
substitute the known values of the last *n* − 1 variables into the first
equation, from which we find the value of *x*_{1}.

So how can we get from a system with an
arbitrary coefficient matrix *A* to an equivalent system with an
upper-triangular coefficient matrix *A*′? We can do it through a series of
the so-called *elementary operations*:

- exchanging two equations of the system
- replacing an equation with its nonzero multiple
- replacing an equation with a sum or difference of this equation and some multiple of another equation

Since no elementary operation can change a
solution to a system, any system that is obtained through a series of such
operations will have the same solution as the original one.

Let us see how we can get to a system with an
upper-triangular matrix. First, we use *a*_{11} as a **pivot** to make all
the *x*_{1} coefficients zeros in the equations below the first one: we replace
the second equation with the difference between it and the first equation
multiplied by *a*_{21}/*a*_{11}, the third equation with the difference between
it and the first equation multiplied by *a*_{31}/*a*_{11}, and so on. Then we
get rid of all the coefficients of *x*_{2} below the second equation in the
same manner, and we repeat this for each of the first *n* − 1 variables,
ultimately arriving at a system with an upper-triangular coefficient matrix.

Before we look at an example of Gaussian
elimination, let us note that we can operate with just a system's coefficient
matrix augmented, as its (*n* + 1)st column, with the equations' right-hand
side values; there is no need to write the variable names or the plus and
equality signs explicitly.

**EXAMPLE 1** Solve
the system by Gaussian elimination.

Here is pseudocode of the first stage, called **forward
elimination**, of the algorithm.

**ALGORITHM** *ForwardElimination*(*A*[1..*n*, 1..*n*], *b*[1..*n*])

//Applies Gaussian elimination to matrix *A* of a system's coefficients,
//augmented with vector *b* of the system's right-hand side values
//Input: Matrix *A*[1..*n*, 1..*n*] and column-vector *b*[1..*n*]
//Output: An equivalent upper-triangular matrix in place of *A* with the
//corresponding right-hand side values in the (*n* + 1)st column

**for** *i* ← 1 **to** *n* **do** *A*[*i*, *n* + 1] ← *b*[*i*] //augments the matrix
**for** *i* ← 1 **to** *n* − 1 **do**
    **for** *j* ← *i* + 1 **to** *n* **do**
        **for** *k* ← *i* **to** *n* + 1 **do**
            *A*[*j*, *k*] ← *A*[*j*, *k*] − *A*[*i*, *k*] ∗ *A*[*j*, *i*] / *A*[*i*, *i*]

There are two important observations to make
about this pseudocode. First, it is not always correct: if *A*[*i*, *i*] = 0,
we cannot divide by it and hence cannot use the *i*th row as a pivot for the
*i*th iteration of the algorithm. In such a case, we should take advantage of
the first elementary operation and exchange the *i*th row with some row below
it that has a nonzero coefficient in the *i*th column. (If the system has a
unique solution, such a row must exist.)

Since we have to be prepared for the
possibility of row exchanges anyway, we can take care of another potential
difficulty: the possibility that *A*[*i*, *i*] is so small, and consequently
the scaling factor *A*[*j*, *i*]/*A*[*i*, *i*] so large, that the new value of
*A*[*j*, *k*] might be distorted by a round-off error caused by a subtraction
of two numbers of greatly different magnitudes. To avoid this problem, we can
always look for a row with the largest absolute value of the coefficient in the
*i*th column, exchange it with the *i*th row, and then use the new
*A*[*i*, *i*] as the *i*th iteration's pivot. This modification, called
**partial pivoting**, guarantees that the magnitude of the scaling factor will
never exceed 1.

The second observation is that the
innermost loop is written with a glaring inefficiency. Can you find it before
checking the following pseudocode, which both incorporates partial pivoting and
eliminates this inefficiency?

**ALGORITHM** *BetterForwardElimination*(*A*[1..*n*, 1..*n*], *b*[1..*n*])

//Implements Gaussian elimination with partial pivoting
//Input: Matrix *A*[1..*n*, 1..*n*] and column-vector *b*[1..*n*]
//Output: An equivalent upper-triangular matrix in place of *A* and the
//corresponding right-hand side values in place of the (*n* + 1)st column

**for** *i* ← 1 **to** *n* **do** *A*[*i*, *n* + 1] ← *b*[*i*] //appends *b* to *A* as the last column
**for** *i* ← 1 **to** *n* − 1 **do**
    *pivotrow* ← *i*
    **for** *j* ← *i* + 1 **to** *n* **do**
        **if** |*A*[*j*, *i*]| > |*A*[*pivotrow*, *i*]| **then** *pivotrow* ← *j*
    **for** *k* ← *i* **to** *n* + 1 **do**
        swap(*A*[*i*, *k*], *A*[*pivotrow*, *k*])
    **for** *j* ← *i* + 1 **to** *n* **do**
        *temp* ← *A*[*j*, *i*] / *A*[*i*, *i*]
        **for** *k* ← *i* **to** *n* + 1 **do**
            *A*[*j*, *k*] ← *A*[*j*, *k*] − *A*[*i*, *k*] ∗ *temp*
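For concreteness, here is how this pseudocode might look in Python. This is an illustrative sketch, not the book's code: the function name, the 0-based indexing, and the choice to return a new augmented matrix rather than overwrite *A* in place are assumptions of this rendering.

```python
# A sketch of forward elimination with partial pivoting.
# A is an n-by-n list of row lists, b a list of length n; the function
# returns the augmented matrix reduced to upper-triangular form.

def forward_eliminate(A, b):
    n = len(A)
    # Augment A with b as an (n+1)st column.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n - 1):
        # Partial pivoting: pick the row with the largest |M[j][i]|.
        pivot_row = max(range(i, n), key=lambda j: abs(M[j][i]))
        M[i], M[pivot_row] = M[pivot_row], M[i]
        for j in range(i + 1, n):
            temp = M[j][i] / M[i][i]   # scaling factor; |temp| <= 1 thanks to pivoting
            for k in range(i, n + 1):
                M[j][k] -= M[i][k] * temp
    return M
```

Note how the scaling factor is computed once per row, outside the innermost loop, exactly as in *BetterForwardElimination*.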

Let us find the time efficiency of this
algorithm. Its innermost loop consists of a single line,

*A*[*j*, *k*] ← *A*[*j*, *k*] − *A*[*i*, *k*] ∗ *temp*

which contains one multiplication and one
subtraction. On most computers, multiplication is unquestionably more
expensive than addition/subtraction, and hence it is multiplication that is
usually quoted as the algorithm's basic operation. The standard
summation formulas and rules reviewed in Section 2.3 (see also Appendix A) are
very helpful in the following derivation:

*C*(*n*) = Σ_{*i*=1}^{*n*−1} Σ_{*j*=*i*+1}^{*n*} Σ_{*k*=*i*}^{*n*+1} 1 = Σ_{*i*=1}^{*n*−1} Σ_{*j*=*i*+1}^{*n*} (*n* + 2 − *i*) = Σ_{*i*=1}^{*n*−1} (*n* + 2 − *i*)(*n* − *i*) ≈ *n*^{3}/3 ∈ Θ(*n*^{3}).

Since the second (**back substitution**) stage
of Gaussian elimination is in Θ(*n*^{2}), as you are asked to show in the
exercises, the running time of the entire algorithm is dominated by the cubic
forward-elimination stage, making the algorithm cubic as well.
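The back substitutions described above can be sketched as follows; this is an illustrative Python helper (the name `back_substitute` is mine), which assumes the *n* × (*n* + 1) augmented upper-triangular matrix produced by forward elimination.

```python
# Back substitution on an upper-triangular augmented matrix M
# (n rows by n+1 columns); returns the solution vector x.

def back_substitute(M):
    n = len(M)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # Subtract the terms with already-known variables,
        # then divide by the diagonal (pivot) element.
        s = sum(M[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

Each of the *n* iterations does at most *n* multiplications, which is one way to see the Θ(*n*^{2}) bound claimed above.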

Theoretically, Gaussian elimination always
either yields an exact solution to a system of linear equations when the system
has a unique solution or discovers that no such solution exists. In the latter
case, the system will have either no solutions or infinitely many of them. In
practice, solving systems of significant size on a computer by this method is
not nearly so straightforward as the method would lead us to believe. The
principal difficulty lies in preventing an accumulation of round-off errors (see
Section 11.4). Consult textbooks on numerical analysis that analyze this and
other implementation issues in great detail.

**LU Decomposition**

Gaussian elimination has an interesting and
very useful byproduct called **LU decomposition** of the coefficient
matrix. In fact, modern commercial implementations of Gaussian elimination are
based on such a decomposition rather than on the basic algorithm outlined
above.

**EXAMPLE** Let us return to the example in the beginning
of this section, where we applied Gaussian elimination to the coefficient
matrix *A* of the system. Consider the lower-triangular matrix *L* made up of
1's on its main diagonal and the row multiples used in the forward elimination
below it, and the upper-triangular matrix *U* produced by that elimination.

It turns out that the product *LU* of these matrices is equal to matrix *A*.

Therefore, solving the system *Ax* = *b* is equivalent to solving the system
*LUx* = *b*. Denoting *y* = *Ux*, we first solve the lower-triangular system
*Ly* = *b* for *y* by forward substitution and then solve the upper-triangular
system *Ux* = *y* for *x* by back substitution.

Note that once we have the *LU* decomposition of matrix *A*, we can solve
systems *Ax* = *b* with as many right-hand side vectors *b* as we want to, one
at a time. This is a distinct advantage over the classic Gaussian elimination,
which would have to repeat the entire elimination for each new vector *b*.
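A minimal sketch of this two-phase solve in Python, with illustrative function names of my own choosing; it omits pivoting, so it assumes all pivots encountered are nonzero.

```python
# LU decomposition via Gaussian elimination without pivoting (a sketch;
# production implementations pivot). L holds the row multipliers with 1s
# on its diagonal; U is the upper-triangular elimination result.

def lu_decompose(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n - 1):
        for j in range(i + 1, n):
            L[j][i] = U[j][i] / U[i][i]   # multiplier used to zero out U[j][i]
            for k in range(i, n):
                U[j][k] -= U[i][k] * L[j][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Solve Ly = b by forward substitution (L has a unit diagonal).
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # Solve Ux = y by back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x
```

The expensive cubic work all happens once, in `lu_decompose`; each additional right-hand side costs only the quadratic `lu_solve`.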

**Computing a Matrix Inverse**

Gaussian elimination is a very useful algorithm
that tackles one of the most important problems of applied mathematics: solving
systems of linear equations. In fact, Gaussian elimination can also be applied
to several other problems of linear algebra, such as computing a matrix **inverse**.
The inverse of an *n* × *n* matrix *A* is an *n* × *n* matrix, denoted *A*^{−1}, such that

*AA*^{−1} = *I*,

where *I* is the *n* × *n* identity matrix (the matrix with all zero elements
except the main diagonal elements, which are equal to 1). Not every square
matrix has an inverse; a matrix that does is called *nonsingular*, and its
inverse is unique.

Theoretically, inverse matrices are very
important because they play the role of reciprocals in matrix algebra,
overcoming the absence of the explicit division operation for matrices. For
example, in a complete analogy with a linear equation in one unknown *ax* = *b*,
whose solution can be written as *x* = *a*^{−1}*b* (if *a* is not zero), we can
express a solution to a system of *n* equations in *n* unknowns *Ax* = *b* as
*x* = *A*^{−1}*b* (if *A* is nonsingular), where *b* is, of course, a vector,
not a number.

According to the definition of the inverse
matrix for a nonsingular *n* × *n* matrix *A*, to compute it we need to find
*n*^{2} numbers *x*_{*ij*}, 1 ≤ *i*, *j* ≤ *n*, that make up the columns of
*A*^{−1}. We can find these unknowns by solving *n* systems of linear equations
that have the same coefficient matrix *A*, the vector of unknowns *x*^{*j*}
being the *j*th column of the inverse, and the right-hand side vector
*e*^{*j*} being the *j*th column of the identity matrix (1 ≤ *j* ≤ *n*):

*Ax*^{*j*} = *e*^{*j*}.

We can solve these systems by applying Gaussian
elimination to matrix *A* augmented by the *n* × *n* identity matrix. Better
yet, we can use forward elimination to find the *LU* decomposition of *A* and
then solve the systems *LUx*^{*j*} = *e*^{*j*}, *j* = 1, . . . , *n*, as
explained earlier.
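A sketch of this idea in Python, running elimination with partial pivoting on *A* augmented with the identity matrix. The function name and the Gauss-Jordan-style structure (clearing the column both below and above each pivot) are illustrative choices of this sketch, not the book's code.

```python
# Computing an inverse by eliminating on A augmented with the identity.
# After the loop, the left half of M is the identity and the right half
# is A's inverse.

def invert(A):
    n = len(A)
    # Augment A with the n-by-n identity matrix.
    M = [A[i][:] + [1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        pivot_row = max(range(i, n), key=lambda j: abs(M[j][i]))
        if M[pivot_row][i] == 0:
            raise ValueError("matrix is singular")
        M[i], M[pivot_row] = M[pivot_row], M[i]
        # Scale the pivot row to make the pivot 1,
        # then clear column i in every other row.
        pivot = M[i][i]
        M[i] = [v / pivot for v in M[i]]
        for j in range(n):
            if j != i and M[j][i] != 0:
                factor = M[j][i]
                M[j] = [M[j][k] - factor * M[i][k] for k in range(2 * n)]
    return [row[n:] for row in M]
```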

**Computing a Determinant**

Another problem that can be solved by Gaussian
elimination is computing a determinant. The **determinant** of an *n* × *n*
matrix *A*, denoted det *A* or |*A*|, is a number whose value can be defined
recursively as follows. If *n* = 1, det *A* is equal to *a*_{11}; if *n* > 1,
det *A* is computed by the recursive formula

det *A* = Σ_{*j*=1}^{*n*} *s*_{*j*} *a*_{1*j*} det *A*_{*j*},

where *s*_{*j*} is +1 if *j* is odd and −1 if *j* is even, *a*_{1*j*} is the
element in row 1 and column *j*, and *A*_{*j*} is the (*n* − 1) × (*n* − 1)
matrix obtained from matrix *A* by deleting its row 1 and column *j*.

In particular, for a 2 × 2 matrix, the definition implies a formula that is easy to
remember:

det *A* = *a*_{11}*a*_{22} − *a*_{12}*a*_{21}.

In other words, the determinant of a 2 × 2 matrix is simply equal to the
difference between the product of its main-diagonal elements and the product of
its antidiagonal elements.

For a 3 × 3 matrix, we get

det *A* = *a*_{11}(*a*_{22}*a*_{33} − *a*_{23}*a*_{32}) − *a*_{12}(*a*_{21}*a*_{33} − *a*_{23}*a*_{31}) + *a*_{13}(*a*_{21}*a*_{32} − *a*_{22}*a*_{31}).

Incidentally, this formula is very handy in a
variety of applications. In particular, we used it twice already in Section 5.5
as a part of the quickhull algorithm.

But what if we need to compute a determinant of
a large matrix? Although this is a task that is rarely needed in practice, it
is worth discussing nevertheless. Using the recursive definition can be of
little help because it implies computing the sum of *n*! terms. Here, Gaussian
elimination comes to the rescue again. The central point is the fact that the
determinant of an upper-triangular matrix is equal to the product of the
elements on its main diagonal, and it is easy to see how the elementary
operations employed by the elimination algorithm influence the determinant's
value. (Basically, the value either remains unchanged, or changes sign, or is
multiplied by the constant used by the elimination algorithm.) As a result, we
can compute the determinant of an *n* × *n* matrix in cubic time.
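This elimination-based approach can be sketched as follows (an illustrative Python version of my own; it uses only row exchanges and row subtractions, so the determinant's value can change only in sign):

```python
# Determinant in cubic time: reduce the matrix to upper-triangular form
# with row swaps only, then multiply the diagonal elements together.

def determinant(A):
    M = [row[:] for row in A]
    n = len(M)
    sign = 1.0
    for i in range(n - 1):
        pivot_row = max(range(i, n), key=lambda j: abs(M[j][i]))
        if M[pivot_row][i] == 0:
            return 0.0                      # no nonzero pivot => det A = 0
        if pivot_row != i:
            M[i], M[pivot_row] = M[pivot_row], M[i]
            sign = -sign                    # a row exchange changes the sign
        for j in range(i + 1, n):
            temp = M[j][i] / M[i][i]
            for k in range(i, n):
                M[j][k] -= M[i][k] * temp   # leaves the determinant unchanged
    det = sign
    for i in range(n):
        det *= M[i][i]
    return det
```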

Determinants play an important role in the
theory of systems of linear equations. Specifically, a system of *n* linear
equations in *n* unknowns *Ax* = *b* has a unique solution if and only if the
determinant of its coefficient matrix, det *A*, is not equal to zero. Moreover,
this solution can be found by the formulas called **Cramer's rule**:

*x*_{*j*} = det *A*_{*j*} / det *A*, *j* = 1, . . . , *n*,

where det *A*_{*j*} is the determinant of the matrix obtained by
replacing the *j*th column of *A* by the column *b*.
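Cramer's rule can be sketched directly from these definitions. This illustrative Python version computes determinants by the recursive cofactor expansion along the first row, so it is practical only for tiny systems (the expansion sums *n*! terms).

```python
# det: cofactor expansion along row 1, straight from the recursive
# definition (exponential; for illustration only).

def det(A):
    if len(A) == 1:
        return A[0][0]
    total = 0.0
    for j in range(len(A)):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# cramer: x_j = det(A_j) / det(A), where A_j is A with its jth column
# replaced by the right-hand side vector b.

def cramer(A, b):
    d = det(A)
    x = []
    for j in range(len(A)):
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        x.append(det(Aj) / d)
    return x
```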

**Exercises 6.2**

Q. Solve the following system by Gaussian
elimination:

*x*_{1} + . . .

Q. **a.** Solve the system of the previous question by the *LU* decomposition method.

**b.** From the standpoint of general algorithm design techniques, how would you classify the *LU* decomposition method?

Q. Solve the system of Problem 1 by computing the inverse of its coefficient matrix and
then multiplying it by the vector on the right-hand side.

Q. Would it be correct to get the efficiency class
of the forward elimination stage of Gaussian elimination as follows?

Q. Write pseudocode for the back-substitution stage of Gaussian elimination and show
that its running time is in Θ(*n*^{2}).

Q. Assuming that division of two numbers takes
three times longer than their multiplication, estimate how much faster
*BetterForwardElimination* is than *ForwardElimination*. (Of course, you
should also assume that a compiler is not
going to eliminate the inefficiency in *ForwardElimination*.)

Q. **a.** Give an example of a system of two linear equations in two unknowns
that has a unique solution and solve it by Gaussian elimination.

**b.** Give an example of a system of two linear equations in two unknowns that has no
solution and apply Gaussian elimination to it.

**c.** Give an example of a system of two linear equations in two unknowns that has infinitely
many solutions and apply Gaussian elimination to it.

Q. The **Gauss-Jordan elimination** method differs from Gaussian elimination in that the
elements above the main diagonal of the coefficient matrix are made zero at the
same time and by the same use of a pivot row as the elements below the main
diagonal.

**a.** Apply the Gauss-Jordan method to the system of Problem 1 of these exercises.

**b.** What general design strategy is this algorithm based on?

**c.** In general, how many multiplications are made by this method in solving a system
of *n* equations in *n* unknowns?

Q. A system *Ax* = *b* of *n* linear equations in *n* unknowns has a unique
solution if and only if det *A* ≠ 0. Is it a good idea to check this condition
before applying Gaussian elimination to the system?

Q. **a.** Apply Cramer's rule to solve the system of
Problem 1 of these exercises.

**b.** Estimate how many times longer it will take to solve a system of *n* linear
equations in *n* unknowns by Cramer's rule than by Gaussian elimination,
assuming that all the determinants in Cramer's rule formulas are computed
independently by Gaussian elimination.

Q. *Lights out* This one-person game is played on an *n* × *n* board composed
of 1 × 1 light panels. Each panel has a switch that can be turned on and
off, thereby toggling the on/off state of this and the vertically and
horizontally adjacent panels. (Of course, toggling a corner panel affects a
total of three panels, and toggling a noncorner panel on the board's border
affects a total of four panels.) Given an initial subset of lighted panels,
the goal is to turn all the lights off.

**a.** Show that an answer can be found by solving a system of linear equations with 0/1
coefficients and right-hand sides using modulo 2 arithmetic.

**b.** Use Gaussian elimination to solve the 2 × 2 "all-ones" instance of this problem, where
all the panels of the 2 × 2 board are initially lit.

**c.** Use Gaussian elimination to solve the 3 × 3 "all-ones" instance of this problem, where
all the panels of the 3 × 3 board are initially lit.


Introduction to the Design and Analysis of Algorithms : Transform and Conquer : Gaussian Elimination
