Chapter 10
Iterative Improvement
The most successful men in the end are those whose success is the result of steady accretion.
—Alexander Graham Bell (1847–1922)
The greedy strategy, considered in the preceding chapter, constructs a solution to an optimization problem piece by piece, always adding a locally optimal piece to a partially constructed solution. In this chapter, we discuss a different approach to designing algorithms for optimization problems. It starts with some feasible solution (a solution that satisfies all the constraints of the problem) and proceeds to improve it by repeated applications of some simple step. This step typically involves a small, localized change yielding a feasible solution with an improved value of the objective function. When no such change improves the value of the objective function, the algorithm returns the last feasible solution as optimal and stops.
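In outline, the strategy is a single loop. The Python sketch below is only an illustration of this general scheme, not an algorithm from the chapter; the parameters initial, neighbors, and value are hypothetical placeholders for the problem-specific pieces: a feasible starting solution, the localized changes allowed, and the objective function (maximization is assumed).

    def iterative_improvement(initial, neighbors, value):
        # Generic local-search skeleton: keep replacing the current feasible
        # solution with a better neighbor until no localized change helps.
        current = initial
        improved = True
        while improved:
            improved = False
            for candidate in neighbors(current):   # feasible one-step changes
                if value(candidate) > value(current):
                    current = candidate            # accept the improving change
                    improved = True
                    break                          # restart from the new solution
        return current                             # no neighbor is better

For an arbitrary problem, the solution returned by such a loop is guaranteed only to be locally optimal; the next paragraph takes up this and other obstacles.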
There can be several obstacles to the successful implementation of this idea. First, we need an initial feasible solution. For some problems, we can always start with a trivial solution or use an approximate solution obtained by some other (e.g., greedy) algorithm. But for others, finding an initial solution may require as much effort as solving the problem after a feasible solution has been identified. Second, it is not always clear what changes should be allowed in a feasible solution so that we can check efficiently whether the current solution is locally optimal and, if not, replace it with a better one. Third, and most fundamental, there is the issue of local versus global extrema (maxima or minima). Think about the problem of finding the highest point in a hilly area with no map on a foggy day. A logical thing to do would be to start walking “up the hill” from the point you are at until it becomes impossible to do so because no direction would lead up. You will have reached a locally highest point, but because of the limited visibility, there will be no simple way to tell whether the point is the highest (the global maximum you are after) in the entire area.
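This trap is easy to reproduce numerically. The sketch below is a hypothetical illustration (the landscape f and the starting points are assumptions, not taken from the text): discrete hill climbing started on the lower hill stops at its top and never discovers the higher one.

    def hill_climb(f, x, step=1):
        # Walk "up the hill": move to the better neighboring point while
        # doing so improves f; stop when no direction leads up.
        while True:
            up = max((x - step, x + step), key=f)
            if f(up) <= f(x):
                return x     # a local, not necessarily global, maximum
            x = up

    # A hypothetical landscape: a lower peak near x = -5, a higher one near x = 5.
    f = lambda x: -(x * x - 25) ** 2 + 10 * x

    print(hill_climb(f, -6))   # stalls at -5, the lower peak (f = -50)
    print(hill_climb(f, 4))    # reaches 5, the global peak (f = 50)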
Fortunately, there are important problems that can be solved by iterative-improvement algorithms. The most important of them is linear programming. We have already encountered this topic in Section 6.6. Here, in Section 10.1, we introduce the simplex method, the classic algorithm for linear programming. Discovered by the U.S. mathematician George B. Dantzig in 1947, this algorithm has proved to be one of the most consequential achievements in the history of algorithmics.
In Section 10.2, we consider the important problem of maximizing the amount of flow that can be sent through a network with links of limited capacities. This problem is a special case of linear programming. However, its special structure makes it possible to solve the problem by algorithms that are more efficient than the simplex method. We outline the classic iterative-improvement algorithm for this problem, discovered by the American mathematicians L. R. Ford, Jr., and D. R. Fulkerson in the 1950s.
The last two sections of the chapter deal with bipartite matching. This is the problem of finding an optimal pairing of elements taken from two disjoint sets. Examples include matching workers and jobs, high school graduates and colleges, and men and women for marriage. Section 10.3 deals with the problem of maximizing the number of matched pairs; Section 10.4 is concerned with matching stability.
We also discuss several iterative-improvement algorithms in Section 12.3, where we consider approximation algorithms for the traveling salesman and knapsack problems. Other examples of iterative-improvement algorithms can be found in the algorithms textbook by Moret and Shapiro [Mor91], books on continuous and discrete optimization (e.g., [Nem89]), and the literature on heuristic search (e.g., [Mic10]).