
**Approximation Algorithms for the Knapsack Problem**

The knapsack problem, another well-known *NP*-hard problem, was also introduced in Section 3.4: given *n* items of known weights *w*_{1}*, . . . , w*_{n} and values *v*_{1}*, . . . , v*_{n} and a knapsack of capacity *W*, find the most valuable subset of the items that fits into the knapsack.

**Greedy Algorithms for the Knapsack Problem** We can think of several greedy approaches to this problem. One is to select the items in decreasing order of their weights; however, heavier items may not be the most valuable in the set. Alternatively, if we pick up the items in decreasing order of their value, there is no guarantee that the knapsack's capacity will be used efficiently. Can we find a greedy strategy that takes into account both the weights and values? Yes, we can, by computing the value-to-weight ratios *v*_{i}*/w*_{i}*, i* = 1*, . . . , n,* and selecting the items in decreasing order of these ratios.

**Greedy algorithm for the discrete knapsack problem**

**Step 1** Compute the value-to-weight ratios *r*_{i} = *v*_{i}*/w*_{i}*, i* = 1*, . . . , n,* for the items given.

**Step 2** Sort the items in nonincreasing order of the ratios computed in Step 1. (Ties can be broken arbitrarily.)

**Step 3** Repeat the following operation until no item is left in the sorted list: if the current item on the list fits into the knapsack, place it in the knapsack and proceed to the next item; otherwise, just proceed to the next item.
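The three steps above can be sketched in Python as follows; the function and variable names are illustrative, not from the text.

```python
def greedy_knapsack(weights, values, capacity):
    """Discrete greedy knapsack: take items in nonincreasing
    value-to-weight order whenever they still fit."""
    # Steps 1 and 2: item indices sorted by value-to-weight ratio
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i],
                   reverse=True)
    total_value, remaining, chosen = 0, capacity, []
    # Step 3: scan the sorted list, taking every item that fits
    for i in order:
        if weights[i] <= remaining:
            chosen.append(i)
            remaining -= weights[i]
            total_value += values[i]
    return total_value, chosen
```

For example, on an illustrative instance with capacity 10 and (weight, value) pairs (7, 42), (3, 12), (4, 40), (5, 25), the items sort by ratio into weight order 4, 7, 5, 3 and the function returns the value 65.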

**EXAMPLE 5** Let us consider the instance of the knapsack problem with the knapsack capacity 10 and the item information as follows:

| item | weight | value |
|------|--------|-------|
| 1    | 7      | $42   |
| 2    | 3      | $12   |
| 3    | 4      | $40   |
| 4    | 5      | $25   |

Computing the value-to-weight ratios and sorting the items in nonincreasing order of these efficiency ratios yields

| item | weight | value | value/weight |
|------|--------|-------|--------------|
| 3    | 4      | $40   | 10           |
| 1    | 7      | $42   | 6            |
| 4    | 5      | $25   | 5            |
| 2    | 3      | $12   | 4            |

The greedy algorithm will
select the first item of weight 4, skip the next item of weight 7, select the
next item of weight 5, and skip the last item of weight 3. The solution
obtained happens to be optimal for this instance (see Section 12.2, where we
solved the same instance by the branch-and-bound algorithm).

Does this greedy algorithm always yield an optimal solution? The answer, of course, is no: if it did, we would have a polynomial-time algorithm for the *NP*-hard problem. In fact, the following example shows that no finite upper bound on the accuracy of its approximate solutions can be given either.

**EXAMPLE 6** Consider the instance of the knapsack problem with knapsack capacity *W* and two items: item 1 of weight 1 and value 2, and item 2 of weight *W* and value *W*.

Since the items are already ordered as required, the algorithm takes the first item and skips the second one; the value of this subset is 2. The optimal selection consists of item 2, whose value is *W*. Hence, the accuracy ratio of this approximate solution is *W/*2, which can be made as large as we wish by choosing *W* accordingly.

It is surprisingly easy to tweak this greedy algorithm to get an approximation algorithm with a finite performance ratio. All it takes is to choose the better of two alternatives: the one obtained by the greedy algorithm or the one consisting of a single item of the largest value that fits into the knapsack. (Note that for the instance of the preceding example, the second alternative is better than the first one.) It is not difficult to prove that the performance ratio of this *enhanced greedy algorithm* is 2. That is, the value of an optimal subset will never be more than twice as large as the value of the subset obtained by this enhanced greedy algorithm.
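A minimal sketch of this enhanced rule, choosing the better of the greedy solution and the single most valuable item that fits (names and instance are illustrative):

```python
def enhanced_greedy_knapsack(weights, values, capacity):
    """Return the better of (a) the plain greedy value and
    (b) the value of the largest single item that fits."""
    # Plain greedy pass over items in nonincreasing ratio order
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i],
                   reverse=True)
    greedy_value, remaining = 0, capacity
    for i in order:
        if weights[i] <= remaining:
            remaining -= weights[i]
            greedy_value += values[i]
    # Best single item that fits on its own (0 if none fits)
    best_single = max((values[i] for i in range(len(weights))
                       if weights[i] <= capacity), default=0)
    return max(greedy_value, best_single)
```

On an instance of the kind used in Example 6, say weights 1 and 100, values 2 and 100, and capacity 100, the plain greedy algorithm returns 2 while the enhanced version returns 100.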

It is instructive to consider
the continuous version of the knapsack problem as well. In this version, we are
permitted to take arbitrary fractions of the items given. For this version of
the problem, it is natural to modify the greedy algorithm as follows.

**Greedy algorithm for the continuous knapsack problem**

**Step 1** Compute the value-to-weight ratios *v*_{i}*/w*_{i}*, i* = 1*, . . . , n,* for the items given.

**Step 2** Sort the items in nonincreasing order of the ratios computed in Step 1. (Ties can be broken arbitrarily.)

**Step 3** Repeat the following operation until the knapsack is filled to its full capacity or no item is left in the sorted list: if the current item on the list fits into the knapsack in its entirety, take it and proceed to the next item; otherwise, take its largest fraction to fill the knapsack to its full capacity and stop.
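The continuous version's Step 3 differs only in splitting the first item that no longer fits; a sketch under the same illustrative naming:

```python
def greedy_fractional_knapsack(weights, values, capacity):
    """Greedy for the continuous knapsack: fill by nonincreasing
    value-to-weight ratio, taking a fraction of the last item."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i],
                   reverse=True)
    total_value, remaining = 0.0, capacity
    for i in order:
        if weights[i] <= remaining:
            # The whole item fits: take it entirely
            total_value += values[i]
            remaining -= weights[i]
        else:
            # Take the largest fraction that fills the knapsack, then stop
            total_value += values[i] * remaining / weights[i]
            break
    return total_value
```

On the illustrative four-item instance used earlier (capacity 10, pairs (7, 42), (3, 12), (4, 40), (5, 25)), it takes the item of weight 4 whole and 6/7 of the weight-7 item, for a total value of 40 + 36 = 76.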

For example, for the
four-item instance used in Example 5 to illustrate the greedy algorithm for the
discrete version, the algorithm will take the first item of weight 4 and then
6/7 of the next item on the sorted list to fill the knapsack to its full
capacity.

It should come as no surprise that this algorithm always yields an optimal solution to the continuous knapsack problem. Indeed, the items are ordered according to their efficiency in using the knapsack's capacity. If the first item on the sorted list has weight *w*_{1} and value *v*_{1}, no solution can use *w*_{1} units of capacity with a higher payoff than *v*_{1}. If we cannot fill the knapsack with the first item or its fraction, we should continue by taking as much as we can of the second most efficient item, and so on. A formal rendering of this proof idea is somewhat involved, and we will leave it for the exercises.

Note also that the optimal value of the solution to an instance of the continuous knapsack problem can serve as an upper bound on the optimal value of the discrete version of the same instance. This observation provides a more sophisticated way of computing upper bounds for solving the discrete knapsack problem by the branch-and-bound method than the one used in Section 12.2.

**Approximation Schemes** We now return to the discrete version of the knapsack problem. For this problem, unlike the traveling salesman problem, there exist polynomial-time *approximation schemes*, which are parametric families of algorithms that allow us to get approximations *s*_{a}^{(k)} with any predefined accuracy level:

*f* (*s*\*)*/f* (*s*_{a}^{(k)}) ≤ 1 + 1/*k* for any instance of size *n*,

where *k* is an integer parameter in the range 0 ≤ *k* < *n*. Such a scheme generates all subsets of *k* items or less, and for each one that fits into the knapsack, it adds the remaining items in nonincreasing order of their value-to-weight ratios, as the greedy algorithm would do; the most valuable subset obtained in this fashion is returned as the algorithm's output.

**EXAMPLE 7** A small example of an approximation scheme with *k* = 2.

You can be excused for not being overly impressed by this example. And, indeed, the importance of this scheme is mostly theoretical rather than practical. It lies in the fact that, in addition to approximating the optimal solution with any predefined accuracy level, the time efficiency of this algorithm is polynomial in *n*.

For each of those subsets, it needs *O*(*n*) time to determine the subset's possible extension. Thus, the algorithm's efficiency is in *O*(*kn*^{k+1}).

**Exercises 12.3**

**1.** **a.** Apply the nearest-neighbor algorithm to the instance defined by the intercity distance matrix below. Start the algorithm at the first city, assuming that the cities are numbered from 1 to 5.

**b.** Compute the accuracy ratio of this approximate solution.

**2.** **a.** Write pseudocode for the nearest-neighbor algorithm. Assume that its input is given by an *n* × *n* distance matrix.

**b.** What is the time efficiency of the nearest-neighbor algorithm?

**3.** Apply the twice-around-the-tree algorithm to the graph in Figure 12.11a with a walk around the minimum spanning tree that starts at the same vertex *a* but differs from the walk in Figure 12.11b. Is the length of the obtained tour the same as the length of the tour in Figure 12.11b?

**4.** Prove that making a shortcut of the kind used by the twice-around-the-tree algorithm cannot increase the tour's length in a Euclidean graph.

**5.** What is the time efficiency class of the greedy algorithm for the knapsack problem?

**6.** Prove that the performance ratio *R*_{A} of the enhanced greedy algorithm for the knapsack problem is equal to 2.

**7.** Consider the greedy algorithm for the bin-packing problem, which is called the *first-fit* (*FF*) algorithm: place each of the items in the order given into the first bin the item fits in; when there are no such bins, place the item in a new bin and add this bin to the end of the bin list.

**a.** Apply *FF* to the instance

*s*_{1} = . . .

and determine whether the solution obtained is optimal.

**b.** Determine the worst-case time efficiency of *FF*.

**c.** Prove that *FF* is a 2-approximation algorithm.

**8.** The *first-fit decreasing* (*FFD*) algorithm for the bin-packing problem starts by sorting the items in nonincreasing order of their sizes and then acts as the first-fit algorithm.

**a.** Apply *FFD* to the instance

*s*_{1} = . . .

and determine whether the solution obtained is optimal.

**b.** Does *FFD* always yield an optimal solution? Justify your answer.

**c.** Prove that *FFD* is a 1.5-approximation algorithm.

**d.** Run an experiment to determine which of the two algorithms, *FF* or *FFD*, yields more accurate approximations on a random sample of the problem's instances.

**9.** **a.** Design a simple 2-approximation algorithm for finding a *minimum vertex cover* (a vertex cover with the smallest number of vertices) in a given graph.

**b.** Consider the following approximation algorithm for finding a *maximum independent set* (an independent set with the largest number of vertices) in a given graph.

**10.** **a.** Design a polynomial-time greedy algorithm for the graph-coloring problem.

**b.** Show that the performance ratio of your approximation algorithm is infinitely large.

Source: *Introduction to the Design and Analysis of Algorithms*, Coping with the Limitations of Algorithm Power: Approximation Algorithms for the Knapsack Problem.
