Proving an Algorithm’s Correctness
Once an algorithm has been specified, you have
to prove its correctness. That is, you have to prove that the algorithm
yields a required result for every legitimate input in a finite amount of time.
For example, the correctness of Euclid’s algorithm for computing the greatest
common divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n) (which, in turn, needs a proof; see Problem 7
in Exercises 1.1), the simple observation that the second integer gets smaller
on every iteration of the algorithm, and the fact that the algorithm stops when
the second integer becomes 0.
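To make the three parts of this argument concrete, here is a minimal Python sketch of Euclid’s algorithm with the correctness reasoning recorded as comments; the function name and the sample call are illustrative only.

```python
def gcd(m, n):
    """Euclid's algorithm for the greatest common divisor.

    The correctness argument rests on three facts:
      1) gcd(m, n) = gcd(n, m mod n), so each iteration preserves the answer;
      2) the second argument strictly decreases on every iteration,
         so the loop must terminate;
      3) the loop stops when the second argument is 0, and gcd(m, 0) = m.
    """
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))  # 12
```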
For some algorithms, a proof of correctness is
quite easy; for others, it can be quite complex. A common technique for proving
correctness is to use mathematical induction because an algorithm’s iterations
provide a natural sequence of steps needed for such proofs. It might be worth
mentioning that although tracing the algorithm’s performance for a few specific
inputs can be a very worthwhile activity, it cannot prove the algorithm’s
correctness conclusively. But in order to show that an algorithm is incorrect,
you need just one instance of its input for which the algorithm fails.
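The asymmetry between verifying and refuting correctness can be illustrated with a short sketch. The function below is a hypothetical, deliberately incorrect “gcd” invented for this example; tracing it on a few inputs looks convincing, yet a single failing instance disproves it.

```python
def gcd_wrong(m, n):
    """A hypothetical, incorrect 'gcd': it just returns the smaller input."""
    return min(m, n)

# Tracing a few inputs can look convincing ...
print(gcd_wrong(12, 4), gcd_wrong(25, 5))  # 4 and 5 -- both happen to be right

# ... but one counterexample is enough to show the algorithm is incorrect:
print(gcd_wrong(6, 4))  # prints 4, yet gcd(6, 4) = 2
```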
The notion of correctness for approximation
algorithms is less straightforward than it is for exact algorithms. For an
approximation algorithm, we usually would like to be able to show that the
error produced by the algorithm does not exceed a predefined limit. You can
find examples of such investigations in later chapters.
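As a small illustration of an error bound for an approximation algorithm, the sketch below approximates a square root by Newton’s method and stops only when the residual error falls below a predefined tolerance; the function name, tolerance, and starting guess are assumptions made for this example.

```python
import math

def approx_sqrt(a, eps=1e-6):
    """Approximate sqrt(a) by Newton's method, stopping once the
    residual |x*x - a| does not exceed the predefined limit eps."""
    if a < 0:
        raise ValueError("a must be non-negative")
    if a == 0:
        return 0.0
    x = a if a >= 1 else 1.0          # crude but positive initial guess
    while abs(x * x - a) > eps:       # error criterion checked every iteration
        x = 0.5 * (x + a / x)         # Newton update
    return x

# On exit the error guarantee holds by construction of the stopping rule.
x = approx_sqrt(2.0)
assert abs(x * x - 2.0) <= 1e-6
print(x, math.sqrt(2.0))
```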