**Proving an Algorithm's Correctness**

Once an algorithm has been specified, you have to prove its **correctness**. That is, you have to prove that the algorithm yields a required result for every legitimate input in a finite amount of time. For example, the correctness of Euclid's algorithm for computing the greatest common divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n), the observation that the second argument strictly decreases on every iteration, and the fact that the algorithm stops when that argument reaches 0.
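To connect that argument to code, here is a minimal sketch of Euclid's algorithm in Python (the function name and comments are illustrative additions, not part of the text); the comments record the invariant and the termination argument that a correctness proof rests on:

```python
def euclid_gcd(m: int, n: int) -> int:
    """Compute gcd(m, n) for nonnegative integers m, n, not both zero."""
    while n != 0:
        # Invariant: gcd(m, n) equals the gcd of the original inputs,
        # because gcd(m, n) = gcd(n, m mod n).
        # Progress: n strictly decreases (m mod n < n), so the loop terminates.
        m, n = n, m % n
    # When n == 0, gcd(m, 0) = m.
    return m

print(euclid_gcd(60, 24))  # 12
```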

For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex. A common technique for proving correctness is to use mathematical induction because an algorithm's iterations provide a natural sequence of steps needed for such proofs. It might be worth mentioning that although tracing the algorithm's performance for a few specific inputs can be a very worthwhile activity, it cannot prove the algorithm's correctness conclusively. But in order to show that an algorithm is incorrect, you need just one instance of its input for which the algorithm fails.
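As a small illustration of that last point (a hypothetical example, not one from the text), consider a plausible-looking greedy algorithm for making change: tracing it on a few inputs suggests it is correct, yet a single well-chosen input disproves it, which no number of successful traces could have ruled out.

```python
def greedy_change(amount: int, denominations=(4, 3, 1)) -> int:
    """Claim (false): return the minimum number of coins summing to amount."""
    coins = 0
    for d in denominations:          # take as many of each coin as possible,
        coins += amount // d         # largest denomination first
        amount %= d
    return coins

# Tracing a few inputs suggests the algorithm works:
assert greedy_change(8) == 2         # 4 + 4 -- optimal
assert greedy_change(7) == 2         # 4 + 3 -- optimal
# One counterexample proves it incorrect:
print(greedy_change(6))              # 3 coins (4 + 1 + 1), but 3 + 3 uses only 2
```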

The notion of correctness for approximation algorithms is less straightforward than it is for exact algorithms. For an approximation algorithm, we usually would like to be able to show that the error produced by the algorithm does not exceed a predefined limit. You can find examples of such investigations in later chapters.
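As one sketch of what such a claim looks like (an illustrative example, not one of the investigations the text refers to), the bisection routine below approximates a square root; the correctness property to be proved is that its output differs from the true value by no more than the predefined limit eps:

```python
def approx_sqrt(x: float, eps: float = 1e-6) -> float:
    """Return r with |r - sqrt(x)| <= eps, for x >= 1."""
    lo, hi = 1.0, x
    # Invariant: lo <= sqrt(x) <= hi.  Each iteration halves hi - lo,
    # so eventually the interval is shorter than 2 * eps and the
    # midpoint is within eps of the true square root.
    while hi - lo > 2 * eps:
        mid = (lo + hi) / 2
        if mid * mid <= x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(approx_sqrt(2.0))              # approximately 1.414213...
```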
