Efficiency of an algorithm
Computer resources are limited and should be used efficiently. The efficiency of an algorithm is defined by the amount of computational resources it uses, and an algorithm must be analyzed to determine that usage. Efficiency can be measured with respect to different resources.
For maximum efficiency, we wish to minimize resource usage. The two most important resources, time and memory, cannot be compared directly, so time complexity and space complexity are considered separately when judging algorithmic efficiency.
Thus, the efficiency of an algorithm depends on how well it uses time and memory space.
The time efficiency of an algorithm can be measured empirically: write a program for the algorithm, execute it in any programming language, and measure the total time it takes to run (a minimal timing sketch follows the list below). The execution time measured this way depends on a number of factors, such as:
· Speed of the machine
· Compiler and other system software tools
· Operating system
· Programming language used
· Volume of data required
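For example, here is a minimal sketch of such a wall-clock measurement in Python; the workload function is a hypothetical placeholder, not anything defined in this text:

import time

def algorithm_under_test(n):
    # Hypothetical placeholder workload: sum the first n integers in a loop.
    total = 0
    for i in range(n):
        total += i
    return total

start = time.perf_counter()
algorithm_under_test(1_000_000)
elapsed = time.perf_counter() - start
print(f"Elapsed: {elapsed:.6f} seconds")

The number printed varies from machine to machine and from run to run, which is exactly why such measurements alone cannot characterize the algorithm itself.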
However, to determine how efficiently an algorithm solves a given problem, we want to know how the execution time is affected by the nature of the algorithm itself, independently of the factors listed above. Therefore, we need fundamental laws that express the efficiency of a program in terms of the underlying algorithm.
A space-time (or time-memory) tradeoff is a way of solving a problem in less time by using more storage space, or in very little space by spending more time.
To solve a given programming problem, many
different algorithms may be used. Some of these algorithms may be extremely
time-efficient and others extremely space-efficient.
The time/space tradeoff refers to a situation where you can reduce the use of memory at the cost of slower program execution, or reduce the running time at the cost of increased memory usage.
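As a concrete sketch of this tradeoff (my own illustration, using Fibonacci numbers as a stand-in problem): caching intermediate results spends memory to save time.

from functools import lru_cache

def fib_slow(n):
    # Uses almost no extra memory, but takes exponential time:
    # the same subproblems are recomputed over and over.
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # Caches every intermediate result: linear time,
    # at the cost of O(n) extra memory for the cache.
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_slow(30), fib_fast(30))  # same answer; fib_fast is far faster

Both functions return the same value, but fib_slow takes noticeably longer while fib_fast holds 31 cached results in memory.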
Asymptotic notations are a formal language for making meaningful statements about the time and space complexity of algorithms. The following three asymptotic notations are most commonly used to represent the time complexity of algorithms:
Big O is often used to describe the worst-case behavior of an algorithm.
Big Omega is the reverse of Big O: where Big O describes the upper bound (worst case) of an asymptotic function, Big Omega describes the lower bound (best case).
When an algorithm's lower bound equals its upper bound, say a complexity of O(n log n) and Ω(n log n), it actually has the complexity Θ(n log n), which means the running time of that algorithm always falls within n log n in both the best case and the worst case.
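Formally, the three notations can be stated as follows (standard textbook definitions, added here for reference):

\[
\begin{aligned}
f(n) = O(g(n)) &\iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0,\\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \ge 0 \text{ for all } n \ge n_0,\\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n)).
\end{aligned}
\]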
Let us assume a list of n values stored in an array. Suppose we want to search for a particular element in this list. A sequential (linear) search algorithm compares the key element with each element in the list, one after another, until it finds a match or reaches the end.
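A minimal sketch of this sequential search in Python (the function name is my own):

def sequential_search(arr, key):
    # Compare the key with each element in turn.
    for i, value in enumerate(arr):
        if value == key:
            return i   # found: return the index of the first match
    return -1          # not found: all n elements were compared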
The best case occurs when the first element in the list matches the key element. The efficiency in that case is expressed as O(1), because only one comparison is needed.
Similarly, the worst case in this scenario occurs when the complete list is searched and the element is found only at the end, or is not found at all. The efficiency in that case is expressed as O(n), because n comparisons are required to complete the search.
The average case efficiency of an algorithm can
be obtained by finding the average number of comparisons as given below:
Minimum number of comparisons = 1 (key found at the first position)
Maximum number of comparisons = n (key found at the last position, or not found at all)
Assuming the key is equally likely to be at any of the n positions, the average number of comparisons is
(1 + 2 + ... + n) / n = (n + 1)/2
Hence the average-case efficiency is expressed as O(n), since (n + 1)/2 grows linearly with n.
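As a quick sanity check (my own illustration, reusing the sequential search above), counting the comparisons for every possible key position reproduces the (n + 1)/2 average:

def comparisons_to_find(arr, key):
    # Count how many comparisons a sequential search performs for this key.
    count = 0
    for value in arr:
        count += 1
        if value == key:
            break
    return count

n = 100
arr = list(range(n))
average = sum(comparisons_to_find(arr, key) for key in arr) / n
print(average, (n + 1) / 2)  # both print 50.5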