Understanding Algorithmic Complexity
Algorithmic complexity is a measure of how much computation a program will perform when using a particular algorithm. It is a measure of the algorithm's efficiency, expressed as an estimate of the number of operations it performs. It is not a measure of the complexity of the code needed to implement a particular algorithm; in fact, an algorithm with low algorithmic complexity is often more difficult to implement than one with higher algorithmic complexity. The most important point is that algorithmic complexity is not a model of the execution time itself but a model of the way execution time changes as the size of the input changes. This is probably best illustrated through some examples.
Examples of Algorithmic Complexity
Suppose you want to write a program that sums the first N numbers. You
would probably write something like the code shown in Listing 2.1.
Listing 2.1 Sum of the First N Numbers
#include <stdio.h>

void sum(int N)
{
  int total = 0;
  for (int i = 1; i <= N; i++)
  {
    total += i;
  }
  printf("Sum of first %i integers is %i\n", N, total);
}
For a given input value N, the code will make N trips around the loop and perform N additions. The algorithmic complexity focuses on the number of operations, which in this case is the N additions; it assumes that any additional costs are proportional to this number. The time it takes to complete this calculation is some cost per addition, k, multiplied by the number of additions, N. So, the time would be k ∗ N. The algorithmic complexity is a measure of how this time changes as the size of the input changes, so it is quite acceptable to ignore the (constant) scaling factor and say that the calculation will take on the order of N operations. This is typically written O(N). It is a very useful measure of how the time to complete the program changes as the size of the input changes: if N doubles in value, the program will take twice as long to complete.
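This doubling behavior can be checked empirically by timing the loop for increasing values of N. The following harness is a minimal sketch, not part of the original listing; the function name time_sum and the choice of clock() for measurement are illustrative, and the exact ratios observed will vary with compiler optimization and machine load.

#include <stdio.h>
#include <time.h>

/* Illustrative sketch: measure how the runtime of a linear-time
   loop grows as N doubles. The loop body mirrors sum() above;
   the volatile accumulator discourages the compiler from
   optimizing the loop away entirely. */
static double time_sum(int N)
{
  volatile long long total = 0;
  clock_t start = clock();
  for (int i = 1; i <= N; i++)
  {
    total += i;
  }
  return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
  /* For an O(N) algorithm, each doubling of N should roughly
     double the elapsed time. */
  for (int N = 10000000; N <= 80000000; N *= 2)
  {
    printf("N = %8d took %f seconds\n", N, time_sum(N));
  }
  return 0;
}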
Another example will probably clarify how this measure can be useful. Suppose you want to sum the first N factorials (N factorial is N ∗ (N−1) ∗ (N−2) ∗ ... ∗ 1); you might write a program similar to the one shown in Listing 2.2.
Listing 2.2 Sum of the First N Factorials
#include <stdio.h>

int factorial(int F)
{
  int f = 1;
  for (int i = 1; i <= F; i++)
  {
    f = f * i;
  }
  return f;
}

void fsum(int N)
{
  int total = 0;
  for (int i = 1; i <= N; i++)   /* N iterations, one per factorial */
  {
    total += factorial(i);
  }
  printf("Sum of first %i factorials is %i\n", N, total);
}
This program contains a doubly nested loop. The outer loop performs N iterations, and the inner loop performs an average (over the whole run) of N/2 iterations. Consequently, there will be about N ∗ N/2 multiply operations and N additions. The algorithmic complexity is concerned only with the dominant term, ignoring constant factors, which gives N ∗ N. So, the entire calculation is O(N²). If N doubles in size, the time this algorithm takes will go up by a factor of four.
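One way to verify this operation count is to instrument the loops with a counter. The sketch below is an illustrative addition, not part of the original listing; it counts the multiplications the nested loops of fsum() would perform and compares the count against N ∗ N/2.

#include <stdio.h>

/* Illustrative sketch: count the multiply operations performed by
   the doubly nested loop in fsum(). For input N the count is
   N * (N + 1) / 2, or about N * N / 2, demonstrating O(N^2) growth. */
static long count_multiplies(int N)
{
  long multiplies = 0;
  for (int i = 1; i <= N; i++)        /* outer loop: N iterations */
  {
    for (int j = 1; j <= i; j++)      /* inner loop: i iterations */
    {
      multiplies++;                   /* one multiply per iteration */
    }
  }
  return multiplies;
}

int main(void)
{
  for (int N = 100; N <= 800; N *= 2)
  {
    printf("N = %3d: %6ld multiplies (N*N/2 = %6d)\n",
           N, count_multiplies(N), N * N / 2);
  }
  return 0;
}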
The complexity is represented by the dominant term in the function. If the complexity turned out to be (N+1)², then as N increased, there would eventually be little difference between this and N²; for example, at N = 1,000, (N+1)² = 1,002,001, only about 0.2% larger than N² = 1,000,000. Similarly, in the limit as N becomes large, an O(N²) term will dominate an O(N) term.
The previous examples are somewhat contrived, and in both cases there are more efficient ways to produce the required result, as the sketch below illustrates.
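For the first example, the well-known closed form N ∗ (N + 1)/2 computes the sum of the first N integers in a constant number of operations, that is, in O(1) time regardless of N. A minimal sketch (the name sum_closed_form is illustrative):

#include <stdio.h>

/* O(1) alternative to Listing 2.1: the closed form for the sum
   of the first N integers does the same work in a constant
   number of operations. */
void sum_closed_form(int N)
{
  int total = N * (N + 1) / 2;
  printf("Sum of first %i integers is %i\n", N, total);
}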
A more common programming problem is sorting data.
The domain of sorting has many different algorithms, of which the two most
famous are probably bubble sort and quicksort.
Bubble sort iterates through a list of numbers,
comparing adjacent pairs of numbers. If the pair is not in the correct sort
order, the pair is swapped. Listing 2.3 shows a simple implementation.
Listing 2.3 Implementation of Bubble Sort
void bubble_sort(int *array, int N)
{
  int sorted = 0;
  while (!sorted)
  {
    sorted = 1;
    for (int i = 0; i < N - 1; i++)
    {
      if (array[i] > array[i+1])
      {
        int temp = array[i+1];
        array[i+1] = array[i];
        array[i] = temp;
        sorted = 0;
      }
    }
  }
}
The smallest elements “bubble” to the top of the array, which is why it’s called a bubble sort. This behavior also suggests an optimization, omitted from this implementation: it is not necessary to re-examine the bottom of the list on every pass, because it gradually fills with the largest elements, already in sorted order. A sketch of this optimization follows.
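The optimization can be expressed by shrinking the range examined on each pass, since after every pass the largest remaining element has settled into its final position at the end of the array. A minimal sketch of this variant (the name bubble_sort_opt is illustrative):

void bubble_sort_opt(int *array, int N)
{
  int limit = N - 1;              /* last position still unsorted */
  int sorted = 0;
  while (!sorted && limit > 0)
  {
    sorted = 1;
    for (int i = 0; i < limit; i++)
    {
      if (array[i] > array[i+1])
      {
        int temp = array[i+1];
        array[i+1] = array[i];
        array[i] = temp;
        sorted = 0;
      }
    }
    limit--;   /* the largest remaining element is now in place */
  }
}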
Using the algorithm as originally written in Listing 2.3, it will take N comparisons to get each element into the correct place. There are N elements, so it will take N ∗ N comparisons to sort the entire list. For unsorted data, this leads to an algorithmic complexity of O(N²).
On the other hand, quicksort,
as might be expected by its name, is a much faster algorithm. It relies on
splitting the list of elements into two: a list of elements that are smaller
than a “pivot” element and elements that are larger. Once the list has been
split, the algorithm repeats on each of the smaller lists. An implementation
of the quicksort algorithm might look like the code shown in Listing 2.4.
Listing 2.4 Implementation of Quicksort
void quick_sort(int *array, int lower, int upper)
{
  int tmp;
  int mid = (upper + lower) / 2;
  int pivot = array[mid];
  int tlower = lower;
  int tupper = upper;
  while (tlower <= tupper)
  {
    while (array[tlower] < pivot) { tlower++; }
    while (array[tupper] > pivot) { tupper--; }
    if (tlower <= tupper)
    {
      tmp = array[tlower];
      array[tlower] = array[tupper];
      array[tupper] = tmp;
      tupper--;
      tlower++;
    }
  }
  if (lower < tupper) { quick_sort(array, lower, tupper); }
  if (tlower < upper) { quick_sort(array, tlower, upper); }
}
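As a usage note, this implementation takes inclusive bounds, so a whole array of N elements is sorted by calling quick_sort(array, 0, N - 1). A small driver sketch, assuming the quick_sort() from Listing 2.4 is in scope:

#include <stdio.h>

int main(void)
{
  int data[] = { 7, 2, 9, 4, 1, 8, 3 };
  int N = sizeof(data) / sizeof(data[0]);
  quick_sort(data, 0, N - 1);          /* inclusive bounds */
  for (int i = 0; i < N; i++)
  {
    printf("%i ", data[i]);            /* prints 1 2 3 4 7 8 9 */
  }
  printf("\n");
  return 0;
}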
The algorithmic complexity of quicksort is harder to calculate. For the average case, it is possible to prove that the complexity is O(N ∗ log₂(N)). This can be explained using the following reasoning. Assume a uniform distribution of items and that every time the array is split, the split produces two equal halves. For a given N, it will take log₂(N) levels of splitting before the array has been broken down into N individual elements and the sort is complete. At each level of splitting, there are two recursive calls, one to sort the lower half and one to sort the upper half, so the entire array of N elements is iterated through once per level. Therefore, the algorithmic complexity is O(N ∗ log₂(N)).
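To make the N ∗ log₂(N) estimate concrete, one can instrument the sort with a counter and compare the measured comparison count against N ∗ log₂(N) for growing N. The sketch below is illustrative rather than definitive: the global comparisons counter and the name quick_sort_counted are additions for measurement, not part of the algorithm, and counts for random data will only approximate the formula.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static long comparisons = 0;   /* counts element comparisons */

/* Quicksort from Listing 2.4, instrumented to count comparisons.
   The comma expressions increment the counter each time a loop
   condition (one comparison) is evaluated. */
static void quick_sort_counted(int *array, int lower, int upper)
{
  int mid = (upper + lower) / 2;
  int pivot = array[mid];
  int tlower = lower;
  int tupper = upper;
  while (tlower <= tupper)
  {
    while (comparisons++, array[tlower] < pivot) { tlower++; }
    while (comparisons++, array[tupper] > pivot) { tupper--; }
    if (tlower <= tupper)
    {
      int tmp = array[tlower];
      array[tlower] = array[tupper];
      array[tupper] = tmp;
      tupper--;
      tlower++;
    }
  }
  if (lower < tupper) { quick_sort_counted(array, lower, tupper); }
  if (tlower < upper) { quick_sort_counted(array, tlower, upper); }
}

int main(void)
{
  for (int N = 1000; N <= 8000; N *= 2)
  {
    int *data = malloc(N * sizeof(int));
    for (int i = 0; i < N; i++) { data[i] = rand(); }
    comparisons = 0;
    quick_sort_counted(data, 0, N - 1);
    printf("N = %5d: %8ld comparisons, N*log2(N) = %8.0f\n",
           N, comparisons, N * log2(N));
    free(data);
  }
  return 0;
}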
To see the practical impact of this difference in the complexity of sorting operations, consider sorting a list of 10 items. The bubble sort algorithm will perform about 10 ∗ 10 = 100 operations, whereas quicksort will perform about 10 ∗ log₂(10) ≈ 33 operations, roughly a third as many. However, the time taken to sort a 10-element list is unlikely to be a significant factor in the runtime of an application. It is more interesting to consider a list of 1,000 elements. The bubble sort will take about 1,000,000 operations, while quicksort will take about 1,000 ∗ log₂(1,000) ≈ 10,000 operations, a ratio of about 100×.