Complexity of an Algorithm

    Suppose we have one problem and two different algorithms that solve it. How can we decide which algorithm is better? We could implement and run both of them, which is called experimental evaluation, but it is expensive and time-consuming. There should be another way to decide, and indeed there is. Let's see.

    The running time of an algorithm depends on the computer, the programming language, the compiler and so on, which causes differences such as an algorithm running faster in one language than in another. Therefore, the algorithm should be analyzed mathematically, in a machine-independent way. This is called asymptotic analysis. Asymptotic analysis lets us evaluate how an algorithm behaves as the input size grows: we calculate the growth rate instead of the actual running time. There are several ways to express the growth rate of an algorithm, but every method shares the same basic idea: bound the growth-rate function. There are three common asymptotic notations: Big O, Big Θ and Big Ω. In this post, we will talk about Big O notation.

Big O Notation

Big-O notation is used to express an upper bound on the growth rate. When we calculate big-O, we only care about the most significant term of the function; constants have no influence on the growth rate. Let's see some examples:

f(n) = n^2 + 23n + 6  ——> For a big n, the constant 6 does not affect the growth rate. Even for a very large n, say n = 24123124, the term 23n has no remarkable effect next to n^2. In this function, n^2 has the most significant effect on the growth rate, so the function is O(n^2).

f(n) = 5n + 2 ——> O(n)
f(n) = 4n^3 + 25n^2 + 13 ——> O(n^3)
f(n) = 21 ——> O(1)

    As the examples clearly show, the highest power of n determines the growth rate; the other constants and coefficients have no effect on it.

Now, let's go through the common big-O classes, from slowest-growing to fastest:

O(1)
    If the algorithm contains no loop, no recursion and no call to a non-constant-time function, its time complexity is O(1). For example: accessing an element of an array by index, or push and pop on a stack.
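For instance, returning a fixed element of an array takes the same number of steps no matter how large the array is. A minimal sketch (the function name is illustrative):

// O(1): a fixed number of operations, independent of the input size.
int getFirst(int arr[])
{
    return arr[0];  // direct index access; no loop, no recursion
}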

O(logn)   [binary search, calculating Fibonacci numbers (best method), etc.]


    If a loop variable is divided or multiplied by a constant value on each iteration, the number of iterations grows logarithmically in n, so the time complexity is O(logn). For example:

for (int i = 2; i < n; i *= 4)  // “i” is multiplied by a constant value each iteration
{
    // some constant-time operations
}
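Binary search, mentioned above, follows the same pattern: each comparison halves the remaining interval, so at most about logn comparisons are needed. A minimal sketch, assuming a sorted array (returns the index of key, or -1 if it is absent):

// binary search on a sorted array; O(logn) because the interval is halved each step.
int binarySearch(int arr[], int n, int key)
{
    int low = 0;
    int high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;
        if (arr[mid] == key) return mid;
        else if (arr[mid] < key) low = mid + 1;  // key is in the right half
        else high = mid - 1;                     // key is in the left half
    }
    return -1;  // key not found
}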

O(n)   [calculating Fibonacci numbers (iterative), traversing an array, checking for a palindrome, comparing two strings, etc.]

If the algorithm contains a single loop whose variable is incremented or decremented by a constant value, the operations inside the loop execute n times, so the time complexity of the algorithm is O(n). For example:

for (int i = 0; i < n; i++)  // “i” is incremented by a constant value each iteration
{
    // some constant-time operations
}

// iterative Fibonacci calculation; returns the nth Fibonacci number (assumes n >= 1).
int fibonacci(int n)
{
    int previous = 1;
    int current = 1;
    int next = 1;  // covers the base cases n = 1 and n = 2

    for (int counter = 3; counter <= n; counter++)  // “counter” is incremented by a constant
    {
        next = current + previous;
        previous = current;
        current = next;
    }
    return next;
}
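Checking for a palindrome, also listed above, is a single pass over the input, so it is O(n) as well. A minimal sketch, assuming a C-style null-terminated string:

#include <string.h>

// returns 1 if str reads the same forwards and backwards; one O(n) pass.
int isPalindrome(const char *str)
{
    int left = 0;
    int right = strlen(str) - 1;
    while (left < right)
    {
        if (str[left] != str[right]) return 0;  // mismatch: not a palindrome
        left++;
        right--;
    }
    return 1;
}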

O(n^c)   [bubble sort, insertion sort, selection sort, etc.]

If the algorithm contains nested loops that each run n times, the time complexity is O(n^c), where c is the nesting depth. For example:

for (int i = 0; i < n; i++)
{
    for (int b = 0; b < n; b++)
    {
        // some constant-time operations
    }
}

There are two nested for loops, so the time complexity is O(n^2).

Bubble sort is a typical example: it uses two nested for loops, and the operations inside the inner loop run in constant time. So, the time complexity of bubble sort is O(n^2).
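A minimal bubble sort sketch, assuming ascending order:

// bubble sort: repeatedly swaps adjacent out-of-order elements; two nested loops, O(n^2).
void bubbleSort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        for (int j = 0; j < n - i - 1; j++)
        {
            if (arr[j] > arr[j + 1])  // swap adjacent elements if out of order
            {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}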

O(2^n)   [Fibonacci numbers (recursive)]
    The recursive Fibonacci calculation is an example of O(2^n). Let's see the algorithm.

int fibonacci(int n)
{
    if (n <= 1) return n;  // base cases: fibonacci(0) = 0, fibonacci(1) = 1
    else return fibonacci(n - 2) + fibonacci(n - 1);
}

The function calls itself until n is equal to or smaller than 1. Every call spawns two more recursive calls, so the call tree can contain up to 2^n nodes; that is where the O(2^n) bound comes from.

Here is the order:
O(1)  [constant growth rate]
O(logn)
O(n)
O(nlogn)
O(n^c)
O(2^n)
O(n!)  [biggest growth rate]

In the introduction, we talked about how we can decide which of two algorithms for the same problem is better. Then we talked about big-O notation and gave some examples. Above, there are two different algorithms for calculating Fibonacci numbers; actually there are three (best, iterative and recursive), but we gave only two of them. We calculated the time complexity of both and obtained two different results: the iterative algorithm is O(n), while the recursive one is O(2^n). O(2^n) is a serious time complexity: if n equals 100, then 2^100 is approximately 10^30, so at one operation per nanosecond the computation would take about 10^30 nanoseconds, while the age of the universe is about 10^26 nanoseconds. Interesting, right? So we should choose the iterative algorithm, and we decided that by calculating the time complexity. This shows the importance of time complexity.
