Wednesday, August 1, 2012

Getting a bit deeper into Time Complexity Analysis


What is Time Complexity Analysis?

In the last blog, you got a rough idea about Time Complexity: it is all about judging how fast an algorithm can run or, putting it another way, how much time an algorithm should require while running.
Suppose someone asks you how fast you can do a job, or how much time you require to finish it. Most probably, your answer will be one of these three: (A) at most time t, (B) at least time t, or (C) exactly time t. Whatever your answer might be, it must have been guided by identifying the number of basic operations needed to complete the job. So unless and until you have a clear-cut idea of how many basic operations need to be performed, you cannot say how much time you require to finish the job, right?

The case of answering how much time is required to execute an algorithm is similar. You first need to identify how many basic operations (like comparisons, memory accesses, etc.) the algorithm performs. Once you figure that out, you are all set to compute how fast you can do them all.
This computation is not an easy business. It is done by establishing a relationship between the run time (execution time) of the given algorithm and the input size of the problem that the algorithm is supposed to solve. Sounds weird? You might be wondering: what about the “number of basic operations needed”? Why did we identify all that?

Your doubts are genuine and well taken. But the thing is, the number of basic operations needed by an algorithm depends on the input size of the problem, and that is why we are really interested in having a relationship between the run time and the input size. This relationship actually gives you a clear idea about the time required to run an algorithm, and hence about the time complexity.
  
What is the physical significance of this relationship? What are its features?

Well, a relationship between the run time and the input size gives you the nature of the growth of the time complexity. Physically, the growth indicates how the run time changes as the input size increases. This relationship has the form of a function which does not deal with any exact quantification; rather, it is expressed as a proportion called the “Order”. For example, if the run time of an algorithm increases in the same proportion as the increase in input size, then the run time is of linear order.
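
To make this concrete, here is a minimal sketch of my own (the function sum_list is purely illustrative, not from any library) that counts the basic operations of a simple summing loop; doubling the input size doubles the count, which is exactly what linear order means.

    def sum_list(items):
        # Sum a list while counting the basic operations performed.
        ops = 0
        total = 0
        for x in items:
            total += x   # one addition per element
            ops += 1     # count that operation
        return total, ops

    print(sum_list(list(range(10)))[1])   # 10 operations
    print(sum_list(list(range(20)))[1])   # 20 operations: double the input, double the count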

 
Why do we need to know about the run time complexity? What are the objectives?

The objective of computing run time complexity is twofold. Firstly, it gives a measure of the efficiency of an algorithm with respect to run/execution time; secondly, when multiple algorithms exist for a specific problem, it helps in deciding which one to choose for implementation. That means comparison among different algorithms for the same problem is a huge benefit that we can derive from Time Complexity Analysis.

How is Time Complexity Analysis realized?

There are mainly three classes of Time Complexity Analysis:
a. Worst Case Analysis – it estimates the upper bound on the number of basic operations that need to be performed if the algorithm is executed.
b. Best Case Analysis – it estimates the lower bound on the number of basic operations that need to be performed if the algorithm is executed.
c. Average Case Analysis – it estimates a count in between the two bounds, typically the expected number of operations over all inputs.
Unless said otherwise, in the rest of our discussions, Time Complexity Analysis will always mean Worst Case Analysis.
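
As a quick illustration of best versus worst case, consider this small linear-search sketch (my own illustrative code): the best case finds the target at the very first position with one comparison, while the worst case scans the entire input and makes n comparisons.

    def linear_search(items, target):
        # Return (index, comparisons made); index is -1 if target is absent.
        comparisons = 0
        for i, x in enumerate(items):
            comparisons += 1
            if x == target:
                return i, comparisons
        return -1, comparisons

    data = [4, 8, 15, 16, 23, 42]
    print(linear_search(data, 4))    # best case: (0, 1), found on the first comparison
    print(linear_search(data, 99))   # worst case: (-1, 6), all n comparisons used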

Characteristics of computing Time Complexity

While computing time complexity, generally the following characteristics are maintained –
a. If some basic operations are identified whose count is independent of the input size of the problem, their time complexity is of constant order.
b. While comparing multiple algorithms, the Time Complexity Analysis should ignore the tasks common to all of them.
c. If the Time Complexity expression is realized as a polynomial, then the order of the time complexity is dictated by the highest-order term of the polynomial, as the sketch below shows.
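
Here is a small sketch of my own illustrating characteristics (a) and (c): accessing the first element of a list costs the same however large the list is, and in a polynomial such as n^2/2 + n/2 (the very expression we will meet in the example below) the highest-order term takes over as n grows.

    def first_element(items):
        # Characteristic (a): one memory access regardless of input size,
        # so this step is of constant order.
        return items[0]

    # Characteristic (c): the n^2 term dominates n^2/2 + n/2 as n grows.
    for n in (10, 100, 1000):
        total = n * n / 2 + n / 2
        print(n, (n * n / 2) / total)   # the ratio approaches 1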

Give me an example

OK. Let us suppose we need to compute f(x) = 7x^4 + 6x^3 - 5x^2 + 8x - 5.
To compute this polynomial, we can consider two algorithms: (A) the Brute Force Algorithm and (B) Horner's Algorithm.

The Brute Force Algorithm computes f(x) as 7*x*x*x*x + 6*x*x*x - 5*x*x + 8*x - 5, and Horner's Algorithm computes f(x) as (((7*x + 6)*x - 5)*x + 8)*x - 5.
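
Both evaluation orders are easy to try out. A minimal sketch (the function names brute_force and horner are mine, for illustration only):

    def brute_force(x):
        # Builds every power of x from scratch, term by term.
        return 7*x*x*x*x + 6*x*x*x - 5*x*x + 8*x - 5

    def horner(x):
        # Nested form: one multiplication per level of nesting.
        return (((7*x + 6)*x - 5)*x + 8)*x - 5

    print(brute_force(2), horner(2))   # both print 151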

Both algorithms perform two additions and two subtractions, apart from some number of multiplications, which differs between the two. Hence we will ignore the additions and subtractions and concentrate only on the number of multiplications.

The Brute Force Algorithm, according to the example, performs k multiplications for an input size n (the degree of the polynomial), where k = n + (n-1) + (n-2) + ... + 2 + 1 + 0 = n(n+1)/2. This is because the degree-j term costs j multiplications: j-1 to build the power x^j and one more for the coefficient.

On the other hand, Horner's Algorithm, as seen in the example, performs k' multiplications for an input size n, where k' = n: one multiplication per level of nesting.
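
A quick tally (the helper names below are mine) confirms both counts for a general degree n:

    def brute_force_mults(n):
        # The degree-j term costs j multiplications: j-1 to build x^j
        # and 1 for the coefficient, so the total is 0 + 1 + ... + n.
        return sum(range(n + 1))

    def horner_mults(n):
        # One multiplication per nesting level.
        return n

    for n in (4, 10, 100):
        print(n, brute_force_mults(n), horner_mults(n))
    # n=4 -> 10 vs 4;  n=10 -> 55 vs 10;  n=100 -> 5050 vs 100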

So the order of the Time Complexity of the Brute Force Algorithm is expressed by the polynomial n(n+1)/2, i.e., n^2/2 + n/2. Hence, finally, the order of time complexity for the Brute Force Algorithm is dictated by n^2. It means the run time of the Brute Force Algorithm increases in proportion to the square of the input size.
In comparison, the time complexity of Horner's Algorithm is dictated by the term n alone, which means that the run time of Horner's Algorithm grows linearly with the input size.
It is evident from the discussion that Horner's Algorithm has a better run time than the Brute Force Algorithm for the computation of polynomial expressions.

In the next blog, I will discuss Asymptotic Notations, focusing on Big Oh, Big Omega, Big Theta, Small Oh and Small Omega.
