Time complexity analysis in Data Structure and Algorithms

Why is the analysis of algorithms important?

  • Deciding the most efficient algorithm when more than one algorithm solves the same problem.
  • Estimating algorithm performance for different sizes of the input.
  • Understanding the nature of the code and finding scope for further optimization.

The critical question would be: How do we measure the time complexity of an algorithm? What are the essential concepts related to the complexity analysis? Let's dive deep into it and learn step by step.

What is the input size and running time?

The input size is defined as the total number of items present in the input. The time taken by an algorithm grows with the size of the input, i.e., if we increase the input size, then the algorithm's total number of operations or instructions will increase. The exact notion of input size depends on the nature of the problem.

The running time of an algorithm for a given input size depends on the number of operations executed. The greater the number of operations, the longer the running time of the algorithm. We usually want to know how the number of operations grows in proportion to the size of the input; in other words, we measure the running time as a function of the input size. Think!

  • Most of the time, we calculate input size in terms of the number of items in the input. For example, sorting n integers in an array. We have n data elements in the array, so input size = n. To sort an array, we perform basic operations like comparison or swapping, etc. If we increase the value of n, then the total number of such operations will increase. Similar examples: Searching in a linked list of n nodes, traversing the tree of n nodes, etc.
  • Sometimes, we define the input size for some mathematical algorithms in terms of the total number of bits. For example, multiplying two integers A and B. To multiply two integers, we perform bitwise multiplication. If integer A has m bits and B has n bits, then the input size is defined in terms of m and n. If m or n increases, the total count of bitwise operations will increase. Similar examples: GCD of two numbers, checking whether a number is prime or not, etc.
  • Sometimes, we define input size in terms of two numbers rather than one. For example, finding the shortest path in a graph from a given source. For finding the shortest path, we process vertices and edges present in the input graph. So the input size is defined as the total number of vertices and edges (V, E). If the value of V or E increases, the total number of operations performed by the shortest path algorithm will increase. Similar examples: Merging two sorted arrays of size m and n, finding the longest common subsequence of two strings of sizes m and n, etc.

Types of operations in the algorithm

In algorithms, we perform various types of basic operations on data to deliver the correct output. So before moving forward to the analysis of algorithms, let's explore all the basic operations:

  • Arithmetic operations: Add (+), subtract (-), multiply (*), divide (/), modulus (%).
  • Relational Operations: equal (==), not equal (!=), greater than (>), less than (<), greater than or equal (>=), less than or equal (<=)
  • Bitwise operations: bitwise AND (&), bitwise OR (|), bitwise XOR (^), bitwise NOT (~), left shift (<<), right shift (>>)
  • Logical operations: logical AND (&&), logical OR (||), logical NOT (!)
  • Assignment Operations: assignment (=), add and assignment (+=), subtract and assignment (-=), multiply and assignment (*=), divide and assignment (/=), modulus and assignment (%=), left shift and assignment (<<=), right shift and assignment (>>=), bitwise AND and assignment (&=), bitwise OR and assignment (|=), bitwise XOR and assignment (^=)
  • Increment/Decrement Operations: Postincrement (++), Postdecrement (--), Preincrement (++), Predecrement (--)
  • Control operations: Conditional statements like if and else, function calls, return statements, unconditional statements like break, etc.

Assumptions for analyzing the algorithms

  • An algorithm takes constant time to execute each line of code. One line may take a different amount of time than another, but we shall assume that running the ith line of code takes a constant time Ci. 
  • We study the analysis of algorithms independent of programming language implementations, compiler, hardware, etc. So we always calculate running time as "the number of operations as a function of an algorithm's input size."
  • In real-world applications, the input data size is mostly huge; i.e., it lies in ranges of millions or billions. So we analyse the algorithm for a large value of input size.

In general, how can we determine the running time of a piece of code? The answer depends on the kinds of statements used in the code: loop, conditional statement, etc.

function()
{ 
   //Sequence of code statements
   code statement 1;
   code statement 2;
   ...
   code statement k;  
}

The total running time is calculated by adding the times for all statements: Total time = Time (code statement 1) + Time (code statement 2) + ... + Time (code statement k)
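
For instance, here is a minimal sketch (a hypothetical function written purely for illustration) where each statement takes some constant time, so the total running time is just the sum of those constants:

int example()
{
    int a = 10;         // C1 time
    int b = 20;         // C2 time
    int sum = a + b;    // C3 time
    return sum;         // C4 time
}

Total time = C1 + C2 + C3 + C4, which is a constant independent of any input size.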

Example 1: Analysis of linear search

int linearSearch(int X[], int n, int k)
{
    for(int i = 0; i < n; i = i + 1)
    {
        if(X[i] == k)
            return i;
    }
    return -1;
}

Algorithm Idea: We compare k with each value of X[] in a loop. If we find a value equal to k, then we return the index where the value is present (successful search). Otherwise, we move to the next value. By the end of the loop, if we did not find a value equal to k, then we return -1 (unsuccessful search).

As mentioned above, we assume that each line of code takes different but constant time. Here are some critical observations from the above algorithm:

  • Input size = n because n integers are given in the input.
  • Input distribution is not known, so k can be present anywhere in the array: at the start, somewhere in the middle, or not present at all. In simple words, the total number of operations depends on the order of the given input. (Think!)

Best-case analysis of linear search: Best case occurs when value k is present at the first index. So the loop will run only one time and return the index.

int linearSearch(int X[], int n, int k)
{
    for(int i = 0; i < n; i = i + 1) => C1 time
    {
        if(X[i] == k)                => C2 time
            return i                 => C3 time
    }
    return -1                        => C4 time
}

Best case running time of linear search = Add the time taken by each line of code = C1 + C2 + C3 + C4 = a (which is some constant).

Worst-case analysis of linear search: The worst case occurs when value k is not present in the array, so the loop will run n times and return -1. We perform one comparison operation at each iteration, and the condition if(X[i] == k) becomes false every time.

int linearSearch(int X[], int n, int k)
{
    for(int i = 0; i < n; i = i + 1) => C1*n time
    {
        if(X[i] == k)                => C2*n time
            return i                 => C3 time
    }
    return -1                        => C4 time
}

Worst case running time of linear search = Add the time taken by each line of code = C1*n + C2*n + C3 + C4 = (C1 + C2)n + (C3 + C4) = an + b, where a = (C1 + C2) and b = (C3 + C4). So the worst-case running time is a linear function of n, i.e., a*n + b.

Example 2: Analysis of finding max value in an array

int findMax(int X[], int n)
{
    int max = X[0];
    for (int i = 1; i < n; i = i + 1)
    {
        if (X[i] > max)
            max = X[i];
    }  
    return max;
}

Algorithm Idea: We initialize a variable max with the first element X[0] and run a loop to compare the remaining values in the array with max. At each iteration, if we find a value X[i] > max, then we update max with the value X[i]. By the end of the loop, the maximum value gets stored in the variable max, and we return this value.

  • Input size = n because n integers are given in the input.
  • Input distribution is not given, so the max element can be present anywhere in the array, i.e., at the start, somewhere in the middle, or at the end.
  • For finding the max value, we compare each remaining value in the input with max. So the loop will always run n - 1 times, independent of the input distribution.
  • The comparison operation X[i] > max will execute n - 1 times, independent of the input distribution. Similarly, the instruction return max will run only one time.
  • The assignment operation (max = X[i]) will execute only if the comparison if(X[i] > max) is true. Otherwise, it will not execute! So the execution count of the assignment operation depends on the order of the values in the array. What would be the best and worst case scenarios? Think!

Best case scenario: When the array is sorted in decreasing order, the max value would be X[0], and the comparison if(X[i] > max) will be false every time. In such a scenario, the assignment operation (max = X[i]) will never get executed inside the loop.

int findMax(int X[], int n)
{
    int max = X[0]                        => C1 time
    for (int i = 1; i < n; i = i + 1)     => C2(n-1) time
    {
        if (X[i] > max)                   => C3(n-1) time
            max = X[i]                    => 0 time (never executes in this case)
    }  
    return max                            => C5 time
}

Running time in the best case = Add the time taken by each line of code = C1 + C2(n - 1) + C3(n - 1) + C5 = (C2 + C3)n + (C1 + C5 - C2 - C3) = an + b, where a = (C2 + C3) and b = (C1 + C5 - C2 - C3). So in the best case, the running time is a linear function of n.

Worst case scenario: When the array is sorted in increasing order, the max value would be X[n-1], and the comparison if(X[i] > max) becomes true every time. In such a scenario, the assignment operation (max = X[i]) will get executed at every iteration.

int findMax(int X[], int n)
{
    int max = X[0]                        => C1 time
    for (int i = 1; i < n; i = i + 1)     => C2(n-1) time
    {
        if (X[i] > max)                   => C3(n-1) time
            max = X[i]                    => C4(n-1) time
    }  
    return max                            => C5 time
}

Running time in the worst case = C1 + C2(n-1) + C3(n-1) + C4(n-1) + C5 = (C2 + C3 + C4)n + (C1 + C5 - C2 - C3 - C4) = a*n + b, where a = (C2 + C3 + C4) and b = (C1 + C5 - C2 - C3 - C4). So in the worst case, the running time is also a linear function of n, but the values of the constants are greater than the corresponding constants in the best-case running time.
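
To see this difference concretely, here is a small illustrative sketch (the counter parameters and the driver in main are hypothetical additions for instrumentation, not part of the original algorithm) that counts comparisons and max-assignments for a decreasing and an increasing array:

#include <stdio.h>

// Instrumented findMax: counts comparisons and assignments to max so we can
// observe the difference between the best-case and worst-case inputs.
int findMaxCounted(int X[], int n, int *comparisons, int *assignments)
{
    int max = X[0];
    *comparisons = 0;
    *assignments = 0;
    for (int i = 1; i < n; i = i + 1)
    {
        *comparisons = *comparisons + 1;
        if (X[i] > max)
        {
            max = X[i];
            *assignments = *assignments + 1;
        }
    }
    return max;
}

int main()
{
    int decreasing[] = {5, 4, 3, 2, 1};    // best case: max is already X[0]
    int increasing[] = {1, 2, 3, 4, 5};    // worst case: max updated at every step
    int c, a;
    findMaxCounted(decreasing, 5, &c, &a);
    printf("decreasing: comparisons = %d, assignments = %d\n", c, a);   // 4 and 0
    findMaxCounted(increasing, 5, &c, &a);
    printf("increasing: comparisons = %d, assignments = %d\n", c, a);   // 4 and 4
    return 0;
}

Both inputs perform the same n - 1 comparisons, but the assignment count goes from 0 in the best case to n - 1 in the worst case, which is exactly the extra C4(n-1) term above.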

Special note: As observed in the above analysis, the running time of an algorithm depends on the order of the input data: sorted, partially sorted, reverse sorted, etc. The input data order decides the total count of the critical operations in our code, i.e., if the order of the data changes, then the count of operations performed by our code may change. Therefore, we primarily analyze three scenarios: the worst-case input, the best-case input, and the average case over a uniform distribution of inputs.

Worst-case analysis of algorithms

The worst-case running time is the running time of the worst-case input, i.e., an input for which our algorithm executes a maximum number of operations. Thus, the worst-case running time of an algorithm gives us an upper bound on the running time and guarantees that the algorithm will never take longer. That's why we are likely to prefer a worst-case analysis of an algorithm for most of the coding problems.

For some algorithms, the worst case occurs pretty often. For example, in searching a database for a particular piece of information, the searching algorithm’s worst-case will often happen when the data is not present in the database.

Best-case analysis of algorithms

In the best case analysis, we calculate the lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. We are usually not interested in the best case because this might happen rarely and analysis based on the best case is not likely to represent the algorithm's behavior.

However, there are rare instances where a best-case analysis is useful, particularly when the best case has a high probability of occurring. The Shell sort and Quicksort algorithms can both take advantage of Insertion Sort's best-case running time to become more efficient. (Think!)

Average-case analysis of algorithms

For some algorithms, the worst-case analysis might not represent the algorithm's typical performance, particularly when we wish to aggregate the running time over all possible inputs. Average-case analysis is the best viable option when every possible input is equally likely.

In average case analysis, we calculate running time for all possible inputs, sum all the calculated values, and divide them by the total number of possible inputs. So knowing the distribution of input is important for the average-case analysis of algorithms. Most of the time, the average case is roughly as bad as the worst case.

T(n) = (Sum of running times over all possible inputs) / (Total number of possible inputs)

Average-case analysis of linear search

For the linear search problem, let us assume that the input is uniformly distributed and the probability of finding an element k at each location is the same (we include the case of k not being present in the array). So there will be n + 1 cases, i.e., n cases of successful search and 1 case of unsuccessful search. We might need to perform 1 comparison, or 2, or 3, and so on. In general, the cost of finding the value at position i is c*i for i = 1 to n, and the cost of an unsuccessful search is c*n.

The average case running time of linear search = (total cost for finding value at all positions) / number of positions = (c + 2c + ⋅⋅⋅ + cn + cn) / (n+1) = c(1 + 2 + ⋅⋅⋅ + n + n) / (n+1) = c [ n(n+1)/2 + n ] / (n + 1) = c [n/2 + n/(n+1)] < c(n/2 + 1)

A linear search algorithm, on average, examines half of the array values, and it is a linear function of n. However, it is only true if the value k is equally likely to appear in any position in the array. If this assumption is not correct, then the algorithm does not necessarily examine half of the array values in the average case.
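
As a quick sanity check, here is a small illustrative sketch (the helper name comparisonsFor is a hypothetical addition) that averages the comparison count over all n + 1 equally likely cases and confirms the roughly n/2 result:

#include <stdio.h>

// Comparisons performed by linear search when k sits at index pos (0-based),
// or when k is absent (represented here by pos == n).
int comparisonsFor(int n, int pos)
{
    return (pos < n) ? pos + 1 : n;
}

int main()
{
    int n = 1000;
    long long total = 0;
    for (int pos = 0; pos <= n; pos = pos + 1)
        total = total + comparisonsFor(n, pos);
    // Average over n + 1 equally likely cases: close to n/2 for large n.
    printf("average comparisons = %.2f (n/2 = %d)\n", (double)total / (n + 1), n / 2);
    return 0;
}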

[Image: Comparison of worst, best, and average case analysis. Source: The Algorithm Design Manual by Skiena]

Rate of growth of running time

There can be several possible algorithms for a coding problem, and we need to decide the most efficient one among them. In most practical scenarios, the input size would be huge, i.e., in the range of millions or billions. So, the basic idea would be to compare the exact worst-case running time of each algorithm (as calculated above) and decide the efficiency. But this would be mathematically complex and difficult for algorithms with several lines of code. Do you want to do that? If yes, then go ahead! Otherwise, there is a need to define a simplified model to compare the efficiency of algorithms!

So, let's start from a simple perspective: If we increase the input size, our running time will increase. This growth mostly depends on the higher-order term in the running time function. For example, suppose the worst-case running time is a quadratic function an² + bn + c. For a large value of n, lower-order terms are insignificant compared to the higher-order term, i.e., an² >> bn + c. For better understanding, suppose the worst-case running time is 3n^2 + 2n + 1, where a = 3, b = 2, c = 1.

Higher order term = 3n^2
Lower order terms = 2n + 1

for n = 1, 3n^2 = 3, 2n + 1 = 3 
(3n^2 = 2n + 1)

for n = 10, 3n^2 = 300, 2n + 1 = 21 
(3n^2 > 2n + 1)

for n = 1000, 3n^2 = 3*10^6, 2n + 1 = 2001 
(3n^2 >> 2n + 1)

for n = 10^5, 3n^2 = 3*10^10, 2n + 1 = 2*10^5 + 1 
(3n^2 >> 2n + 1)
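
The same observation can be reproduced with a few lines of code; this small illustrative program (a hypothetical driver, written only to demonstrate the point) prints how quickly the lower-order terms become negligible:

#include <stdio.h>

// For growing n, compare the higher-order term 3n^2 with the lower-order
// terms 2n + 1 and print the share contributed by the lower-order terms.
int main()
{
    long long values[] = {1, 10, 1000, 100000};
    for (int i = 0; i < 4; i = i + 1)
    {
        long long n = values[i];
        long long high = 3 * n * n;    // 3n^2
        long long low = 2 * n + 1;     // 2n + 1
        printf("n = %lld: 3n^2 = %lld, 2n + 1 = %lld, lower-order share = %f\n",
               n, high, low, (double)low / (double)(high + low));
    }
    return 0;
}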

From the above observation, for a large value of input size, we shall now make the following simplifying assumptions for the analysis of algorithms: 

  • It is the rate of growth or order of growth of the running time that interests us. We, therefore, consider only the higher-order term in the formula of running time and ignore lower-order terms.
  • We can also ignore the coefficient of the higher-order term because constant factors are less significant than the rate of growth in determining the algorithm's efficiency for large input sizes.
  • To compare the efficiency of two algorithms of the same problem, we only compare the growth rate of the higher-order terms.

More examples to understand it better

  1. Running time = 3n² + 4nlogn + 2, higher order term = 3n², Lower order term = 4nlogn + 2, Rate of growth = rate of growth of higher order term = n²
  2. Running time = 2^n + 7n^4 + 4n + 2, higher order term = 2^n, Lower order term = 7n^4 + 4n + 2, Rate of growth = 2^n
  3. Running time = 2n + 5logn + 6, higher order term = 2n, Lower order term = 5logn + 6, Rate of growth = n
  4. Running time = 6nlogn + n + 5, higher order term = 6nlogn, Lower order term = n + 5, Rate of growth = nlogn

Suppose algorithms A takes 3n² + 4nlogn + 2 time and algorithm B takes 6nlogn + n + 5 time. Then we can decide the efficient algorithm by comparing the rate of growth of higher-order terms (ignoring coefficients and lower order terms).

  • Rate of growth of algorithm A = n², Rate of growth of algorithm B = nlogn
  • n² grows faster than nlogn, so algorithm B is more efficient than algorithm A for a large input size n. 

Note to think: We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. But in some situations, due to the large value of constant factors and lower order terms, an algorithm whose running time has a higher order of growth might take less time for small inputs than an algorithm whose running time has a lower order of growth. Think!

[Image: Comparison of different running time growth rates. Source: Algorithm Design by Jon Kleinberg and Eva Tardos]

Time complexity and Big-O notation

To simplify the analysis and comparison of algorithms further, we define the term time complexity. Time complexity is an abstract way to represent the running time of an algorithm in terms of the rate of growth only. It is an approximate estimation of how much time an algorithm will take for a large value of input size. We use different notations to represent the best, average, and worst-case time complexity. 

Big Omega notation: Ω(rate of growth)

Big-Omega is a notation to represent the best case time complexity of an algorithm. It provides a lower bound for the running time of an algorithm. Best case time complexity = Ω(rate of growth of the best case running time)

Suppose best case running time T(n) = 2n + 3logn + 4
Rate of growth = n
Best case time complexity = Ω(n)

Big Theta notation: Θ(rate of growth)

Big Theta is a notation to represent the average case time complexity of an algorithm. It provides a tight bound for the running time of an algorithm. Average case time complexity = Θ(rate of growth of the average case running time)

Suppose average case running time T(n) = 4n^2 + 2n + 5
Rate of growth = n^2
average case time complexity = Θ(n^2)

Big-O notation: O(rate of growth)

Big-O is a notation to represent the worst-case time complexity of an algorithm. It provides an upper bound of the runtime of an algorithm. In simple words, the Big-O notation and worst-case analysis are tools that greatly simplify our ability to compare the efficiency of algorithms. That's why in practice, we mostly use Big O notation during the analysis of algorithms.

Worst-case time complexity = O(rate of growth of the worst-case running time).

Suppose worst case running time T(n) = 4nlogn + 2n + 6
rate of growth = nlogn
worst case time complexity = O(nlogn)

Some basic properties of big O notations

  • O(n) * O(m) = O(m*n), O(n) + O(m) = O(n + m)
  • constant * O(n) = O(n), constant + O(n) = O(n)
  • O(n) means the running time grows at most as a linear function of n, but it could grow more slowly. In general, if the running time is O(n), then for a large value of n, the running time is at most k*n for some constant k. (Think!)

Popular examples of big-O notations

Constant time complexity: O(1)

Such time complexity appears when our algorithm performs a constant number of operations. The time complexity does not depend on the input size, i.e., regardless of the input size, the algorithm will have the same runtime. For example, the following situations have O(1) time complexity: 

  • A loop or recursion that runs a constant number of times.
  • An algorithm that doesn't contain a loop, recursion, or a call to any other non-constant-time function.

Best Examples: Accessing an element in an array, Finding minimum value in the min-heap, Searching elements in the hash table [O(1) average], Finding median in a sorted array, swapping two variables, etc.
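
For illustration, here are two hypothetical functions (written only as examples) whose running time is constant regardless of how large the input is:

// Accessing an element by index takes constant time, no matter how large the array is
// (assuming the array has at least three elements).
int getThirdElement(int X[])
{
    return X[2];
}

// Swapping two variables is a constant number of assignments => O(1).
void swapValues(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}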

Logarithmic time complexity: O(logn)

A logarithm is the inverse of exponentiation. For example, logn with base 2 means the number of times n must be divided by 2 to get 1. Such time complexity appears when the input size decreases by some constant factor (mostly by half) at each step. In such a scenario, the algorithm will execute O(logn) operations for the input size n. 

Best Examples: Binary search, divide and conquer solutions similar to binary search, Euclid algorithm of finding GCD, Matrix exponentiation method of finding nth Fibonacci, Searching in a balanced BST or Trie, Deleting minimum from min-heap, etc.
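
As an illustration, here is a minimal iterative binary search sketch (assuming X[] is sorted in increasing order); the search range is halved at every step, so it performs O(logn) comparisons:

int binarySearch(int X[], int n, int k)
{
    int left = 0, right = n - 1;
    while (left <= right)
    {
        int mid = left + (right - left) / 2;   // middle of the current range
        if (X[mid] == k)
            return mid;                        // found k at index mid
        else if (X[mid] < k)
            left = mid + 1;                    // discard the left half
        else
            right = mid - 1;                   // discard the right half
    }
    return -1;                                 // k is not present
}

Each iteration discards half of the remaining range, so at most about log2(n) + 1 iterations are needed.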

Linear time complexity: O(n) 

Such time complexity appears when we need to process each input element in O(1) time. This is a frequently occurring efficient time complexity in coding problems because it is usually necessary to access each input element at least once, doing constant work per element, to get the output.

Such time complexity often contains a single loop or a series of single loops. Sometimes, we also find such time complexity in recursive algorithms where we access each element and perform constant operations at each recursion step.

Best Examples: Finding the max element in an array, Kadane's algorithm for the maximum subarray sum, merging two sorted arrays, the partition algorithm in quicksort, BFS and DFS traversals of a binary tree, etc.
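
For example, here is a minimal sketch of Kadane's algorithm (assuming the array has at least one element): a single loop doing constant work per element, hence O(n).

// Kadane's algorithm: the best sum ending at index i is either X[i] alone or
// X[i] extended by the best sum ending at index i - 1.
int maxSubarraySum(int X[], int n)
{
    int best = X[0];
    int current = X[0];
    for (int i = 1; i < n; i = i + 1)
    {
        current = (current + X[i] > X[i]) ? current + X[i] : X[i];
        if (current > best)
            best = current;
    }
    return best;
}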

Log-linear time complexity: O(nlogn)

Such time complexity appears in divide and conquer algorithms where we divide the problem into more than one independent sub-problem and perform O(n) operations to combine the solutions. In other words, in such problems, the recursion tree height will be O(logn), and at each level, we perform O(n) operations.

Best Examples: Merge sort, Quicksort, Divide and Conquer algorithm of finding max subarray sum, etc. 
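
For instance, here is a minimal merge sort sketch following this pattern (an illustrative implementation, not the only way to write it): roughly logn levels of recursion, each doing O(n) merging work. The temporary buffer uses a C99 variable-length array for brevity.

#include <string.h>

// Merge two sorted halves X[left..mid] and X[mid+1..right] in O(n) time.
void merge(int X[], int left, int mid, int right)
{
    int temp[right - left + 1];
    int i = left, j = mid + 1, k = 0;
    while (i <= mid && j <= right)
        temp[k++] = (X[i] <= X[j]) ? X[i++] : X[j++];
    while (i <= mid)
        temp[k++] = X[i++];
    while (j <= right)
        temp[k++] = X[j++];
    memcpy(X + left, temp, sizeof(int) * (right - left + 1));
}

// Divide into halves, sort each recursively, then combine => O(nlogn) overall.
void mergeSort(int X[], int left, int right)
{
    if (left >= right)
        return;                       // zero or one element: already sorted
    int mid = left + (right - left) / 2;
    mergeSort(X, left, mid);
    mergeSort(X, mid + 1, right);
    merge(X, left, mid, right);
}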

Sometimes, O(nlogn) running time is simply the result of performing an O(logn) operation n times. For example, tree sort creates a BST by inserting each element of the n-sized array one by one. Since the insert operation on a self-balancing BST takes O(logn) time, the entire algorithm takes O(nlogn) time. 

Other Examples: Heapsort, finding the shortest path in a graph, etc.

Quadratic time complexity: O(n²)

A quadratic time complexity often comes when there are two nested loops in the algorithm. Such situations occur mostly in matrix-based problems and brute force solutions that explore all pairs of the input elements.

Best Examples: Bubble sort, Insertion sort, Transpose of a matrix, etc.
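
As an illustration, here is a hypothetical brute force sketch that examines every pair (i, j) with i < j, i.e., about n(n-1)/2 comparisons => O(n²):

// Count pairs of elements whose sum equals target by checking all pairs.
int countPairsWithSum(int X[], int n, int target)
{
    int count = 0;
    for (int i = 0; i < n; i = i + 1)
        for (int j = i + 1; j < n; j = j + 1)
            if (X[i] + X[j] == target)
                count = count + 1;
    return count;
}

For n elements there are n(n-1)/2 such pairs, so the number of comparisons grows quadratically with n.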

Exponential time complexity: O(2^n)

This time complexity often occurs when an algorithm has to explore all possible combinations of the input. For example:

  • Finding all possible subsets of the input elements. For example, the subsets of {1,2,3} are: {}, {1}, {2}, {3}, {1,2}, {1,3}, {2,3} and {1,2,3}, i.e., 2^3 = 8 subsets in total.
  • Finding all permutations of the input elements. For example, the permutations of {1,2,3} are (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2) and (3,2,1).
  • Exploring all possible solutions of an optimization problem using brute force recursion (which we often optimize later using dynamic programming).

Best Examples: Find all permutations of a string, find all subsets of a given set, brute force solution to count all possible ways to reach nth stair, brute force solution of longest common subsequence problem, brute force solution to find nth Fibonacci, recursive solution of the Tower of Hanoi, etc.
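
For instance, here is the brute force recursive Fibonacci mentioned above; each call spawns two more calls, so the number of calls grows exponentially with n (bounded above by 2^n):

// Naive recursion: fib(n) calls fib(n - 1) and fib(n - 2), so the call tree
// roughly doubles at every level.
long long fib(int n)
{
    if (n <= 1)
        return n;                     // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);
}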

Comparison of time complexities

In increasing order of growth rate (i.e., decreasing efficiency): O(1) < O(logn) < O(n) < O(nlogn) < O(n²) < O(2^n)

[Image: Comparison of popular time complexities in algorithms. Source: www.bigocheatsheet.com]

Steps to analyse and compare algorithms

  • Step 1: First, count the total number of critical operations performed by each algorithm as a function of the input size n. For example, the count can be of the form (an^2 + bn + c), (an + b), (alogn + b), etc.
  • Step 2: Then, ignore lower-order terms and coefficients and represent the count using big-O notation, e.g., O(n^2), O(n), or O(logn).
  • Step 3: Finally, compare the growth rates of the higher-order terms and decide which algorithm is more efficient: O(logn) grows slower than O(n), which grows slower than O(n^2).

References

  • Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein (CLRS)
  • The Algorithm Design Manual by Steven Skiena

Enjoy coding, Enjoy algorithms!
