Time Complexity Analysis in Data Structures and Algorithms

Why do we analyze algorithms?

  • Deciding on the most efficient algorithm among multiple algorithms.
  • Estimating algorithm performance for different input sizes.
  • Understanding the nature of the code and finding opportunities for further optimization.

The critical questions are: What is the meaning of the time complexity of an algorithm? How do we analyze time complexity? What are the essential concepts related to it? Let's learn step by step and understand various terms used in the analysis of algorithms. 

What is input size?

We define the input size as the total number of items present in the input. If we increase the input size, the total number of operations performed by an algorithm will increase. In other words, the time taken by an algorithm will increase with the growth in the input size.

How do we find input size?

The value of the input size depends on the nature of the problem.

  • Most of the time, we calculate the input size based on the number of items in the input. For example, for sorting n integers in an array, we have n data elements, so the input size is n. We perform basic operations like comparison, swapping, etc., to sort the input. If we increase the value of n, the count of such operations will increase. Similar examples include searching in a linked list of n nodes, traversing a tree of n nodes, etc.
  • Sometimes, we define the input size in terms of the total number of bits. For example, we perform bitwise multiplication to multiply two integers, A and B. If integer A has m bits and B has n bits, then the input size will be defined in terms of m and n. If m or n increases, the total count of bitwise operations will increase. Similar examples include finding the GCD of two numbers, checking whether a number is prime or not, etc.
  • Sometimes, we define the input size in terms of two numbers. For example, for finding the shortest path in a graph from a given source, we process vertices and edges present in the graph. So the input size is defined as the total number of vertices and edges (V, E) in the graph. If the value of V or E increases, the total number of operations performed by the shortest path algorithm will increase. Similar examples include merging two sorted arrays of size m and n, finding the longest common subsequence of two strings of size m and n, etc.

What is running time?

The running time of an algorithm for a given input size depends on the number of operations executed. The greater the number of operations, the longer the running time of an algorithm.

We usually want to know how the number of operations grows in proportion to the size of the input. In other words, we measure the running time of an algorithm as a function of input size.

Types of operations used in algorithms

We perform various types of operations on data to deliver the correct output. So before moving forward to the algorithm analysis, let's explore all the basic operations in algorithms:

  • Arithmetic operations: Addition (+), subtraction (-), multiplication (*), division (/), modulus (%).
  • Relational operations: Equal (==), not equal (!=), greater than (>), less than (<), greater than or equal to (>=), less than or equal to (<=).
  • Bitwise operations: Bitwise AND (&), bitwise OR (|), bitwise XOR (^), bitwise NOT (~), left shift (<<), right shift (>>).
  • Logical operations: Logical AND (&&), logical OR (||), logical NOT (!), etc.
  • Assignment operations: Assignment (=), add and assignment (+=), subtract and assignment (-=), multiply and assignment (*=), divide and assignment (/=), modulus and assignment (%=), left shift and assignment (<<=), right shift and assignment (>>=), bitwise AND and assignment (&=), bitwise OR and assignment (|=), bitwise XOR and assignment (^=), etc.
  • Increment/Decrement operations: Post-increment (++), post-decrement (--), pre-increment (++), pre-decrement (--).
  • Control operations: Conditional statements like if and else, function calls, return statements, unconditional statements like break, etc.

Critical assumptions for the analysis of algorithms

  • Each line of code in an algorithm will take constant time to execute. One line may take a different time than another, but we assume that the ith line takes a constant time Ci.
  • We study algorithm analysis independently of system-related things, i.e., programming language implementation, compiler, hardware, etc. For the given specification of a system, these things are constant.
  • The variable thing is the input size, which depends on the problem specification. So we calculate the running time of an algorithm as a function of input size.
  • Input data size is usually huge in real-world applications, often in the range of millions or billions. So we mostly analyze an algorithm for large values of input size.

Now the critical question is: How can we determine the running time of an algorithm? The answer depends on the code statements used inside the algorithm: loops, recursive calls, conditional statements, etc.

algorithm(...)
{ 
   code statement 1
   code statement 2
   ...
   code statement k  
}

The basic idea is to calculate the running time by adding the time taken by all code statements. Total time taken by algorithm = Time taken by code statement 1 + Time taken by code statement 2 + ... + Time taken by code statement k. Let's understand this idea using some examples!

Example 1: Analysis of linear search

int linearSearch(int X[], int n, int k)
{
    for(int i = 0; i < n; i = i + 1)
    {
        if(X[i] == k)
            return i;
    }
    return -1;
}

Algorithm idea: We compare each value of X[] with k using a loop. If we find a value X[i] equal to k, we return the index where the value is present (successful search). Otherwise, we move to the next value. If we do not find a value equal to k by the end of the loop, we return -1 (unsuccessful search).

As mentioned above, we assume that each line of code takes a different but constant time. Here are some critical observations from the above algorithm:

  • Input size = n because n integers are given in the input.
  • Input distribution is unknown, and k can be present anywhere in the array, i.e., it can be at the start, somewhere in the middle, or not present at all. In simple words, the total number of operations depends on the order of the given input. (Think!)

Best-case analysis of linear search: The best case occurs when k is present at the first index. So the loop will run only once.

int linearSearch(int X[], int n, int k)
{
    for(int i = 0; i < n; i = i + 1) => C1 time
    {
        if(X[i] == k)                => C2 time
            return i;                => C3 time
    }
    return -1;                       => C4 time
}

To calculate the best-case running time of the linear search, we add the time taken by each line of code, i.e., C1 + C2 + C3 + C4 = a (which is some constant). Note: The return -1 statement never executes in the best case, so the value of C4 will be 0 here.

Worst-case analysis of linear search: When k is not present in the array, the loop will run n times and we return -1. We perform one comparison at each iteration, and the condition if(X[i] == k) becomes false every time.

int linearSearch(int X[], int n, int k)
{
    for(int i = 0; i < n; i = i + 1) => C1 * n time
    {
        if(X[i] == k)                => C2 * n time
            return i;                => C3 time
    }
    return -1;                       => C4 time
}

Worst-case running time of linear search = Sum of time taken by each line of code = C1 * n + C2 * n + C3 + C4 = (C1 + C2) * n + (C3 + C4) = a*n + b, where a = (C1 + C2) and b = (C3 + C4). So the worst-case running time will be a linear function of n, i.e., a*n + b. Note: The return i statement never executes in the worst case, so the value of C3 is 0 here.

Example 2: Analysis of finding max value in an array

int findMax(int X[], int n)
{
    int max = X[0];
    for (int i = 1; i < n; i = i + 1)
    {
        if (X[i] > max)
            max = X[i];
    }
    return max;
}

Algorithm idea: We initialize a variable max with the first element X[0] and run a loop to compare the remaining values with max. At each iteration, when X[i] > max, we update max with X[i]. By the end of the loop, the maximum value gets stored in the variable max, and we return this as an output.

  • Input size = n.
  • Input distribution is not given, so the maximum element can be present anywhere, i.e., at the start, middle, or end.
  • For finding the maximum, we compare each value X[i] with max. So the loop will always run n - 1 times, independent of the input distribution.
  • The comparison X[i] > max will execute n - 1 times, independent of the distribution.
  • Similarly, return max will run only one time.
  • The assignment max = X[i] will execute only if the comparison X[i] > max is true. Otherwise, it will not execute. So the execution of the assignment operation depends on the distribution of input.

Best-case analysis of maximum finding: When the array is sorted in decreasing order, the max value will be X[0], and the comparison X[i] > max will be false every time. In such a scenario, the assignment operation max = X[i] will never get executed inside the loop.

int findMax(int X[], int n)
{
    int max = X[0];                       => C1 time
    for (int i = 1; i < n; i = i + 1)     => C2 * (n-1) time
    {
        if (X[i] > max)                   => C3 * (n-1) time
            max = X[i];                   => C4 time
    }
    return max;                           => C5 time
}

The best-case running time of finding maximum = Sum of time taken by each line of code = C1 + C2 * (n - 1) + C3 * (n - 1) + C4 + C5 = (C2 + C3) * n + (C1 + C4 + C5 - C2 - C3) = a*n + b, where a = (C2 + C3) and b = (C1 + C4 + C5 - C2 - C3). So in the best case, the running time is a linear function of n. Note: The assignment max = X[i] never executes in the best case, so the value of C4 will be 0 here!

Worst-case analysis of maximum finding: When the array is sorted in increasing order, the max value will be X[n - 1], and the comparison X[i] > max will be true every time. In such a scenario, the assignment operation max = X[i] will get executed at every iteration of the loop.

int findMax(int X[], int n)
{
    int max = X[0];                       => C1 time
    for (int i = 1; i < n; i = i + 1)     => C2 * (n-1) time
    {
        if (X[i] > max)                   => C3 * (n-1) time
            max = X[i];                   => C4 * (n-1) time
    }
    return max;                           => C5 time
}

Running time in the worst case = C1 + C2 * (n - 1) + C3 * (n - 1) + C4 * (n - 1) + C5 = (C2 + C3 + C4) * n + (C1 + C5 - C2 - C3 - C4) = a*n + b, where a = (C2 + C3 + C4) and b = (C1 + C5 - C2 - C3 - C4). So in the worst case, the running time is also a linear function of n, but the values of the constants are greater than the values of the constants in the best-case running time.

Special note: As observed in the above analysis, the running time of an algorithm depends on the distribution of the input data: sorted, partially sorted, reverse sorted, etc. In other words, the order of the input data decides the total count of critical operations in our code. If the order of the data changes, the count of operations performed by our code will change. Therefore, we need to understand the scenarios of worst-case and best-case input.

Worst-case analysis of an algorithm

The worst-case running time is the time taken by the worst-case input, i.e., an input for which our algorithm executes the maximum number of operations. It gives us an upper bound on the running time and guarantees that the algorithm will never take longer than that. That's why we prefer worst-case analysis of an algorithm for most coding problems.

The worst-case scenario occurs pretty often for several algorithms. For example, when searching a database for particular information, the worst case often happens when the data is not present in the database.

Best-case analysis of an algorithm

The best-case running time is the time taken by the best-case input, i.e., an input for which our algorithm executes the minimum number of operations. It gives us a lower bound on the running time and guarantees that the algorithm will never take less time than that.

We are usually not interested in the best case because this might happen rarely and analysis based on the best case is not likely to represent the algorithm's behavior. However, there are some instances where best-case analysis can be useful. For example: Shell sort and Quicksort algorithms can take advantage of Insertion Sort's best-case running time to become more efficient.

Average-case analysis of an algorithm

For some algorithms, the worst-case analysis may not represent the actual performance behaviour. To obtain a better estimate, we need to combine the running times over all possible inputs. In other words, average-case analysis is the best viable option when all possible inputs are equally likely.

To calculate the average-case running time, we add the running times for all possible inputs and divide by the total number of possible inputs. Therefore, knowing the distribution of all possible inputs is essential for analyzing the average case.

The average-case running time is usually roughly the same as the worst-case running time. However, in some situations, it is closer to the best-case running time. Examples include the average-case analysis of quicksort, insertion in a dynamic array, etc.

[Figure: Comparison of worst, best and average case analysis of algorithms for various input sizes. Image source: The Algorithm Design Manual by Skiena]

Average-case analysis of linear search

For the linear search problem, let's assume that the input is uniformly distributed and the probability of finding element k at each location is the same (we also need to include the case when k is not present). So there will be n+1 cases, i.e., n cases of successful search and 1 case of unsuccessful search.

  • Based on the given input, the value k can be present at position 1, 2, 3, and so on. In general, the cost of finding value k at the ith position is c*i, and this cost will be c*n for an unsuccessful search (here c is some constant).
  • The average case running time of linear search = Sum of cost for finding value at all positions / Number of positions = (c + 2c + ⋅⋅⋅ + cn + cn) / (n + 1) = c(1 + 2 + ⋅⋅⋅ + n + n) / (n + 1) = c [n(n+1)/2 + n] / (n+1) = c [n/2 + n/(n+1)] < c(n/2 + 1).

So the linear search algorithm, on average, examines half of the array values, and it is a linear function of n. However, this is valid only if the value k is equally likely to appear at any array position. If this assumption is incorrect, the algorithm does not necessarily examine half of the array values in the average case.
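
To make this derivation concrete, here is a minimal C sketch (the function name countComparisons is just illustrative) that counts the comparisons linear search performs for every possible position of k, including the unsuccessful case, and prints the average. For large n, the average comes out close to n/2, matching the formula above.

#include <stdio.h>

// Comparisons made by linear search when the target is at index pos.
// pos = n represents the unsuccessful case (target not present).
int countComparisons(int n, int pos)
{
    // If k is at index pos, linear search makes pos + 1 comparisons.
    // If k is absent, it makes n comparisons.
    return (pos < n) ? pos + 1 : n;
}

int main(void)
{
    int n = 1000;
    long totalComparisons = 0;

    // n successful cases (k at index 0 to n - 1) plus 1 unsuccessful case
    for (int pos = 0; pos <= n; pos = pos + 1)
        totalComparisons = totalComparisons + countComparisons(n, pos);

    double average = (double)totalComparisons / (n + 1);
    printf("Average comparisons for n = %d: %.2f (about n/2)\n", n, average);
    return 0;
}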

Rate of growth of running time

Several algorithms can be applicable to a coding problem, and we need to determine the most efficient algorithm among them. In most cases, the input size would be significant, usually in the range of millions or billions. So the primary idea is to compare the exact worst-case running time for each algorithm (as calculated above) and decide which is the most efficient.

However, this process can be mathematically complex and difficult for algorithms with several lines of code. Do you want to do that? If yes, then go ahead! Otherwise, we need to define a simplified model to compare the efficiency of algorithms! Let's start from a simple perspective!

Running time will increase as we increase the input size. This growth mostly depends on the higher-order term in the running time function. For example, suppose the worst-case running time is a quadratic function, an² + bn + c. For large values of n, the lower-order term, bn + c, is insignificant compared to the higher-order term, an², i.e., an² >> bn + c. For a better understanding, suppose the worst-case running time is 3n^2 + 2n + 1, where a = 3, b = 2, and c = 1.

Higher order term = 3n^2
Lower order terms = 2n + 1

for n = 1, 3n^2 = 3, 2n + 1 = 3 
(3n^2 = 2n + 1)

for n = 10, 3n^2 = 300, 2n + 1 = 21 
(3n^2 > 2n + 1)

for n = 1000, 3n^2 = 3*10^6, 2n + 1 = 2001
(3n^2 >> 2n + 1)

for n = 10^5, 3n^2 = 3*10^10, 2n + 1 = 2*10^5 + 1
(3n^2 >> 2n + 1)

So for a large value of input size, we shall make three assumptions to simplify the analysis:

  • The rate of growth of the running time is one of the significant factors in the analysis. So we consider only the higher-order term in the running time function and ignore lower-order terms.
  • We can also ignore the coefficient of the higher-order term because constant factors are less significant than the rate of growth in determining algorithm efficiency for large input sizes.
  • To compare the efficiency of two algorithms of the same problem, we only compare the rate of growth of higher-order terms.

More examples to understand it better

  1. Running time = 3n² + 4nlogn + 2, higher-order term = 3n², lower-order term = 4nlogn + 2, rate of growth = rate of growth of higher-order term = n²
  2. Running time = 2^n + 7n^4 + 4n + 2, higher-order term = 2^n, lower-order term = 7n^4 + 4n + 2, rate of growth = 2^n
  3. Running time = 2n + 5logn + 6, higher-order term = 2n, lower-order term = 5logn + 6, rate of growth = n
  4. Running time = 6nlogn + n + 5, higher-order term = 6nlogn, lower-order term = n + 5, rate of growth = nlogn

Suppose algorithm A takes 3n² + 4nlogn + 2 time, and algorithm B takes 6nlogn + n + 5 time. We can decide on the efficient algorithm by comparing the rate of growth of higher-order terms (ignoring coefficients and lower-order terms).

  • Rate of growth of algorithm A = n², rate of growth of algorithm B = nlogn
  • n² > nlogn. Therefore, algorithm B is more efficient than algorithm A for the large input size n.

Note to think: We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. However, in some situations, due to the large value of constant factors and lower-order terms, an algorithm whose running time has a higher order of growth might take less time for small inputs than an algorithm whose running time has a lower order of growth. Think!
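
Here is a small C sketch of that idea (the constant factors 2 and 100 are made up purely for illustration): a hypothetical 2n² algorithm beats a hypothetical 100nlogn algorithm for small n, but the lower order of growth wins once n gets large. Compile with the math library (-lm).

#include <stdio.h>
#include <math.h>

int main(void)
{
    // Hypothetical running times: algorithm P takes 2*n*n steps and
    // algorithm Q takes 100*n*log2(n) steps. Q has a lower order of growth,
    // but its large constant factor makes it slower for small n.
    for (int n = 2; n <= 2048; n = n * 2)
    {
        double p = 2.0 * n * n;
        double q = 100.0 * n * log2(n);
        printf("n = %4d: 2n^2 = %10.0f, 100nlogn = %10.0f -> %s is faster\n",
               n, p, q, (p < q) ? "2n^2" : "100nlogn");
    }
    return 0;
}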

What is time complexity?

To simplify the analysis and comparison of algorithms further, we define the term time complexity. Time complexity is an abstract way to represent the running time of an algorithm in terms of the rate of growth and asymptotic notations. It is an approximate estimation of how much time an algorithm will take for a large value of input size. We use different notations to represent the best, average, and worst-case time complexity.

Asymptotic notations

Big Omega notation: Ω(rate of growth)

Big-Omega is a notation to represent the best case time complexity of an algorithm. It provides a lower bound for the running time of an algorithm. Best case time complexity = Ω(rate of growth of the best case running time).

Suppose best case running time = 2n + 3logn + 4
Rate of growth = n
Best case time complexity = Ω(n)

Big Theta notation: Θ(rate of growth)

Big Theta is a notation to represent the average case time complexity of an algorithm. It provides a tight bound for the running time of an algorithm. Average case time complexity = Θ(rate of growth of the average case running time).

Suppose average case running time T(n) = 4n^2 + 2n + 5
Rate of growth = n^2
Average case time complexity = Θ(n^2)

Big-O notation: O(rate of growth)

Big-O is a notation to represent the worst-case time complexity of an algorithm. It provides an upper bound on the running time of an algorithm. In simple words, Big-O notation and worst-case analysis are tools that greatly simplify our ability to compare the efficiency of algorithms. That's why, in practice, we mostly use Big-O notation during algorithm analysis.

Worst-case time complexity = O(rate of growth of the worst-case running time).

Suppose worst case running time T(n) = 4nlogn + 2n + 6
Rate of growth = nlogn
Worst case time complexity = O(nlogn)

Some basic properties of big O notations

  • O(n) * O(m) = O(n * m), O(n) + O(m) = O(n + m) (see the sketch after this list)
  • constant * O(n) = O(n), constant + O(n) = O(n)
  • O(n) means the running time grows at most as a linear function of n, but it could grow more slowly. In general, if the running time is O(n), then for a large value of n, the running time is at most k*n for some constant k. (Think!)
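
A minimal sketch of how these rules show up in code (the function names are illustrative): loops placed one after another add their costs, while nested loops multiply them.

// Sequential loops: O(n) + O(m) = O(n + m)
void processSequential(int A[], int n, int B[], int m)
{
    for (int i = 0; i < n; i = i + 1)    // O(n)
        A[i] = A[i] + 1;
    for (int j = 0; j < m; j = j + 1)    // O(m)
        B[j] = B[j] + 1;
}

// Nested loops: O(n) * O(m) = O(n * m)
void processNested(int A[], int n, int B[], int m)
{
    for (int i = 0; i < n; i = i + 1)         // runs n times
        for (int j = 0; j < m; j = j + 1)     // runs m times for each i
            A[i] = A[i] + B[j];
}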

Popular time complexities in algorithms

Constant time complexity : O(1)

Such time complexity appears when our algorithm performs a constant number of operations. The time complexity does not depend on the input size, i.e., regardless of the input size, the algorithm will have the same runtime. For example, the following situations have O(1) time complexity:

  • A loop or recursion that runs a constant number of times.
  • An algorithm that doesn't contain a loop, recursion, or a call to any non-constant-time function.

Best examples: Accessing an element in an array, Finding minimum value in the min-heap, Searching elements in the hash table [O(1) average], Finding median in a sorted array, swapping two variables, etc.
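
For instance, the following small C functions each perform a fixed number of operations regardless of the input size, so both run in O(1) time (a minimal sketch).

// Accessing an element in an array: O(1)
int getElement(int X[], int i)
{
    return X[i];
}

// Swapping two variables: O(1)
void swapValues(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}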

Logarithmic time complexity: O(logn)

A logarithm is the inverse of exponentiation. For example, logn with base 2 means the number of times n must be divided by 2 to get 1. Such time complexity appears when the input size decreases by some constant factor (mostly by half) at each step. In such a scenario, the algorithm will execute O(logn) operations for input size n.

Best examples: Binary search, divide and conquer solutions similar to binary search, Euclid's algorithm for finding GCD, the matrix exponentiation method for finding the nth Fibonacci number, searching in a balanced BST or Trie, deleting the minimum from a min-heap, etc.
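
For example, a standard iterative binary search halves the search range at every step, so it performs O(logn) comparisons. Here is a minimal sketch, assuming X[] is sorted in increasing order.

// Search k in a sorted array X[] of size n. Returns the index of k or -1.
int binarySearch(int X[], int n, int k)
{
    int left = 0, right = n - 1;
    while (left <= right)
    {
        int mid = left + (right - left) / 2;   // avoids overflow of (left + right)
        if (X[mid] == k)
            return mid;
        else if (X[mid] < k)
            left = mid + 1;    // search the right half
        else
            right = mid - 1;   // search the left half
    }
    return -1;
}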

Linear time complexity: O(n)

Such time complexity appears when we need to process each input element in O(1) time. This is a frequently occurring and efficient time complexity in coding problems because, most of the time, it is necessary to access each input element at least once in constant time to get the output.

Such time complexity often contains a single loop or a series of single loops. Sometimes, we also find such time complexity in recursive algorithms where we access each element and perform constant operations at each recursion step.

Best examples: Finding the max element in an array, Kadane's algorithm for maximum subarray sum, merging two sorted arrays, the partition algorithm in quicksort, BFS and DFS traversals of a binary tree, etc.
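
As a concrete example, Kadane's algorithm scans the array once and does O(1) work per element, so it finds the maximum subarray sum in O(n) time. A minimal sketch, assuming n >= 1:

// Maximum subarray sum in O(n) time using Kadane's algorithm (n >= 1).
int maxSubarraySum(int X[], int n)
{
    int maxEndingHere = X[0];   // best sum of a subarray ending at index i
    int maxSoFar = X[0];        // best sum found so far
    for (int i = 1; i < n; i = i + 1)
    {
        // Either extend the previous subarray or start a new one at index i.
        if (maxEndingHere + X[i] > X[i])
            maxEndingHere = maxEndingHere + X[i];
        else
            maxEndingHere = X[i];
        if (maxEndingHere > maxSoFar)
            maxSoFar = maxEndingHere;
    }
    return maxSoFar;
}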

Log-linear time complexity: O(nlogn)

Such time complexity appears in divide and conquer algorithms where we divide the problem into more than one independent sub-problem and perform O(n) operations to combine the solutions. In other words, in such problems, the recursion tree height will be O(logn), and at each level, we perform O(n) operations.

Best examples: Merge sort, quicksort (in the average case), the divide and conquer algorithm for finding the max subarray sum, etc.
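
For instance, merge sort splits the array into two halves, sorts each half recursively, and merges the sorted halves in O(n) time; with O(logn) levels of recursion, the total work is O(nlogn). A minimal sketch (it allocates a temporary buffer during the merge):

#include <stdlib.h>

// Merge the sorted halves X[low..mid] and X[mid+1..high] in O(n) time.
void merge(int X[], int low, int mid, int high)
{
    int size = high - low + 1;
    int *temp = (int *)malloc(size * sizeof(int));
    int i = low, j = mid + 1, k = 0;

    while (i <= mid && j <= high)
        temp[k++] = (X[i] <= X[j]) ? X[i++] : X[j++];
    while (i <= mid)
        temp[k++] = X[i++];
    while (j <= high)
        temp[k++] = X[j++];

    for (k = 0; k < size; k = k + 1)
        X[low + k] = temp[k];
    free(temp);
}

// Recursion depth is O(logn), and each level does O(n) merging work.
void mergeSort(int X[], int low, int high)
{
    if (low < high)
    {
        int mid = low + (high - low) / 2;
        mergeSort(X, low, mid);
        mergeSort(X, mid + 1, high);
        merge(X, low, mid, high);
    }
}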

Sometimes O(nlogn) running time is simply the result of performing an O(log n) operation n times. For example, Tree sort creates a BST by inserting each element of the n-sized array one by one. Since the insert operation on a self-balancing BST takes O(log n) time, the entire algorithm takes O(n logn) time. 

Other examples: Heapsort, finding the shortest path in a graph, etc.

Quadratic time complexity: O(n²)

A quadratic time complexity often appears when there are two nested loops in the algorithm. Such situations occur mostly in matrix-based problems and brute force solutions that explore all pairs of the input elements.

Best examples: Bubble sort, insertion sort, transpose of a matrix, etc.
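
For example, bubble sort compares adjacent pairs inside two nested loops, so it performs roughly n*(n - 1)/2 comparisons in the worst case, which is O(n²). A minimal sketch:

// Bubble sort: two nested loops give O(n^2) comparisons in the worst case.
void bubbleSort(int X[], int n)
{
    for (int i = 0; i < n - 1; i = i + 1)
    {
        // After pass i, the last i elements are already in their final place.
        for (int j = 0; j < n - 1 - i; j = j + 1)
        {
            if (X[j] > X[j + 1])
            {
                int temp = X[j];
                X[j] = X[j + 1];
                X[j + 1] = temp;
            }
        }
    }
}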

Exponential time complexity: O(2^n)

This time complexity often occurs when an algorithm has to explore all possible scenarios of output. Examples of such algorithms include:

  • Finding all possible subsets of the input elements. For example, the subsets of {1,2,3} are: {}, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, and {1,2,3}.
  • Finding all permutations of the input elements. For example, the permutations of {1,2,3} are (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), and (3,2,1).
  • Finding all possible solutions to optimization problems in dynamic programming.

Some best examples of algorithms with this type of time complexity include finding all permutations of a string, finding all subsets of a given set, brute force solutions to count all possible ways to reach the nth stair, brute force solutions of the longest common subsequence problem, brute force solutions to find the nth Fibonacci number, recursive solutions of the Tower of Hanoi, etc.
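
As an illustration, the recursive sketch below prints all subsets of an array: at every index, the element is either excluded or included, so the recursion tree has 2^n leaves and the algorithm performs O(2^n) operations (the function name printSubsets is just illustrative).

#include <stdio.h>

// Print all 2^n subsets of X[]: at each index, either skip the element
// or include it in the current subset.
void printSubsets(int X[], int n, int index, int subset[], int subsetSize)
{
    if (index == n)
    {
        printf("{ ");
        for (int i = 0; i < subsetSize; i = i + 1)
            printf("%d ", subset[i]);
        printf("}\n");
        return;
    }
    // Case 1: exclude X[index]
    printSubsets(X, n, index + 1, subset, subsetSize);
    // Case 2: include X[index]
    subset[subsetSize] = X[index];
    printSubsets(X, n, index + 1, subset, subsetSize + 1);
}

int main(void)
{
    int X[] = {1, 2, 3};
    int subset[3];
    printSubsets(X, 3, 0, subset, 0);   // prints all 8 subsets of {1, 2, 3}
    return 0;
}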

Steps to calculate the time complexity and compare the efficiency of algorithms

  1. First, count the total number of critical operations performed by all the algorithms with respect to the input size, i.e., n. For example, it can be in the form of (an^2 + bn + c) or (an + b) or (alogn + b), etc.
  2. Then, we ignore lower-order terms and coefficients and represent them in the form of Big-O notation. For example, we write it like O(n^2) or O(n) or O(log n).
  3. Finally, we compare the higher-order terms present in the Big-O notation and decide on the efficient algorithm.

Note:

  • When comparing time complexity in terms of Big-O notation, the order from smallest to largest is: O(1) < O(logn) < O(n) < O(nlogn) < O(n²) < O(2^n).
  • In Step 3, if the higher-order terms are the same for two algorithms, then we need to consider the coefficient of the higher-order terms and the value of the lower-order terms. But how do we decide the coefficient values? Think and explore!


References

  • Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein (CLRS)
  • The Algorithm Design Manual by Steven Skiena

If you have any queries/doubts/feedback, please write to us at contact@enjoyalgorithms.com. Enjoy learning, Enjoy algorithms!
