Recursion is one of the most popular problem-solving approaches in data structures and algorithms. Several techniques are built entirely on recursion: decrease and conquer, divide and conquer, DFS traversal of trees and graphs, backtracking, the top-down approach of dynamic programming, etc. So learning to analyze the time complexity of recursion is critical for understanding these approaches and improving our code's efficiency.

Programmers sometimes find it challenging to analyze recursive algorithms due to their mathematical details. But working with recursive algorithms becomes easy once we understand the recursive patterns and the methods used in the analysis. Without further delay, let's learn the fundamental concepts related to recursion time complexity.

A recurrence relation is an equation that describes a sequence in which any term is defined in terms of its previous terms. We use recurrence relations to analyze the time complexity of recursive functions in terms of input size.

In recursion, we solve a problem using the solutions of smaller subproblems. If the time complexity of the recursive function is T(n), then the time complexity of the smaller subproblems will be defined by the same function, but in terms of the subproblems' input sizes.

Here is a common approach to writing T(n) if we have k number of subproblems: T(n) = T(input size of 1st subproblem) + T(input size of 2nd subproblem) + .... + T(input size of kth subproblem) + Time complexity of additional operations other than recursive calls

The above formula provides an approach to define recurrence relation of every recursive algorithm. Then, we solve this recurrence relation and calculate the overall time complexity for recursion in terms of Big-O notation. Let's understand this better via examples of various recurrence relations of popular recursive algorithms.

```
reverse(A[], l, r)
- Base case: if (l >= r), then return
- swap(A[l], A[r])
- reverse(A, l + 1, r - 1)
```

We reverse an n-size array by swapping the first and last values and recursively solving the remaining subproblem of reversing an (n - 2)-size array. At each step of recursion, the input size decreases by 2.

- Time complexity T(n) = Time complexity of solving (n - 2) size problem + Time complexity of swapping operation = T(n - 2) + O(1)
- Recurrence relation to reverse an array: T(n) = T(n - 2) + c, where T(1) = c
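The pseudocode above can be sketched as runnable Python (a minimal version for illustration):

```python
def reverse(A, l, r):
    # Base case: zero or one element left, nothing to swap
    if l >= r:
        return
    A[l], A[r] = A[r], A[l]      # O(1) swap of first and last values
    reverse(A, l + 1, r - 1)     # remaining subproblem of size (n - 2)

arr = [1, 2, 3, 4, 5]
reverse(arr, 0, len(arr) - 1)
# arr is now [5, 4, 3, 2, 1]
```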

```
binarySearch(A[], l, r, target)
- Base case: if (l > r), then return -1
- mid = l + (r - l)/2
- if (A[mid] == target), return mid
- if (A[mid] > target), return binarySearch(A, l, mid - 1, target)
- if (A[mid] < target), return binarySearch(A, mid + 1, r, target)
```

At each step of recursion, we do one comparison and decrease the input size by half. In other words, we solve a problem of size n using the solution of one subproblem of input size n/2 (based on the comparison, either the left or the right subproblem).

- Time complexity T(n) = Time complexity of n/2 size problem + Time complexity of comparison operation = T(n/2) + O(1).
- Recurrence relation of binary search: T(n) = T(n/2) + c, where T(1) = c
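A runnable Python sketch of this binary search (names are illustrative):

```python
def binary_search(A, l, r, target):
    # Base case: empty range, target not present
    if l > r:
        return -1
    mid = l + (r - l) // 2
    if A[mid] == target:          # O(1) comparison
        return mid
    if A[mid] > target:
        return binary_search(A, l, mid - 1, target)   # left half of size n/2
    return binary_search(A, mid + 1, r, target)       # right half of size n/2

# binary_search([1, 3, 5, 7, 9], 0, 4, 7) returns 3
```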

```
mergeSort(A[], l, r)
- Base case: if (l == r), then return
- mid = l + (r - l)/2
- mergeSort(A, l, mid)
- mergeSort(A, mid + 1, r)
- merge(A, l, mid, r)
```

In merge sort, we solve a problem of size n using the solutions of two equal subproblems of input size n/2 and merge the subproblem solutions in O(n) time.

- Time complexity T(n) = Time complexity of left sub-problem of size n/2 + Time complexity of right sub-problem of size n/2 + Time complexity of merging = T(n/2) + T(n/2) + O(n) = 2 T(n/2) + cn
- Recurrence relation of merge sort: T(n) = 2T(n/2) + cn, where T(1) = c
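Here is a minimal Python sketch of merge sort following the pseudocode above; the helper `merge` does the O(n) combining step:

```python
def merge_sort(A, l, r):
    # Base case: zero or one element is already sorted
    if l >= r:
        return
    mid = l + (r - l) // 2
    merge_sort(A, l, mid)        # left subproblem: T(n/2)
    merge_sort(A, mid + 1, r)    # right subproblem: T(n/2)
    merge(A, l, mid, r)          # combining step: O(n)

def merge(A, l, mid, r):
    # Merge two sorted halves A[l..mid] and A[mid+1..r] in O(n) time
    left, right = A[l:mid + 1], A[mid + 1:r + 1]
    i = j = 0
    k = l
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1
        k += 1
    while i < len(left):
        A[k] = left[i]; i += 1; k += 1
    while j < len(right):
        A[k] = right[j]; j += 1; k += 1
```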

```
quickSort(A[], l, r)
- Base case: if (l >= r), then return
- pivot = partition(A, l, r)
- quickSort(A, l, pivot - 1)
- quickSort(A, pivot + 1, r)
```

In quick sort, we divide the array in O(n) time and recursively solve two subproblems of different sizes. The sizes of the subproblems depend on the choice of pivot in the partition algorithm. Suppose that after the partition, i elements are in the left subarray (left of the pivot) and n - i - 1 elements are in the right subarray (right of the pivot). So the size of the left subproblem is i, and the size of the right subproblem is n - i - 1.

- Time complexity T(n) = Time complexity of partition algorithm + Time complexity of left sub-problem of size i + Time complexity of right sub-problem of size (n - i - 1) = O(n) + T(i) + T(n - i - 1) = T(i) + T(n - i - 1) + cn
- Recurrence relation of quick sort: T(n) = T(i) + T(n - i - 1) + cn, where T(1) = c
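A Python sketch of quick sort using the Lomuto partition scheme (one common choice of partition algorithm; here the last element is the pivot):

```python
def quick_sort(A, l, r):
    # Base case: zero or one element
    if l >= r:
        return
    pivot = partition(A, l, r)       # O(n) partition step
    quick_sort(A, l, pivot - 1)      # left subproblem: T(i)
    quick_sort(A, pivot + 1, r)      # right subproblem: T(n - i - 1)

def partition(A, l, r):
    # Lomuto partition: everything <= A[r] moves left of the final pivot index
    pivot = A[r]
    i = l - 1
    for j in range(l, r):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1
```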

**Karatsuba algorithm for fast multiplication:** Here, we solve an integer multiplication problem of size n bits using three subproblems of size n/2 bits and combine these solutions in O(n) time. Recurrence relation: T(n) = 3T(n/2) + cn, where T(1) = c.

**Strassen's matrix multiplication:** Here, we solve a matrix multiplication problem of size n using the solutions of seven subproblems of size n/2 and combine these solutions in O(n^2) time. Recurrence relation: T(n) = 7T(n/2) + cn^2, where T(1) = c.
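As a sketch, Karatsuba's three-subproblem split can be written using Python's big integers; the bit-length-based splitting below is one implementation choice, not the only one:

```python
def karatsuba(x, y):
    # Base case: small operands, multiply directly
    if x < 10 or y < 10:
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> half, x & ((1 << half) - 1)    # split x into high/low halves
    yh, yl = y >> half, y & ((1 << half) - 1)    # split y the same way
    a = karatsuba(xh, yh)                         # subproblem 1: high * high
    d = karatsuba(xl, yl)                         # subproblem 2: low * low
    e = karatsuba(xh + xl, yh + yl) - a - d       # subproblem 3 replaces two multiplications
    # Combine in O(n) time using shifts and additions
    return (a << (2 * half)) + (e << half) + d
```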

```
fib (n) = fib (n - 1) + fib (n - 2)
Base case: if (n <= 1), return n
Here we have 2 base cases: fib(0) = 0 and fib(1) = 1
```

For finding the nth Fibonacci number, we recursively solve and add two subproblems of size (n - 1) and (n - 2). These subproblems overlap: the value of the (n - 1)th Fibonacci depends on the value of the (n - 2)th Fibonacci, and so on. Such overlapping subproblems arise in optimization problems that we solve using dynamic programming.

- Time complexity T(n) = Time complexity of finding the (n - 1)th Fibonacci + Time complexity of finding the (n - 2)th Fibonacci + Time complexity of addition = T(n - 1) + T(n - 2) + O(1)
- Recurrence relation of finding the nth Fibonacci: T(n) = T(n - 1) + T(n - 2) + c, where T(n) = c for n <= 1
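To see the exponential growth of this recurrence, we can count the recursive calls of the naive Fibonacci function (the `counter` parameter is only instrumentation added for illustration):

```python
def fib(n, counter=None):
    # Naive recursion following T(n) = T(n - 1) + T(n - 2) + c
    if counter is not None:
        counter[0] += 1          # instrumentation: count total recursive calls
    if n <= 1:
        return n
    return fib(n - 1, counter) + fib(n - 2, counter)

calls = [0]
fib(20, calls)
# calls[0] grows roughly like fib(n) itself, i.e., exponentially in n
```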

To write a recurrence relation, we first understand the subproblem structure of the recursive algorithm:

- We first identify the input size of the larger problem.
- Then we recognize the total number of smaller subproblems.
- Finally, we identify the input size of the smaller subproblems.

Then we build the recurrence step by step:

- We define the time complexity function for the larger problem in terms of input size. For example, if the input size is n, then the time complexity function would be T(n).
- Then we define the time complexity of the smaller subproblems. For example, in merge sort, the input size of both subproblems is n/2 each, so the time complexity of each subproblem would be T(n/2).
- Now we analyze the worst-case time complexity of the additional operations. For example, in merge sort, the divide step (O(1)) and the merging process (O(n)) are extra operations that we need to perform to get the larger sorted array.
- We add the time complexities of the subproblems and the additional operations to get the equation of the overall time complexity function T(n).
- We also define the time complexity of the base case, i.e., the smallest version of the problem. The solution of the recurrence depends on this, so we need to define it correctly. Think!

We mainly use the following two methods to solve recurrence relations in algorithms and data structures. We choose a method depending on the nature of the recurrence relation: the master method works best for analyzing divide and conquer algorithms, while the recursion tree method is a fundamental approach that applies to all types of recursive algorithms.

**Method 1:** Recursion Tree Method

**Method 2:** Master Theorem

A recursion tree is a tree diagram of recursive calls, where each node represents the cost of a certain subproblem. The idea is simple: the time complexity of recursion depends on two factors: 1) the total number of recursive calls and 2) the time complexity of additional operations for each recursive call.

So a recursion tree is a diagram that represents the additional cost of each recursive call in terms of its input size. We add the extra cost of each recursive call to get the overall time complexity. Note: one of the best ideas is to perform this cost sum level by level.

- Draw the recursion tree of the given recurrence relation.
- Calculate the total number of levels in the recursion tree.
- Calculate the cost of additional operations at each level by adding the costs of all nodes present at the same level.
- Finally, add the costs of all levels to determine the total cost of recursion. We simplify this expression and find the overall time complexity in terms of Big-O notation.

Merge sort recurrence relation: T(n) = 2T(n/2) + cn, T(1) = c

We divide the problem of size n into two subproblems of size n/2. Here **cn** represents the cost of merging the solutions of the smaller subproblems (smaller sorted arrays of size n/2). At each step of recursion, the problem is divided in half until the subproblem size becomes 1.

- We start with the term **cn** as the root, which is the cost of additional operations at the first level of the recursion tree. The left and right children of the root are the time complexities of the two smaller subproblems, i.e., T(n/2).
- Now we move one level further by expanding T(n/2).
- At the second level, there are two nodes, and the cost of additional operations at each node is cn/2. So the overall cost of additional operations at the second level = cn/2 + cn/2 = cn.
- Similarly, at the third level, the total cost = cn/4 + cn/4 + cn/4 + cn/4 = cn, and so on.
- We continue expanding each internal node in the recursion tree similarly until the problem sizes get down to 1. The bottom level has n nodes, each contributing cost c, for a total cost of cn.
- In general, level i has 2^i nodes, each contributing a cost of cn/2^i, so the total cost of the ith level = 2^i * (cn/2^i) = cn.

Here, the recursion tree is a full binary tree where each node has two children and each level is completely full. At each level, the input size decreases by a factor of 2, and the recursion tree terminates at input size 1. So the height of the recursion tree **h = logn** **(Think!)**

- Total number of levels in the binary tree = height of the binary tree + 1 = logn + 1.
- To compute the total cost represented by the recurrence, we add up the costs of all the levels. The recursion tree has logn + 1 levels, each costing cn. So the total cost = cn * (logn + 1) = cnlogn + cn.
- After ignoring the low-order term and the constant c in (cnlogn + cn), the time complexity of merge sort = O(nlogn).
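We can sanity-check this level-by-level analysis by unrolling the merge sort recurrence directly in Python, taking c = 1 and n a power of 2:

```python
def T(n, c=1):
    # Unroll T(n) = 2*T(n/2) + c*n with T(1) = c (assumes n is a power of 2)
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

# For n = 8: logn = 3, so cnlogn + cn = 8*3 + 8 = 32, matching T(8)
```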

To explore further, understand the analysis of binary search and quick sort using recursion tree method.

We use the master method to find the time complexity of divide and conquer algorithms that partition an input into smaller subproblems of equal size. It is primarily a direct way to get the solution for recurrences that can be transformed into the form: T(n) = aT(n/b) + O(n^k), where a ≥ 1 and b > 1.

The master theorem recurrence describes the time complexity of an algorithm that divides a problem of size **n** into **a** subproblems, each of size **n/b**, where **a** and **b** are positive constants. Here the **a** subproblems are solved recursively, each in time T(n/b), and O(n^k) is the cost of dividing the problem and combining the solutions of the subproblems. There are three cases of analysis using the master theorem:

**Case 1:** When k < logb(a), then T(n) = O(n^logb(a))

**Case 2:** When k = logb(a), then T(n) = O(n^k * logn)

**Case 3:** When k > logb(a), then T(n) = O(n^k)
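The three cases can be captured in a small helper function (a sketch for illustration; the string output format is an arbitrary choice):

```python
import math

def master_theorem(a, b, k):
    # Classify T(n) = a*T(n/b) + O(n^k), with a >= 1 and b > 1
    log_ba = math.log(a, b)
    if math.isclose(k, log_ba):
        return f"O(n^{k} * logn)"           # Case 2: k = logb(a)
    if k < log_ba:
        return f"O(n^{round(log_ba, 2)})"   # Case 1: k < logb(a)
    return f"O(n^{k})"                      # Case 3: k > logb(a)

# master_theorem(2, 2, 1) returns "O(n^1 * logn)"  (merge sort)
```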

- The general recurrence of the master theorem is T(n) = aT(n/b) + f(n), where f(n) is an asymptotically positive function. For simplicity of application in analysis, f(n) is equal to O(n^k) in most coding problems.
- The master theorem is derived from the recursion tree method, where the height of the recursion tree is logb(n). We are skipping the mathematical details and proof behind this theorem; we will cover it in a separate blog. If you are interested in the meantime, please refer to the CLRS book.

Comparing the master theorem recurrence with the binary search recurrence relation:

- T(n) = aT(n/b) + O(n^k)
- T(n) = T(n/2) + c

```
Here a = 1, b = 2 (a >= 1 and b > 1)
k = 0 because O(n^k) = c = O(n^0)
=> logb(a) = log2(1) = 0
=> k = logb(a)
We can apply case 2 of the master theorem.
```

T(n) = O(n^k * log(n)) = O(n^0 * log(n)) = O(logn)

Comparing master theorem recurrence with merge sort recurrence relation:

- T(n) = aT(n/b) + O(n^k)
- T(n) = 2 T(n/2) + cn

```
Here a = 2, b = 2 (a > 1 and b > 1)
O(n^k) = cn = O(n^1) => k = 1
logb(a) = log2(2) = 1
Hence k = logb(a) = 1
We can apply case 2 of the master theorem.
```

Time complexity T(n) = O(n^k * logn) = O(n^1 * logn) = O(nlogn)

Comparing master theorem recurrence with finding max and min recurrence relation:

- T(n) = aT(n/b) + O(n^k)
- T(n) = 2 T(n/2) + c

```
Here a = 2, b = 2 (a > 1 and b > 1)
O(n^k) = c = O(n^0) => k = 0
logb(a) = log2(2) = 1
Hence logb(a) > k
We can apply case 1 of the master theorem.
```

Time complexity T(n) = O(n^logb(a)) = O(n^1) = O(n)

Comparing the master theorem recurrence with Strassen's multiplication recurrence:

- T(n) = aT(n/b) + O(n^k)
- T(n) = 7T(n/2) + cn^2

```
Here a = 7, b = 2 (a > 1 and b > 1)
O(n^k) = cn^2 = O(n^2) => k = 2
logb(a) = log2(7) = 2.8 (approximately)
Hence logb(a) > k
We can apply case 1 of the master theorem.
```

Time complexity T(n) = O(n^logb(a)) = O(n^log2(7)) = O(n^2.8)

- Finding max element in an array recursively: T(n) = T(n - 1) + O(1)
- Recursive insertion sort: T(n) = T(n - 1) + O(n)
- Tower of Hanoi puzzle: T(n) = 2T(n - 1) + O(1)
- Finding closest pair of points: T(n) = 2T(n/2) + O(nlogn)
- Divide and conquer algorithm of finding max and min: T(n) = 2T(n/2) + O(1)
- Divide and conquer algorithm of max subarray sum: T(n) = 2T(n/2) + O(n)
- Recursively searching in a balanced BST: T(n) = T(n/2) + O(1)
- DFS traversal of a binary tree: T(n) = T(i) + T(n - i - 1) + O(1), where i nodes are in the left subtree and (n - i - 1) nodes are in the right subtree
- Average case analysis of the quick-select algorithm for finding the kth smallest element: T(n) = (2/n) * ∑ T(i) + cn, where the sum runs over i from n/2 to n - 1
- Brute force recursive algorithm of the longest common subsequence: T(n, m) = T(n - 1, m - 1) + O(1) if the last characters of the strings are equal; otherwise T(n, m) = T(n - 1, m) + T(n, m - 1) + O(1)
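As a quick check of one entry above, unrolling the Tower of Hanoi recurrence T(n) = 2T(n - 1) + c with c = 1 reproduces the well-known total of 2^n - 1 moves:

```python
def hanoi_T(n):
    # Unroll T(n) = 2*T(n - 1) + 1 with T(1) = 1
    return 1 if n == 1 else 2 * hanoi_T(n - 1) + 1

# hanoi_T(10) equals 2**10 - 1 = 1023
```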

**Enjoy learning, Enjoy coding, Enjoy algorithms!**
