Iteration and recursion are two fundamental problem-solving approaches in data structures and algorithms.

**Iterative problem-solving approaches:** Building a partial solution using single or nested loops, two-pointers approach, sliding window approach, problem-solving using data structures (hash table, stack, queue, priority queue, BST, etc.), BFS traversal, bottom-up dynamic programming, backtracking using a stack, greedy approach, etc.

**Recursive problem-solving approaches:** Decrease and conquer, divide and conquer, DFS traversal of trees and graphs, recursive backtracking, top-down dynamic programming with memoization, etc.

One thing is common to both approaches: repeated execution of a set of instructions until the task is done. But there are a lot of differences between them in terms of thought process, implementation, analysis techniques, boundary conditions, code complexity, performance, etc.

Sometimes we can solve a coding problem using both approaches, but sometimes only one of them works well. Why does this happen? The idea is simple: sometimes iteration is easier to think of, sometimes recursion is, and sometimes both are. This intuition depends on the nature of the problem.

For example, iteration is more challenging to think of in the case of some algorithms. In other words, an algorithm that can be expressed naturally using recursion may not be as easy to understand when described iteratively.

- Algorithms like binary search, merge sort, quick sort, DFS traversal of graphs and trees, etc., are easy to think of using recursion.
- Approaches like backtracking and data structures like trees are recursive in nature, so they are easier to understand using recursion.

Similarly, a large variety of coding problems are easy to think about and solve using iteration. Sometimes these problems are hard or less natural to think about recursively: for example, insertion sort, heap sort, BFS traversal of graphs and trees, etc.

Iteration is implemented using loops and defined by the initialization and update of loop variables, the loop body, and the termination condition. On the other hand, recursion is implemented using function calls and determined by the number of sub-problems, function parameters, base cases, and additional operations. Here, function calls must be stored on a call stack to allow returning back to the caller functions.

We can transform every recursive implementation into an iterative implementation using a stack (that's what the compiler does behind the scenes). Most of the time, an iterative implementation using a stack becomes complex due to many input variables, additional operations, and the complex nature of the recursive calls. So changing a recursive algorithm into an iterative one might require a lot of code modification, which can make the code more complicated or less maintainable.

- Sometimes a recursive solution can be implemented iteratively without using a stack. Classic examples of this are Fibonacci numbers, finding the factorial, binary search, etc.
- In some situations, recursion is simple, and we can easily convert it into iterative code using a stack. One idea works best: we need to mimic what the compiler does! An excellent example of this is iterative DFS traversal using a stack.
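To make the "mimic the compiler" idea concrete, here is a hypothetical sketch (the function name and fixed stack size are assumptions for illustration) that computes n! with an explicit stack: the first loop plays the role of the recursive descent, and the second plays the role of the returns.

```c
// Hypothetical sketch: computing n! iteratively with an explicit stack
// that mimics the call stack of the recursive definition.
long factorialWithStack(int n)
{
    int stack[64];              // pending "frames"; assumes n < 64
    int top = -1;
    // Descent phase: push one frame per recursive call fact(n), fact(n-1), ...
    while (n > 0)
        stack[++top] = n--;
    // Base case reached: fact(0) = 1.
    long result = 1;
    // Return phase: pop each frame and apply its pending multiplication.
    while (top >= 0)
        result *= stack[top--];
    return result;
}
```

Here each stack entry stores exactly what a real call frame would: the value of the parameter n at that depth.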

An iterative process repeatedly executes some code statements using a loop until the problem gets solved. On the other hand, a recursive process solves the problem via smaller sub-problems until it reaches the smallest version of the problem, i.e., the base case.

Recursion is usually slower due to the extra overhead of function calls and the recursion call stack, while iteration does not involve any such overhead.

In iteration, an infinite loop may occur due to a mistake in the assignment or update of a loop variable or a wrong termination condition. Such an error will consume system resources like processor time or memory and prevent the program from terminating.

In recursion, infinite recursion may occur due to missing or wrong base cases. This will cause a stack overflow, leading to a program crash.

Sometimes we encounter small but critical mistakes in iteration: off-by-one errors! A similar error arises in recursion when we pass the wrong value to a function parameter.
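As a hypothetical illustration of an off-by-one error (the function name and array are made up for this sketch), consider summing the first n elements of an array: writing `i <= n` instead of `i < n` would read one element past the intended range.

```c
// Hypothetical sketch of avoiding an off-by-one error when summing
// the first n elements of an array.
int sumFirstN(int X[], int n)
{
    int sum = 0;
    // Correct bound: i < n visits indices 0 .. n-1 (exactly n elements).
    // The off-by-one version, i <= n, would also read X[n], which is
    // outside the intended range.
    for (int i = 0; i < n; i = i + 1)
        sum = sum + X[i];
    return sum;
}
```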

We analyze iteration by counting loop iterations and multiplying by the time complexity of the code executed in each iteration. So most of the time, the analysis of iterative code is simple. There are some exceptions, when counting loop iterations is tricky or the work performed by the loop is lower than expected.

The fundamental method of recursion analysis is the recursion tree method, where we draw the recursion tree and sum the total cost of the recursion at each level. So most of the time, analyzing recursive code is difficult due to complex recurrence relations. But in the case of divide-and-conquer solutions, we can analyze them simply using the master theorem.
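For example, merge sort divides the input into two halves of size n/2 and does O(n) extra work to merge them; case 2 of the master theorem then gives the bound directly:

```latex
T(n) = 2\,T\!\left(\frac{n}{2}\right) + cn \;\Longrightarrow\; T(n) = O(n \log n)
```

Similarly, binary search satisfies T(n) = T(n/2) + c, which resolves to O(log n).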

Let's understand these differences using various examples of recursive and iterative solutions to the same problem. It will be an excellent exercise for comparing the thought process, performance, and code simplicity of each solution.

**Finding the factorial of a number**

Both iterative and recursive solutions are easy to think of, and the recursive code looks more straightforward than the iterative code. On the other hand, the time complexity of both implementations is O(n), but the recursive implementation takes O(n) extra space for the call stack.

**Recursive Pseudocode**

```
int fact(int n)
{
    if (n == 0)
        return 1
    return n * fact(n - 1)
}
```

**Iterative Pseudocode**

```
int fact(int n)
{
    int res = 1
    for (int i = 2; i <= n; i = i + 1)
        res = res * i
    return res
}
```

**Finding the nth Fibonacci number**

Here the recursive solution is simple and easy to think of due to the natural definition of Fibonacci numbers. But the recursive code works in O(2^n) time and O(n) space (for the call stack), which is highly inefficient due to repeated sub-problems. On the other hand, the iterative code is highly efficient: it works in O(n) time and O(1) space.

**Recursive Pseudocode**

```
int fib(int n)
{
    if (n <= 1)
        return n
    return fib(n - 1) + fib(n - 2)
}
```

**Iterative Pseudocode**

```
int fib(int n)
{
    if (n <= 1)
        return n
    int a = 0, b = 1, c
    for (int i = 2; i <= n; i = i + 1)
    {
        c = a + b
        a = b
        b = c
    }
    return c
}
```
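The inefficiency of the recursive version comes from recomputing the same sub-problems. Top-down memoization keeps the recursive structure but stores each answer the first time it is computed, cutting the time to O(n). A minimal sketch (the helper name and fixed table size are assumptions):

```c
// Hypothetical sketch: top-down memoization removes the repeated
// sub-problems, reducing the recursive version from O(2^n) to O(n) time.
long fibMemo(int n, long memo[])
{
    if (n <= 1)
        return n;                 // base cases: fib(0) = 0, fib(1) = 1
    if (memo[n] != -1)
        return memo[n];           // already computed: reuse the stored value
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
    return memo[n];
}

long fib(int n)
{
    long memo[64];                // assumes n < 64
    for (int i = 0; i < 64; i = i + 1)
        memo[i] = -1;             // -1 marks "not computed yet"
    return fibMemo(n, memo);
}
```

Note that this version still pays the O(n) call-stack cost, unlike the loop-based simulation above.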

**Binary search**

Here both solutions look simple, but the recursive solution looks natural because we can quickly think of solving the larger problem using the solution to a smaller sub-problem. On the other hand, the iterative solution is a loop-based simulation (without using a stack!) of the recursive solution.

Both implementations work in O(log n) time, but the recursive solution takes O(log n) extra space for the call stack. The iterative code has no overhead of recursive calls and works in O(1) space. So the iterative solution would be the best choice in terms of efficiency.

**Recursive pseudocode**

```
int binarySearch(int X[], int l, int r, int key)
{
    if (l > r)
        return -1
    int mid = l + (r - l) / 2
    if (X[mid] == key)
        return mid
    if (X[mid] > key)
        return binarySearch(X, l, mid - 1, key)
    else
        return binarySearch(X, mid + 1, r, key)
}
```

**Iterative pseudocode**

```
int binarySearch(int X[], int l, int r, int key)
{
    while (l <= r)
    {
        int mid = l + (r - l) / 2
        if (X[mid] == key)
            return mid
        if (X[mid] < key)
            l = mid + 1
        else
            r = mid - 1
    }
    return -1
}
```
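A quick sanity check can confirm that the two versions always agree. Here is a hypothetical harness (the function names and test array are made up) with both pseudocode versions translated directly into C:

```c
// C translations of the recursive and iterative binary search versions,
// used to check that they return the same result on the same inputs.
int binarySearchRec(int X[], int l, int r, int key)
{
    if (l > r)
        return -1;                      // base case: the range is empty
    int mid = l + (r - l) / 2;          // avoids overflow of (l + r) / 2
    if (X[mid] == key)
        return mid;
    if (X[mid] > key)
        return binarySearchRec(X, l, mid - 1, key);
    return binarySearchRec(X, mid + 1, r, key);
}

int binarySearchIter(int X[], int l, int r, int key)
{
    while (l <= r)
    {
        int mid = l + (r - l) / 2;
        if (X[mid] == key)
            return mid;
        if (X[mid] < key)
            l = mid + 1;                // key can only be in the right half
        else
            r = mid - 1;                // key can only be in the left half
    }
    return -1;                          // key not present
}
```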

**Insertion sort**

Both solutions work in O(n^2) time, but the recursive solution takes O(n) extra space for the call stack. On the other hand, the iterative code has no overhead of recursive calls and is easy to think about. So the iterative solution would be the best choice in terms of both efficiency and ease of thinking.

**Iterative pseudocode**

```
void insertionSort(int X[], int n)
{
    for (int i = 1; i < n; i = i + 1)
    {
        int currValue = X[i]
        int j = i - 1
        while (j >= 0 && X[j] > currValue)
        {
            X[j + 1] = X[j]
            j = j - 1
        }
        X[j + 1] = currValue
    }
}
```

**Recursive pseudocode**

```
void insertionSort(int X[], int n)
{
    if (n <= 1)
        return
    insertionSort(X, n - 1)
    int temp = X[n - 1]
    int i = n - 2
    while (i >= 0 && X[i] > temp)
    {
        X[i + 1] = X[i]
        i = i - 1
    }
    X[i + 1] = temp
}
```

**Preorder traversal of a binary tree**

Here the recursive code is simple and easy to visualize, i.e., one function parameter and 3–4 lines of code. On the other hand, the final recursive call in preorder traversal is a tail call, i.e., there is no extra operation after it. So we can easily implement the traversal iteratively using a stack.

The good thing is that both codes work in O(n) time and O(n) space (worst case). If we put a little effort into understanding the iterative code, it can also be helpful in problem-solving.

**Recursive pseudocode**

```
void preorder(TreeNode root)
{
    if (root == NULL)
        return
    process(root->data)
    preorder(root->left)
    preorder(root->right)
}
```

**Iterative pseudocode**

```
void iterativePreorder(TreeNode root)
{
    if (root == NULL)
        return
    Stack<TreeNode> treeStack
    TreeNode currNode = root
    TreeNode prevNode = NULL
    while (treeStack.empty() == false || currNode != NULL)
    {
        if (currNode != NULL)
        {
            process(currNode->data)
            treeStack.push(currNode)
            currNode = currNode->left
        }
        else
        {
            prevNode = treeStack.pop()
            currNode = prevNode->right
        }
    }
}
```
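To see that the two traversals really visit nodes in the same order, here is a hypothetical C harness (the node type, helper names, and fixed stack size are assumptions) that records the visit order of each version into an array so they can be compared:

```c
#include <stdlib.h>

// Hypothetical harness comparing recursive and stack-based preorder
// traversals: each records its visit order into an output array.
typedef struct TreeNode
{
    int data;
    struct TreeNode *left, *right;
} TreeNode;

static TreeNode *newNode(int data)
{
    TreeNode *node = malloc(sizeof(TreeNode));
    node->data = data;
    node->left = node->right = NULL;
    return node;
}

// Recursive preorder: process root, then left subtree, then right subtree.
static void preorder(TreeNode *root, int out[], int *count)
{
    if (root == NULL)
        return;
    out[(*count)++] = root->data;
    preorder(root->left, out, count);
    preorder(root->right, out, count);
}

// Iterative preorder using an explicit stack (mimicking the call stack).
static void iterativePreorder(TreeNode *root, int out[], int *count)
{
    TreeNode *stack[64];                   // assumes at most 64 pending nodes
    int top = -1;
    TreeNode *curr = root;
    while (top >= 0 || curr != NULL)
    {
        if (curr != NULL)
        {
            out[(*count)++] = curr->data;  // process before going left
            stack[++top] = curr;
            curr = curr->left;
        }
        else
        {
            curr = stack[top--]->right;    // left side done: go right
        }
    }
}
```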

- If recursion is usually slower, why do we sometimes prefer it over iteration?
- Is there any rule of thumb for converting recursive code into iterative code?
- Can we convert every recursive code into iterative code, or vice versa?

**Enjoy learning, Enjoy algorithms!**
