At a high level, there are two important approaches for solving data structures and algorithms problems: the iterative approach and the recursive approach.
Iterative problem-solving approaches: Building a partial solution using single or nested loops, two pointers approach, sliding window approach, problem-solving using data structures (hash table, stack, queue, priority queue, BST, etc.), BFS traversal, bottom-up dynamic programming, backtracking using a stack, greedy approach, etc.
Recursive problem-solving approaches: Decrease and conquer, divide and conquer, DFS traversal, recursive backtracking, top-down dynamic programming with memoization, etc.
One thing is common to both approaches: Repeated execution of a set of instructions until our task is done. But there are a lot of differences between them in terms of thought process, implementation approach, analysis techniques, boundary conditions, code complexity, code performance, etc.
Sometimes we can solve a problem using both approaches, but sometimes only one of them works well. Why does this happen? The idea is simple: sometimes iteration is easier to think of, sometimes recursion is easier to think of, and sometimes both are equally natural. It depends on the nature of the problem.
Iteration is more challenging to understand in the case of some algorithms. In other words, an algorithm that can be expressed naturally using recursion may not be as easy to understand if described iteratively.
We analyze iterative code by counting the number of loop iterations and multiplying it by the cost of the work done in each iteration. So most of the time, the analysis of iterative code is simple. There are some exceptions, when counting loop iterations is tricky or the total work done by a nested loop is lower than expected. Think!
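For example, here is a small sketch (a hypothetical C++ function, not one of this article's examples) of how this counting works for a nested loop: the inner loop runs i times for each value of i, and each iteration does O(1) work.

// Hypothetical example: counting equal pairs using nested loops.
// The inner loop runs i times for each value of i, so the total number of
// iterations is 0 + 1 + ... + (n - 1) = n(n - 1)/2, and the work inside
// each iteration is O(1). So the overall time complexity is O(n^2).
int countEqualPairs(int X[], int n)
{
    int count = 0;
    for (int i = 0; i < n; i = i + 1)
        for (int j = 0; j < i; j = j + 1)
            if (X[i] == X[j])          // O(1) comparison per iteration
                count = count + 1;
    return count;
}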
The fundamental method of recursion analysis is the recursion tree method, where we draw the recursion tree, calculate the cost at each level, and sum these costs over all levels. So most of the time, analyzing recursive code is harder because it requires setting up and solving a recurrence relation.
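For example, here is a small divide-and-conquer sketch (a hypothetical C++ function, not one of this article's examples) with its recursion tree analysis written as comments.

// Hypothetical example: sum of a range of an array using divide and conquer.
// Recurrence: T(n) = 2T(n/2) + c, i.e., two half-sized calls plus O(1) work to add results.
// Recursion tree: level i has 2^i nodes, each doing O(1) work, and the subproblems reach
// size 1 after about log2(n) levels. Summing the level costs: c(1 + 2 + 4 + ... + n) = O(n).
int rangeSum(int X[], int l, int r)
{
    if (l == r)                  // single element: base case
        return X[l];
    int mid = l + (r - l) / 2;
    // Solve both halves recursively and combine their answers in O(1).
    return rangeSum(X, l, mid) + rangeSum(X, mid + 1, r);
}

The same method applies to the recursive codes discussed below: we write the recurrence, draw the tree, and add up the cost level by level.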
Let's understand this difference using various examples where the same problem is solved both recursively and iteratively. It will be an excellent exercise to compare the two approaches in terms of thought process, performance, and code simplicity.
Finding the factorial of a number: both iterative and recursive solutions are easy to think of, and the recursive code looks a little more straightforward than the iterative code. On the other side, the time complexity of both codes is O(n), but the recursive implementation takes O(n) extra space for the call stack.
Recursive Pseudocode
int fact(int n)
{
    if (n == 0)
        return 1
    return n * fact(n - 1)
}
Iterative Pseudocode
int fact(int n)
{
    int res = 1
    for (int i = 2; i <= n; i = i + 1)
        res = res * i
    return res
}
Finding the nth Fibonacci number: here the recursive solution is simple and easy to think of due to the natural definition of the Fibonacci sequence. But the recursive code works in O(2^n) time and O(n) space (call stack), which is highly inefficient because the same sub-problems are solved again and again. On the other side, the iterative code is highly efficient and works in O(n) time and O(1) space. (A memoized version of the recursion is sketched after the iterative code below.)
Recursive Pseudocode
int fib(int n)
{
    if (n <= 1)
        return n
    return fib(n - 1) + fib(n - 2)
}
Iterative Pseudocode
int fib(int n)
{
    if (n <= 1)
        return n
    int a = 0, b = 1, c
    for (int i = 2; i <= n; i = i + 1)
    {
        c = a + b
        a = b
        b = c
    }
    return c
}
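As mentioned above, the recursion itself can be rescued using the top-down memoization idea from the list of recursive approaches: store the answer of each sub-problem the first time it is computed and reuse it afterwards. Here is a minimal C++ sketch (the memo array, the fibMemo helper, and the fibTopDown wrapper are assumptions for illustration, not part of the original pseudocode); it keeps the recursive structure but runs in O(n) time with O(n) extra space.

#include <vector>
using std::vector;

// Top-down memoized Fibonacci: same recursive structure as fib(n) above,
// but every sub-problem is computed only once and stored in memo.
int fibMemo(int n, vector<int>& memo)
{
    if (n <= 1)
        return n;
    if (memo[n] != -1)           // answer already computed: reuse it
        return memo[n];
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
    return memo[n];
}

int fibTopDown(int n)
{
    vector<int> memo(n + 1, -1);   // -1 means "not computed yet"
    return fibMemo(n, memo);
}

This is how the top-down dynamic programming approach removes the repeated sub-problem cost while keeping the recursive thought process.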
Binary search in a sorted array: here the code of both solutions looks simple, but the recursive solution looks more natural because binary search directly suggests solving the larger problem using the solution of a smaller sub-problem (one half of the array). On the other hand, the iterative solution is a loop-based simulation (without using a stack!) of the recursive solution, which is a little tricky to think of.
Both codes work in O(log n) time, but the recursive solution takes O(log n) extra space for the call stack. Conversely, the iterative code has no overhead of recursive calls and works in O(1) space. So the iterative solution would be the best choice in terms of efficiency.
Recursive Pseudocode
int binarySearch(int X[], int l, int r, int key)
{
    if (l > r)
        return -1
    else
    {
        int mid = l + (r - l) / 2
        if (X[mid] == key)
            return mid
        if (X[mid] > key)
            return binarySearch(X, l, mid - 1, key)
        else
            return binarySearch(X, mid + 1, r, key)
    }
}
Iterative Pseudocode
int binarySearch(int X[], int l, int r, int key)
{
    while (l <= r)
    {
        int mid = l + (r - l) / 2
        if (X[mid] == key)
            return mid
        if (X[mid] < key)
            l = mid + 1
        else
            r = mid - 1
    }
    return -1
}
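As a quick usage note (the array values below are just an assumed example), both versions are called with the full index range of a sorted array, i.e., l = 0 and r = n - 1.

// Usage sketch for the binarySearch function above.
int searchExample()
{
    int X[] = {2, 5, 8, 12, 16, 23};        // must be sorted in increasing order
    int n = 6;
    return binarySearch(X, 0, n - 1, 16);   // returns index 4; -1 means "key not present"
}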
Insertion sort: both solutions work in O(n^2) time in the worst case, but the recursive solution takes O(n) extra space for the call stack (its recurrence is T(n) = T(n - 1) + O(n), which sums to O(n^2)). The iterative code has no overhead of recursive calls and is easy to think of. So the iterative solution would be the best choice in terms of both efficiency and ease of thinking.
Iterative Pseudocode
void insertionSort(int X[], int n)
{
    for (int i = 1; i < n; i = i + 1)
    {
        int currValue = X[i]
        int j = i - 1
        while (j >= 0 && X[j] > currValue)
        {
            X[j + 1] = X[j]
            j = j - 1
        }
        X[j + 1] = currValue
    }
}
Recursive Pseudocode
void insertionSort(int X[], int n)
{
    if (n <= 1)
        return
    insertionSort(X, n - 1)
    int temp = X[n - 1]
    int i = n - 2
    while (i >= 0 && X[i] > temp)
    {
        X[i + 1] = X[i]
        i = i - 1
    }
    X[i + 1] = temp
}
Preorder traversal of a binary tree: here the recursive code is simple and easy to visualize, i.e., one function parameter and 3-4 lines of code. On the other side, the last recursive call in preorder traversal is a tail call, i.e., there is no extra operation after it, so we can implement the traversal iteratively using an explicit stack, although the iterative code is a little more complex to write.
The good thing is that both codes have O(n) time and space complexity. So if we put a little effort into understanding the iterative code, it can be highly useful in problem-solving.
Recursive Pseudocode
void preorder(TreeNode root)
{
    if (root == NULL)
        return
    process(root->data)
    preorder(root->left)
    preorder(root->right)
}
Iterative Pseudocode
void iterativePreorder(TreeNode root)
{
    if (root == NULL)
        return
    Stack<TreeNode> treeStack
    TreeNode currNode = root
    TreeNode prevNode = NULL
    while (treeStack.empty() == false || currNode != NULL)
    {
        if (currNode != NULL)
        {
            // Visit the node first, save it on the stack, and move to its left subtree.
            process(currNode->data)
            treeStack.push(currNode)
            currNode = currNode->left
        }
        else
        {
            // Left subtree is done: pop the most recent node and move to its right subtree.
            prevNode = treeStack.pop()
            currNode = prevNode->right
        }
    }
}
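As a follow-up, here is another common way to write the same iterative traversal, shown as a self-contained C++ sketch (the TreeNode struct and the process() function defined here are assumptions for illustration, and node references are explicit pointers): instead of walking left while pushing, we pop a node, process it, and push its right child before its left child so that the left subtree is processed first.

#include <cstdio>
#include <stack>

// Assumed node layout for this sketch (data, left, right), matching the fields
// used in the pseudocode above.
struct TreeNode
{
    int data;
    TreeNode *left;
    TreeNode *right;
    TreeNode(int val) : data(val), left(nullptr), right(nullptr) {}
};

// Assumed: process() simply prints the value of a node.
void process(int value)
{
    std::printf("%d ", value);
}

void iterativePreorderAlt(TreeNode *root)
{
    if (root == nullptr)
        return;
    std::stack<TreeNode *> treeStack;
    treeStack.push(root);
    while (!treeStack.empty())
    {
        TreeNode *currNode = treeStack.top();
        treeStack.pop();
        process(currNode->data);
        if (currNode->right != nullptr)   // pushed first, so processed after the left subtree
            treeStack.push(currNode->right);
        if (currNode->left != nullptr)    // pushed last, so processed first
            treeStack.push(currNode->left);
    }
}

Both variants visit the nodes in the same preorder sequence; the difference is only in how the explicit stack simulates the pending recursive calls.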
Enjoy learning, Enjoy algorithms!