Divide and conquer and dynamic programming are popular problem-solving approaches in data structures and algorithms. Both look similar in one way: each breaks a problem into subproblems and combines their solutions to obtain the solution to the original problem. But the two approaches differ in several ways:

- The nature of sub-problems involved in the solution
- Performance when sub-problems are overlapping
- Thinking and building recursive solution
- Time and space complexity analysis
- State and input size of subproblems
- Types of problems they solve

Let’s understand the difference between divide and conquer and dynamic programming with respect to each of the above points.

We apply divide and conquer when subproblems are independent of each other, and dynamic programming when subproblems are dependent on each other. In other words, the recursive solution of a divide and conquer problem decomposes the problem into independent subproblems, while the recursive solution of a dynamic programming problem almost always decomposes it into dependent and overlapping subproblems.

Independent means the solution to one subproblem does not affect the solution to another, so each subproblem can be computed independently of the others. For example, in the divide and conquer algorithm of merge sort, the input array is partitioned into two equal and disjoint subarrays, and each half is sorted recursively. Since the subarrays are disjoint, the subproblems are different.
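A minimal merge sort sketch (one possible implementation, for illustration) makes this independence visible: sorting the left half never touches the right half.

```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # independent subproblem
    right = merge_sort(arr[mid:])   # independent subproblem
    # Combine step: merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```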

Dependent means the solution of one subproblem requires the solution of another subproblem, so the solutions to subproblems cannot be calculated independently of one another. For example, to find the nth Fibonacci number, we use the two subproblems of finding the (n-1)th and (n-2)th Fibonacci numbers. Similarly, to calculate the (n-1)th Fibonacci number, we again use the (n-2)th Fibonacci number.
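A plain recursive Fibonacci sketch shows this dependence: the two recursive calls share subproblems rather than working on disjoint inputs.

```python
def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1.
    if n <= 1:
        return n
    # fib(n - 1) itself recomputes fib(n - 2),
    # so the two subproblems overlap.
    return fib(n - 1) + fib(n - 2)
```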

When subproblems overlap with each other, a divide and conquer algorithm does more work than necessary: it repeatedly solves the same subproblems, and the time complexity of the solution can become exponential!

On the other hand, dynamic programming optimizes the time complexity when subproblems overlap. It solves each subproblem just once, saves its solution in an array or hash table, and avoids re-solving the same subproblem every time. This can improve the time complexity from exponential to polynomial.
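For example, the Fibonacci recursion above becomes linear-time once results are cached in a hash table (a minimal memoization sketch; the function name `fib_memo` is ours, for illustration):

```python
def fib_memo(n, memo=None):
    if memo is None:
        memo = {}          # hash table of already-solved subproblems
    if n <= 1:
        return n
    if n not in memo:
        # Each subproblem is computed once and stored.
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```

Without the cache, `fib(50)` would take billions of recursive calls; with it, only about 50 subproblems are ever solved.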

We can analyze divide and conquer algorithms using the recursion tree method or the master theorem. Since the subproblems are independent, we can easily define the recurrence relation, calculate the total number of operations performed at each level, and sum level by level. Alternatively, we can use the ready-made formula of the master theorem.
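As a worked example, the merge sort recurrence falls directly under the master theorem:

```latex
T(n) = 2\,T(n/2) + \Theta(n), \qquad a = 2,\; b = 2,\; f(n) = \Theta(n)
```

```latex
n^{\log_b a} = n^{\log_2 2} = n = \Theta(f(n)) \;\Longrightarrow\; T(n) = \Theta(n \log n)
```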

In the case of dynamic programming problems, the recurrence relation of the recursive (divide and conquer style) solution can be complex, and we cannot apply the master theorem. The best option is to apply the recursion tree method or use ideas from basic combinatorics.

When we implement dynamic programming efficiently using the recursive top-down approach (memoization), we count the total number of distinct subproblems and multiply it by the number of operations performed to calculate each subproblem. Since each subproblem is calculated only once, the usual recurrence-relation analysis does not work here.
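For the memoized Fibonacci, this counting argument looks like:

```latex
\text{distinct subproblems} = n + 1 \;\bigl(\text{fib}(0), \dots, \text{fib}(n)\bigr), \quad
\text{work per subproblem} = O(1) \;\Longrightarrow\; \text{total time} = O(n)
```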

Similarly, when we implement dynamic programming efficiently using the iterative bottom-up approach (tabulation), we analyze it simply by counting loop iterations and multiplying by the number of operations performed in each iteration. This is just the standard approach to analyzing a loop.
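A bottom-up Fibonacci sketch (the name `fib_tab` is ours, for illustration) makes this loop analysis direct: n - 1 iterations, O(1) work each, so O(n) time.

```python
def fib_tab(n):
    if n <= 1:
        return n
    # Table of subproblem solutions, filled smallest-first.
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):               # n - 1 iterations
        table[i] = table[i - 1] + table[i - 2]  # O(1) work each
    return table[n]
```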

In divide and conquer problems, the input size of the subproblems decreases exponentially, so the height of the recursion tree is O(log n). The best examples are merge sort and binary search, where the subproblem input size decreases by a factor of 2, i.e., n -> n/2 -> n/4, and so on.
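Binary search shows this halving most plainly (a minimal iterative sketch, for illustration): each step discards half of the remaining range, so only O(log n) steps are needed.

```python
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1    # remaining input halves: n -> n/2
        else:
            hi = mid - 1    # remaining input halves: n -> n/2
    return -1               # target not present
```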

In the case of dynamic programming problems, the input size of the subproblems usually decreases linearly, so the height of the recursion tree is a linear function of the input size, such as O(n). Because of this, the stack space used by a divide and conquer algorithm is less than that used by a recursive implementation of a dynamic programming solution.

Dynamic programming (DP) is generally a more difficult technique than divide and conquer. The main difficulty lies in figuring out how to decompose a given problem into subproblems and build the recursive solution: this decomposition is the nontrivial work in a DP solution.

We mostly solve optimization and counting problems using dynamic programming, where the solution space is very large. There is no such common pattern among divide and conquer problems.

Enjoy learning, Enjoy algorithms!
