Divide and conquer and dynamic programming are popular problem-solving approaches in data structures and algorithms. Both approaches look similar in one way: they break a problem into subproblems and combine the solutions of those subproblems to obtain the solution to the original problem. But the two approaches differ in several ways: the nature of their sub-problems, how they handle overlapping sub-problems, how their time complexity is analyzed, the depth of their recursion, their relative difficulty, and the types of problems they are used to solve.
Let’s understand the differences between divide and conquer and dynamic programming with respect to each of these points.
Divide and conquer is used when the sub-problems in a problem are independent of each other, while dynamic programming is used when the sub-problems are dependent on each other.
In divide and conquer, we decompose a problem into smaller independent sub-problems, which are then solved separately and combined to give the final solution. On the other hand, in dynamic programming, the problem is decomposed into smaller, dependent and overlapping sub-problems, which are solved in a specific order to give the final solution.
In the context of algorithms, independent means that the solution to one subproblem does not depend on or affect the solution to another subproblem. This means that the subproblems can be solved separately and their solutions can be combined to get the final solution. An example of this is the divide and conquer algorithm of merge sort, where the input array is partitioned into two equal and distinct subarrays. The subproblems are then solved recursively, with each subarray being treated as a separate problem.
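The independence of merge sort's subproblems can be seen in a minimal Python sketch (function names here are illustrative, not from the original text):

```python
# Merge sort: a divide and conquer algorithm with independent
# subproblems. Each half is sorted separately; neither recursive
# call needs the other's result, and merge() combines them.
def merge_sort(arr):
    if len(arr) <= 1:                 # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # independent subproblem 1
    right = merge_sort(arr[mid:])     # independent subproblem 2
    return merge(left, right)         # combine the two solutions

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])           # append any leftover elements
    merged.extend(right[j:])
    return merged
```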
Dependent means that the solution to one subproblem requires the solution to another subproblem. This means that the subproblems must be solved in a specific order, with the solution to one subproblem being used to solve the next. An example of this is the problem of finding the nth Fibonacci number, where we use the solutions to the subproblems of finding the (n-1)th and (n-2)th Fibonacci numbers. Similarly, to calculate the (n-1)th Fibonacci number, we again use the solution to the (n-2)th Fibonacci number.
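A naive recursive sketch makes this dependence explicit: fib(n) cannot be computed until both fib(n - 1) and fib(n - 2) are known.

```python
# Naive recursive Fibonacci: a sketch of dependent subproblems.
# fib(n) requires the answers to fib(n - 1) and fib(n - 2), and
# fib(n - 1) in turn depends on fib(n - 2) again.
def fib(n):
    if n <= 1:        # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)
```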
In divide and conquer algorithms, when the subproblems overlap with each other, it can lead to a lot of unnecessary work being done. This is because the algorithm will repeatedly solve the same subproblems, leading to an exponential time complexity for the overall solution.
On the other hand, dynamic programming optimizes the time complexity when subproblems are overlapping. It does this by solving each subproblem only once, and then saving the solution in an array or hash table. This allows the algorithm to avoid solving the same subproblem multiple times, which can improve the time complexity from exponential to polynomial.
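A minimal memoized sketch of Fibonacci (the cache dictionary is an illustrative choice; an array works equally well) shows this idea: each subproblem is solved once, dropping the time complexity from exponential to O(n).

```python
# Memoized (top-down) Fibonacci: store each subproblem's answer in a
# hash table so it is computed only once, giving O(n) time overall.
def fib_memo(n, cache=None):
    if cache is None:
        cache = {}
    if n <= 1:                  # base cases
        return n
    if n not in cache:          # solve each subproblem only once
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]             # reuse the stored solution
```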
Divide and conquer algorithms can be analyzed using either the recursion tree method or the master theorem. The recursion tree method involves defining a recurrence relation for the algorithm, calculating the total number of operations performed at each level, and summing them up level by level. The master theorem is a ready-made formula that can be used to analyze divide and conquer algorithms more efficiently.
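For example, merge sort's recurrence fits the master theorem directly:

```latex
T(n) = 2\,T(n/2) + O(n)
% Here a = 2, b = 2, and f(n) = O(n) = \Theta(n^{\log_b a}) = \Theta(n^1),
% so case 2 of the master theorem applies:
T(n) = O(n \log n)
```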
On the other hand, the recursive (divide and conquer style) solution to a dynamic programming problem often has a recurrence that does not fit the master theorem's form, because the subproblem size usually shrinks by a constant amount rather than by a constant factor. In such cases, it is better to use the recursion tree method or basic combinatorial ideas to analyze the algorithm.
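For instance, the recurrence of the naive recursive Fibonacci solution is not of the form T(n) = aT(n/b) + f(n), so we fall back on the recursion tree:

```latex
T(n) = T(n-1) + T(n-2) + O(1)
% The recursion tree has height n, and the number of nodes grows like
% the Fibonacci numbers themselves, so the total work is exponential:
T(n) = O(2^n) \quad \text{(more precisely } \Theta(\varphi^n),\ \varphi \approx 1.618\text{)}
```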
When implementing dynamic programming using the recursive top-down approach (also known as memoization), we need to consider the total number of different subproblems and the number of operations required to calculate each one. This is because each subproblem is only calculated once, so the recurrence relation method is not applicable.
Similarly, when implementing dynamic programming using the iterative bottom-up approach (also known as tabulation), we can analyze the algorithm by simply counting the number of loop iterations and multiplying it by the number of operations performed at each iteration.
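A bottom-up Fibonacci table illustrates this style of analysis: the single loop runs n - 1 times with O(1) work per iteration, so the total cost is O(n).

```python
# Bottom-up (tabulation) Fibonacci: fill a table from the smallest
# subproblem upward. One loop, constant work per iteration => O(n).
def fib_table(n):
    if n <= 1:
        return n
    table = [0] * (n + 1)     # table[i] will hold fib(i)
    table[1] = 1
    for i in range(2, n + 1):              # n - 1 iterations
        table[i] = table[i - 1] + table[i - 2]   # O(1) work each
    return table[n]
```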
In divide and conquer problems, the input size of the subproblems decreases exponentially, which means that the height of the recursion tree will be O(log n). Examples of divide and conquer problems that exhibit this behavior include merge sort and binary search, where the input size of the subproblems decreases by a factor of 1/2 (e.g. n -> n/2 -> n/4, etc.).
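Binary search makes this shrinkage explicit: the search range halves on every step, so at most O(log n) iterations are needed. A minimal iterative sketch:

```python
# Binary search on a sorted array: the input size halves at each step
# (n -> n/2 -> n/4 -> ...), so the loop runs O(log n) times.
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid            # found: return the index
        elif arr[mid] < target:
            low = mid + 1         # discard the left half
        else:
            high = mid - 1        # discard the right half
    return -1                     # target not present
```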
On the other hand, in dynamic programming problems, the input size of the subproblems usually decreases by a constant amount (e.g. n -> n-1 -> n-2), so the height of the recursion tree is a linear function of the input size, i.e. O(n). Because of this, the recursive implementation of a dynamic programming solution uses more stack space than a typical divide and conquer algorithm.
Dynamic programming (DP) is generally considered to be more difficult than divide and conquer, because it can be challenging to figure out how to build a recursive solution or decompose a given problem into subproblems. This process of decomposition is often the most nontrivial part of a DP solution, and requires careful thought and analysis.
Once the problem has been decomposed into subproblems, dynamic programming algorithms typically proceed by solving the subproblems in a specific order, using the solutions to earlier subproblems to solve later ones. This process can be complex, especially when the subproblems are dependent on each other in a non-trivial way.
Dynamic programming is often used to solve optimization and counting problems where the solution space is very large. On the other hand, there is no specific pattern to the types of problems that are typically solved using divide and conquer.
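As an illustration of a counting problem (this example is ours, not from the text above): the classic "climbing stairs" question asks how many ways there are to climb n steps taking 1 or 2 steps at a time. The number of distinct paths grows exponentially, yet a DP table counts them in O(n) time.

```python
# Climbing stairs: count the ways to reach step n taking 1 or 2 steps
# at a time. The solution space is exponentially large, but the DP
# recurrence ways[i] = ways[i-1] + ways[i-2] counts it in O(n).
def count_ways(n):
    if n <= 1:
        return 1
    ways = [0] * (n + 1)
    ways[0] = ways[1] = 1
    for i in range(2, n + 1):
        ways[i] = ways[i - 1] + ways[i - 2]   # last move was 1 or 2 steps
    return ways[n]
```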
Enjoy learning, Enjoy algorithms :)