Divide and conquer is a recursive problem-solving approach in data structures and algorithms that divides a problem into smaller sub-problems, recursively solves each sub-problem, and combines the sub-problem solutions to get the solution to the original problem. So, there are three parts of problem-solving using divide and conquer:

- **Divide:** We divide the problem into one or more smaller sub-problems, where each sub-problem is a smaller version of the same problem for a smaller input size.
- **Conquer:** We solve each sub-problem recursively, i.e., by calling the same function for a smaller input size. Here recursion will automatically do the work for us.
- **Combine:** We perform some extra operations to combine the sub-problem solutions to build the solution to the original problem.

Note: Before learning this strategy, we recommend that learners explore the concept of recursion.

We need to take care of several critical steps during the implementation of divide and conquer approach:

- There could be several sub-problems to a problem. So we need to choose the correct sub-problems involved in the solution. In most situations, we divide the problem into two equal parts.
- We solve each sub-problem recursively. So we need to carefully decide the input parameters for the recursive call of each sub-problem.
- The correct base case is essential for the correctness of the divide and conquer solution. So we need to identify the smallest or trivial version of the sub-problem for which we already know the solution.
- To implement the conquer stage, we need to identify the correct operation to combine the solutions of sub-problems to get the final solution.

In binary search, based on a comparison with the middle value of a sorted array, we recursively search either the left or right half until we find the target value or reach the base case.

**Base case:** When the left end crosses the right end, the subarray size will be zero i.e. if (l > r), return -1.

**Divide:** Calculate the middle index, i.e., mid = l + (r - l)/2.

**Conquer:**

- If (X[mid] == key), return the value mid.
- If (X[mid] > key), recursively search the target value in the left half.
- If (X[mid] < key), recursively search the target value in the right half.

**Combine:** There is no need for this step. Think!
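The steps above can be sketched as a short recursive function. This is a minimal sketch, assuming the array X is sorted in increasing order:

```python
def binary_search(X, l, r, key):
    # Base case: left end crossed the right end, so the subarray is empty.
    if l > r:
        return -1
    # Divide: calculate the middle index. Writing it as l + (r - l)/2
    # avoids integer overflow in fixed-width languages.
    mid = l + (r - l) // 2
    # Conquer: compare with the middle value and recurse on one half.
    if X[mid] == key:
        return mid
    elif X[mid] > key:
        return binary_search(X, l, mid - 1, key)
    else:
        return binary_search(X, mid + 1, r, key)
```

Note that only one sub-problem is solved per call, so no combine step is needed.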

To find the kth smallest element, the input array will look like this after the partition process, similar to quicksort: X[l…pos-1] < pivot < X[pos+1…r]. The pivot element is present at index pos, and (pos - l) elements are smaller than the pivot. So the pivot element is the (pos - l + 1)th smallest element in the array.

- If (pos - l + 1 == k), return X[pos].
- If (pos - l + 1 > k), the kth smallest is present in the left subarray.
- If (pos - l + 1 < k), the kth smallest is present in the right subarray.

We recur for either the left or right side based on comparing k and the pivot position. In other words, we solve the problem using only one sub-problem.
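This quickselect idea can be sketched as follows. The Lomuto-style partition with the last element as pivot is an illustrative choice; the text does not fix a particular partition scheme:

```python
def partition(X, l, r):
    # Lomuto partition: last element as pivot (an assumption for this sketch).
    pivot = X[r]
    i = l
    for j in range(l, r):
        if X[j] < pivot:
            X[i], X[j] = X[j], X[i]
            i += 1
    X[i], X[r] = X[r], X[i]
    return i

def kth_smallest(X, l, r, k):
    pos = partition(X, l, r)
    count = pos - l + 1  # pivot is the count-th smallest in X[l..r]
    if count == k:
        return X[pos]
    elif count > k:
        # kth smallest lies in the left subarray.
        return kth_smallest(X, l, pos - 1, k)
    else:
        # kth smallest lies in the right subarray; adjust k relative to it.
        return kth_smallest(X, pos + 1, r, k - count)
```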

- Recursive algorithm of searching an element in a BST
- Recursive solution of calculating power function
- Euclidean algorithm of finding GCD of two numbers
- Median of two sorted arrays of the same size

We divide the input array of size n into two equal sub-arrays of size n/2, recursively sort both sub-arrays, and merge the sorted halves to sort the complete array.

**Base case:** When l == r during the recursive calls, the sub-array has one element left, which is trivially sorted.

**Divide:** Calculate the middle index, i.e., mid = l + (r - l)/2.

**Conquer:** Recursively sort both smaller halves.

**Combine:** Merge both sorted halves to sort the whole array.
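A minimal Python sketch of merge sort following these steps. Merging into an auxiliary list and copying back is one common implementation choice:

```python
def merge_sort(X, l, r):
    # Base case: zero or one element is trivially sorted.
    if l >= r:
        return
    # Divide: calculate the middle index.
    mid = l + (r - l) // 2
    # Conquer: recursively sort both halves.
    merge_sort(X, l, mid)
    merge_sort(X, mid + 1, r)
    # Combine: merge the two sorted halves.
    merged = []
    i, j = l, mid + 1
    while i <= mid and j <= r:
        if X[i] <= X[j]:
            merged.append(X[i]); i += 1
        else:
            merged.append(X[j]); j += 1
    merged.extend(X[i:mid + 1])   # leftover of the left half
    merged.extend(X[j:r + 1])     # leftover of the right half
    X[l:r + 1] = merged
```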

We divide the input array into two sub-arrays using the partition process and sort each sub-problem recursively to get the final sorted array.

**Base case:** When l ≥ r during the recursive calls, the sub-array would either be empty or have one value left.

**Divide:** Divide the problem into two smaller sub-problems by partitioning the array X[l…r] into smaller subarrays around the pivot. The partition process returns the pivotIndex, i.e., the index of the pivot in the sorted array.

**Conquer:** Recursively sort both sub-arrays.

**Combine:** This is a trivial case because our whole array will get sorted after sorting both smaller arrays.
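These steps can be sketched in Python. The Lomuto-style partition with the last element as pivot is again an illustrative assumption:

```python
def partition(X, l, r):
    # Partition X[l..r] around the last element as pivot (an assumption).
    pivot = X[r]
    i = l
    for j in range(l, r):
        if X[j] < pivot:
            X[i], X[j] = X[j], X[i]
            i += 1
    X[i], X[r] = X[r], X[i]
    return i

def quick_sort(X, l, r):
    # Base case: empty or single-element subarray.
    if l >= r:
        return
    # Divide: partition; the pivot ends up at its sorted position.
    pivot_index = partition(X, l, r)
    # Conquer: sort both sides; the combine step is trivial.
    quick_sort(X, l, pivot_index - 1)
    quick_sort(X, pivot_index + 1, r)
```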

**Divide:** Divide the array into two equal parts around the mid-value.

**Conquer:** Recursively find the min and max of both the left and right parts.

**Combine:** Compare the max and min of both halves to get the max and min of the whole array.

**Base case 1:** If the subarray size is 1, return the element as both max and min.

**Base case 2:** If the subarray size is 2, compare both elements and return the max and min among them.
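A minimal sketch of these steps, returning the result as a (max, min) pair:

```python
def find_max_min(X, l, r):
    # Base case 1: single element is both max and min.
    if l == r:
        return X[l], X[l]
    # Base case 2: two elements need one comparison.
    if r == l + 1:
        return (X[l], X[r]) if X[l] > X[r] else (X[r], X[l])
    # Divide: split around the middle index.
    mid = l + (r - l) // 2
    # Conquer: find max and min of both halves.
    left_max, left_min = find_max_min(X, l, mid)
    right_max, right_min = find_max_min(X, mid + 1, r)
    # Combine: compare the results of the two halves.
    return max(left_max, right_max), min(left_min, right_min)
```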

We divide the array into two equal subarrays: **X[l … mid]** and **X [mid + 1 … r]**. Then the sub-array **X[i … j]** with maximum sum must lie in one of these places: 1) Entirely in the left sub-array X[l…mid] 2) Entirely in the right sub-array X[mid+1…r] 3) Subarray crossing the mid-point. This includes some rightmost elements of the left subarray and some leftmost elements of the right subarray.

**Divide:** Calculate the middle index, i.e., mid = l + (r - l)/2.

**Conquer:** Recursively find the max sub-array sum of the left and right halves.

**Combine:** Calculate the maximum sub-array sum crossing the mid-point and take the maximum of all three possibilities.

**Base case:** When l == r, there is only one element in the array, and we can directly return the max subarray sum as X[l] or X[r].
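These steps can be sketched as follows. The helper name max_crossing_sum is chosen for illustration; it finds the best sum ending at mid (extending left) plus the best sum starting at mid + 1 (extending right):

```python
def max_crossing_sum(X, l, mid, r):
    # Best sum of a subarray ending exactly at mid, scanning leftward.
    left_sum, total = float('-inf'), 0
    for i in range(mid, l - 1, -1):
        total += X[i]
        left_sum = max(left_sum, total)
    # Best sum of a subarray starting exactly at mid + 1, scanning rightward.
    right_sum, total = float('-inf'), 0
    for j in range(mid + 1, r + 1):
        total += X[j]
        right_sum = max(right_sum, total)
    return left_sum + right_sum

def max_subarray_sum(X, l, r):
    # Base case: single element.
    if l == r:
        return X[l]
    mid = l + (r - l) // 2
    # Maximum of: entirely left, entirely right, or crossing the mid-point.
    return max(max_subarray_sum(X, l, mid),
               max_subarray_sum(X, mid + 1, r),
               max_crossing_sum(X, l, mid, r))
```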

**Divide:** Calculate the middle index, i.e., mid = l + (r - l) / 2.

**Conquer:** Recursively find the majority element of the left and right halves.

**Combine:** If the majority elements of the left and right halves are equal, then it must be the overall majority element. Otherwise, one of them must be equal to the overall majority element. To determine this, count the frequency of the left majority and right majority in the input array and return the value with the maximum frequency as the final output.

**Base case:** This is the case of a single-element array, i.e., if (l == r), return X[l] or X[r].
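A minimal sketch of this approach, assuming a majority element exists in the input (the usual problem statement):

```python
def majority_element(X, l, r):
    # Base case: single element is its own majority.
    if l == r:
        return X[l]
    mid = l + (r - l) // 2
    # Conquer: majority candidates of both halves.
    left = majority_element(X, l, mid)
    right = majority_element(X, mid + 1, r)
    # Combine: equal candidates must be the overall majority.
    if left == right:
        return left
    # Otherwise, count each candidate in X[l..r] and keep the more frequent.
    left_count = sum(1 for i in range(l, r + 1) if X[i] == left)
    right_count = sum(1 for i in range(l, r + 1) if X[i] == right)
    return left if left_count > right_count else right
```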

- Divide and conquer solution of finding the max difference
- Finding the closest pair of points
- Building segment tree for range query operations
- Divide and conquer solution of the convex hull
- Divide and conquer solution of merging k sorted lists
- Constructing binary tree from preorder and inorder
- Converting a sorted array to a balanced BST
- Divide and conquer solution of fast Fourier transform

There are a few cases where we use more than two subproblems to get the solution to the final problem. We will add detailed insights about this idea later.

We use recurrence relation to define the time complexity of divide and conquer algorithms. If T(n) is the time complexity for input size n, then T(n) = Time complexity of the divide part + Time complexity of the conquer part + Time complexity of the combine part.

- The time complexity of the divide part is O(1) in most scenarios because it just calculates the middle index. However, there are a few exceptions! For example, the divide part of quicksort is the partition process, which requires O(n) operations.
- Suppose we divide the problem into k different sub-problems and solve each sub-problem recursively. Then the time complexity of the conquer part is T(input size of sub-problem 1) + T(input size of sub-problem 2) + … + T(input size of sub-problem k).
- The time complexity of the combine part depends on the cost of extra operations to combine the solution of smaller sub-problems. It can be O(1), O(n), O(n^2), etc., based on the nature of the operation.
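As a concrete example, merge sort divides in O(1), makes two recursive calls on halves of size n/2, and combines with an O(n) merge, which gives the standard recurrence:

```
T(n) = T(n/2) + T(n/2) + O(1) + O(n)
     = 2T(n/2) + O(n)
     = O(n log n)
```

The last step can be obtained by the recursion tree method or the master theorem.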

The above approach provides a way to define the recurrence relation of the divide and conquer algorithm. Now we solve the recurrence relation and calculate the overall time complexity in terms of Big-O notation.

There are two popular ways to solve such recurrence relations: The **recursion tree method** and the **Master method**. You can explore this blog to understand how to analyze divide and conquer algorithms: Analysis of recursion in programming.

We implement a divide and conquer algorithm using recursion, which involves a call stack. Therefore, the space complexity depends on the size of the recursion call stack, which is equal to the depth of the recursion tree.

It is essential to ensure that the memory allocated for the recursion stack is sufficient. Otherwise, the execution may fail due to stack overflow. In other words, time-efficient divide and conquer algorithms usually have a relatively small recursion depth.

Since the sub-problems are independent, they can be computed in parallel. In other words, divide and conquer is a natural parallel algorithmic approach that leads to a large amount of parallelism. Even if we only divide our problem into two sub-problems, each sub-problem will generate two more smaller sub-problems, and so on.

Because of this property, divide-and-conquer algorithms can be used for execution in multi-processor or shared-memory systems. Data communication between processors does not need to be planned in advance because distinct sub-problems can be executed on different processors.

Additionally, it can make efficient use of memory caches. The idea is simple: once a sub-problem is small enough, it can be solved within the cache without accessing the main memory.

Sometimes, recursion with small base cases can lead to many recursive calls and huge recursive stacks. This may increase time and space complexity. On the other hand, we can choose the base case wisely to terminate recursion after fewer steps, improving efficiency significantly.

For example, quicksort's recursion could stop when the subarray has a single element or is empty. To improve efficiency, we can instead switch to a simple loop-based insertion sort once the number of items to be sorted is small enough. The idea is simple: a base case larger than one or two elements can eliminate many recursive calls and reduce the recursion stack size.
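One way to sketch this hybrid approach. The cutoff value of 16 and the Lomuto-style partition are illustrative choices; real libraries tune the threshold empirically:

```python
CUTOFF = 16  # illustrative threshold; tune empirically in practice

def insertion_sort(X, l, r):
    # Simple loop-based sort, efficient on small subarrays.
    for i in range(l + 1, r + 1):
        key, j = X[i], i - 1
        while j >= l and X[j] > key:
            X[j + 1] = X[j]
            j -= 1
        X[j + 1] = key

def partition(X, l, r):
    # Lomuto partition with the last element as pivot (an assumption).
    pivot, i = X[r], l
    for j in range(l, r):
        if X[j] < pivot:
            X[i], X[j] = X[j], X[i]
            i += 1
    X[i], X[r] = X[r], X[i]
    return i

def hybrid_quick_sort(X, l, r):
    # Larger base case: fall back to insertion sort on small subarrays
    # instead of recursing all the way down to size 0 or 1.
    if r - l + 1 <= CUTOFF:
        insertion_sort(X, l, r)
        return
    pos = partition(X, l, r)
    hybrid_quick_sort(X, l, pos - 1)
    hybrid_quick_sort(X, pos + 1, r)
```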

The following recursion tree diagram is an example of finding the maximum and minimum for an input size of 16 elements. We can observe that the total number of recursive calls will decrease if we terminate at a larger base case.

Sometimes, we can optimize the divide and conquer solution by reducing the total number of sub-problems. The idea is simple: it reduces the number of recursive calls significantly. The best examples of this are calculating the power function, Karatsuba algorithm, and Strassen matrix multiplication. We will provide more detailed insights into this idea.
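The power function illustrates this optimization well. A minimal sketch for a non-negative integer exponent:

```python
def power(x, n):
    # Naive divide and conquer would compute x^(n/2) twice, giving
    # T(n) = 2T(n/2) + O(1) = O(n). Computing it once and squaring
    # reduces this to T(n) = T(n/2) + O(1) = O(log n).
    if n == 0:
        return 1
    half = power(x, n // 2)
    return half * half if n % 2 == 0 else half * half * x
```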

- Divide-and-conquer algorithms can also be implemented non-recursively using an explicit stack data structure. We recommend exploring the stack-based implementations of merge sort and quicksort.
- The risk of stack overflow can also be reduced by minimizing the parameters and internal variables of the recursive calls to the sub-problems.
- Divide and conquer can solve several challenging problems with less time complexity than their brute-force solutions. However, to be efficient, one must ensure that the divide and combine steps are efficient.
- There is a difference between the recursive solution of divide and conquer and dynamic programming problems. Understanding this difference is important for mastering the dynamic programming approach. Explore this blog: Difference between dynamic programming and divide and conquer approach.
- We can solve several coding problems using the divide and conquer idea of binary search and merge sort.

We will keep updating this blog to add more insights. Please share your feedback or insights to improve this further. Enjoy learning, Enjoy algorithms!