# Introduction to Dynamic Programming

Dynamic programming is a popular problem-solving approach in data structures and algorithms, where we solve problems by combining the solutions of subproblems, just like the divide-and-conquer method. But rather than computing the same sub-problem repeatedly, we solve each sub-problem once, store the calculated value in extra memory to avoid recomputation, and return the already stored solution when the same sub-problem appears again.

This is the idea of the time-memory trade-off: we use extra space to improve the time complexity from an exponential-time solution to a polynomial-time solution. But before moving on to the dynamic programming strategy and its implementation steps, let's start by understanding a critical problem with the recursive solution for finding the nth Fibonacci number.

### Recursive solution of finding nth Fibonacci

In the Fibonacci sequence, every number after the first two is the sum of the two preceding ones. For example, these are the numbers of the Fibonacci series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, and so on.

The following recurrence relation defines the nth Fibonacci number in terms of the (n - 1)th and (n - 2)th Fibonacci numbers.

```
Recursive structure: fib(n) = fib(n - 1) + fib(n - 2)
Base case: fib(0) = 0 and fib(1) = 1
```

We are solving the problem of finding the nth Fibonacci using the solutions of the (n - 1)th and (n - 2)th Fibonacci, where fib(0) and fib(1) are the base cases. So we can easily implement the recursive solution by combining the solutions of these two sub-problems.

#### Pseudocode

```
int fib(int n)
{
    if (n == 0)
        return 0
    else if (n == 1)
        return 1
    else
        return fib(n - 1) + fib(n - 2)
}
```
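The pseudocode above translates directly into a minimal runnable sketch, shown here in Python:

```python
def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n <= 1:
        return n
    # Recursive structure: combine the solutions of the two sub-problems
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```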

#### Time complexity analysis

The above solution looks simple and elegant at first glance, but it is highly inefficient. Let's understand the reason by creating a recursion tree diagram for finding the nth Fibonacci. At each step of the recursion, there are two recursive calls to smaller sub-problems and one addition operation to combine their solutions. In other words, we are doing a constant number of operations at each step of the recursion. So one simple idea for the analysis is to count the total number of recursive calls, which is equal to the total number of nodes in the recursion tree.

If we observe the above recursion tree:

• As we move downward in the tree, the total number of recursive calls grows exponentially at each level, i.e. 2, 4, 8, 16, and so on.
• On the left side, the input parameter decreases by 1, and on the right side, it decreases by 2. So the depth of the leaf nodes is in the range (n/2, n). In other words, the height of the recursion tree in the worst case is n.
• Total number of recursive calls <= 1 + 2 + 4 + 8 + … + 2^n = 2^(n + 1) - 1 = O(2^n) [using the summation formula of a geometric series]. So time complexity = O(2^n) * O(1) = O(2^n).

This is a highly inefficient algorithm because the time complexity grows exponentially with the input size. It will take a very long time to produce output even for modest values of n like 30, 50, or 70. The critical question is: why is the time complexity growing exponentially? Let's explore the reason with the help of the above recursion tree diagram.

We are solving the same sub-problems again and again during the recursion, i.e. fib(n - 2) is calculated 2 times, fib(n - 3) is calculated 3 times, and so on. As we increase the value of n, the number of repeated sub-problems increases as well. Because of this repeated calculation of the same sub-problems, the time complexity is exponential in n. Again, the critical question is: can we stop the repeated computation and improve the efficiency? Yes, here comes the idea of DP!
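We can verify this repetition by instrumenting the naive recursion with a counter. Here is a small Python sketch (the `Counter`-based bookkeeping is just for illustration):

```python
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1  # record every time fib(n) is computed
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(6)
# fib(4) is computed 2 times, fib(3) 3 times, fib(2) 5 times:
print(calls[4], calls[3], calls[2])  # 2 3 5
```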

### Techniques to solve dynamic programming problems

There are two popular techniques to solve dynamic programming problems:

• Top-down memoization (Recursive approach)
• Bottom-up approach (Iterative approach). This is the most popular and efficient way to solve dynamic programming problems.

### Top-down memoization (Recursive approach)

This approach is an optimized version of the recursive approach. Instead of solving each sub-problem several times during the recursion, we cache or store the solution in some extra memory or hash table when we encounter the sub-problem for the first time.

When we discover the same sub-problem again during the recursion, we check the extra memory or hash table instead of computing it again. If the sub-problem solution is already stored, we return that value. Otherwise, we calculate the solution and store it in the extra memory or hash table.

There is a critical question: What would be the size of the extra memory? The idea is simple: We need to allocate space to store each unique sub-problem. In other words, the size of extra memory is equal to the total number of different subproblems.

One more important thing: We need to initialize the table with some empty flag or value to identify that the sub-problem solution is not calculated yet. Let’s visualize and understand this idea using a basic example of finding the nth Fibonacci problem.

#### Fibonacci solution using top-down memoization

In the recursive solution of finding the nth Fibonacci, there are a total of n + 1 different sub-problems, i.e. fib(0), fib(1), fib(2), …, fib(n - 2), fib(n - 1), and fib(n). So we need extra memory of size n + 1 to store the solutions of the different sub-problems. Let's call it F[n + 1].

Now we initialize all the values in the table to -1, as a Fibonacci value can't be negative. This helps us check whether the solution of a sub-problem has already been computed or not. Now we are ready to modify the recursive solution.

• If F[i] < 0, the value of the ith Fibonacci has not been computed yet. We calculate it recursively and store the solution at the ith index of the table, i.e. F[i] = fib(i - 1) + fib(i - 2). Here the initial value of i is n.
• If F[i] >= 0, the solution of the ith Fibonacci has already been calculated and stored at F[i]. (Note that we must check >= 0, not > 0, because F[0] = 0.) So we don't need to calculate it again; we just return the value stored at F[i].
• By the end of this recursive process, the solutions of all the sub-problems get stored in the table. So, as the final solution of the nth Fibonacci, we return the value stored at F[n].

One most important thing: the role of the base case is critical in this process, so we need to define it correctly (Why? Think!). A base case is a situation when the recursion reaches n = 0 or n = 1, for which we already know the solution. We can directly store 0 at F[0] and 1 at F[1].

#### Pseudocode of nth Fibonacci using the top-down approach

```
Initialize F[n + 1] with -1.

int fib(int n)
{
    if (n <= 1)
        F[n] = n
    else if (F[n] < 0)
        F[n] = fib(n - 1) + fib(n - 2)
    return F[n]
}
```

#### Time and space complexity analysis

At each stage of the recursion, there are two sub-problems, but we solve each sub-problem only once. So the total number of recursive calls = n + (n - 1) = 2n - 1 (we solve the n + 1 sub-problems only once). So time complexity = O(n). Space complexity = O(n) for the extra array of size n + 1 that stores the solutions of the sub-problems.
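Here is a runnable Python sketch of the top-down version; the array F and the -1 flag follow the description above, and the helper is wrapped in a function so the table size matches n:

```python
def fib_memo(n):
    F = [-1] * (n + 1)  # -1 marks sub-problems not yet solved

    def fib(i):
        if i <= 1:
            F[i] = i              # base cases: fib(0) = 0, fib(1) = 1
        elif F[i] < 0:            # not computed yet: solve and store
            F[i] = fib(i - 1) + fib(i - 2)
        return F[i]               # otherwise just return the stored value

    return fib(n)

print(fib_memo(50))  # 12586269025 (the naive version would take ages here)
```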

### The intuition of the bottom-up approach from the top-down approach

If we observe the flow of recursive calls to store results in the table in the top-down approach, we can get insights related to the bottom-up approach of dynamic programming. How? Let’s think!

When we call fib(n), the recursion goes top-down from the larger problem of size n to the smallest versions of the problem of size 0 and 1 (base cases). So it first stores values at F[0] and F[1]. In other words, it keeps storing values from smaller problems to larger problems, where F[0] and F[1] are the first two values that get stored in the table. Order of execution of recursive calls:

fib(n)-> fib(n-1)-> fib(n-2) …-> fib(i)-> fib(i-1)-> fib(i-2)…-> fib(2)-> fib(1)-> fib(0)

Order of storing the results in the table

F[0] -> F[1] -> F[2] -> … -> F[i-2] -> F[i-1] -> F[i] -> … -> F[n-2] -> F[n-1] -> F[n]

So one idea is simple: rather than using the top-down approach, we can use an iterative approach and store the results in a bottom-up fashion. In other words, we can start by storing the solutions for input sizes 0 and 1 (the base case of recursion) and iteratively store the solutions of the larger sub-problems. Think!

### The bottom-up approach to dynamic programming

This is the most popular and efficient way to solve dynamic programming problems, in an iterative way. As seen in the above section, we can solve the smaller problems first and then combine their results to build the solutions of the larger sub-problems.

We can implement this using a simple loop storing results in a table. Let’s understand and visualize this approach using the example of finding the solution of nth Fibonacci.

#### Fibonacci solution using the bottom-up approach

Step 1: Defining Table Structure

To store the solution in a bottom-up approach, we need to first define the table structure and its size. The table structure is defined by the number of problem variables. Here we have only one variable on which the state of the problem depends, which is n (The value of n is decreasing after every recursive call). So, we can construct a one-dimensional array to store the solution to the sub-problems.

Step 2: Defining Table Size

The size of this table is defined by the total number of different subproblems. If we observe the recursion tree, there can be total (n+1) sub-problems of different sizes.

Step 3: Table Size Initialization

We can initialize the table by using the base cases. This helps us fill the table and build the solution for the larger sub-problems: F[0] = 0 and F[1] = 1.

Step 4: Defining Iterative Structure to fill the Table

We should define the iterative structure to fill the table by using the recursive structure of the recursive solution.

```
Recursive structure: fib(n) = fib(n - 1) + fib(n - 2)
Iterative structure: F[i] = F[i - 1] + F[i - 2]
```

Step 5: Termination and returning the final solution

After storing the solutions in a bottom-up manner, the final solution gets stored at the last index of the array, i.e. we return F[n].

#### Pseudocode of nth Fibonacci using the bottom-up approach

```
int fib(int n)
{
    int F[n + 1]
    F[0] = 0
    F[1] = 1
    for (int i = 2; i <= n; i = i + 1)
        F[i] = F[i - 1] + F[i - 2]
    return F[n]
}
```
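The same idea as runnable Python, as a direct translation of the pseudocode above (with a small guard for n = 0, where the table would have only one entry):

```python
def fib_bottom_up(n):
    if n == 0:
        return 0
    F = [0] * (n + 1)   # table of solutions for sub-problems 0..n
    F[1] = 1            # base cases: F[0] = 0, F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]   # iterative structure
    return F[n]

print(fib_bottom_up(10))  # 55
```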

#### Time and space complexity analysis

We are using a table of size n + 1 and running only one loop to fill the table. Time complexity = O(n), space complexity = O(n).

### Subproblem graphs in Dynamic Programming

In a dynamic-programming problem, we must identify the subproblems involved and their dependency on each other. To understand this idea, let’s understand the concept of the subproblem graph.

The sub-problem graph is a directed graph of sub-problems, where we have one node for each unique sub-problem. It has a directed edge from the node of subproblem p to subproblem q if the solution for subproblem p involves the solution for subproblem q.

For example, the bottom-up approach considers the nodes of the subproblem graph in such an order that we solve the subproblems q adjacent to a given subproblem p before we solve p. This means we process the vertices of the subproblem graph in reverse topological order. Think! In other words, no subproblem is considered until all the subproblems it depends upon have been solved.

The size of the subproblem graph can help us determine the dependencies and the order in which we need to build solutions in a bottom-up manner. On the other hand, the time to compute the solution of a subproblem is proportional to the out-degree (number of outgoing edges) of the corresponding vertex, and the number of subproblems is equal to the number of vertices in the subproblem graph. In this common case, the running time of dynamic programming is linear in the number of vertices and edges.
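As a small illustration (the function name is my own), the subproblem graph of fib(n) has n + 1 vertices, and each vertex i with i >= 2 has outgoing edges to i - 1 and i - 2; a reverse topological order is simply 0, 1, …, n:

```python
def fib_subproblem_graph(n):
    # One vertex per unique sub-problem; an edge i -> j means
    # "solving fib(i) requires the solution of fib(j)".
    graph = {i: [] for i in range(n + 1)}
    for i in range(2, n + 1):
        graph[i] = [i - 1, i - 2]
    return graph

g = fib_subproblem_graph(5)
print(len(g))                           # 6 vertices (n + 1)
print(sum(len(v) for v in g.values()))  # 8 edges (2 per vertex with i >= 2)
```

Since the vertex and edge counts are both O(n), this confirms the O(n) running time of the bottom-up solution.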

### Why is DP popular during coding interviews?

Dynamic programming is important during coding interviews due to several reasons:

• There are many variations of dynamic programming problems.
• There is often scope to optimize the time and space complexity of a dynamic programming solution. Sometimes we can optimize a DP solution from O(n²) to O(n) time, or from O(n²) space to O(n) or O(1) space.
• It is one of the best ways to evaluate a candidate's understanding of problem-solving using iteration and recursion.
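As a concrete instance of such a space optimization: the bottom-up Fibonacci solution only ever reads the last two table entries, so the O(n) table can be replaced by two variables (a sketch, not the only way to do it):

```python
def fib_optimized(n):
    if n <= 1:
        return n
    prev, curr = 0, 1                    # fib(0) and fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr   # slide the two-value window forward
    return curr                          # O(n) time, O(1) extra space

print(fib_optimized(50))  # 12586269025
```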

### Real-life Application of Dynamic Programming

• Sequence alignment, document diffing algorithms, document distance (edit distance), plagiarism detection, typesetting systems
• Duckworth-Lewis method in cricket, flight control
• Speech recognition, image processing, machine learning algorithms
• Economics, financial trading, bioinformatics, operations research

### Suggested problems to solve

### References

• Algorithms by CLRS
• Algorithm Design Manual by Skiena

Happy coding! Enjoy Algorithms.