Dynamic Programming: Examples, Common Problems, and Solutions
Dynamic programming problems can catch you off-guard in an interview or exam. Check out the most common problems and the solutions here.
There’s no doubt that dynamic programming problems can be very intimidating in a coding interview. Even when you may know that a problem needs to be solved using a dynamic programming method, it’s a challenge to be able to come up with a working solution in a limited time frame.
The best way to be good at dynamic programming problems is to go through as many of them as you can. While you don’t necessarily need to memorize the solution to every problem, it’s good to have an idea of how to go about implementing one.

What Is Dynamic Programming?
Simply put, dynamic programming is an optimization method for recursive algorithms, typically those used to solve computing or mathematical problems.
You can also call it an algorithmic technique for solving an optimization problem by breaking it into simpler sub-problems. A key principle that dynamic programming is based on is that the optimal solution to a problem depends on the solutions to its sub-problems.
Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using dynamic programming. The idea is to simply store the results of subproblems so that we do not have to re-compute them when needed later.
Dynamic programming solutions typically have polynomial complexity, which assures a much faster running time than other techniques like plain recursion or backtracking. In most cases, dynamic programming reduces time complexity (expressed in big-O notation) from exponential to polynomial.
Now that you have a good idea of what dynamic programming is, it’s time to check out a few common problems and their solutions.
Dynamic Programming Problems
1. Knapsack Problem
Problem Statement
Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight doesn’t exceed a given limit and the total value is as large as possible.
You’re given two integer arrays values[0..n-1] and weights[0..n-1] which represent values and weights associated with n items respectively. Also given is an integer W which represents the knapsack capacity.
Here we’re solving the 0/1 knapsack problem, which means that we can choose to either add an item or exclude it.
- Create a two-dimensional array with n+1 rows and W+1 columns. A row number i denotes the set of items from 1 to i, and a column number j denotes the maximum carrying capacity of the bag.
- The numeric value at [i][j] denotes the total value of items up till i in a bag that can carry a maximum weight of j.
- At every coordinate [i][j] in the array, pick the maximum value that we can obtain without item i, or the maximum value that we can obtain with item i, whichever is larger.
- The maximum obtainable value by including item i is the sum of item i itself and the maximum value that can be obtained with the remaining capacity of the knapsack.
- Perform this step until the table is full; the maximum obtainable value sits at [n][W], the last row and column. A sketch of this procedure follows below.
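Here is a minimal sketch of this procedure in Python (the language choice, function name, and sample items are illustrative):

```python
def knapsack(values, weights, W):
    """0/1 knapsack: maximum total value under capacity W."""
    n = len(values)
    # dp[i][j]: best value using items 1..i with carrying capacity j
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            dp[i][j] = dp[i - 1][j]          # best value without item i
            if weights[i - 1] <= j:          # if item i fits, try including it
                dp[i][j] = max(dp[i][j],
                               values[i - 1] + dp[i - 1][j - weights[i - 1]])
    return dp[n][W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```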
2. Coin Change Problem
Suppose you’re given an array of numbers that represent the values of each coin. Given a specific amount, find the minimum number of coins that are needed to make that amount.
- Initialise an array of size n+1, where n is the amount. Set the value at every index i of the array to n+1, a sentinel larger than any possible coin count (even an amount made entirely of coins of denomination 1 needs only n coins).
- Initialise the base case array[0] = 0, since no coins at all are needed to make an amount of 0.
- For every other index i, compare the value in it (which is initially set to n+1) with the value array[i-k] + 1 for every coin denomination k that is less than or equal to i. This essentially checks every reachable smaller amount to find the minimum possible number of coins we can use.
- If any array[i-k] + 1 is less than the existing value at array[i], replace the value at array[i] with array[i-k] + 1. The answer for the full amount ends up at array[n]; see the sketch below.
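A sketch of these steps in Python (names and the sample denominations are illustrative):

```python
def min_coins(coins, amount):
    """Fewest coins needed to make `amount`; -1 if it cannot be made."""
    sentinel = amount + 1            # larger than any possible coin count
    dp = [sentinel] * (amount + 1)
    dp[0] = 0                        # base case: amount 0 needs no coins
    for i in range(1, amount + 1):
        for k in coins:
            if k <= i and dp[i - k] + 1 < dp[i]:
                dp[i] = dp[i - k] + 1
    return dp[amount] if dp[amount] != sentinel else -1

print(min_coins([1, 2, 5], 11))  # 3 (5 + 5 + 1)
```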
3. Fibonacci
The Fibonacci Series is a sequence of integers where the next integer in the series is the sum of the previous two.
It’s defined by the following recursive relation: F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2), where F(n) is the nth term. In this problem, we have to generate all the numbers in a Fibonacci sequence up till a given nth term.
- First, use a recursive approach to implement the given recurrence relation.
- Recursively solving this problem entails breaking down F(n) into F(n-1) + F(n-2), and then calling the function with F(n-1) and F(n-2) as parameters. We do this until the base cases where n = 0, or n = 1 are reached.
- Now, we use a technique called memoization. Store the results of all function calls in an array. This will ensure that for every n, F(n) only needs to be calculated once.
- For any subsequent calculations, its value can simply be retrieved from the array in constant time, as in the sketch below.
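A minimal sketch of this memoized approach in Python (illustrative, not the only way to structure the cache):

```python
def fib(n, memo=None):
    """nth Fibonacci number; each F(n) is calculated only once."""
    if memo is None:
        memo = [None] * (n + 1)      # array storing results of all calls
    if n < 2:
        return n                     # base cases F(0) = 0, F(1) = 1
    if memo[n] is None:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]                   # subsequent lookups are constant time

print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```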
4. Longest Increasing Subsequence
Find the length of the longest increasing subsequence inside a given array. The longest increasing subsequence is a subsequence within an array of numbers whose elements are in strictly increasing order.
Also, the items of the sequence do not need to be consecutive.
- Start with a recursive approach where you calculate the value of the longest increasing subsequence of every possible subarray from index zero to index i, where i is less than or equal to the size of the array.
- To turn this method into a dynamic one, create an array to store the value for every subsequence. Initialise all the values of this array to 0.
- Every index i of this array corresponds to the length of the longest increasing subsequence for a subarray of size i.
- Now, for every recursive call of findLIS(arr, n), check the nth index of the array. If this value is 0, then calculate the value using the method in the first step and store it at the nth index.
- Finally, return the maximum value from the array. This is the length of the longest increasing subsequence of the given array. A sketch of one common implementation follows below.
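One common implementation of this idea in Python (a sketch; the function name and the O(n^2) double loop are illustrative, with lis[i] holding the length of the longest increasing subsequence ending at index i):

```python
def find_lis(arr):
    """Length of the longest (strictly) increasing subsequence. O(n^2) DP."""
    if not arr:
        return 0
    lis = [1] * len(arr)          # lis[i]: best length ending at index i
    for i in range(1, len(arr)):
        for j in range(i):
            if arr[j] < arr[i]:   # arr[i] can extend a subsequence ending at j
                lis[i] = max(lis[i], lis[j] + 1)
    return max(lis)

print(find_lis([10, 9, 2, 5, 3, 7, 101, 18]))  # 4, e.g. 2, 3, 7, 18
```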
Solutions to Dynamic Programming Problems
Now that you’ve gone through some of the most popular dynamic programming problems, it’s time to try implementing the solutions by yourself. If you’re stuck, you can always come back and refer to the algorithm section for each problem above.
Given how popular techniques such as recursion and dynamic programming are today, it won’t hurt to check out some popular platforms where you can learn such concepts and hone your coding skills. While you might not run into these problems on a daily basis, you'll surely encounter them in a technical interview.
Naturally, having the know-how of common problems is bound to pay dividends when you go for your next interview. So open up your favourite IDE, and get started!
Dynamic Programming: An Approach to Solving Computing Problems

Sometimes in computer science, you will run into problems. You can divide these into subproblems, which can, in turn, be divided into smaller subproblems. If the smaller subproblems overlap, then you can save the result in memory for future reference. This way, you don’t need to compute the same result multiple times, thus increasing the efficiency of the program substantially. This way of solving these problems is referred to as dynamic programming.
In this article, you will learn what dynamic programming is. I will also show how to compute Fibonacci numbers, which is a simple problem that dynamic programming can solve. I will compare the dynamic programming solutions to the naive solution that uses recursion. These examples are written in Python syntax. Finally, I’ll also give some general pointers to keep in mind when attempting to solve problems using dynamic programming.
What Types of Problems Can Dynamic Programming Solve?
Dynamic programming is typically a way to optimize solutions to certain problems that use recursion. If a recursive solution to a problem has to compute solutions for subproblems with the same inputs repeatedly, then you can optimize it through dynamic programming. As mentioned earlier, in this case, you would simply save the result of the computation for use later if and when it’s needed. This optimization can reduce the time complexity of an algorithm from exponential time to polynomial time. This means that the number of computations scales like a polynomial expression in the input size n, instead of scaling like an exponential expression as n increases. In general, polynomial expressions grow much slower than exponential expressions.
There are two conditions that need to be satisfied to use dynamic programming:
- Overlapping subproblems
- Optimal substructure property
What Are Overlapping Subproblems?
I alluded to the overlapping subproblems condition earlier. It simply means that when solving the problem in question, the solutions for the same subproblems are repeatedly necessary. In this case, the solutions to these subproblems can be stored for later use to skip recomputation.
Another way to think about this condition is to turn it upside down. If there are no overlapping subproblems, then there’s no point in saving the solutions for the subproblems, and you can’t use dynamic programming.
There are two different ways of storing the solutions to the overlapping subproblems:
- Memoization (top-down)
- Tabulation (bottom-up)
What Is Memoization?
The memoization approach to dynamic programming is very similar to the naive recursion approach, with only a small change. The difference is that we use a lookup table to store solutions to the subproblems, and then use this table to check whether that solution already exists.
If the solution for a certain subproblem already exists in the lookup table, then that solution is returned from the lookup table. If it does not, then it needs to be computed (through recursion) and added to the lookup table.
For the sake of clarity, let’s define the solution for a subproblem in our dynamic programming problem to be DP[X], with DP[N] being the desired solution and DP[0] being the base solution. In the memoization approach, the program starts from DP[N] and asks for the solutions from which DP[N] can be reached (these should be subproblems of lower order, DP[n < N]). Then, from these states, the same process is repeated recursively, until reaching the base solution DP[0].
If this feels a little too abstract, don’t worry. The examples introduced later in this article should clarify what I mean.
Memoization is known as a top-down approach to dynamic programming since the program will start from the desired, last (top) state and work its way down recursively.
What Is Tabulation?
The tabulation approach to dynamic programming works in a reverse manner compared to the memoization approach. The program will start from the base (or bottom) solution for the subproblem and work its way up, solving the subproblems one by one until reaching the desired solution.
In terms of solutions to subproblems, the tabulation approach starts from the base solution DP[0] and then calculates DP[1], DP[2], … DP[N] until reaching the desired solution for the subproblem DP[N] . Since we started from the base solution DP[0] and worked towards the desired solution DP[N] , the tabulation approach is also known as a bottom-up approach.
Again, the examples below should make this easier to understand.
What Is Optimal Substructure Property?
If the optimal solution to a problem can be obtained using the optimal solution to its subproblems, then the problem is said to have optimal substructure property .
As an example, let’s consider the problem of finding the shortest path between ‘Start’ and ‘Goal’ nodes in the graph below. The nodes are connected to other nodes via edges, and the distance between two connected nodes is marked with a number next to the edge.

The shortest path from the Start node to the Goal node is through nodes three and four.
This problem clearly has optimal substructure property. Since the shortest path from the Start node to the Goal node goes through Node Four, it clearly follows that this path is a combination of the shortest path from the Start node to Node Four and the shortest path from Node Four to the Goal node.
Many problems do not have optimal substructure property. For example, the problem of finding the longest path (with no cycles) between the Start node and the Goal node in the above graph doesn’t. Here’s how you can see this:
The longest path is: Start - Node Three - Node Two - Node One - Node Four - Goal. However, this does not imply that the longest path from the Start node to Node Two is Start - Node Three - Node Two. The longest path from the Start node to Node Two is actually Start - Node One - Node Four - Node Three - Node Two (and Start - Node One - Node Four - Goal - Node Two, which has equal length to the first one).
Dynamic Programming Example: Calculating Fibonacci Numbers
One of the simplest examples of dynamic programming is the computation of Fibonacci numbers, which are numbers from the Fibonacci sequence. The first Fibonacci number is zero, the second is one, and the subsequent numbers are the sum of the previous two Fibonacci numbers. The 10 first Fibonacci numbers are zero, one, one, two, three, five, eight, 13, 21, and 34.
Let’s first start with the naive, recursive solution. Here’s a Python function to calculate the nth Fibonacci number (indexing starts from zero):
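A minimal sketch of such a function:

```python
def fib(n):
    """Naive recursive nth Fibonacci number (indexing starts from zero)."""
    if n < 2:
        return n          # fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)
```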
From this example, it is easy to see that this problem satisfies the overlapping subproblems condition, since the function clearly has to calculate the previous Fibonacci numbers multiple times (when n is greater than three). The smallest Fibonacci numbers are computed most often when this function is called for a large n.
This problem also clearly has optimal substructure since there is only a single solution for the subproblems, which are used to compute the solution to the desired problem.
Due to the recursion, this function runs in exponential time.
Let’s next look at how this could be solved using dynamic programming. Let’s start with a top-down solution using memoization. Here’s a Python function that calculates the nth Fibonacci number that implements dynamic programming through memoization:
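A sketch of what this might look like, with a dictionary serving as the lookup table:

```python
def fib_memo(n, lookup=None):
    """nth Fibonacci number, memoized top-down."""
    if lookup is None:
        lookup = {}
    if n in lookup:
        return lookup[n]            # solution already exists in the table
    if n < 2:
        return n
    lookup[n] = fib_memo(n - 1, lookup) + fib_memo(n - 2, lookup)
    return lookup[n]                # saved so it is never recomputed
```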
This approach is quite similar to the recursive approach since it starts from calculating the nth Fibonacci number and, in order to calculate it, goes on to calculate the (n-1)st and (n-2)nd Fibonacci numbers. The difference is in the lookup table, where the smaller Fibonacci numbers will be saved as they are calculated, so that they do not need to be calculated again and again.
This makes this function actually run in linear time instead of exponential time.
For the sake of an example, let’s also look at a Python function that solves the same problem in a bottom-up manner with dynamic programming using tabulation:
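A sketch of the tabulated version:

```python
def fib_tab(n):
    """nth Fibonacci number, computed bottom-up."""
    if n < 2:
        return n
    table = [0] * (n + 1)           # table[i] holds the ith Fibonacci number
    table[1] = 1
    for i in range(2, n + 1):       # first, second, third, ... up to n
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```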
In this solution, the nth Fibonacci number is calculated in a bottom-up manner, starting by calculating the first Fibonacci number, then the second, third and so on until reaching the nth Fibonacci number. This function also runs in linear time.
How to Solve Problems Using Dynamic Programming
The first step to solving a problem using dynamic programming is to identify it as a dynamic programming problem. If you can validate that the problem has overlapping subproblems and that it satisfies the optimal substructure property, you can be sure that you can solve it with dynamic programming.
The second step is to decide on a state expression. This state is the set of parameters that uniquely identifies the different subproblems.
In the Fibonacci numbers example, the parameter identifying the state would just be the serial number of a certain Fibonacci number.
There can be more than one parameter identifying a state. You should always use as few parameters as possible, however.
The third — and probably hardest — step in solving problems using dynamic programming is formulating the relation between the different states.
In the Fibonacci number case this is simple, since the nth Fibonacci number is the sum of the (n-1)st and (n-2)nd Fibonacci numbers. So F[n] = F[n-1] + F[n-2].
The fourth step is to implement memoization or tabulation for the states that you decided upon, using the relation you discovered between the states. This means simply saving the state (or, in other words, the solution to a certain subproblem) so it can be recovered from memory without recomputation when it is needed again. This should be fairly simple, if you did steps one to three well.
The complete beginner's guide to dynamic programming
Dynamic programming isn't about design patterns; it's a way of thinking that breaks down a problem into individual components.

If you've been programming for long enough, you've probably heard the term dynamic programming. Often a key subject in technical interviews , the idea will also come up in design review meetings or regular interactions with fellow developers. This essay will examine what dynamic programming is and why you would use it. I'll be illustrating this concept with specific code examples in Swift, but the concepts I introduce can be applied to your language of choice. Let's begin!
A way of thinking
Unlike specific coding syntax or design patterns, dynamic programming isn't a particular algorithm but a way of thinking . Therefore, the technique takes many forms when it comes to implementation.
The main idea of dynamic programming is to consider a significant problem and break it into smaller, individualized components. When it comes to implementation, optimal techniques rely on data storage and reuse to increase algorithm efficiency. As we'll see, many questions in software development are solved using various forms of dynamic programming. The trick is recognizing when optimal solutions can be devised using a simple variable or require a sophisticated data structure or algorithm.
For example, code variables can be considered an elementary form of dynamic programming. As we know, a variable's purpose is to reserve a specific place in memory for a value to be recalled later.
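As a simple illustration, here is a rough Python rendering of the addNumbersMemo idea referenced below (the article's own listings are in Swift; this reconstruction is just a sketch):

```python
_sums = {}  # previously computed results, keyed by input pair

def add_numbers_memo(a, b):
    """Toy memoization: recall a stored sum rather than recomputing it."""
    if (a, b) not in _sums:
        _sums[(a, b)] = a + b       # reserve the value in memory for later
    return _sums[(a, b)]
```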
While addNumbersMemo above does provide a simple introduction, the goal of a dynamic programming solution is to preserve previously seen values because the alternative could either be inefficient or could prevent you from answering the question. This design technique is known as memoization .
Code challenge—Pair of numbers
Over the years, I've had the chance to conduct mock interviews with dozens of developers preparing for interviews at top companies like Apple, Facebook, and Amazon. Most of us would be happy to skip the realities of the dreaded whiteboard or take-home coding project. However, the fact is that many of these brain-teaser questions are designed to test for a basic understanding of computer science fundamentals. For example, consider the following: given a sequence of numbers, find the two numbers that add up to 11.
As developers, we know there are usually multiple ways to arrive at a solution. In this case, the goal is knowing which numbers should be paired to achieve the expected result. As people, it's easy for us to quickly scan the sequence of numbers and promptly come up with the pair of 9 and 2. However, an algorithm will need to either check and compare each value in the sequence or develop a more streamlined solution to help us find the values we are seeking. Let's review both techniques.
A brute force approach
Our first approach involves looking at the first value, then reviewing each subsequent value to determine if it will provide the difference needed to solve the question. For example, once our algorithm checks the value of the first array item, 8, it will then scan the remaining values for 3 (e.g., 11 - 8 = 3). However, we can see the value of 3 doesn't exist, so the algorithm will repeat the same process for the next value (in our case, 10) until it finds a successful matching pair. Without going into the details of big-O notation, we can assume this type of solution would have an average runtime of O(n^2) time or greater, mainly because our algorithm works by comparing each value with every other value. In code, this can be represented as follows:
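A Python sketch of this brute force approach (the article's exact sequence isn't shown, so the list below is a hypothetical one consistent with the walkthrough):

```python
def pair_values(sequence, target):
    """Compare each value against every later value. Roughly O(n^2)."""
    for i in range(len(sequence)):
        for j in range(i + 1, len(sequence)):
            if sequence[i] + sequence[j] == target:
                return sequence[i], sequence[j]
    return None

print(pair_values([8, 10, 2, 9, 7, 5], 11))  # (2, 9)
```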
A memoized approach
Next, let's try a different approach using the idea of memoization. Before implementing our code, we can brainstorm how storing previously seen values will help streamline the process. While using a standard array is feasible, a set collection object (also referred to as a hash table or hash map) could provide an optimized solution.
Using a memoized approach, we've improved the algorithm's average run time efficiency to O(n + d) by adding previously seen values to a set collection object. Those familiar with hash-based structures will know that item insert and retrieval occurs in O(1), constant time. This further streamlines the solution, as the set is designed to retrieve values in an optimized way regardless of size.
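A sketch of the memoized version, using a set to store previously seen values (same hypothetical sequence as above):

```python
def pair_values_memo(sequence, target):
    """Single pass: store seen values in a set for O(1) lookups."""
    seen = set()
    for value in sequence:
        complement = target - value
        if complement in seen:       # we've already seen the matching value
            return complement, value
        seen.add(value)              # remember this value for later checks
    return None

print(pair_values_memo([8, 10, 2, 9, 7, 5], 11))  # (2, 9)
```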
The Fibonacci sequence
When learning various programming techniques, one topic that comes to mind is recursion. Recursive solutions work by having a model that refers to itself. As such, recursive techniques are implemented through algorithms or data structures. A well-known example of recursion can be seen with the Fibonacci sequence—a numerical sequence made by adding the two preceding numbers (0, 1, 1, 2, 3, 5, 8, 13, 21, etc):
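A sketch of such a recursive function (the article's original is in Swift and named fibRec; this Python rendering indexes positions from 1 to match the table below):

```python
def fib_rec(n):
    """Fibonacci number at position n, computed by plain recursion."""
    if n == 1:
        return 0          # position 1 holds 0
    if n == 2:
        return 1          # position 2 holds 1
    return fib_rec(n - 1) + fib_rec(n - 2)
```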
When examined, our code is error-free and works as expected. However, notice a few things about the performance of the algorithm:
| Position (n) | Number of times fibRec() is called |
| --- | --- |
| 2 | 1 |
| 4 | 5 |
| 10 | 109 |
| 15 | 1219 |
As shown, there's a significant increase in the number of times our function is called. Similar to our previous example, the algorithm's performance decreases exponentially based on the input size. This occurs because the operation does not store previously calculated values. Without access to stored variables, the only way we can obtain the required (preceding) values is through recursion. Assuming this code is used in a production setting, the function could introduce bugs or performance errors. Let's refactor the code to support a memoized approach:
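A sketch of the refactored, iterative version described below (a Python rendering; names are illustrative):

```python
def fib_memoized(count):
    """Build the Fibonacci sequence iteratively, storing every value."""
    sequence = [0, 1]
    for _ in range(2, count):
        result = sequence[-1] + sequence[-2]  # two most recent values
        sequence.append(result)               # stored for the next iteration
    return sequence[:count]

print(fib_memoized(9))  # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```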
This revised solution now supports memoization through the use of stored variables. Notice how the refactored code no longer requires a recursive technique. The two most previous values are added to a result, which is appended to the main array sequence. Even though the algorithm's performance still depends on the sequence size, our revisions have increased algorithmic efficiency to O(n) - linear time. In addition, our iterative solution should be easier to revise, test and debug since a single function is added to the call stack, thus reducing complexities with memory management and object scope.
We've learned that dynamic programming isn't a specific design pattern so much as a way of thinking. Its goal is to create a solution that preserves previously seen values to increase time efficiency. While our examples were basic algorithms, dynamic programming provides a foundation in almost all programs. This includes the use of simple variables and complex data structures.
Dynamic Programming
Dynamic programming refers to a problem-solving approach, in which we precompute and store simpler, similar subproblems, in order to build up the solution to a complex problem. It is similar to recursion , in which calculating the base cases allows us to inductively determine the final value. This bottom-up approach works well when the new value depends only on previously calculated values.
An important property of a problem that is being solved through dynamic programming is that it should have overlapping subproblems. This is what distinguishes DP from divide and conquer in which storing the simpler values isn't necessary.
To show how powerful the technique can be, here are some of the most famous problems commonly approached through dynamic programming:
- Backpack Problem : Given a set of treasures with known values and weights, which of them should you pick to maximize your profit whilst not damaging your backpack which has a fixed capacity?
- Egg Dropping : What is the best way to drop \(n\) eggs from an \(m\)-floored building to determine the lowest floor from which a dropped egg cracks?
- Longest Common Subsequence : Given two sequences, which is the longest subsequence common to both of them?
- Subset Sum Problem : Given a set and a value \(n,\) is there a subset the sum of whose elements is \(n?\)
- Fibonacci Numbers : Is there a better way to compute Fibonacci numbers than plain recursion?
In a contest environment, dynamic programming almost always comes up (and often in a surprising way, no matter how familiar the contestant is with it).
Motivational Example: Change of Coins
What is the minimum number of coins of values \(v_1,v_2, v_3, \ldots, v_n\) required to make a total of \(V?\) You may use a denomination more than once.
Optimal Substructure
The most important aspect of this problem that encourages us to solve this through dynamic programming is that it can be simplified to smaller subproblems.
Let \(f(N)\) represent the minimum number of coins required for a value of \(N\).
Visualize \(f(N)\) as a stack of coins. What is the coin at the top of the stack? It could be any of \(v_1,v_2, v_3, \ldots, v_n\). In case it were \(v_1\), the rest of the stack would amount to \(N-v_1;\) or if it were \(v_2\), the rest of the stack would amount to \(N-v_2\), and so on.
How do we decide which is it? Sure enough, we do not know yet. We need to see which of them minimizes the number of coins required.
Going by the above argument, we could state the problem as follows:
\[f(V) = \min \Big( \big\{ 1 + f(V - v_1), 1 + f(V-v_2), \ldots, 1 + f(V-v_n) \big \} \Big). \]
This is because the coin at the top of the stack also counts as one coin, and then we can look at the rest of the stack.
Overlapping Subproblems
It is easy to see that the subproblems could be overlapping.
For example, if we are trying to make a stack of $11 using $1, $2, and $5, our look-up pattern would be like this: \[\begin{align} f(11) &= \min \Big( \big\{ 1+f(10),\ 1+ f(9),\ 1 + f(6) \big\} \Big) \\ &= \min \Big ( \big \{ 1+ \min {\small \left ( \{ 1 + f(9), 1+ f(8), 1+ f(5) \} \right )},\ 1+ f(9),\ 1 + f(6) \big \} \Big ). \end{align} \] Clearly enough, we'll need to use the value of \(f(9)\) several times.
One of the most important aspects of optimizing our algorithms is that we do not recompute these values. To do this, we compute and store all the values of \(f\) from 1 onwards for potential future use.
The recursion has to bottom out somewhere, in other words, at a known value from which it can start.
For this problem, we need to take care of two things:
Zero : It is clear enough that \(f(0) = 0\) since we do not require any coins at all to make a stack amounting to 0.
Negative and Unreachable Values : One way of dealing with such values is to mark them with a sentinel value so that our code deals with them in a special way. A good choice of a sentinel is \(\infty\), since the minimum value between a reachable value and \(\infty\) could never be infinity.
The Algorithm
Let's sum up the ideas and see how we could implement this as an actual algorithm:
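One possible bottom-up implementation in Python (a sketch; the sentinel and names are illustrative):

```python
def coins_change_tabular(V, v):
    """Compute f(0), f(1), ..., f(V) in order, reusing stored values."""
    INF = float("inf")               # sentinel for unreachable amounts
    f = [INF] * (V + 1)
    f[0] = 0                         # base case: no coins make 0
    for amount in range(1, V + 1):
        for vi in v:
            if vi <= amount:
                f[amount] = min(f[amount], 1 + f[amount - vi])
    return f[V]

print(coins_change_tabular(11, [1, 2, 5]))  # 3
```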
We have claimed that naive recursion is a bad way to solve problems with overlapping subproblems. Why is that? Mainly because of all the recomputations involved.
Another way to avoid this problem is to compute each value the first time it is needed and store it as we go, in a top-down fashion.
Let's look at how one could potentially solve the previous coin change problem in the memoization way. 1 2 3 4 5 6 7 8 9 10 11 12 def coinsChange ( V , v ): memo = {} def Change ( V ): if V in memo : return memo [ V ] if V == 0 : return 0 if V < 0 : return float ( "inf" ) memo [ V ] = min ([ 1 + Change ( V - vi ) for vi in v ]) return memo [ V ] return Change ( V )
Dynamic Programming vs Recursion with Caching
There are \(k\) types of brackets each with its own opening bracket and closing bracket. We assume that the first pair is denoted by the numbers 1 and \(k+1,\) the second by 2 and \(k+2,\) and so on. Thus the opening brackets are denoted by \(1, 2, \ldots, k,\) and the corresponding closing brackets are denoted by \(k+1, k+2, \ldots, 2k,\) respectively.
Some sequences with elements from \(1, 2, \ldots, 2k\) form well-bracketed sequences while others don't. A sequence is well-bracketed if we can match or pair up opening brackets of the same type in such a way that the following holds:
- Every bracket is paired up.
- In each matched pair, the opening bracket occurs before the closing bracket.
- For a matched pair, any other matched pair lies either completely between them or outside them.
In this problem, you are given a sequence of brackets of length \(N\): \(B[1], \ldots, B[N]\), where each \(B[i]\) is one of the brackets. You are also given an array of Values: \(V[1],\ldots, V[N] \).
Among all the subsequences in the Values array, such that the corresponding bracket subsequence in the B Array is a well-bracketed sequence, you need to find the maximum sum.
Task: Solve the above problem for this input.
Input Format
One line, which contains \((2\times N + 2)\) space separate integers. The first integer denotes \(N.\) The next integer is \(k.\) The next \(N\) integers are \(V[1],..., V[N].\) The last \(N\) integers are \(B[1],..., B[N].\)
Constraints
- \(1 \leq k \leq 7\)
- \(-10^6 \leq V[i] \leq 10^6\), for all \(i\)
- \(1 \leq B[i] \leq 2k\), for all \(i\)
Illustrated Examples
For the examples discussed here, let us assume that \(k = 2\). The sequence 1, 1, 3 is not well-bracketed as one of the two 1's cannot be paired. The sequence 3, 1, 3, 1 is not well-bracketed as there is no way to match the second 1 to a closing bracket occurring after it. The sequence 1, 2, 3, 4 is not well-bracketed as the matched pair 2, 4 is neither completely between the matched pair 1, 3 nor completely outside of it. That is, the matched pairs cannot overlap. The sequence 1, 2, 4, 3, 1, 3 is well-bracketed. We match the first 1 with the first 3, the 2 with the 4, and the second 1 with the second 3, satisfying all the 3 conditions. If you rewrite these sequences using [, {, ], } instead of 1, 2, 3, 4 respectively, this will be quite clear.
Suppose \(N = 6, k = 3,\) and the values of \(V\) and \(B\) are as follows:

| Position | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| V | 4 | 5 | -2 | 1 | 1 | 6 |
| B | 1 | 3 | 4 | 2 | 5 | 6 |

Then, the brackets in positions 1, 3 form a well-bracketed sequence (1, 4) and the sum of the values in these positions is 2 (4 + (-2) = 2). The brackets in positions 1, 3, 4, 5 form a well-bracketed sequence (1, 4, 2, 5) and the sum of the values in these positions is 4. Finally, the brackets in positions 2, 4, 5, 6 form a well-bracketed sequence (3, 2, 5, 6) and the sum of the values in these positions is 13. The sum of the values in positions 1, 2, 5, 6 is 16 but the brackets in these positions (1, 3, 5, 6) do not form a well-bracketed sequence. You can check the best sum from positions whose brackets form a well-bracketed sequence is 13.
We'll try to solve this problem with the help of a dynamic program, in which the state , or the parameters that describe the problem, consist of two variables.
First, we set up a two-dimensional array dp[start][end] where each entry solves the indicated problem for the part of the sequence between start and end inclusive.
We'll try to think what happens when we run across a new end value, and need to solve the new problem in terms of the previously solved subproblems. Here are all the possibilities:
- When end <= start , there are no valid subsequences.
- When b[end] <= k , i.e., the last entry is an open bracket, no valid subsequence can end with it. Effectively, the result is the same as if we hadn't included the last entry at all.
- When b[end] > k , i.e., the last entry is a closing bracket, one has to find the best match for it, or simply ignore it, whichever maximizes the sum.
Can you use these ideas to solve the problem?
Very often, dynamic programming helps solve problems that ask us to find the most profitable (or least costly) path in an implicit graph setting. Let us try to illustrate this with an example.
You are supposed to start at the top of a number triangle and choose your path all the way down by selecting between the numbers below you to the immediate left or right. Your goal is to maximize the sum of the elements lying in your path. For example, in the triangle below, the red path maximizes the sum.
To see the optimal substructures and the overlapping subproblems, notice that every time we make a move from the top to the bottom right or the bottom left, we are still left with a smaller number triangle, much like this:
We could break each of the sub-problems in a similar way until we reach an edge-case at the bottom:
In this case, the solution is a + max(b,c) .
A bottom-up dynamic programming solution is to allocate a number triangle that stores the maximum reachable sum if we were to start from that position. It is easy to compute the number triangles from the bottom row onward using the fact that
\[\text{best from this point} = \text{this point} + \max(\text{best from the left, best from the right}).\]
Let me demonstrate this principle through the iterations, working up from the bottom row.

Iteration 1:

8 5 9 3

Iteration 2:

10 13 15
8 5 9 3

Iteration 3:

20 19
10 13 15
8 5 9 3

Iteration 4:

23
20 19
10 13 15
8 5 9 3

So, the effective best we could do from the top is 23, which is our answer.
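In Python, this bottom-up pass might look like the following sketch (the triangle literal is reconstructed from the iterations above):

```python
def best_path_sum(triangle):
    """Replace each entry, bottom row upward, with the best reachable sum."""
    best = triangle[-1][:]                     # start from the bottom row
    for row in reversed(triangle[:-1]):
        best = [row[i] + max(best[i], best[i + 1]) for i in range(len(row))]
    return best[0]

triangle = [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]
print(best_path_sum(triangle))  # 23
```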
Dynamic Programming Made Easy
Dynamic Programming is an approach where the main problem is divided into smaller sub-problems, but these sub-problems are not solved independently.
For a problem to be solved using dynamic programming, the sub-problems must be overlapping. This means that two or more sub-problems will evaluate to give the same result.
So, we use the memoization technique to recall the result of the already solved sub-problems for future use. We then use cache storage to store this result, which is used when a similar sub-problem is encountered in the future.
Now let's look at this topic in more depth.
What is memoization?
Memoization is the technique of memorizing the results of certain specific states, which can then be accessed to solve similar sub-problems. In other words, it is a specific form of caching.
The results already computed are stored, generally in a hashmap. This decreases the run time significantly and also leads to less complicated code.
But we know that any benefit comes at the cost of something. So, when we use dynamic programming, the time complexity decreases while space complexity increases.
Different approaches in DP
In dynamic programming, we can either use a top-down approach or a bottom-up approach.
The top-down approach involves solving the problem in a straightforward manner and checking if we have already calculated the solution to the sub-problem.
This approach includes recursive calls (repeated calls of the same function). It builds up a call stack, which leads to memory costs. It is also vulnerable to stack overflow errors.
The bottom-up approach includes first looking at the smaller sub-problems, and then solving the larger sub-problems using the solution to the smaller problems.
This approach avoids memory costs that result from recursion.
But both the top-down approach and the bottom-up approach in dynamic programming have the same asymptotic time and space complexity. So in the end, using either of these approaches does not make much difference.
Just a quick note: dynamic programming is not an algorithm. But I have seen some people confuse it as an algorithm (including myself at the beginning).
It is used only when we have an overlapping sub-problem or when extensive recursion calls are required. It is a way to improve the performance of existing slow algorithms.
This means, also, that the time and space complexity of dynamic programming varies according to the problem.
Dynamic Programming Example
Now let us solve a problem to get a better understanding of how dynamic programming actually works.
Consider the problem of finding the longest common sub-sequence from the given two sequences.
⇒ ‘gtcab’ and ‘gxtxab’
We can solve this problem using a naive approach, by generating all the sub-sequences of both strings and then finding the longest common sub-sequence among them.
But the time complexity of this solution grows exponentially as the length of the input continues increasing.
So, how do we know that this problem can be solved using dynamic programming?
For the two strings we have taken, we use the below process to calculate the longest common sub-sequence (LCS).
As we can see, here we divide the main problem into smaller sub-problems. Let us check if any sub-problem is being repeated here.

We can see here that two sub-problems are overlapping when we divide the problem at two levels.
If we further go on dividing the tree, we can see many more sub-problems that overlap. So we conclude that this can be solved using dynamic programming.
Next, let us look at the general approach through which we can find the longest common sub-sequence (LCS) using dynamic programming.
How to fill the matrix?
We will use the matrix method to understand the logic of solving the longest common sub-sequence using dynamic programming.
Here we will only discuss how to solve this problem – that is, the algorithm part. And for that we use the matrix method.
Look at the below matrix. We have filled the first row with the first sequence and the first column with the second sequence.
Then we populated the second row and the second column with zeros for the algorithm to start. We denote the rows with ‘i’ and columns with ‘j’ .

Now we move on to fill the cells of the matrix. Compare the two sequences until the particular cell where we are about to make the entry.
The count of the common sub-sequence carries over unchanged until the last characters of the two prefixes under comparison become equal.
If the last characters of the prefixes we are comparing are not equal, then the entry will be the maximum of the entry in the column to its left and the entry in the row above it.
When the last characters of both prefixes are equal, the entry is filled by incrementing the upper-left diagonal entry of that particular cell by 1.
The logic we use here to fill the matrix is given below:
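A sketch of that fill rule in Python (the function name is illustrative):

```python
def lcs_length(a, b):
    """Fill the (len(a)+1) x (len(b)+1) matrix described above."""
    m = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                m[i][j] = m[i - 1][j - 1] + 1            # diagonal + 1 on a match
            else:
                m[i][j] = max(m[i - 1][j], m[i][j - 1])  # max of top and left
    return m[len(a)][len(b)]

print(lcs_length("gtcab", "gxtxab"))  # 4, corresponding to 'gtab'
```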
The bottom right entry of the whole matrix gives us the length of the longest common sub-sequence.
Finding the longest common sub-sequence
In order to get the longest common sub-sequence, we have to traverse from the bottom right corner of the matrix. Then we check from where the particular entry is coming.
That is, we check whether the entry is the maximum of its left and top entries, or whether it is the upper-left diagonal entry incremented by 1.
We repeat this process until we reach the top left corner of the matrix. The sub-sequence we get by combining the path we traverse (only consider those characters where the arrow moves diagonally) will be in the reverse order.
We have to reverse this obtained sequence to get the correct longest common sub-sequence. So in this particular example, the longest common sub-sequence is ‘gtab’ .
I have made a detailed video on how we fill the matrix so that you can get a better understanding. You can find it here: Video Explanation .
What did we learn?
In this article, we learned what dynamic programming is and how to identify if a problem can be solved using dynamic programming.
Then we went on to study the complexity of a dynamic programming problem.
Next we learned how we can solve the longest common sub-sequence problem using dynamic programming.
I hope you enjoyed it and learned something useful from this article.
If you found this post helpful, please share it. If you have any feedback, feel free to contact me on Twitter .

The Ultimate Guide to Dynamic Programming
Written By Sam Gavis-Hughson

Did you feel a little shiver when you read that?
Imagine it again with those spooky Goosebumps letters.
Dynamic programming.
When I talk to students of mine over at Byte by Byte , nothing quite strikes fear into their hearts like dynamic programming.
And I can totally understand why. Dynamic programming (DP) is as hard as it is counterintuitive. Most of us learn by looking for patterns among different problems. But with dynamic programming, it can be really hard to actually find the similarities.
Even though the problems all use the same technique, they look completely different.
However, there is a way to understand dynamic programming problems and solve them with ease. And in this post I’m going to show you how to do just that.
What Is Dynamic Programming?
Before we get into all the details of how to solve dynamic programming problems, it’s key that we answer the most fundamental question:
What is dynamic programming?
Simply put, dynamic programming is an optimization technique that we can use to solve problems where the same work is being repeated over and over. You know how a web server may use caching? Dynamic programming is basically that.
However, dynamic programming doesn’t work for every problem. There are a lot of cases in which dynamic programming simply won’t help us improve the runtime of a problem at all. If we aren’t doing repeated work, then no amount of caching will make any difference.
To determine whether we can optimize a problem using dynamic programming, we can look at both formal criteria of DP problems. I’ll also give you a shortcut in a second that will make these problems much quicker to identify.
Let’s start with the formal definition.
A problem can be optimized using dynamic programming if it:
- has an optimal substructure.
- has overlapping subproblems.
If a problem meets those two criteria, then we know for a fact that it can be optimized using dynamic programming.
Optimal Substructure
Optimal substructure is a core property not just of dynamic programming problems but also of recursion in general. If a problem can be solved recursively, chances are it has an optimal substructure.
Optimal substructure simply means that you can find the optimal solution to a problem by considering the optimal solution to its subproblems.
For example, if we are looking for the shortest path in a graph, knowing the partial path to the end (the bold squiggly line in the image below), we can compute the shortest path from the start to the end, without knowing any details about the squiggly path.

What might be an example of a problem without optimal substructure?
Consider finding the cheapest flight between two airports. According to Wikipedia :
“Using online flight search, we will frequently find that the cheapest flight from airport A to airport B involves a single connection through airport C, but the cheapest flight from airport A to airport C involves a connection through some other airport D.”
While there is some nuance here, we can generally assume that any problem that we solve recursively will have an optimal substructure.
Overlapping Subproblems
Overlapping subproblems is the second key property that our problem must have to allow us to optimize using dynamic programming. Simply put, having overlapping subproblems means we are computing the same problem more than once.
Imagine you have a server that caches images. If the same image gets requested over and over again, you’ll save a ton of time. However, if no one ever requests the same image more than once, what was the benefit of caching them?
This is exactly what happens here. If we don’t have overlapping subproblems, there is nothing to stop us from caching values. It just won’t actually improve our runtime at all. All it will do is create more work for us.
For an example of overlapping subproblems, consider the Fibonacci problem. Here is a tree of all the recursive calls required to compute the fifth Fibonacci number:

Notice how we see repeated values in the tree. The number 3 is repeated twice, 2 is repeated three times, and 1 is repeated five times. Each of those repeats is an overlapping subproblem. There is no need for us to compute those subproblems multiple times because the value won’t change. If we cache it, we can save ourselves a lot of work.
When Should I Use Dynamic Programming?
To be absolutely certain that we can solve a problem using dynamic programming , it is critical that we test for optimal substructure and overlapping subproblems. Without those, we can’t use dynamic programming.
However, we can use heuristics to guess pretty accurately whether or not we should even consider using DP. This quick question can save us a ton of time.
All we have to ask is: Can this problem be solved by solving a combination problem?
Consider a few examples:
- Find the smallest number of coins required to make a specific amount of change . Look at all combinations of coins that add up to an amount, and count the fewest number.
- Find the most value of items that can fit in your knapsack . Find all combinations of items and determine the highest value combination.
- Find the number of different paths to the top of a staircase . Enumerate all the combinations of steps.
While this heuristic doesn’t account for all dynamic programming problems, it does give you a quick way to gut-check a problem and decide whether you want to go deeper.
Solve Any DP Problem Using the FAST Method
After seeing many of my students from Byte by Byte struggling so much with dynamic programming, I realized we had to do something. There had to be a system for these students to follow that would help them solve these problems consistently and without stress.
It was this mission that gave rise to The FAST Method .
The FAST Method is a technique that has been pioneered and tested over the last several years. As I write this, more than 8,000 of our students have downloaded our free e-book and learned to master dynamic programming using The FAST Method.
So how does it work?
FAST is an acronym that stands for Find the first solution, Analyze the solution, identify the Subproblems, and Turn around the solution. Let’s break down each of these steps.
Find the First Solution
The first step to solving any dynamic programming problem using The FAST Method is to find the initial brute force recursive solution.
Your goal with Step One is to solve the problem without concern for efficiency. We just want to get a solution down on the whiteboard. This gives us a starting point (I’ve discussed this in much more detail here ).
There are a couple of restrictions on how this brute force solution should look:
- Each recursive call must be self-contained . If you are storing your result by updating some global variable, then it will be impossible for us to use any sort of caching effectively. We want to have a result that is completely dependent on the inputs of the function and not affected by any outside factors.
- Remove unnecessary variables. The fewer variables you pass into your recursive function, the better. We will be saving our cached values based on the inputs to the function, so it will be a pain if we have extraneous variables.
Let’s consider two examples here. We’ll use these examples to demonstrate each step along the way.
The first problem we’re going to look at is the Fibonacci problem. In this problem, we want to simply identify the n-th Fibonacci number. Recursively we can do that as follows:
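A sketch of that recursion (rendered here in Python):

```python
def fib(n):
    # Self-contained: the result depends only on the input n
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```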
It is important to notice here how each result of fib(n) is 100 percent dependent on the value of “n.” We have to be careful to write our function in this way. For example, while the following code works, it would NOT allow us to do DP.
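A sketch of the problematic version the next paragraph describes, with result scoped outside fibInner() (reconstructed for illustration):

```python
result = 0  # global state: the recursion is no longer self-contained

def fibInner(n):
    global result
    if n < 2:
        result += n       # accumulate base-case values into the global
        return
    fibInner(n - 1)
    fibInner(n - 2)

def fib(n):
    global result
    result = 0
    fibInner(n)
    return result
```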
While this may seem like a toy example, it is really important to understand the difference here. This second version of the function is reliant on result to compute the result of the function and result is scoped outside of the fibInner() function.
You can learn more about the difference here.
The second problem that we’ll look at is one of the most popular dynamic programming problems: 0-1 Knapsack Problem. For this problem, we are given a list of items that have weights and values, as well as a max allowable weight. We want to determine the maximum value that we can get without exceeding the maximum weight.
Here is our brute force recursive code:
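A sketch of that brute force recursion in Python (parameter names follow the knapsack(maxWeight, index) definition used later in this post):

```python
def knapsack(maxWeight, index, weights, values):
    """Best value under maxWeight, considering items from index onward."""
    if index >= len(weights):
        return 0
    # Option 1: skip the item at this index
    best = knapsack(maxWeight, index + 1, weights, values)
    # Option 2: include it, if it fits
    if weights[index] <= maxWeight:
        best = max(best, values[index]
                   + knapsack(maxWeight - weights[index], index + 1, weights, values))
    return best
```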
With these brute force solutions, we can move on to the next step of The FAST Method.
Note: I’ve found that many people find this step difficult. It is way too large a topic to cover here, so if you struggle with recursion, I recommend checking out this monster post on Byte by Byte .
Analyze the First Solution
Now that we have our brute force solution, the next step in The FAST Method is to analyze the solution. In this step, we are looking at the runtime of our solution to see if it is worth trying to use dynamic programming and then considering whether we can use it for this problem at all.
We will start with a look at the time and space complexity of our problem and then jump right into an analysis of whether we have optimal substructure and overlapping subproblems. Remember that those are required for us to be able to use dynamic programming.
Let’s go back to our Fibonacci example.
Example 1 (Fibonacci):
For this problem, our code was nice and simple, but unfortunately our time complexity sucks. The easiest way to get a handle on what is going on in your code is to sketch out the recursive tree. Here’s the tree for fib(4) :

What we immediately notice here is that we essentially get a tree of height n . Yes, some of the branches are a bit shorter, but our Big Oh complexity is an upper bound. Therefore, to compute the time complexity, we can simply estimate the number of nodes in the tree.
For any tree, we can estimate the number of nodes as branching_factor^height, where the branching factor is the maximum number of children that any node in the tree has. That gives us a pretty terrible runtime of O(2^n).
So it would be nice if we could optimize this code, and if we have optimal substructure and overlapping subproblems, we could do just that.
In this case, we have a recursive solution that pretty much guarantees that we have an optimal substructure. We are literally solving the problem by solving some of its subproblems.
We also can see clearly from the tree diagram that we have overlapping subproblems. Notice fib(2) getting called two separate times? That’s an overlapping subproblem. If we drew a bigger tree, we would find even more overlapping subproblems.
Given that we have found this solution to have an exponential runtime and it meets the requirements for dynamic programming, this problem is clearly a prime candidate for us to optimize.
Example 2 (0-1 Knapsack):
The code for this problem was a little bit more complicated, so drawing it out becomes even more important. When we sketch out an example, it gives us much more clarity on what is happening ( see my process for sketching out solutions ).
Here’s what our tree might look like for the following inputs:

Note the two values passed into the function in this diagram are the maxWeight and the current index in our items list.
So with our tree sketched out, let’s start with the time complexity. Similar to our Fibonacci problem, we see that we have a branching tree of recursive calls where our branching factor is 2. This gives us a time complexity of O(2^n).
Do we have optimal substructure? Yep. Again, the recursion basically tells us all we need to know on that count.
And overlapping subproblems? Since we’ve sketched it out, we can see that knapsack(3, 2) is getting called twice, which is clearly an overlapping subproblem.
This also looks like a good candidate for DP.
Identify the Subproblems
The third step of The FAST Method is to identify the subproblems that we are solving. This is where we really get into the meat of optimizing our code.
We are going to start by defining in plain English what exactly our subproblem is. I’m always shocked at how many people can write the recursive code but don’t really understand what their code is doing. Understanding is critical.
Once we understand the subproblems, we can implement a cache that will memoize the results of our subproblems, giving us a top-down dynamic programming solution.
Memoization is simply the strategy of caching the results. We can use an array or map to save the values that we’ve already computed to easily look them up later.
We call this a top-down dynamic programming solution because we are solving it recursively. Essentially we are starting at the “top” and recursively breaking the problem into smaller and smaller chunks. This is in contrast to bottom-up , or tabular , dynamic programming, which we will see in the last step of The FAST Method.
So what is our subproblem here? This problem is quite easy to understand because fib(n) is simply the nth Fibonacci number. Since our result is only dependent on a single variable, n , it is easy for us to memoize based on that single variable.
Consider the code below. By adding a simple array, we can memoize our results. Notice the differences between this code and our code above:
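A sketch of the memoized version (the dp array is the cache):

```python
def fib(n, dp=None):
    if dp is None:
        dp = [None] * (n + 1)    # memoize on the single variable n
    if n < 2:
        return n
    if dp[n] is None:            # compute each subproblem at most once
        dp[n] = fib(n - 1, dp) + fib(n - 2, dp)
    return dp[n]
```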
See how little we actually need to change? Once we understand our subproblem, we know exactly what value we need to cache.
Now that we have our top-down solution, we do also want to look at the complexity. In this case, our code has been reduced to O(n) time complexity. We can pretty easily see this because each value in our dp array is computed once and referenced some constant number of times after that. This is much better than our previous exponential solution.
This problem starts to demonstrate the power of truly understanding the subproblems that we are solving. Specifically, not only does knapsack() take in a weight, it also takes in an index as an argument. So if you call knapsack(4, 2) what does that actually mean? What is the result that we expect? Well, if you look at the code, we can formulate a plain English definition of the function:
Here, “ knapsack(maxWeight, index) returns the maximum value that we can generate under a current weight only considering the items from index to the end of the list of items.”
With this definition, it makes it easy for us to rewrite our function to cache the results (and in the next section, these definitions will become invaluable):
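A sketch of the cached version (same recursion as before, with a cache keyed on the two inputs):

```python
def knapsack(maxWeight, index, weights, values, cache=None):
    if cache is None:
        cache = {}
    if index >= len(weights):
        return 0
    if (maxWeight, index) in cache:      # check the cache before computing
        return cache[(maxWeight, index)]
    best = knapsack(maxWeight, index + 1, weights, values, cache)
    if weights[index] <= maxWeight:
        best = max(best, values[index]
                   + knapsack(maxWeight - weights[index], index + 1,
                              weights, values, cache))
    cache[(maxWeight, index)] = best     # store the result for reuse
    return best
```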
Again, we can see that very little change to our original code is required. All we are doing is adding a cache that we check before computing any function. If the value in the cache has been set, then we can return that value without recomputing it.
In terms of the time complexity here, we can turn to the size of our cache. Each value in the cache gets computed at most once, giving us a complexity of O(n*W) .
Turn Around the Solution
The final step of The FAST Method is to take our top-down solution and “turn it around” into a bottom-up solution. This is an optional step, since the top-down and bottom-up solutions will be equivalent in terms of their complexity. However, many prefer bottom-up due to the fact that iterative code tends to run faster than recursive code.
With this step, we are essentially going to invert our top-down solution. Instead of starting with the goal and breaking it down into smaller subproblems, we will start with the smallest version of the subproblem and then build up larger and larger subproblems until we reach our target. This is where the definition from the previous step will come in handy.
To start, let’s recall our subproblem: fib(n) is the nth Fibonacci number.
Remember that we’re going to want to compute the smallest version of our subproblem first. That would be our base cases: in this case, n = 0 and n = 1.
Once we have that, we can compute the next biggest subproblem. To get fib(2) , we just look at the subproblems we’ve already computed. Once that’s computed we can compute fib(3) and so on.
So how do we write the code for this? Well, our cache is going to look identical to how it did in the previous step; we’re just going to fill it in from the smallest subproblems to the largest, which we can do iteratively.
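Sketched in C++, the bottom-up version might look like this:

```cpp
#include <vector>

// Sketch: the same dp table as the memoized version, filled iteratively
// from the base cases up to n.
long long fibBottomUp(int n) {
    if (n < 2) return n;
    std::vector<long long> dp(n + 1);
    dp[0] = 0;                          // smallest subproblems first
    dp[1] = 1;
    for (int i = 2; i <= n; ++i)
        dp[i] = dp[i - 1] + dp[i - 2];  // each entry uses already-solved subproblems
    return dp[n];
}
```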
Easy peasy lemon squeezy.
Another nice perk of this bottom-up solution is that it is super easy to compute the time complexity. Unlike recursion, with basic iterative code it’s easy to see what’s going on.
And that’s all there is to it. One note with this problem (and some other DP problems) is that we can further optimize the space complexity, but that is outside the scope of this post.
As is becoming a bit of a trend, this problem is much more difficult. However, you now have all the tools you need to solve the Knapsack problem bottom-up.
Recall our subproblem definition: “knapsack(maxWeight, index) returns the maximum value that we can generate under a current weight, only considering the items from index to the end of the list of items.”
To make things a little easier for our bottom-up purposes, we can invert the definition so that rather than looking from the index to the end of the array, our subproblem can solve for the array up to, but not including, the index.
With this, we can start to fill in our base cases. We’ll start by initializing our dp array. Whenever the max weight is 0, knapsack(0, index) has to be 0. Referring back to our subproblem definition, that makes sense. If the weight is 0, then we can’t include any items, and so the value must be 0.
The same holds if index is 0. Since we define our subproblem as the value for all items up to, but not including, the index, if index is 0 we are also including 0 items, which has 0 value.
From there, we can iteratively compute larger subproblems, ultimately reaching our target:
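A sketch of that bottom-up table (here dp[w][i] means "best value with capacity w using only the first i items", matching the inverted definition above; names are illustrative):

```cpp
#include <vector>
#include <algorithm>

// Sketch: dp[w][i] = best value with weight limit w using items 0..i-1.
int knapsackBottomUp(const std::vector<int>& weights,
                     const std::vector<int>& values, int maxWeight) {
    int n = weights.size();
    std::vector<std::vector<int>> dp(maxWeight + 1, std::vector<int>(n + 1, 0));
    // Base cases dp[0][i] and dp[w][0] are already 0 from the initialization.
    for (int w = 1; w <= maxWeight; ++w)
        for (int i = 1; i <= n; ++i) {
            dp[w][i] = dp[w][i - 1];     // skip item i-1
            if (weights[i - 1] <= w)     // or take it, if it fits
                dp[w][i] = std::max(dp[w][i],
                                    values[i - 1] + dp[w - weights[i - 1]][i - 1]);
        }
    return dp[maxWeight][n];
}
```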
Again, once we solve our solution bottom-up, the time complexity becomes very easy because we have a simple nested for loop.
Dynamic Programming Doesn’t Have to Be Hard!
While dynamic programming seems like a scary and counterintuitive topic, it doesn’t have to be. By applying structure to your solutions, such as with The FAST Method, it is possible to solve any of these problems in a systematic way.
Interviewers love to test candidates on dynamic programming because it is perceived as such a difficult topic, but there is no need to be nervous. Follow the steps and you’ll do great.
If you want to learn more about The FAST Method, check out my free e-book, Dynamic Programming for Interviews.

An Improved Differential Dynamic Programming Approach for Computational Guidance
Xiaobo Zheng, Shaoming He & Defu Lin
Chapter in: Control of Autonomous Aerial Vehicles, pp 77–103. Part of the Advances in Industrial Control book series (AIC). First Online: 21 November 2023.
Differential dynamic programming (DDP) is a well-recognized method for computational guidance due to its fast convergence characteristics. However, the original DDP requires a predefined final time and cannot handle nonlinear constraints in optimization. This prohibits the application of DDP to autonomous vehicles due to the heuristic nature of setting a final time beforehand and the existence of inherent physical limits. This chapter revisits DDP by dynamically optimizing the final time via the first-order optimality condition of the value function and using the augmented Lagrangian method to tackle nonlinear constraints. The resultant algorithm is termed flexible final time-constrained differential dynamic programming (FFT-CDDP). Extensive numerical simulations for a three-dimensional guidance problem are used to demonstrate the working of FFT-CDDP. The results indicate that the proposed FFT-CDDP provides much higher computational efficiency and stronger robustness against the initial solution guess, compared with the commercial-off-the-shelf GPOPS toolbox.
Cite this chapter: Zheng, X., He, S., Lin, D. (2024). An Improved Differential Dynamic Programming Approach for Computational Guidance. In: L'Afflitto, A., Inalhan, G., Shin, HS. (eds) Control of Autonomous Aerial Vehicles. Advances in Industrial Control. Springer, Cham. https://doi.org/10.1007/978-3-031-39767-7_4
Master Dynamic Programming: The Simplified Beginner's Guide

Vibha Gupta
Technical Content Writer at almaBetter
Dynamic Programming is a powerful technique used to solve complex optimization problems. It is widely used in computer science, mathematics, and engineering to find the most efficient solution to a problem by breaking it down into smaller subproblems. This guide will provide a comprehensive overview of the Dynamic Programming algorithm, explaining its concept, steps to solve problems using this technique, examples of Dynamic Programming problems, techniques to optimize solutions, and common mistakes to avoid.

Understanding the Concept of Optimization
Optimization is at the core of Dynamic Programming. It involves finding the best solution among a set of possible solutions. In the context of Dynamic Programming, optimization means finding the most efficient way to solve a problem. This can be achieved by breaking down the problem into smaller overlapping subproblems and solving them recursively.
Dynamic Programming utilizes the concept of memoization, which involves storing the solutions to subproblems in a table to avoid redundant computations. By reusing these stored solutions, Dynamic Programming significantly improves the efficiency of solving problems compared to traditional brute-force methods.
Steps to Solve Problems Using Dynamic Programming
To solve a problem using Dynamic Programming, the following steps can be followed:
1. Define the problem: Clearly define the problem and the objective you want to achieve.
2. Identify the base case: Determine the simplest possible input for which the solution is already known.
3. Identify the recurrence relation: Break down the problem into smaller subproblems and find a relation between the original problem's solution and its subproblems' solutions.
4. Create a memoization table: Create a table to store the solutions of the subproblems.
5. Solve the subproblems: Use the recurrence relation to solve the subproblems recursively, starting from the base case and filling up the table with the solutions.
6. Build the final solution: Use the solutions stored in the memoization table to build the final solution to the original problem.
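As an illustration (this worked example is mine, not part of the steps above), here is how the six steps map onto a small problem: counting the ways to climb n stairs taking 1 or 2 steps at a time.

```cpp
#include <vector>

// Step 1 (define): ways(n) = number of distinct ways to reach stair n.
// Step 2 (base case): ways(0) = 1, ways(1) = 1.
// Step 3 (recurrence): ways(n) = ways(n - 1) + ways(n - 2).
// Steps 4-6: memoization table, recursive fill, read off the final answer.
long long ways(int n, std::vector<long long>& memo) {
    if (n <= 1) return 1;                             // step 2: base case
    if (memo[n] != -1) return memo[n];                // step 4: table lookup
    memo[n] = ways(n - 1, memo) + ways(n - 2, memo);  // steps 3 and 5
    return memo[n];
}

long long climbStairs(int n) {
    std::vector<long long> memo(n + 1, -1);
    return ways(n, memo);                             // step 6: final solution
}
```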
Dynamic Programming Examples
Dynamic Programming can be applied to a wide range of problems. Here are a few examples to illustrate its application:
1. Fibonacci sequence: The Fibonacci sequence is a classic example of a problem that can be efficiently solved using Dynamic Programming. By storing the solutions to the subproblems (calculating the Fibonacci numbers for smaller indices) in a table, the time complexity of finding the nth Fibonacci number can be reduced from exponential to linear.
2. Knapsack problem: The Knapsack problem involves selecting a combination of items with maximum value while staying within a given weight limit. Dynamic Programming can solve this problem by breaking it into subproblems of deciding on items one at a time and calculating the maximum value for different weight limits.
3. Longest common subsequence: Given two sequences, the longest common subsequence problem involves finding the longest subsequence that appears in both sequences in the same relative order. Dynamic Programming can be used to efficiently solve this problem by building a table of solutions for subproblems of smaller subsequence lengths.
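To make the third example concrete, here is one possible bottom-up implementation of the LCS table (a sketch; the function and variable names are illustrative):

```cpp
#include <string>
#include <vector>
#include <algorithm>

// Sketch: dp[i][j] = length of the longest common subsequence of the
// first i characters of a and the first j characters of b.
int longestCommonSubsequence(const std::string& a, const std::string& b) {
    int n = a.size(), m = b.size();
    std::vector<std::vector<int>> dp(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= m; ++j)
            dp[i][j] = (a[i - 1] == b[j - 1])
                           ? dp[i - 1][j - 1] + 1                  // extend the match
                           : std::max(dp[i - 1][j], dp[i][j - 1]); // drop one character
    return dp[n][m];
}
```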
Techniques to Optimize Dynamic Programming Solutions
While Dynamic Programming is already an optimization technique itself, there are additional techniques that can further optimize the solutions:
1. Bottom-up approach: Instead of solving the subproblems recursively, the bottom-up approach solves them iteratively, starting from the base case and building up the solutions in a table. This avoids the overhead of function calls and can be more efficient for specific problems.
2. Space optimization: In some cases, optimizing the space complexity of Dynamic Programming solutions by only storing the necessary information in the memoization table is possible. This can be achieved using variables or smaller arrays instead of full tables.
3. State compression: For problems with a large number of possible states, state compression techniques can be used to reduce the memory requirements of the memoization table. This involves finding patterns or properties of the states that allow for a more compact representation.
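For instance, technique 2 (space optimization) applies to the 0/1 knapsack: the 2-D table only ever reads the previous item-row, so a single 1-D array suffices. A sketch, assuming the usual weights/values formulation:

```cpp
#include <vector>
#include <algorithm>

// Sketch: dp[w] = best value at capacity w. Iterating w in descending
// order guarantees each item is used at most once per capacity.
int knapsack1D(const std::vector<int>& weights,
               const std::vector<int>& values, int maxWeight) {
    std::vector<int> dp(maxWeight + 1, 0);
    for (size_t i = 0; i < weights.size(); ++i)
        for (int w = maxWeight; w >= weights[i]; --w)
            dp[w] = std::max(dp[w], values[i] + dp[w - weights[i]]);
    return dp[maxWeight];
}
```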
Common Mistakes to Avoid in Dynamic Programming
While Dynamic Programming is a powerful technique, there are some common mistakes that beginners may fall into. Here are a few to avoid:
1. Overcomplicating the problem: Sometimes, a problem can be solved using a simpler approach rather than resorting to Dynamic Programming. It is crucial to analyze the problem and explore alternative solutions before diving into Dynamic Programming.
2. Ignoring the base case: The base case is crucial in Dynamic Programming as it provides the starting point for solving the subproblems. Ignoring or incorrectly defining the base case can lead to incorrect solutions or infinite loops.
3. Not using memoization: Forgetting to use memoization or not correctly updating the memoization table can result in redundant computations and inefficient solutions. Always ensure that the solutions to subproblems are stored and reused whenever possible.
Dynamic Programming is a powerful technique that efficiently solves complex optimization problems. By breaking down problems into smaller overlapping subproblems and utilizing memoization, Dynamic Programming provides a systematic approach to finding the most efficient solution. Understanding the concept of optimization, following the steps to solve problems using Dynamic Programming, learning from examples, and employing optimization techniques can help master this technique. By avoiding common mistakes and continuously practicing, one can proficiently solve complex problems using Dynamic Programming.
Remember, practice and application are key to mastering Dynamic Programming. So, start exploring different problem domains, implement Dynamic Programming solutions, and experience the power of optimization firsthand.
What is Dynamic Programming?
Dynamic Programming is mainly an optimization over plain recursion . Wherever we see a recursive solution that has repeated calls for same inputs, we can optimize it using Dynamic Programming. The idea is to simply store the results of subproblems, so that we do not have to re-compute them when needed later. This simple optimization reduces time complexities from exponential to polynomial.
For example, if we write a simple recursive solution for Fibonacci numbers, we get exponential time complexity, but if we optimize it by storing the solutions to subproblems, the time complexity reduces to linear.

Basic Concepts:
- What is memoization? A Complete tutorial
- Introduction to Dynamic Programming – Data Structures and Algorithm Tutorials
- Tabulation vs Memoization
- Optimal Substructure Property
- Overlapping Subproblems Property
- How to solve a Dynamic Programming Problem ?
Advanced Concepts:
- Bitmasking and Dynamic Programming | Set 1
Standard problems on Dynamic Programming:
- Fibonacci numbers
- nth Catalan Number
- Bell Numbers (Number of ways to Partition a Set)
- Binomial Coefficient
- Coin change problem
- Subset Sum Problem
- Compute nCr % p
- Cutting a Rod
- Painting Fence Algorithm
- Longest Common Subsequence
- Longest Increasing Subsequence
- Longest subsequence such that difference between adjacents is one
- Maximum size square sub-matrix with all 1s
- Min Cost Path
- Minimum number of jumps to reach end
- Longest Common Substring (Space optimized DP solution)
- Count ways to reach the nth stair using step 1, 2 or 3
- Count all possible paths from top left to bottom right of a mXn matrix
- Unique paths in a Grid with Obstacles
- Floyd Warshall Algorithm
- Bellman–Ford Algorithm
- 0-1 Knapsack Problem
- Printing Items in 0/1 Knapsack
- Unbounded Knapsack (Repetition of items allowed)
- Egg Dropping Puzzle
- Word Break Problem
- Vertex Cover Problem
- Tile Stacking Problem
- Box-Stacking Problem
- Partition Problem
- Travelling Salesman Problem | Set 1 (Naive and Dynamic Programming)
- Longest Palindromic Subsequence
- Longest Common Increasing Subsequence (LCS + LIS)
- Find all distinct subset (or subsequence) sums of an array
- Weighted job scheduling
- Count Derangements (Permutation such that no element appears in its original position)
- Minimum insertions to form a palindrome
- Wildcard Pattern Matching
- Ways to arrange Balls such that adjacent balls are of different types
- Palindrome Partitioning
- Word Wrap Problem
- The painter’s partition problem
- Program for Bridge and Torch problem
- Matrix Chain Multiplication
- Printing brackets in Matrix Chain Multiplication Problem
- Maximum sum rectangle in a 2D matrix
- Maximum profit by buying and selling a share at most k times
- Minimum cost to sort strings using reversal operations of different costs
- Count of AP (Arithmetic Progression) Subsequences in an array
- Introduction to Dynamic Programming on Trees
- Maximum height of Tree when any Node can be considered as Root
- Longest repeating and non-overlapping substring

Posted on Jan 19, 2021 • Originally published at yourdevopsguy.com
6 Hard Dynamic Programming Problems Made Easy
In a previous article, I gave you an introduction to Dynamic Programming with several examples. Here I will solve 6 harder Dynamic Programming problems to show you how to approach them.
Unique Paths
A robot is located at the top-left corner of an m x n grid.
The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid.
How many possible unique paths are there?
Given that the robot can only move to the right or down, we can reach the position (m, n) (the bottom-right corner) in two different ways:
- Going right from (m, n-1)
- Going down from (m-1, n)
Therefore, the total number of paths to (m, n) will be the number of paths to (m, n-1) plus the number of paths to (m-1, n). These two new problems are just instances of the original problem, so we can use recursion to generate a solution.
Let's call f(m,n) the number of ways to reach the position (m, n).
Recursive solution
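A minimal sketch of the recursion (the function name is illustrative):

```cpp
// f(m, n) = number of unique paths to cell (m, n), counting from 1.
long long f(int m, int n) {
    if (m == 1 || n == 1) return 1;    // first row or column: a single straight path
    return f(m - 1, n) + f(m, n - 1);  // arrive from above or from the left
}
```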
The base case covers the number of paths to reach any position in the first column or row of the grid (we can only reach those positions by going all the way right or all the way down, therefore we return 1).
We can improve upon this if we notice that many subproblems will be computed more than once:
- To compute f(m, n) we need f(m-1, n) and f(m, n-1).
- For f(m-1, n) we need f(m-2, n) and f(m-1, n-1).
- For f(m, n-1) we need f(m, n-2) and f(m-1, n-1).
If you suspect a problem might be solved via Dynamic Programming, I recommend drawing a tree with all possible paths to see if there are repeated subproblems. If you can derive a recursion and prove that there are repeated subproblems and optimal substructure, you can apply Dynamic Programming.
Top-down Dynamic Programming
Going from the recursive solution to a top-down DP solution is very straightforward. We just need to:
- Check if the results are already in our cache. If so, we return them.
- If they're not, cache them before we return.
In the cache check, I write if (-1 == dp[n]) rather than if (dp[n] == -1) to avoid bugs like if (dp[n] = -1) : with the constant on the left, an accidental assignment fails to compile.
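A sketch of the top-down version under those conventions (the 2-D cache and names are my own choices):

```cpp
#include <vector>

// Sketch: dp[m][n] caches the number of paths to (m, n); -1 = not computed.
std::vector<std::vector<long long>> dp;

long long paths(int m, int n) {
    if (m == 1 || n == 1) return 1;
    if (-1 == dp[m][n])  // constant on the left, as discussed above
        dp[m][n] = paths(m - 1, n) + paths(m, n - 1);
    return dp[m][n];
}

long long uniquePaths(int m, int n) {
    dp.assign(m + 1, std::vector<long long>(n + 1, -1));
    return paths(m, n);
}
```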
Bottom-up Dynamic Programming
Now, let's take a different look at the problem. From (0,0) we can go right to (0,1), (0,2), etc. There is only one way to reach these positions. Similarly, there is only one way to reach all the positions in the first column: going down from (0,0). We can start building a table with this information, as the sketch below shows.
This approach builds the full m*n table with all possible results; we return the one we are interested in: the bottom-right position.
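A sketch of that table construction:

```cpp
#include <vector>

// Sketch: initialize the whole table to 1 (this covers the first row and
// column); every other cell is the sum of the cell above and the cell left.
long long uniquePathsBottomUp(int m, int n) {
    std::vector<std::vector<long long>> table(m, std::vector<long long>(n, 1));
    for (int i = 1; i < m; ++i)
        for (int j = 1; j < n; ++j)
            table[i][j] = table[i - 1][j] + table[i][j - 1];
    return table[m - 1][n - 1];  // the bottom-right position
}
```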
Computing the complexity of the bottom-up approach is trivial. There are two nested loops in which the amount of work is constant, giving an overall time complexity of O(mn). The space complexity is O(mn) as well.
For the top-down approach, it seems a bit trickier, but the logic is the same. We do constant work for every call (because we cache results) and there are O(mn) calls. Therefore, the time complexity is O(mn).
This problem can also be solved in O(m+n) using basic mathematics. You have m rows, n columns, and you can only move right or down. You can encode a path as a string using R for right and D for down. For m rows and n columns you need to generate a string of (m-1) + (n-1) characters, each of which is R or D; in this string there will be m-1 Ds and n-1 Rs. The problem is therefore equivalent to counting the distinct permutations of an array of m+n-2 elements, taking all the repetitions into account: (m+n-2)! / ((m-1)! * (n-1)!) = C(m+n-2, m-1).
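A sketch of this combinatorial solution, computing the binomial coefficient incrementally to avoid huge factorials:

```cpp
// Sketch: the answer is C(m+n-2, m-1), built up in O(m+n) multiplications.
long long uniquePathsMath(int m, int n) {
    long long result = 1;
    int a = m + n - 2;               // total moves
    int b = (m < n ? m : n) - 1;     // the smaller of m-1 and n-1
    for (int i = 1; i <= b; ++i)
        result = result * (a - b + i) / i;  // stays integral at every step
    return result;  // fits in long long for moderate grid sizes
}
```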
Unique Paths 2
A robot is located at the top-left corner of an m x n grid.
Now consider if some obstacles are added to the grids. How many unique paths would there be?
An obstacle and a free space are marked as 1 and 0, respectively, in the grid.
Try to solve this one without looking at my solution. The code (for any of the approaches I described for the previous problem) is almost the same. You only need to take into account that cells have values and some of them can be blocked. If a cell is blocked, the robot cannot visit it, and therefore there are 0 ways of reaching the cells to its right or below it through that cell.
To avoid duplication, here is the code only for the bottom-up approach.
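A sketch of that bottom-up solution (assuming the grid is given as a matrix of 0s and 1s):

```cpp
#include <vector>

// Sketch: same table as before, but a blocked cell contributes 0 paths.
long long uniquePathsWithObstacles(const std::vector<std::vector<int>>& grid) {
    int m = grid.size(), n = grid[0].size();
    std::vector<std::vector<long long>> dp(m, std::vector<long long>(n, 0));
    dp[0][0] = (grid[0][0] == 0) ? 1 : 0;  // start cell, unless blocked
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j) {
            if (grid[i][j] == 1) { dp[i][j] = 0; continue; }  // obstacle
            if (i > 0) dp[i][j] += dp[i - 1][j];  // from above
            if (j > 0) dp[i][j] += dp[i][j - 1];  // from the left
        }
    return dp[m - 1][n - 1];
}
```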
Coin Change
You are given coins of different denominations and a total amount of money amount . Write a function to compute the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return -1 .
You may assume that you have an infinite number of each kind of coin.
Imagine you need to solve this problem by hand. At every step, you have an amount of money and a few coins to choose from. Let's say the coins are 1,2 and 5, and the amount is 11. You can take three different paths:
- Take the coin with value 1. You now face the problem of computing the fewest number of coins that you need to get to 10.
- Choose the coin with value 2. You now face the problem of computing the fewest number of coins that you need to get to 9.
- Take the coin with value 5. You now face the problem of computing the fewest number of coins that you need to get to 6.
Which one will produce the desired output (minimum number of coins)? At this stage, we don't know. We need to explore each path in full and return the minimum.
It is pretty obvious now that we can use recursion to solve this. It should look something like this, with f(n) representing the minimum number of coins needed to reach the amount n:
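A sketch of that recursion (using INT_MAX to mark unreachable amounts):

```cpp
#include <vector>
#include <climits>
#include <algorithm>

// Sketch: f(n) = 1 + min over coins c of f(n - c); INT_MAX = unreachable.
int f(int n, const std::vector<int>& coins) {
    if (n == 0) return 0;  // no coins needed for amount 0
    int best = INT_MAX;
    for (int c : coins)
        if (c <= n) {
            int sub = f(n - c, coins);
            if (sub != INT_MAX) best = std::min(best, sub + 1);
        }
    return best;  // stays INT_MAX if no combination works
}
```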
From this it is trivial to get a top-down solution. You only need to add a cache, as we did in the previous problem:
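A sketch of the top-down version (the helper name and the -1 sentinel are my own choices):

```cpp
#include <vector>
#include <climits>
#include <algorithm>

// Sketch: same recursion plus a cache keyed on the amount (-1 = not computed).
int fMemo(int n, const std::vector<int>& coins, std::vector<int>& cache) {
    if (n == 0) return 0;
    if (cache[n] != -1) return cache[n];
    int best = INT_MAX;
    for (int c : coins)
        if (c <= n) {
            int sub = fMemo(n - c, coins, cache);
            if (sub != INT_MAX) best = std::min(best, sub + 1);
        }
    return cache[n] = best;
}

int coinChange(const std::vector<int>& coins, int amount) {
    std::vector<int> cache(amount + 1, -1);
    int best = fMemo(amount, coins, cache);
    return best == INT_MAX ? -1 : best;
}
```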
Bottom-up Dynamic Programming
The bottom-up solution takes a different path. The idea is to fill a table with the minimum amount of coins needed to generate the values [0, amount], starting from the "initial conditions":
- From 0, we can use a single coin to get to any of the values in the array coins, so those entries of the table start at 1.
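A sketch of this bottom-up table (the coin-value entries come out as 1 automatically, since sol[c] = sol[0] + 1; I use amount + 1 as "infinity"):

```cpp
#include <vector>
#include <algorithm>

// Sketch: sol[v] = fewest coins that make the amount v; sol[0] = 0.
int coinChangeBottomUp(const std::vector<int>& coins, int amount) {
    std::vector<int> sol(amount + 1, amount + 1);  // amount + 1 acts as infinity
    sol[0] = 0;
    for (int v = 1; v <= amount; ++v)
        for (int c : coins)
            if (c <= v)
                sol[v] = std::min(sol[v], sol[v - c] + 1);
    return sol[amount] > amount ? -1 : sol[amount];
}
```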
Important note:
You may have noticed that I have created arrays that seem to have an extra element. The entry sol[0] represents the problem of reaching the amount 0. This is why there seems to be an offset of 1 in the code (we return sol[amount] instead of sol[amount - 1] , because indices represent quantities). You will see this pattern in more places in this article.
Variation:
- How would you solve this problem if there wasn't an infinite number of each kind of coin?
Coin Change 2
You are given coins of different denominations and a total amount of money. Write a function to compute the number of combinations that make up that amount.
You may assume that you have an infinite number of each kind of coin.
Let's review our previous example: amount = 11 and coins = [1,2,5]. Again, we have the same three choices.
In this case, instead of taking the minimum of the three, we return the sum of the three paths. If we arrive at an instance where amount = 0, we return 1 (it is a solution we need to count). If amount is negative, that path doesn't lead to a solution, and we return 0.
Since this is just a variation on the previous problem, I will only write the C++ implementation of the bottom-up approach. You have the tools to generate recursive and top-down solutions.
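A sketch of that bottom-up implementation (looping over coins in the outer loop so each combination is counted exactly once, regardless of order):

```cpp
#include <vector>

// Sketch: ways[v] = number of coin combinations that make the amount v.
long long change(const std::vector<int>& coins, int amount) {
    std::vector<long long> ways(amount + 1, 0);
    ways[0] = 1;  // one way to make 0: take no coins
    for (int c : coins)
        for (int v = c; v <= amount; ++v)
            ways[v] += ways[v - c];
    return ways[amount];
}
```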
Edit Distance
Given two strings word1 and word2 , return the minimum number of operations required to convert word1 to word2 .
You have the following three operations permitted on a word:
- Insert a character
- Delete a character
- Replace a character
- Input: word1 = "horse", word2 = "ros"
- Output: 3
This problem seems tricky at first, so we need to be systematic and try to work our way into a solution from a good example. Let's take "horse" and "ros", and focus on the last character of each word:
- If "e" and "s" were the same, we could "forget about them" and keep working with "hors" and "ro"
- Replace "e" with "s" (or "s" with "e"), and now they're the same
- Delete "e" (or "s") and see and continue with the rest of the string "hors" and "ros" to see
- Insert a character ("s" or "e" respectively) to make the last characters equal
Each possibility will generate another subproblem of smaller size, therefore we can code this recursively. We need to find the minimum of these 3 branches and return it (plus 1, since taking any of these paths means performing at least one edit operation).
If you play around with this example you'll see that this problem has optimal substructure, so I'll just show you the code for the top-down solution here:
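A sketch of the top-down solution (the helper name and -1 sentinel are my own; ed(i, j) is the edit distance between the first i characters of word1 and the first j characters of word2):

```cpp
#include <string>
#include <vector>
#include <algorithm>

int ed(int i, int j, const std::string& a, const std::string& b,
       std::vector<std::vector<int>>& cache) {
    if (i == 0) return j;  // insert the j remaining characters
    if (j == 0) return i;  // delete the i remaining characters
    if (cache[i][j] != -1) return cache[i][j];
    if (a[i - 1] == b[j - 1])  // last characters match: nothing to do
        return cache[i][j] = ed(i - 1, j - 1, a, b, cache);
    int rep = ed(i - 1, j - 1, a, b, cache);  // replace
    int del = ed(i - 1, j,     a, b, cache);  // delete
    int ins = ed(i,     j - 1, a, b, cache);  // insert
    return cache[i][j] = 1 + std::min({rep, del, ins});
}

int minDistance(const std::string& word1, const std::string& word2) {
    std::vector<std::vector<int>> cache(word1.size() + 1,
                                        std::vector<int>(word2.size() + 1, -1));
    return ed((int)word1.size(), (int)word2.size(), word1, word2, cache);
}
```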
And bottom-up here:
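(Again a sketch; the table mirrors the top-down cache, filled from the empty prefixes upward.)

```cpp
#include <string>
#include <vector>
#include <algorithm>

int minDistanceBottomUp(const std::string& a, const std::string& b) {
    int n = a.size(), m = b.size();
    std::vector<std::vector<int>> dp(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 0; i <= n; ++i) dp[i][0] = i;  // delete everything
    for (int j = 0; j <= m; ++j) dp[0][j] = j;  // insert everything
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= m; ++j)
            dp[i][j] = (a[i - 1] == b[j - 1])
                ? dp[i - 1][j - 1]
                : 1 + std::min({dp[i - 1][j - 1],  // replace
                                dp[i - 1][j],      // delete
                                dp[i][j - 1]});    // insert
    return dp[n][m];
}
```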
Word Break
Given a non-empty string s and a dictionary wordDict containing a list of non-empty words, determine if s can be segmented into a space-separated sequence of one or more dictionary words.
- The same word in the dictionary may be reused multiple times in the segmentation.
- You may assume the dictionary does not contain duplicate words.
- Input: s = "leetcode", wordDict = ["leet", "code"]
- Output: true
This problem has a relatively straightforward solution. If we were to solve this by hand, we would start taking prefixes of our input and check whether they belong to our dictionary. As soon as we find one, we repeat the process with the remainder of the string. In other words, we solve a smaller instance of the same problem. Let's try to define the recursion first and later see if there are overlapping subproblems.
Let's use the following example to illustrate this algorithm: s = "dogsandcats" with a dictionary containing "dog", "dogs", "and", "sand" and "cats".
- We start taking prefixes of s until we find one that belongs to the dictionary. In this case, the first prefix that meets this condition is dog .
- Our new string s' is "sandcats". Repeating step 1, we find sand .
- Now, s'' is "cats". The prefix cats belongs to the dictionary.
- Since s''' is empty, we return true.
This snippet of code solves this problem recursively:
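A sketch of such a snippet (using a hash set for the dictionary; names are illustrative):

```cpp
#include <string>
#include <unordered_set>

// Sketch: try every dictionary prefix of s, then recurse on the remaining
// suffix. The empty string is trivially segmentable.
bool wordBreak(const std::string& s,
               const std::unordered_set<std::string>& dict) {
    if (s.empty()) return true;
    for (size_t len = 1; len <= s.size(); ++len)
        if (dict.count(s.substr(0, len)) && wordBreak(s.substr(len), dict))
            return true;
    return false;
}
```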
Coming back to the example, you can see that both "dogs" + "and" and "dog" + "sand" produce the same suffix: "cats". There are overlapping subproblems, and we can use memoization.
The bottom-up approach builds a table, where the empty string is initialized to true and every other entry to false. From there, the logic is the same:
- We build the solution for all the prefixes of s, from length 0 to the length of s. This is why we have a loop in the range [1, n] and another internal loop in the range [i-1, 0] to move around all split points for that length
- If the problem has a solution for the prefix s[0, j), we "follow the recursion" on the suffix s[j, i): if that suffix is a valid word, there is a solution for the problem represented by dp[i]
This is harder to explain in words than in code, so here it is:
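(A sketch; the loop bounds follow the description above.)

```cpp
#include <string>
#include <vector>
#include <unordered_set>

// Sketch: dp[i] = "the prefix of length i can be segmented into words".
bool wordBreakBottomUp(const std::string& s,
                       const std::unordered_set<std::string>& dict) {
    int n = s.size();
    std::vector<bool> dp(n + 1, false);
    dp[0] = true;  // the empty string
    for (int i = 1; i <= n; ++i)          // prefix length
        for (int j = i - 1; j >= 0; --j)  // split point
            if (dp[j] && dict.count(s.substr(j, i - j))) {
                dp[i] = true;
                break;
            }
    return dp[n];
}
```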
Conclusions
As we have seen, dynamic programming problems can be solved in a systematic way:
- Start with small examples and see if their solution can be derived from smaller instances of the same problem. If so, recursion is a natural choice.
- Draw a tree with the possible paths you can take depending on the available choices. If you reach the same parameters more than once, you can improve your solution using a cache.
- From the recursion, arriving at a top-down solution is mechanical: you just need to add a cache.
- The bottom-up approach generates a table based on "the initial conditions" for the recursion.
I hope you found this article helpful. I recommend sharing it with a friend or the rest of the community because you could help someone pass their exams or their next job interview.
For more content, visit my blog, and connect with me on Twitter.