
Dynamic programming introduction

Dynamic programming (DP) breaks a problem up into a series of overlapping sub-problems and builds up solutions to larger and larger sub-problems. If a problem can be broken down into smaller sub-problems, and those sub-problems can in turn be broken into smaller ones - and if you find that some of these sub-problems overlap - then you are looking at a DP problem.

The core idea of Dynamic Programming is to avoid repeated work by remembering partial results and this concept finds its application in a lot of real-life situations.

In programming, Dynamic Programming is a powerful technique that allows one to solve many types of problems in O(n^2) or O(n^3) time for which a naive approach would take exponential time.

Dynamic programming is basically recursion plus common sense. Recursion allows you to express the value of a function in terms of other values of that function, while common sense tells you that if you compute those recursive calls in advance and store the results for easy access, your program will run faster. This is what we call memoization: remembering the results of specific states, which can later be reused to solve other sub-problems.

The intuition behind dynamic programming is that we trade space for time: instead of recalculating all the states over and over (which takes a lot of time but no space), we use space to store the results of the sub-problems and save time later.
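As a minimal sketch of this trade-off (a hypothetical example, not from the text above), consider Fibonacci: the naive recursion recomputes the same states exponentially many times, while a small dictionary of remembered results brings it down to linear time.

```python
def fib(n, memo=None):
    """Memoized Fibonacci: each state n is computed at most once."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        # Store the result the first time this state is reached;
        # later calls for the same n hit the dictionary instead.
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(40))  # 102334155, computed in ~40 steps instead of ~10^8
```

The memo dictionary is exactly the "space" we spend to avoid recomputing overlapping sub-problems.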

The majority of Dynamic Programming problems can be categorized into two types:

1. Optimization problems.

2. Combinatorial problems.

The optimization problems expect you to select a feasible solution so that the value of the required function is minimized or maximized. Combinatorial problems expect you to figure out the number of ways to do something or the probability of some event happening.

Every Dynamic Programming problem has a schema to be followed:

  • Show that the problem can be broken down into optimal sub-problems.

  • Recursively define the value of the solution by expressing it in terms of optimal solutions for smaller sub-problems.

  • Compute the value of the optimal solution in a bottom-up fashion.

  • Construct an optimal solution from the computed information.
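The schema above can be sketched on a classic optimization problem (the problem and all names here are my own illustration, not from the article): make a target amount from given coin values using as few coins as possible. The recursive definition is best[a] = 1 + min(best[a - c]) over usable coins c; the table is filled bottom-up, and the stored choices let us reconstruct an optimal solution at the end.

```python
def min_coins(coins, amount):
    INF = float("inf")
    # best[a] = fewest coins needed to make amount a (step 2: recursive definition)
    best = [0] + [INF] * amount
    choice = [None] * (amount + 1)       # remembered decision, used in step 4
    for a in range(1, amount + 1):       # step 3: bottom-up, smaller amounts first
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
                choice[a] = c
    if best[amount] == INF:
        return None                      # amount cannot be made at all
    # Step 4: construct one optimal solution from the computed information.
    picked, a = [], amount
    while a > 0:
        picked.append(choice[a])
        a -= choice[a]
    return picked

print(min_coins([1, 3, 4], 6))  # two coins, e.g. [3, 3]
```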

Bottom-up vs. Top Down:

  • Bottom-Up - I'm going to learn to program. Then, I will start practicing. Then, I will start taking part in contests. Then, I'll practice even more and try to improve. After working hard like crazy, I'll be an amazing coder.

  • Top-Down - I will be an amazing coder. How? I will work hard like crazy. How? I'll practice more and try to improve. How? I'll start taking part in contests. Then? I'll practice. How? I'm going to learn to program.

Memoization is very easy to code and might be your first line of approach for a while. With bottom-up dynamic programming, though, you don't risk blowing stack space, and you have a lot of freedom about when to throw intermediate calculations away. The downside is that you have to come up with an ordering of the sub-problems that works.

A classic problem statement used to introduce DP:

"Imagine you have a collection of N wines placed next to each other on a shelf. For simplicity, let's number the wines from left to right as they stand on the shelf with integers from 1 to N, respectively. The price of the ith wine is p_i (prices of different wines can be different).

Because the wines get better every year, supposing today is year 1, in year y the price of the ith wine will be y*p_i, i.e. y times its current value.

You want to sell all the wines you have, but you want to sell exactly one wine per year, starting this year. One more constraint - on each year you are allowed to sell only either the leftmost or the rightmost wine on the shelf and you are not allowed to reorder the wines on the shelf (i.e. they must stay in the same order as they are in the beginning).

You want to find out what the maximum profit is if you sell the wines in the optimal order."

When coming up with the memoization solution for a problem, start with a backtrack solution that finds the correct answer. The backtrack solution enumerates all the valid answers for the problem and chooses the best one.
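For the wine problem quoted above, such a backtrack solution might look like the sketch below (the function name and signature are my own assumptions). Each year it tries both valid moves - selling the leftmost or the rightmost remaining wine - and keeps the better outcome.

```python
def profit(prices, year, left, right):
    """Exponential backtracking: enumerate every valid selling order."""
    if left > right:
        return 0  # no wines left between the two ends, nothing more to earn
    # Sell the leftmost wine this year, or the rightmost; keep the maximum.
    sell_left = prices[left] * year + profit(prices, year + 1, left + 1, right)
    sell_right = prices[right] * year + profit(prices, year + 1, left, right - 1)
    return max(sell_left, sell_right)

print(profit([2, 3, 5, 1, 4], 1, 0, 4))  # 50
```

Note that it meets the restrictions that follow: it is a recursive function, it returns the answer directly, and it modifies only its own arguments and local variables.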

Here are some restrictions on the backtrack solution:

  • It should be a function, calculating the answer using recursion.

  • It should return the answer with a return statement, i.e., not store it somewhere.

  • All the non-local variables that the function uses should be used as read-only, i.e. the function can modify only local variables and its arguments.

To sum it up, if you identify that a problem can be solved using DP, try to create a backtrack function that calculates the correct answer. Try to avoid redundant arguments, minimize the range of possible values of the function's arguments, and also try to optimize the time complexity of one function call (remember, you can treat recursive calls as if they ran in O(1) time). Finally, memoize the values so you never calculate the same thing twice.
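Applying that summary to the wine backtrack gives a memoized sketch like the one below (again, names are my own). The year argument is redundant - it is determined by how many wines are already gone - so the state shrinks to just (left, right), giving O(N^2) distinct states instead of exponential branching.

```python
from functools import lru_cache

def max_wine_profit(prices):
    n = len(prices)

    @lru_cache(maxsize=None)          # memoize on the (left, right) state
    def best(left, right):
        if left > right:
            return 0
        # The redundant 'year' argument is derived from the remaining range.
        year = n - (right - left + 1) + 1
        return max(prices[left] * year + best(left + 1, right),
                   prices[right] * year + best(left, right - 1))

    return best(0, n - 1)

print(max_wine_profit([2, 3, 5, 1, 4]))  # 50
```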
