I'm trying to upsolve a basic dynamic programming problem from CSES, and I would prefer not to look up the solution at the moment. I've been trying to solve it for a few days now. I came up with some ideas, but ended up with nothing so far. Any advice? You can send me a message for further information.

Thanks

1) Try to find a recursive solution, something like $$$f_n = f_{n-1} + f_{n-2}$$$ for the Fibonacci numbers, or $$$g_{i,j} = g_{i-1,j} + g_{i,j-1}$$$ for the number of paths on an $$$n \times m$$$ grid.

2) Implement that recursive solution and add memoization. If it passes, you're done!

3) Convert that to iterative dp. If it passes, you're done!

4) Otherwise, you need to formulate a better dp.
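A minimal sketch of steps 1–3 for the grid-paths recurrence mentioned above (the function names `pathsMemo`/`pathsIter` are just illustrative):

```cpp
#include <vector>
using namespace std;

const long long UNKNOWN = -1;

// Steps 1-2: the recursive formula g(i,j) = g(i-1,j) + g(i,j-1),
// with memoization so each cell is computed only once.
long long paths(int i, int j, vector<vector<long long>>& memo) {
    if (i == 0 || j == 0) return 1;       // only one straight path along an edge
    long long& res = memo[i][j];
    if (res != UNKNOWN) return res;       // already computed: reuse it
    return res = paths(i - 1, j, memo) + paths(i, j - 1, memo);
}

long long pathsMemo(int n, int m) {
    vector<vector<long long>> memo(n, vector<long long>(m, UNKNOWN));
    return paths(n - 1, m - 1, memo);
}

// Step 3: the same dp, converted to an iterative (bottom-up) loop.
long long pathsIter(int n, int m) {
    vector<vector<long long>> g(n, vector<long long>(m, 1));
    for (int i = 1; i < n; i++)
        for (int j = 1; j < m; j++)
            g[i][j] = g[i - 1][j] + g[i][j - 1];
    return g[n - 1][m - 1];
}
```

Both versions compute the same table; the iterative one just fixes the order of computation explicitly.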

I would say that it's easier to think about an iterative solution from the very beginning, at least for non-trivial problems. My lecture https://youtu.be/YBSt1jYwVfU

Of course, but for some problems it's easier to start with recursive.

And often for beginners starting out, it can be confusing to see the iterative version and think "How did we get here?". So it might be better to start by learning the recursive version.

Thanks, I'm going over it :)

Thanks, buddy

The easiest way is to do recursion + memoization.

Namely, you solve the recursion for a state and save the result somewhere; then, if you have to solve that same state again, instead of running through the whole function, you just return the value you saved.

Thanks for the advice

It is very hard for me to come up with a completely new idea in a DP problem. So the only way (step) for me to solve a DP problem is to remember a similar problem that I knew how to solve before. For example, there are about $$$350$$$ standard dynamic programming problems. If you read and understand every article from here, you will be much better at DP.

Thanks

I think that what helps is memoization, and you try to understand what are the minimum requirements to uniquely define a state of the memoization, while also reducing the number of transitions.

Thanks, bro

Think about the state and the result of the state, that being the information you want to know given a state.

Think about transition (the way states interact to get a result)

If needed, try to optimize number of states or complexity of transition. Sometimes, creating an additional state for transition is useful.

This is very abstract, I know. But this is, unironically, more or less the way I think about these problems.

There is also some nuance in the way of thinking about transitions, especially if you want to code iteratively. For example, in a knapsack kind of problem you might say dp[i][j] = max(dp[i-1][j], dp[i-1][j-a[i-1]] + b[i-1]). I don't usually use that one; more natural to me is: dp[i][j] goes to dp[i+1][j] with cost 0 and goes to dp[i+1][j+a[i]] with cost b[i]. Some problems are more natural to optimize thinking in a certain way, though. The state definition for this would be something like "dp[i][j] is the maximum cost possible using a sum of weights of exactly j, considering taking or not the first i elements"; "exactly" might become "less than or equal to" depending on the base case.
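A sketch of both formulations side by side, assuming a[i] is the weight of item i, b[i] its value, and W the weight limit (these names just follow the comment above; the "exactly j" base case is dp[0][0] = 0):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

const long long NEG = -1e18; // marks "this exact weight sum is unreachable"

// First way: dp[i][j] looks back ("pulls") at already-computed states.
vector<long long> knapsackPull(const vector<int>& a, const vector<long long>& b, int W) {
    int n = a.size();
    vector<vector<long long>> dp(n + 1, vector<long long>(W + 1, NEG));
    dp[0][0] = 0; // exactly weight 0 used, nothing taken
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= W; j++) {
            dp[i][j] = dp[i - 1][j]; // skip item i-1
            if (j >= a[i - 1] && dp[i - 1][j - a[i - 1]] != NEG)
                dp[i][j] = max(dp[i][j], dp[i - 1][j - a[i - 1]] + b[i - 1]);
        }
    return dp[n];
}

// Second way: dp[i][j] sends ("pushes") its value forward to future states.
vector<long long> knapsackPush(const vector<int>& a, const vector<long long>& b, int W) {
    int n = a.size();
    vector<vector<long long>> dp(n + 1, vector<long long>(W + 1, NEG));
    dp[0][0] = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j <= W; j++) {
            if (dp[i][j] == NEG) continue; // unreachable state, nothing to push
            dp[i + 1][j] = max(dp[i + 1][j], dp[i][j]);                 // skip item i
            if (j + a[i] <= W)                                           // take item i
                dp[i + 1][j + a[i]] = max(dp[i + 1][j + a[i]], dp[i][j] + b[i]);
        }
    return dp[n];
}
```

Both fill the same table; the difference is purely in which direction each cell's transitions are written.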

Some people might think of both as the same thing because in the end you basically do the same thing but I find them way different, at least when I'm trying to solve some problem. I think the second way of thinking is the solution to the problem that Genius3435 mentioned, it looks less magical and more approachable than the first way.

It goes without saying, I consider recursive dp a third way of thinking about this. I think there are more ways that are slightly different, but these are the 3 main ways of thinking about transition in my opinion. Also, I think recursive dp is easier when you start out, but getting used to iterative solutions is necessary if you want to solve some tougher problems, with the nice bonus of them being way shorter to code.

One question I ask myself when trying to find what defines a state is: "what matters? what's useless? and what do I want in this problem?". What matters is the essential information that defines a state; what's useless is things that you might use as extras when coding a brute force solution but that aren't needed; and what I want is the result. Example of a useless thing: we can solve knapsack with something like brute(index, curWeight, curCost) by returning the cost only at the end, but we can also solve it with brute(index, curWeight) by adding the cost between the calls. I think this is what troubles beginners the most when they're told "dp is brute force with memoization" and they try to apply it to brute force solutions that keep too many things as parameters.
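The reduced brute force from that example might look like this (a and b are hypothetical weight/value arrays, W the capacity; note curCost never appears as a parameter, it is added between the calls, so only (index, curWeight) would need memoizing):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// brute(index, curWeight): best value obtainable from the items at
// position index onward, given that curWeight is already used.
long long brute(int index, int curWeight,
                const vector<int>& a, const vector<long long>& b, int W) {
    if (index == (int)a.size()) return 0; // no items left, nothing more to gain
    long long best = brute(index + 1, curWeight, a, b, W);       // skip this item
    if (curWeight + a[index] <= W)                                // take this item,
        best = max(best,                                          // adding its value
                   b[index] + brute(index + 1, curWeight + a[index], a, b, W));
    return best;
}
```

Because the answer is accumulated on the way back up instead of being carried as a parameter, the state space shrinks from (index, weight, cost) to just (index, weight), which is what makes memoization feasible.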

Reading this might help you or not; the most important thing in the end is reaching your own understanding, and usually that comes with experience (practice). Also, related to what you said about not looking at the solution: in my opinion, CP life is too short to care about that. You will never be able to solve every problem, so it's best to learn from others, and that includes editorials/solutions; just don't copy a solution without understanding it in your own way of thinking.

I forgot about the state space and transitions. That's very important, and can often be hard in a problem.

(I'm sure you know this, but I'm adding for others)

The two ways of thinking are called `pull-dp` and `push-dp`, respectively.

With pull-dp, you're 'pulling' the values that you already calculated to get the current value. This is like a recursive formula; usually, when you can create one, it's better to use pull-dp.

With push-dp, you're 'pushing' the values from this state to other states. This one is often useful for problems where the amount that you push a value depends on the index (I remember one where you jump forward by a certain amount depending on some array $$$a_i$$$). There are other examples too, but I can't think of any off the top of my head.
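A small sketch of that jump example (the exact rules are made up here, just to show the pushing pattern: from position i you may step forward by 1 or jump forward by a[i], and ways[i] counts the ways to reach position i):

```cpp
#include <vector>
using namespace std;

vector<long long> countWays(const vector<int>& a) {
    int n = a.size();
    vector<long long> ways(n + 1, 0);
    ways[0] = 1; // one way to stand at the start
    for (int i = 0; i < n; i++) {
        if (ways[i] == 0) continue;                 // unreachable, nothing to push
        ways[i + 1] += ways[i];                     // push to the next position
        if (a[i] > 1 && i + a[i] <= n)              // push along the jump
            ways[i + a[i]] += ways[i];              // (a[i] == 1 would double-count)
    }
    return ways;
}
```

The push direction is natural here because each position knows where *it* can jump to, not which positions can jump *into* it.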

Also, this is my first time being summoned!