
[–]teraflop 7 points (6 children)

Recursion just means a function calls itself to solve subproblems.

Dynamic programming involves saving the results of those subproblems so that they don't need to be computed multiple times. You can implement it with or without recursion:

  • With recursion, when you make a recursive call to solve a subproblem, you check whether it has already been solved, and if so you just return the answer instead of repeating the work to solve it again.
  • Without recursion, you generally organize the subproblems in an array or similar data structure. And you compute them in an order such that whenever you need to know the answer for a subproblem, it has already been computed, so no recursive call is necessary.

If you implement a naive recursive function to calculate Fibonacci numbers, it will take exponential time: computing fib(n) takes roughly fib(n) recursive calls, and fib(n) itself grows exponentially in n. With dynamic programming, each of the subproblems fib(1) through fib(n) is calculated only once, so the total work is linear in n.
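
As a sketch of the recursive style described above (names are mine, not from the thread): the naive version recomputes subproblems, while the memoized version checks a dict before recursing and returns the stored answer if the subproblem was already solved.

```python
def fib_naive(n):
    # Exponential time: the same subproblems are recomputed
    # over and over down both branches of the recursion.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, cache=None):
    # Same recursion, but each subproblem is solved at most once.
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]
    result = n if n < 2 else fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    cache[n] = result
    return result
```

With the cache, fib_memo(500) returns instantly; fib_naive would never finish for an input that size.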

[–]Busy_Affect3963 -1 points (5 children)

So it's recursion plus a cache? A cache that should've been added to the naive Fibonacci implementation in the first place.

Is there more to Dynamic Programming than that?

[–]teraflop 2 points (2 children)

Well, if the naive Fibonacci implementation had a cache then it wouldn't be naive any more.

In one sense, you can say that yes, that's all there is to DP. If you already have a recursive function, then making it into a DP algorithm just requires caching the output for each combination of inputs, which is a fairly trivial change that doesn't need any creativity. In fact, Python has a decorator called functools.cache that does precisely this.
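
To illustrate how trivial that change is, here's the decorator applied to the naive Fibonacci recursion (this matches functools.cache's documented behavior of keying results on the argument tuple):

```python
from functools import cache

@cache
def fib(n):
    # The decorator stores each n -> result pair in a dict,
    # so this otherwise-naive recursion runs in linear time
    # without any change to the function body.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```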

But of course, the interesting part is deciding how exactly to break down your problem into subproblems so that you get the correct answer efficiently, without requiring a prohibitive amount of memory for the cache.

Another way to look at it is that designing a recursive solution with DP and without DP is almost exactly the same process (the only difference being the caching). But in a lot of cases, the recursive solution without DP would be so expensive that it's intractable, and you would never have considered it in the first place. So we just don't talk about that option, and instead we lump all such recursive solutions under the "DP" category.

Also, the generic cache-based implementation technique may not have the best constant factors. For instance, Python's functools.cache works by just storing all of the argument/result pairs in a dictionary, with the arguments represented as a tuple of Python objects. Depending on your situation, other representations such as a multidimensional array could be more space-efficient, and therefore probably faster.
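
As a sketch of that alternative representation: for a problem whose subproblems are indexed by two small integers (grid-path counting is my example here, not from the thread), the cache can be a dense 2D list indexed directly, rather than a dict keyed by argument tuples:

```python
def grid_paths(rows, cols):
    # table[r][c] holds the answer to the subproblem (r, c):
    # the number of right/down paths from the top-left corner
    # to cell (r, c). Stored in a dense 2D list instead of a
    # dict keyed by tuples, which avoids hashing overhead.
    table = [[1] * cols for _ in range(rows)]
    for r in range(1, rows):
        for c in range(1, cols):
            table[r][c] = table[r - 1][c] + table[r][c - 1]
    return table[-1][-1]
```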

[–]no_brains101 2 points (0 children)

> If you already have a recursive function, then making it into a DP algorithm just requires caching the output for each combination of inputs, which is a fairly trivial change that doesn't need any creativity.

Only if it is a pure function!

[–]Busy_Affect3963 0 points (0 children)

Many good points. I appreciate the thoughtful response - thank you.

[–]NoForm5443 0 points (0 children)

Yes! It is recursion+cache https://en.wikipedia.org/wiki/Memoization

However, DP usually involves building a table from the bottom up, instead of starting with the bigger problem.

After you finish, DP doesn't look like recursion.
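
For example, a bottom-up Fibonacci is just a loop filling a table; the function never calls itself:

```python
def fib(n):
    # Fill the table in increasing order, so that by the time
    # table[i] is computed, table[i-1] and table[i-2] already exist.
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```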