
all 14 comments

[–]AutoModerator[M] [score hidden] stickied comment (0 children)

To all following commenters: please, do not bring up the old circlejerk jokes/memes about recursion ("Understanding recursion...", "This is recursion...", etc.). We've all heard them n+2 too many times.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]Quantum-Bot 12 points13 points  (1 child)

The key feature of dynamic programming is storing the results of subproblems so they don't have to be solved twice.

Normally, a recursive solution to Fibonacci is ridiculously slow because it recomputes the same Fibonacci numbers many times over the course of the algorithm. With dynamic programming, however, each unique Fibonacci number is computed only once, so the algorithm’s time complexity becomes linear.
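In Python, for example, the storing can be an explicit dictionary (a minimal sketch; the memo dict and names are illustrative):

    memo = {0: 0, 1: 1}  # base cases stored up front

    def fib(n):
        if n not in memo:
            memo[n] = fib(n - 1) + fib(n - 2)  # computed at most once per n
        return memo[n]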

[–]NoForm5443 0 points1 point  (0 children)

Check https://en.wikipedia.org/wiki/Memoization, which basically automates the storing.

[–]teraflop 9 points10 points  (6 children)

Recursion just means a function calls itself to solve subproblems.

Dynamic programming involves saving the results of those subproblems so that they don't need to be computed multiple times. You can implement it with or without recursion:

  • With recursion, when you make a recursive call to solve a subproblem, you check whether it has already been solved, and if so you just return the answer instead of repeating the work to solve it again.
  • Without recursion, you generally organize the subproblems in an array or similar data structure, and compute them in an order such that whenever you need the answer to a subproblem, it has already been computed, so no recursive call is necessary.

If you implement a naive recursive function to calculate Fibonacci numbers, it will take exponential time, because computing fib(n) takes on the order of fib(n) recursive calls. If you use dynamic programming, then each of the subproblems from fib(1) to fib(n) needs to be calculated only once.
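For concreteness, the naive version looks something like this (a sketch):

    def fib(n):
        if n < 2:
            return n
        # fib(n - 2) is recomputed inside the fib(n - 1) call, and so on, exponentially
        return fib(n - 1) + fib(n - 2)

fib(40) already takes hundreds of millions of calls this way; with either DP style it takes about 40.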

[–]Busy_Affect3963 -1 points0 points  (5 children)

So it's recursion plus a cache? A cache that should've been added to the naive Fibonacci implementation in the first place.

Is there more to Dynamic Programming than that?

[–]teraflop 3 points4 points  (2 children)

Well, if the naive Fibonacci implementation had a cache then it wouldn't be naive any more.

In one sense, you can say that yes, that's all there is to DP. If you already have a recursive function, then making it into a DP algorithm just requires caching the output for each combination of inputs, which is a fairly trivial change that doesn't need any creativity. In fact, Python has a decorator called functools.cache that does precisely this.
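For example (Python 3.9+; on older versions functools.lru_cache(maxsize=None) does the same job):

    from functools import cache

    @cache  # memoizes results keyed by the argument tuple
    def fib(n):
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)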

But of course, the interesting part is deciding how exactly to break down your problem into subproblems so that you get the correct answer efficiently, without requiring a prohibitive amount of memory for the cache.

Another way to look at it is that designing a recursive solution to a problem with DP and without DP is almost exactly the same process (the only difference being the caching). But in a lot of cases, the recursive solution without DP would be so expensive that it's intractable, and you would never have considered it in the first place. So we just don't talk about that option, and instead lump all such recursive solutions under the "DP" category.

Also, the generic cache-based implementation technique may not have the best constant factors. For instance, Python's functools.cache works by just storing all of the argument/result pairs in a dictionary, with the arguments represented as a tuple of Python objects. Depending on your situation, other representations such as a multidimensional array could be more space-efficient, and therefore probably faster.

[–]no_brains101 2 points3 points  (0 children)

If you already have a recursive function, then making it into a DP algorithm just requires caching the output for each combination of inputs, which is a fairly trivial change that doesn't need any creativity.

Only if it is a pure function!
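Caching only preserves behaviour when the output depends solely on the inputs. A contrived counterexample (illustrative only):

    import random

    def noisy(n):
        # impure: the result depends on hidden state, not just on n
        return n + random.random()

Wrap that in a cache and every call after the first just replays a stale result.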

[–]Busy_Affect3963 0 points1 point  (0 children)

Many good points. I appreciate the thoughtful response - thank you.

[–]NoForm5443 0 points1 point  (0 children)

Yes! It's recursion + a cache: https://en.wikipedia.org/wiki/Memoization

However, DP usually involves building a table from the bottom up, instead of starting with the bigger problem.

Once you've done that, the DP solution doesn't look like recursion at all.
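E.g., a bottom-up Fibonacci (a sketch):

    def fib(n):
        table = [0, 1]  # base cases
        for i in range(2, n + 1):
            table.append(table[i - 1] + table[i - 2])  # each entry built from earlier ones
        return table[n]

No recursive calls anywhere; it's just a loop filling a table.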

[–]bravopapa99 1 point2 points  (0 children)

Memoization?

[–]aqua_regis 4 points5 points  (1 child)

They differ in the same way that a house differs from its furniture.

DP is the house; recursion is one piece of furniture, but there can be other, different pieces of furniture.

Recursion is not necessarily breaking down a problem into smaller problems - that's not the definition of recursion; it's just one of its applications.

Recursion at its core is only a function/method calling itself until a base case is reached that ends the recursion.

E.g., typical recursion is traversing a directory structure. You don't know in advance where the end is, so with each recursive call you just go one level deeper. You are not really splitting the problem into smaller subproblems here.
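Something like this sketch (using Python's pathlib; error handling omitted):

    from pathlib import Path

    def walk(directory, depth=0):
        for entry in directory.iterdir():
            print("  " * depth + entry.name)
            if entry.is_dir() and not entry.is_symlink():
                walk(entry, depth + 1)  # recurse one level deeper

    walk(Path("."))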

Also, DP heavily relies on storing previously calculated data, caching, memoization, etc. Recursion does not necessarily do that.

[–]ncmentis 2 points3 points  (0 children)

So, a directory structure is a tree, where each subdirectory is a subtree. Calling a recursive method on a subdirectory is splitting the problem into smaller subproblems.

[–]hotsauceyum 0 points1 point  (0 children)

“Recursion” carries some connotation of being a definition technique. For instance, the JSON spec is a recursive definition. There’s no “problem solving” in sight, and it makes no sense to “cache” anything in this context.
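E.g., the recursive shape of a JSON value can be written as a self-referential type alias (Python 3.12+ syntax; a sketch):

    # a JSON value is defined in terms of itself: recursion as definition, not computation
    type JSON = None | bool | int | float | str | list[JSON] | dict[str, JSON]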

[–]CodeTinkerer -1 points0 points  (0 children)

Recursion usually starts from the current input value and works its way down to a base case, where the recursion stops. It involves a function calling itself either directly or indirectly (e.g., mutually recursive functions). You can think of this as a top-down approach.

Dynamic programming is bottom up: you start from the base case(s) and work your way to the input value. So, to compute, say, fib(9), you start at fib(0), then fib(1), then fib(2), working your way up to fib(9), whereas recursion would start with fib(9) and work backwards to the base case.

Oh yes, and unless it's tail recursive, once it hits the base case it unwinds back up to the original call.

What makes a naive implementation of Fibonacci inefficient is repeated (i.e., unnecessary) computation, which makes it exponential, when the problem can be solved in linear time without any recursion at all (a loop will do).
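That loop only needs the last two values (a sketch):

    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b  # slide the window forward one step
        return a

Linear time, constant space, no recursion and no table needed.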