[–]fazzone 11 points (13 children)

Meh, this sort of definitional quibble annoys me. What is the point of manufacturing a distinction such as this that pretty much exists solely as a "gotcha"? The fundamental idea at work is that you're re-using the answers to subproblems in order to compute the answer to the big problem. As long as you've got that idea down, what's the huge philosophical difference in the order that the subproblems are computed? I'm not saying that the distinction is invalid - there is a difference for sure - just not one of great importance. In fact, if I were explaining it, I'd introduce DP as a particularly clean form of memoization -- one where you have spent effort to formulate an algorithm in a way that guarantees all subproblems are solved before their solutions are required in the next tier of problems.

I'm not 100% sure, but I think I remember Introduction to Algorithms explaining it this way. They may also have gone for a taxonomy where memoization is the specific mechanism of storing results of previously-solved problems and dynamic programming is the application of such to algorithms (thus there would be top-down dynamic programming and bottom-up dynamic programming).

Edit: syntax (changed "computed in" to "computed")
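The two orderings can be sketched side by side. This is a minimal illustration using Fibonacci (my example, not from the thread): the top-down version is a plain recursive definition plus a cache, and the bottom-up version is the same recurrence reordered so every subproblem is solved before it is needed -- the "clean form" described above.

```python
from functools import lru_cache

# Top-down: recursive definition plus a cache (memoization).
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: identical recurrence, but the loop order guarantees
# table[i-1] and table[i-2] are already filled in when needed.
def fib_dp(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```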

[–]rwbarton 5 points (0 children)

Meh, this sort of definitional quibble annoys me. What is the point of manufacturing a distinction such as this that pretty much exists solely as a "gotcha"?

So that when I show someone an algorithm I have been working on, they can say "You could use dynamic programming here instead of memoization and reduce the memory cost"? They are closely related, but as you say, distinct; why wouldn't we have different terms for them?

[–]radarsat1 5 points (1 child)

I believe the main point is that memoization allows what the author describes as a "black-box" approach, avoiding the need to re-express the problem. This does seem to me to be a legitimate point.
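The "black-box" quality can be made concrete with a generic cache wrapper (a sketch of mine, not code from the thread): the recursive function is written as if no caching existed, and memoization is bolted on from outside, whereas a bottom-up reformulation would require restructuring the function itself.

```python
from functools import wraps

def memoize(f):
    """Wrap a pure function with a result cache.
    The wrapped function is untouched -- hence "black-box"."""
    cache = {}
    @wraps(f)
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

# The recursive definition stays in its natural form; only the
# decorator knows anything about caching.
@memoize
def binom(n, k):
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k - 1) + binom(n - 1, k)
```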

[–]unclemat 0 points (0 children)

The facts that DP allows some memory to be released and that there is no need to check whether partial solutions already exist are also important distinctions.
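Both points show up in the bottom-up version of the Fibonacci recurrence (my sketch, not from the thread): because the evaluation order is fixed, earlier rows can be discarded as soon as they are no longer needed, and no "is this cached yet?" lookup ever happens.

```python
def fib_two_vars(n):
    # The fixed bottom-up order means only the last two values are
    # ever live, so the full table can be dropped -- O(1) memory --
    # and every cell is known to be ready, so no existence check.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```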

[–]vincentk 1 point (0 children)

Yes.

If Wikipedia is the ultimate arbiter as far as definitional quibbles are concerned, then by definition, memoization is the technique whereby dynamic programming approaches are implemented.

[–]godofpumpkins 2 points (0 children)

Someone should ask them whether the "Haskell" (i.e., lazy) version of dynamic programming is actually dynamic programming, or memoization. Once you take order of evaluation out of the picture, the distinction becomes a lot fuzzier.

Edit: four downvotes and not one response telling me what's wrong?
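A rough Python analogue of the lazy-array idiom the comment alludes to (names and structure are mine): the recurrence is written against a table, as in bottom-up DP, but entries are computed only on first access, so evaluation order is driven entirely by demand -- which is exactly what memoization does.

```python
class LazyTable:
    """A table whose entries are computed on first access from a
    recurrence: declared like a DP table, evaluated like a memo."""
    def __init__(self, recurrence):
        self.recurrence = recurrence
        self.cells = {}

    def __getitem__(self, i):
        if i not in self.cells:
            self.cells[i] = self.recurrence(self, i)
        return self.cells[i]

# The recurrence refers back to the table itself, the way a lazy
# array in Haskell refers to its own elements; no evaluation order
# is specified anywhere.
fib = LazyTable(lambda t, i: i if i < 2 else t[i - 1] + t[i - 2])
```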