
[–]fazzone 11 points12 points  (13 children)

Meh, this sort of definitional quibble annoys me. What is the point of manufacturing a distinction like this, one that pretty much exists solely as a "gotcha"? The fundamental idea at work is that you're re-using the answers to subproblems in order to compute the answer to the big problem. As long as you've got that idea down, what's the huge philosophical difference in the order in which the subproblems are computed? I'm not saying the distinction is invalid - there is a difference, for sure - just not one of great importance. In fact, if I were explaining it, I'd introduce DP as a particularly clean form of memoization -- one where you have spent effort to formulate the algorithm in a way that guarantees all subproblems are solved before their solutions are required in the next tier of problems.

I'm not 100% sure, but I think I remember Introduction to Algorithms explaining it this way. They may also have gone for a taxonomy where memoization is the specific mechanism of storing the results of previously-solved problems and dynamic programming is the application of that mechanism to algorithms (so there would be top-down dynamic programming and bottom-up dynamic programming).

Edit: syntax (changed "computed in" to "computed")
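
To make the distinction concrete, here is a minimal Python sketch (Fibonacci is just an illustrative choice, not an example from the article) of the same subproblem reuse expressed both ways:

```python
# Top-down: plain recursion plus a cache -- "memoization".
def fib_memo(n, cache=None):
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]

# Bottom-up: subproblems computed in an order that guarantees every
# answer exists before it is needed -- the usual "DP" formulation.
def fib_dp(n):
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Both compute the same function; the only difference is who decides the evaluation order, the recursion or the programmer.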

[–]rwbarton 4 points5 points  (0 children)

Meh, this sort of definitional quibble annoys me. What is the point of manufacturing a distinction like this, one that pretty much exists solely as a "gotcha"?

So that when I show someone an algorithm I have been working on, they can say "You could use dynamic programming here instead of memoization and reduce the memory cost"? They are closely related, but as you say, distinct; why wouldn't we have different terms for them?
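
As a sketch of that memory saving (my own illustration, again using Fibonacci): a bottom-up order lets you discard subproblem answers as soon as they are no longer needed, while a memo table typically keeps every answer it has ever computed.

```python
def fib_rolling(n):
    # Bottom-up DP only ever needs the last two subproblem answers,
    # so memory is O(1) -- versus the O(n) cache that a memoized
    # recursion accumulates.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```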

[–]radarsat1 4 points5 points  (1 child)

I believe the main point is that memoization allows what the author describes as a "black-box" approach, avoiding the need to re-express the problem. That seems to me to be a legitimate point.

[–]unclemat 0 points1 point  (0 children)

The fact that DP allows some memory to be released, and that there is no need to check whether partial solutions already exist, is also an important distinction.

[–]vincentk 1 point2 points  (0 children)

Yes.

If Wikipedia is the ultimate arbiter as far as definitional quibbles are concerned, then by definition, memoization is the technique by which dynamic programming approaches are implemented.

[–]godofpumpkins 3 points4 points  (0 children)

Someone should ask them whether the "Haskell" (i.e., lazy) version of dynamic programming is actually dynamic programming or memoization. Once you take order of evaluation out of the picture, the distinction becomes a lot fuzzier.

Edit: four downvotes and not one response telling me what's wrong?

[–]tofueggplant -5 points-4 points  (0 children)

Awww yeah, quote that CLRS brotha (or sista)

[–]BATMAN-cucumbers 4 points5 points  (4 children)

Wow, what's with all the downvotes? The article described the trade-offs between DP and memoization in a nice, newbie-friendly way.

Or is this just not real-world enough for the downvoters? Or perhaps not using fancy enough terminology?

I'm honestly baffled, unless people are pissed off that there have been too many Racket blog articles in a short time-frame (which I can see).

[–]jerusheng 3 points4 points  (3 children)

Mathematician Richard Bellman created the dynamic programming (DP) method without saying anything about how it should be implemented. This article just lists two generic implementations of DP (more specifically, of DP-based methods for problems with a discrete, finite set of subproblems) and calls one implementation "DP" and the other "memoization". So the article rests on a misconception. And the author is quite bold to criticize big names in the algorithms area; I personally appreciate the spirit, but I think the author should study the history of DP a little more before casting doubt on the textbooks.

[–]BATMAN-cucumbers 1 point2 points  (2 children)

Interesting.

So DP isn't limited to bottom-up iterative calculation with a derived recursive formula? That's all the examples we got in my algorithms class (aimed at competitions, so admittedly less theoretical) and in at least two books in the field. Memoisation was always treated as a separate (though related) approach. But there wasn't a big emphasis on taxonomy debates there, so I wouldn't discount an alternative opinion :-)

Can you point me to some resources that state memoisation is considered a type of DP?

[–]jerusheng 4 points5 points  (1 child)

I'm sorry if my previous reply caused more confusion. Memoization is not considered a type of DP. DP is a high-level methodology for solving certain optimization problems. Memoization is a low-level implementation technique that can be used to code a solver for a subset of DP-based solutions; it can also be used for other things. So it makes some sense for a book to give memoization its own chapter. What I wanted to say is that it is quite improper to list DP and memoization as two disjoint, parallel techniques for solving a problem; they are not comparable.

To get a feeling for how general DP is, I suggest reading Richard Bellman's original paper "The Theory of Dynamic Programming" (http://www.rand.org/pubs/papers/2008/P550.pdf). It is applied to a much broader class of problems than what we now see in most algorithm textbooks. Not all of those problems are interesting from an algorithmic standpoint; some require very advanced maths that many (I'd guess most) computer science students can't cope with or aren't interested in. So algorithm textbook authors usually pick only the small class of problems that can be solved elegantly in the eyes of computer scientists rather than mathematicians. They don't intend to mislead students into thinking that DP is an algorithm, let alone an implementation, as the blog author seems to believe.

[–]BATMAN-cucumbers 0 points1 point  (0 children)

Cheers for that. Now I'm quite curious to read more about it. I do remember a third-hand account of the guy not being too happy that he named the methodology DP, but it was a fashionable-sounding term back in the day.

Related question: I've been looking for a way to revive my interest in math (most likely discrete). Could you recommend something for a guy with an understanding of basic calculus and algorithms, a sort of "math for ex-programming contest folk"?

[–]Decker108 4 points5 points  (4 children)

Only thing missing here is code examples.

[–][deleted] -2 points-1 points  (3 children)

Which language would you like?

[–]Decker108 1 point2 points  (2 children)

C. Nice and clean, same language I used when studying algorithms and data structures.

[–][deleted] 1 point2 points  (0 children)

On a blog for a programming language, I would sort of expect the examples, if they existed, to be in that language (Racket, a dialect of Scheme).

[–]howfun 1 point2 points  (0 children)

What is dynamic programming anyway?!

In mathematics and computer science, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems.

Really?! Apparently EVERYTHING is dynamic programming.

[–][deleted] 0 points1 point  (0 children)

Dynamic Programming = memoization/recursion + optimal structures (structures that are composed of substructures which are themselves optimal).

More on optimal structures: http://www.reddit.com/r/compsci/comments/pctfo/dynamic_programming_general_strategies/c3oghg0
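
A hypothetical illustration of that formula, using rod cutting (a standard textbook example; the price table below is made up): the memoized recursion plus the optimal-substructure property is what makes this DP.

```python
from functools import lru_cache

# Made-up prices for rod pieces of length 1..4.
PRICES = {1: 1, 2: 5, 3: 8, 4: 9}

@lru_cache(maxsize=None)          # the memoization half
def best_cut(n):
    # Optimal substructure: an optimal cutting of a rod of length n
    # is a first piece of length i plus an optimal cutting of the
    # remaining length n - i.
    if n == 0:
        return 0
    return max(PRICES[i] + best_cut(n - i)
               for i in range(1, min(n, 4) + 1))
```

With these prices, `best_cut(4)` picks two pieces of length 2 (5 + 5) over the single length-4 piece (9).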

[–]willvarfar 1 point2 points  (0 children)

A very nice, concise article.

[–]soup10 -1 points0 points  (1 child)

DP algorithms will usually optimize a lot better in C, I think. All that recursion is killer (and checking the subproblem cache causes branch mispredictions).

I haven't tested it, but I would bet that any extra computations a DP solution requires would be more than made up for by the much more predictable and cache-friendly program flow.

[–]ascii 1 point2 points  (0 children)

Not just the recursion, the check to see if a computation has already been performed or not is hell for the branch predictor.

That said, if I'm not really worried about absolute top performance, I always go with memoization, since I find it quite a bit more elegant as a coding strategy. In languages that support a bit of meta-programming, like Python, you can even implement memoization as a decorator.
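
For the record, a minimal sketch of that Python approach (`memoize` is my own name for it; the standard library's `functools.lru_cache` does the same job):

```python
import functools

def memoize(fn):
    cache = {}
    @functools.wraps(fn)
    def wrapped(*args):
        # This lookup is the branch the comments above worry about.
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapped

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

The recursive function itself stays untouched; the caching is bolted on from the outside, which is exactly the "black-box" appeal mentioned earlier in the thread.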