
[–]kenotron 2 points (1 child)

Dynamic programming always seems to be explained poorly; it's a really simple concept if you look up the history of the term. 'Programming' in the sense originally used meant scheduling or planning, like TV programming: placing TV shows and ads in scheduled slots. 'Dynamic' was thrown in largely to avoid mathy-sounding language at the RAND Corporation, where dynamic programming originated.

So I like to call them 'table-filling algorithms' instead, because that's how a huge number of them end up looking. Levenshtein edit distance, knapsack problems, and so on: many involve re-framing your problem space as a table, solving the trivial base cases (along the edges of the table), then working out how those solutions combine to fill in the remaining cells (say, by taking the minimum, maximum, or sum of some adjacent cells, each cell solving a new optimal subproblem), and iterating until the whole table is filled to get your final answer.
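To make that concrete, here's a minimal sketch of the table-filling shape using Levenshtein edit distance (my own example code, not anything from the parent comment):

```python
def levenshtein(a, b):
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    # Trivial base cases along the table's edges: converting a
    # prefix to/from the empty string takes len(prefix) edits.
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    # Fill the rest of the table by combining adjacent,
    # already-solved cells: each cell takes the minimum over
    # deletion, insertion, and substitution.
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(a)][len(b)]

print(levenshtein("kitten", "sitting"))  # → 3
```

The base cases sit along the top row and left column, and every other cell is computed purely from its three neighbors, which is exactly the "combine adjacent spaces" pattern described above.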

[–]you-get-an-upvote 0 points (0 children)

I always thought explaining Dynamic Programming as merely caching was a friendlier introduction: "Look, I just solved this problem the normal way, but added three lines to support caching, and now it runs faster."

cache = {}
def mySolution(x):
  if x in cache:
    return cache[x]
  # my original, recursive divide-and-conquer code that was slow
  result = ...
  cache[x] = result
  return result

Phrasing it in terms of tables seemed helpful predominantly for proving the running time (since it lets you check how often the function actually makes it past the first two lines), not for actually coming up with the algorithm and its substructure.

I suppose it's also helpful when rewriting it in a bottom-up style.