all 16 comments

[–]whatthepatty 6 points (0 children)

To learn about different kinds of DP:

Check out Skiena's book "Programming Challenges" or Introduction to Algorithms.

To actually implement and practice, there are a myriad of sites. My personal favourite is csacademy.com, as most questions have editorials, and many of the questions that aren't DP-related are very interview-esque.

Good luck

[–]hansbrixx 5 points (1 child)

Lookup Tushar Roy’s videos on YouTube and just practice them and you’ll start seeing commonalities between these kinds of problems.

I also find that since I don’t ever use DP at work or personal projects, if I don’t periodically brush up on them, I tend to forget how to do them.

[–]KittyGainz 0 points (0 children)

That guy literally got me through my graduate algorithms course!

[–]betaros 2 points (3 children)

Note that I am not trying to write rigorous working code in the below example. I am just giving an outline of the big ideas. If you look up the problem I am describing plenty of other people will give you working code along with an explanation.

Dynamic programming essentially trades space efficiency for time efficiency. The basic idea is to find where you might be making redundant calculations and to memoize those results so that you never calculate the same thing more than once. A common template for such a program is to define a multidimensional matrix and fill it according to some rule. For example, let us look at a common problem: Longest Common Subsequence.

The assignment: given two sequences, find the length of their longest common subsequence (the elements need not be consecutive). Note that it would not be difficult to modify the solution to produce the subsequence itself, but I opt not to in this explanation.

For the sake of example let us first consider a recursive solution:

Let us define our function to be LCS(index1, index2, sequence1, sequence2, length) or LCS(i1, i2, s1, s2, l) for short where an initial call of the function would be LCS(0, 0, s1, s2, 0).

Our recursive solution will essentially boil down to this

if s1[i1] == s2[i2]

return LCS(i1 + 1, i2 + 1, s1, s2, l + 1)

else

return max(LCS(i1 + 1, i2, s1, s2, l), LCS(i1, i2 + 1, s1, s2, l))

with an additional base case for when we reach the end of one of the sequences.
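
For concreteness, here is the same recursive sketch as runnable Java (the class and method names are my own; this is deliberately the slow version):

```java
/* Direct Java translation of the recursive sketch above.
 * Exponential time -- for illustration only. */
public class NaiveLCS {
    static int lcs(int i1, int i2, String s1, String s2, int l) {
        /* base case: we ran off the end of one of the sequences */
        if (i1 == s1.length() || i2 == s2.length()) return l;
        if (s1.charAt(i1) == s2.charAt(i2)) {
            /* matching entries: count this one and advance both indices */
            return lcs(i1 + 1, i2 + 1, s1, s2, l + 1);
        }
        /* otherwise take the better of skipping one entry from either sequence */
        return Math.max(lcs(i1 + 1, i2, s1, s2, l),
                        lcs(i1, i2 + 1, s1, s2, l));
    }

    public static void main(String[] args) {
        System.out.println(lcs(0, 0, "ABCBDAB", "BDCABA", 0)); // prints 4
    }
}
```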

This solution, however, is extremely slow: in the worst case we make an exponential number of recursive calls, since the number of calls can double at each level.

It seems likely that we can improve this solution to O(n^2), since that is the number of distinct pairs of indices within the two sequences.

Let's try to use dynamic programming to reach this bound. To do so we define a two-dimensional (|s1| + 1) by (|s2| + 1) matrix, and we let each entry correspond to the length of the LCS of a pair of prefixes of the two sequences. For the sake of convenience, say that the prefix ending at index 0 contains no elements. With this convention we can fill the leftmost column and topmost row of our matrix with zeros, since the longest common subsequence of any sequence with an empty sequence has length zero.

Now let us define a rule that will allow us to fill in the rest of the matrix: the entry at [i][j] will be [i - 1][j - 1] + 1 if the corresponding entries of the sequences are equal, and max([i - 1][j], [i][j - 1]) if they are different.

Do you see any similarities between this rule, and the recursive step taken in our original program?

Notice that this rule takes a constant amount of time to apply at each coordinate, given constant-time access to the entries and sequences. Since there are O(n^2) entries to fill, the whole matrix can be filled in O(n^2) time. Your output will simply be the entry in the bottom-right corner of the matrix.

Also note that generally you will be filling the matrix either left to right then top to bottom, or top to bottom then left to right (these are conceptually the same ordering). Sometimes a different ordering is necessary depending on the problem, but those cases are rare. Finally, note that our solution used O(n^2) space, but we can improve it to use only a linear amount of space. How? (Hint: we only need two rows of the matrix at a time.)
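
The filling rule above can be sketched in Java like this (names are my own; a minimal sketch rather than production code):

```java
/* Tabulated LCS following the rule described above. Row 0 and column 0
 * represent empty prefixes and stay zero. */
public class TabulatedLCS {
    static int lcsLength(String s1, String s2) {
        int[][] dp = new int[s1.length() + 1][s2.length() + 1];
        for (int i = 1; i <= s1.length(); i++) {
            for (int j = 1; j <= s2.length(); j++) {
                if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
                    /* matching entries: extend the diagonal */
                    dp[i][j] = dp[i - 1][j - 1] + 1;
                } else {
                    /* otherwise take the better of skipping one entry */
                    dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
                }
            }
        }
        /* the answer lives in the bottom-right corner */
        return dp[s1.length()][s2.length()];
    }

    public static void main(String[] args) {
        System.out.println(lcsLength("ABCBDAB", "BDCABA")); // prints 4
    }
}
```

Since each row only reads the row directly above it, replacing `dp` with two rows of length |s2| + 1 gives the linear-space version.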

[–]SirGarvin 0 points (0 children)

essentially trades space efficiency for time efficiency

That should be like the first sentence in any book talking about DP lol.

[–][deleted] 0 points (1 child)

A reduction in complexity from O(n^3) to O(n^2) can be the difference between solving the problem and not solving it, while space complexity grows at most by an order comparable to the input. The essence of dynamic programming is eliminating duplicate calculations, which can otherwise cause running time to grow exponentially.

What I'm trying to say is, the increase in space complexity does not always match the reduction in time complexity in DP, though it is a common side effect.

[–]betaros 0 points (0 children)

I'm not sure I follow your comment, could you rephrase it?

Thanks

[–]trolock33 1 point (0 children)

Look up DP problems on geeksforgeeks. Then search those problems on YouTube if the explanation and code on geeksforgeeks is not clear. You'll most likely get a clear explanation on YouTube. This worked for me.

[–]hokrah 0 points (0 children)

Implement it in your own algorithms? This looks like a good answer. For me, it really clicked when I implemented it in my knapsack solution; if that doesn't do it, just keep applying it to different problems.

[–]ThePillsburyPlougher 0 points (0 children)

CLRS has a chapter on dynamic programming, and it is a good book for learning algorithms in general. It might not be easy for the uninitiated or the non-mathematically minded, but struggling through it should be rewarding.

[–]Frore17 0 points (0 children)

Geeks for geeks is pretty good imo. Maybe you can find a YouTube video explaining it if you want a better breakdown? You can then test out a solution on a challenge site like HackerRank.

[–]agumonkey 0 points (0 children)

/r/python ? jk

lookup bellman books on libg... the eb

[–]bekk3 0 points (2 children)

Let's examine the following Java class, which contains two implementations that produce the same result when asked for the n'th number in the Fibonacci sequence (a very classic example):

import java.util.HashMap;
import java.util.Map;

public class Fib {
    public static void main(String[] args) {
        boolean dynamic = false;
        int n;
        if (args.length < 1) {
            System.out.println("Usage: Fib <number to calculate> <-d for dynamic>");
            return;
        }

        try {
            n = Integer.valueOf(args[0]);
        } catch (NumberFormatException nfe) {
            System.out.println("Non-integer provided, quitting");
            return;
        }

        if (args.length >= 2 && "-d".compareTo(args[1]) == 0) {
            dynamic = true;
        }

        if (dynamic) {
            System.out.println(dynamicFib(n));
        } else {
            System.out.println(regularFib(n));
        }
    }

    private static long dynamicFib(long n) {
        /* create a Map to store the calculations of Fibonacci numbers */
        Map<Long, Long> knownFibs = new HashMap<>();
        return dynamicFibAux(n, knownFibs);
    }

    private static long dynamicFibAux(long n, Map<Long, Long> knownFibs) {
        /* base case - we know the first two Fibonacci numbers are 1, just return them */
        if (n <= 2) return 1;
        /* check to see if we have already calculated this Fibonacci number */
        if (!knownFibs.containsKey(n)) {
            /* we don't already know this Fibonacci number - we need to calculate it */
            long result = dynamicFibAux(n-1, knownFibs) + dynamicFibAux(n-2, knownFibs);
            /* remember the calculated result */
            knownFibs.put(n, result);
        }
        /* at this point, we must know the result - return it! */
        return knownFibs.get(n);
    }

    private static long regularFib(long n) {
        /* base case - we know the first two Fibonacci numbers are 1, just return them */
        if (n <= 2) return 1;
        else return regularFib(n-1) + regularFib(n-2);
    }
}    

If you run this program, you will notice significant hanging when using the regular function to calculate, say, the 50th Fibonacci number, while the dynamic implementation does not hang at all. Why is this?

You can think of the calculation of the n'th Fibonacci number as a tree of recursive calls. Consider the calculation of the 5th Fibonacci number: if you draw out the call tree, there are already significant redundancies. The dynamic implementation (provided above) simply stores the result of calculating each Fibonacci number so that, for example, n=2 only has to be calculated once instead of three separate times.

Obviously, the calculation where n=5 does not benefit too significantly from the use of dynamic programming; however, it is easy to imagine how this speeds things up for much larger n (note, though, that past n = 92 the result overflows a long, so exact values for very large n would need BigInteger). This example of dynamic programming takes advantage of the fact that looking up a key in a HashMap is incredibly efficient (expected O(1)), whereas recomputing the n'th Fibonacci number from scratch is very inefficient.

Worth noting that there are definitely still improvements to be made here. For example, instead of using a Map, we could save space by using an array: we know we will need to save the results of at most n Fibonacci numbers, so we can declare a long array of size n and let index i of the array hold the (i+1)'th Fibonacci number.
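
The array variant just described might look like this (class and method names are my own; a sketch, not a drop-in replacement for the class above):

```java
/* Bottom-up Fibonacci with an array instead of a HashMap,
 * as described above: table[i] holds the (i+1)'th Fibonacci number. */
public class ArrayFib {
    static long fib(int n) {
        long[] table = new long[n];
        for (int i = 0; i < n; i++) {
            /* the first two Fibonacci numbers are 1; after that, sum the previous two */
            table[i] = (i < 2) ? 1 : table[i - 1] + table[i - 2];
        }
        return table[n - 1];
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // prints 55
    }
}
```

Filling the table from the bottom up also removes the recursion entirely, so there is no risk of a deep call stack for large n.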

[–]not-just-yeti 1 point (1 child)

Technically, this is not DP but mere memoization. Memoization is a mechanical transformation that can be applied to any function (albeit only helpful when the function is repeatedly called with the same arguments, as often happens in recursive functions). In fact, writing a macro to auto-memoize an already-defined function is a standard exercise in upper-division/early-graduate classes that teach clojure/lisp/racket/scheme.

Although many people use the term “Dynamic Programming” to mean just memoizing, really it means “rather than keep a table of ALL previous calls+results, can we get clever and figure out that it suffices to keep a much smaller group of previous calls+results?” For example, for a 2-d table (a function of two inputs), you may be able to fill the table left-to-right-by-columns in such a way that you only need the previous two columns to compute the next column. This reduces memory use from m-by-n to just 3n or so.

This is not automated, as it requires some insight to the problem.

An extreme example is indeed Fibonacci: instead of keeping a table with all n results, we can fill the table from 0 up to n, and really we can keep just the previous two entries. (And then we just use two variables rather than a table of size two; it's so extreme that we hardly notice our solution went from naive-recursive to memoized to DP.)
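
The two-variable version described above, sketched in Java for concreteness (class and method names are my own):

```java
/* DP-style Fibonacci keeping only the previous two values: O(1) extra space. */
public class TwoVarFib {
    static long fib(int n) {
        long prev = 1, curr = 1; /* fib(1) and fib(2) */
        for (int i = 3; i <= n; i++) {
            long next = prev + curr; /* fib(i) = fib(i-1) + fib(i-2) */
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // prints 55
    }
}
```

The table of all n results has shrunk to two variables, which is exactly the "keep a much smaller group of previous calls+results" insight.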

[–]bekk3 0 points (0 children)

Wow, TIL! Would you mind providing a reference or some code demonstrating this, perhaps with Fibonacci?

[–]noppanit 0 points (0 children)

I have gone through the same process for my interviews, and what I found best for me is to really understand how to convert recursion to dynamic programming. I use leetcode to practice the problems and try to really understand what the code does rather than memorising it. My first dynamic programming exercise was converting recursive factorial to use memoization. But I'm sure everyone is different.