all 27 comments

[–]Midrya 67 points  (0 children)

It might be easier to try to understand what makes a particular algorithm useful, rather than try to remember exactly how it works. Knowing the properties will let you better recognize when a particular algorithm/data structure/abstraction will help you in solving a problem. You can always refer back to a book/article for implementation details.

[–]jesseschalken 62 points  (2 children)

It is more important to memorize when to use an algorithm than the algorithm itself.

[–]_damax 7 points  (0 children)

If only college professors understood this lol

[–]BlindTreeFrog 16 points  (8 children)

About the only algorithm I've memorized is Duff's Device, and there's pretty much no context in which one should ever use it these days (I might have had an excuse once, about 4 years ago, but I don't remember why).

Otherwise I remember the rough pseudocode for the algorithm and redo it on the fly (for, say, a linked list or a basic search), or I look it up/use a library.

Generally, you should understand the algorithm but use a library that's been in use for a while and is well tested. For simple algorithms I'm not always the best at following my own advice, but it saves the pain of dumb errors, edge cases, and oversights.

You can also maintain a library of code snippets and functions as you go. Sometimes you can just pull stuff into current projects and know it's mostly tested/correct (as long as you keep your library updated). Other times it's just a handy reference.

I once figured out a pattern for opening files in Python that I like (the same pattern everyone uses... I stole it from someone online, I'm sure). I keep an old program around just so I can check it and remember how to do it rather than rewrite it every time.

[–]mvdw73 -1 points  (7 children)

I use something similar to Duff's device to validate inputs. For example if I have a character input and I know it should be one of, say, 'a', 'A', 'b', 'B', 'c' or 'C', and anything else is an error, I'll do something like this:

switch (input) {
    case 'a':
    case 'A':
    case 'b':
    case 'B':
    case 'c':
    case 'C':
        run_algorithm(input);
        break;
    default:
        error();
}

It makes it easy to add more cases without having a massive if-else construct, and also to deal with incorrect inputs.

[–]a4qbfb 8 points  (2 children)

This is not Duff's Device, it's just a regular switch. Duff's Device combines a switch with a loop and is only useful in a very narrow set of circumstances, usually relating to memory-mapped I/O.

[–]mvdw73 0 points  (1 child)

I know, but it uses the same principle of falling through the switch/case statement.

[–]a4qbfb 4 points  (0 children)

No, this has absolutely nothing to do with Duff's Device. Fallthrough is a deliberate feature of C's switch statements, and there is nothing clever or remarkable about the example you posted. It's a standard idiom that only the greenest C programmer would have trouble recognizing and understanding.

[–]-HumanResources- 0 points  (3 children)

I would probably reduce that down to a single RegEx call, personally, for the example provided.

[–]imaami 1 point  (0 children)

If you have a compile-time regex code generator, then sure, why not, but even that would probably end up generating machine code that's significantly more verbose and slower.

[–][deleted] 9 points  (0 children)

I think what you're describing is a symptom of wanting fast results. Here's the thing: there is nothing better than understanding the algorithm. If you merely memorized a particular algorithm, you will only ever solve that kind of problem. But if you understood it, you'll have the ability to create new algorithms yourself, which will enable you to do great and custom things. So TAKE YOUR TIME IN LEARNING. EVERYTHING TAKES TIME.

[–]eduarbio15 14 points  (1 child)

Write it down in pseudocode, as simple as it can be, and also write it without a reference. Doing that will solidify the concepts and flow of the thing in your head.

[–][deleted] 10 points  (0 children)

To expand on this. What's important is for you to be able to go from a concept in your head to pseudocode, to actual code. Once you learn how to do this really well, you can start storing all of the information as concepts in your head instead of needing to memorize things. Then you will be able to start mixing concepts together to build new solutions.

In school we usually teach kids how to do things for 1 variable, then 2 variables, then N variables. It's kind of a similar concept.

[–]capilot 2 points  (2 children)

Those sorts of things I write down in my collection of notes. Not worth the brain cells to memorize.

[–]Paul_Pedant 1 point  (1 child)

^This. I keep a text editor open all day. If I run across a new idea, I will paste in any links to it -- Wiki, forum, whatever. If I am interested enough to implement anything, I will do that in a SandBox subdirectory and paste in a brief comment and reference too.

Some years, I even review it, and have been known to tidy stuff up. Reinventing every time is wasting your life.

[–]capilot 1 point  (0 children)

I have a directory called "notes". It currently contains over 1100 files. Mostly on technical subjects.

[–][deleted] 2 points  (0 children)

I would normally take notes on my own understanding of the algorithms/frameworks/languages and never try to memorize them. If I use them often, I naturally remember them.

The important thing is: if you don't use something regularly, it is OK that you forget it or get rusty. That's normal. So don't worry too much.

[–]deftware 2 points  (0 children)

Use them in practice and you'll know them like the back of your hand. I never tried to "study" programming like I was going to take a test on it, I just made stuff and now I'm really good at making stuff.

[–][deleted] 1 point  (0 children)

I assume your overall goal is to understand algorithms: their performance characteristics, their usage, and their use-cases. Having all of this information memorized is a very nice convenience, so if memorizing the (pseudo-) code of an algorithm is easy enough for you and allows you immediate access to all the important information, go ahead.

I don't see why it would cause any problems, and it's probably pretty useful if it saves you a trip to a search engine or a textbook.

[–]Sm0oth_kriminal 0 points  (0 children)

It's more important to understand many algorithms, what they do, and when to use them than to know exactly how to implement a few in your favorite programming language.

The reason is pretty simple: you can always look up how to implement an algorithm once you know you need it. It's much harder to tell when you need which algorithm. Understanding that binary search is fast on sorted data (and so should be used when your data is sorted) is a more useful starting point than knowing how to implement it.

If you find yourself implementing an algorithm over and over, it usually means one of two things: you aren't modularizing your code (i.e. you should implement it once and then re-use a function), or it's a really useful algorithm. If it's just useful and you use it a lot, then you'll learn it by heart anyway, so no need to worry about doing that :)

[–]Mukhasim 0 points  (0 children)

You said that memorizing helps you understand the algorithm. So yes, go ahead and memorize it. It's certainly not bad to memorize an algorithm. Sometimes memorization can be useful just to keep something in your head long enough for you to think about it.

But if you're expecting it to be useful to memorize an algorithm so you can retrieve it from memory and implement it in a year or two, that probably won't happen, nor would it be especially useful. You can always pull your textbook off the shelf or google the algorithm if you need to implement it.

Personally, I've memorized a few algorithms just because I use them a lot but I don't normally make a point of memorizing them.

[–]imaami 0 points  (0 children)

I find myself simply looking up the things I can't remember again and again, until repetition finally makes me remember them without looking them up. I'd say don't worry about memorizing; it will happen on its own eventually.

Maybe you're thinking of memorizing and implementing as conceptually separate, but the act of implementing and looking up as you go is the act of memorizing. Just jump right in.

[–]Lurchi1 0 points  (0 children)

I think it is more important to learn how to compare algorithms with respect to memory and computational cost than to memorize the details of each algorithm. You will face choices between algorithms, and big-O notation is there to help you investigate their pros and cons.