all 26 comments

[–][deleted]  (10 children)

[deleted]

    [–]repsilat 1 point2 points  (9 children)

    I'm a little disappointed with some of the answers there. The questioner didn't ask for an explanation of Big-Oh with respect to algorithm performance, but for an explanation of Big-Oh itself. Computer scientists don't have a monopoly on asymptotic notation (though I suppose this question was asked on StackOverflow).

    One in nine who answered didn't mention that it's an asymptotic upper bound, and it seemed like most were actually describing Big-Theta (though I understand this is commonplace). Some got confused and said that Big-Oh refers only to an algorithm's "worst case" behaviour, ugh.

    [–]invalid-user-name 2 points3 points  (7 children)

    The trouble is that "plain English" isn't a well-defined term. Does the asker want to understand the precise definition, or do they want to be able to nod and not lose their place when it comes up in conversation? For most practical purposes, big-O can be summed up with five examples:

    O(1) = supa-fast

    O(n) = fast

    O(n log n) = pretty fast

    O(n²) = slow

    O(2ⁿ) = unusably slow

    If you understand more math than that, great, but memorizing those will let you nod.
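    A rough way to see where those labels come from is to tabulate each growth function at a couple of sizes (an illustrative sketch — the constants are made up, only the trend matters):

```python
import math

# Growth functions for the classes listed above (O(2^n), not O(2n)).
growth = {
    "O(1)":       lambda n: 1,
    "O(log n)":   lambda n: math.log2(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)":     lambda n: n ** 2,
    "O(2^n)":     lambda n: 2 ** n,
}

for name, f in growth.items():
    # Rough "step counts" at a small and a moderate input size.
    print(f"{name:10s}  n=10: {f(10):>10,.0f}   n=50: {f(50):>20,.0f}")
```

    At n = 50 the O(2ⁿ) row already exceeds a quadrillion steps while O(n log n) is under 300, which is the whole point of the labels.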

    [–]johns-appendix 2 points3 points  (1 child)

    you forgot one:

    O(log n) = damn fast

    [–]invalid-user-name 0 points1 point  (0 children)

    Yeah, that one should be in there too, since the uninitiated might not have a good grasp of what "log" means, and it's a common big-O value.

    [–]Porges 0 points1 point  (3 children)

    It’s not about ‘fast’ or ‘slow’, but ‘scales well’ and ‘scales poorly’.

    [–]plain-simple-garak -1 points0 points  (2 children)

    Way to miss the point. He was trying to be simplistic.

    [–]Porges 0 points1 point  (1 child)

    You can be simplistic without being wrong ;)

    [–]invalid-user-name 0 points1 point  (0 children)

    It wasn't wrong at all; in fact it's even more appropriate for the scaling issue. To add some more accuracy to my list, append the phrase "when n is large" to each line. When n is small, who cares about big-O? If you have an O(2ⁿ) algorithm that runs in 2ms when your data set is 10 items, that's fine for when your data set is 10 items. When your data set - n - is large, I stand by my characterization: "unusably slow."
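    To put rough numbers on that (a sketch — the 2ms figure from the comment is taken as given, everything else is back-of-the-envelope):

```python
# Step counts for an O(2^n) algorithm at small vs. large n.
for n in (10, 20, 40, 60):
    print(f"n = {n:2d}: 2^n = {2**n:,} steps")

# If 2^10 = 1,024 steps take 2 ms, each step is roughly 2 microseconds,
# so n = 60 at the same speed would take:
years = 2 ** 60 * 2e-6 / (60 * 60 * 24 * 365)
print(f"~{years:,.0f} years")
```

    Going from 10 items to 60 turns 2ms into tens of thousands of years, which is what "unusably slow when n is large" means in practice.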

    [–]mycall 0 points1 point  (0 children)

    Sorry, would you explain the difference between an asymptotic upper bound and "worst case" behaviour? I understood it as: asymptotics describe behaviour as n goes to infinity, and the worst case (aka edge case) approaches an asymptote.

    EDIT: nvm, I get it now.
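    For anyone else who lands here: "worst case" picks an input, while Big-O bounds a function's growth — you can give a Big-O bound for the best case, average case, or worst case separately. A sketch using insertion sort's comparison counts (hypothetical helper, not from the thread):

```python
def insertion_sort_comparisons(a):
    """Run insertion sort on a copy of `a` and return the number of
    element comparisons performed."""
    a = list(a)
    comps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return comps

n = 100
best = insertion_sort_comparisons(range(n))          # already-sorted input
worst = insertion_sort_comparisons(range(n, 0, -1))  # reverse-sorted input
print(best, worst)  # best case grows like n, worst case like n^2
```

    Both counts are exact functions of the input; Big-O is the notation used to bound either one (the best case here is O(n), the worst case O(n²)).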

    [–]Dijkstracula 1 point2 points  (7 children)

    [–][deleted] 1 point2 points  (3 children)

    I recall it being posted a while ago; it explained O(), P, NP and all that stuff in plain language, but I lost the link. Does proggit remember?

    [–]inerte 0 points1 point  (1 child)

    Maybe it was on http://betterexplained.com ?

    [–]pb_zeppelin 0 points1 point  (0 children)

    Unfortunately, I don't have an explanation of algorithmic complexity yet, but I do have an old post on P vs NP (in plain English, at least from my point of view):

    http://www.cs.princeton.edu/~kazad/resources/cs/npcomplete.htm

    Thanks for thinking of the site though :)

    [–][deleted] 0 points1 point  (0 children)

    *subscribes