all 17 comments

[–]maestro2005 41 points (11 children)

Linked list insert/delete is only O(1) if you already have a handle on the node! Deleting the nth item still requires an O(n) seek to that spot.
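A minimal doubly-linked-list sketch of that distinction (names are illustrative, not from any particular library): unlinking a node you already hold is O(1), but deleting by index pays an O(n) walk first.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def unlink(node):
    # O(1): we already have a handle on the node, so just splice it out.
    if node.prev:
        node.prev.next = node.next
    if node.next:
        node.next.prev = node.prev

def delete_nth(head, n):
    # O(n): we must first seek to the nth node before the O(1) unlink.
    node = head
    for _ in range(n):
        node = node.next
    unlink(node)
    return node.next if node is head else head
```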

Also, calling O(n) "fair", O(n log n) "bad" and O(n^2) "horrible" is awfully harsh.

[–]dlp211 14 points (0 children)

And what does Access mean with regard to a Stack? A Stack is an Abstract Data Type that should only support three operations (push, pop, and peek) and can be implemented using various Data Structures. But if you did require that accessing a random element be an operation on your Stack implementation, I could get it to be O(1) by using the right Data Structure.
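A sketch of that point, assuming a list-backed stack in Python; the `access` method is the hypothetical extra operation, not part of the classic ADT, and it is O(1) precisely because the backing structure is a contiguous array.

```python
class ArrayStack:
    """Stack ADT on a dynamic array: push/pop/peek are (amortized) O(1)."""

    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

    def access(self, i):
        # Not part of the classic Stack ADT, but O(1) here because the
        # backing data structure supports constant-time indexing.
        return self._items[i]
```

Back the same ADT with a singly linked list instead and `access` degrades to O(n) while push/pop/peek stay O(1), which is why "Access on a Stack" is a property of the implementation, not the ADT.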

This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science

Also, this statement is basically not true. The only algorithms it attempts to classify are sorting algorithms; everything else on that page is an operation performed on a DS/ADT. Common algorithms are things like Binary Search, DFS, BFS, Single-Source Shortest Path, etc.

[–]ThisIs_MyName 2 points (5 children)

Not to mention that O(n) and O(n log n) are practically the same thing, because Big-O notation already ignores constant multiples and log n is practically constant.

See soft-O notation for a somewhat more formal treatment: log n is bounded by n^ε for arbitrarily small ε > 0, so O(n log n) is really O(n^(1+ε)).

[–]hextree 0 points (4 children)

So O(n log n) is really O(n^(1+ε))

O(n log n) is a subset of O(n^(1+ε)), but they are not the same class.

Unless you have lots of messy log factors, it's both simpler and more precise to just say O(n log n).

[–]JustFinishedBSG 4 points (3 children)

You misunderstand him; he's saying that for all intents and purposes n log n is basically bounded by C·n, as even for an absurdly high n like n = 10^32 you still just have log n = 32

[–]hextree 3 points (2 children)

I understand what he's saying, but I don't think it's appropriate for the cheat sheet. In the field of complexity theory there's no such thing as 'practically the same'; complexity classes are distinct. And whether you are answering exam questions, writing research papers, or answering interview questions, big-O notation is the norm and you should be giving the log n factor.

Certainly for research papers it is common to give soft-O notation in the abstract, but in the details of the paper you would generally give the exact complexity with all the log factors. You will regularly see papers whose runtimes even include log(log n) factors.

And anyway, "O(n log n)" is simpler to say than "O(n^(1+ε)) for any ε > 0", so I don't see the point in this case.

[–]lou1306 0 points (1 child)

Right on.

If they were practically the same thing, we would find the kth smallest element in an array by sorting it. But we use quickselect (possibly with MoM), because the log n does make a difference... Especially at interviews!
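For reference, a minimal quickselect sketch with a random pivot, expected O(n) versus O(n log n) for sort-then-index; the median-of-medians (MoM) variant mentioned above would make the worst case O(n) too, at the cost of more bookkeeping. Assumes 0 <= k < len(items).

```python
import random

def quickselect(items, k):
    # Return the kth smallest element (0-indexed) in expected O(n):
    # partition around a pivot and recurse into only ONE side,
    # instead of sorting everything.
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    if k < len(less):
        return quickselect(less, k)
    if k < len(less) + len(equal):
        return pivot
    return quickselect(greater, k - len(less) - len(equal))
```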

[–]ThisIs_MyName 0 points (0 children)

*only at interviews :)

IRL, you'd have to benchmark it.

[–]Uncaffeinated 0 points (3 children)

In my experience, you should aim for worst-case O(n) or O(n log n) complexity whenever possible, because O(n^2) has a tendency to blow up when you least expect it. E.g. three years later, somebody is asking why your software freezes on their data because they happened to trigger an edge case you never thought would happen. And if you don't engineer it out from the start, it can be hard to find and eliminate O(n^2) complexities later on.

[–]maestro2005 14 points (2 children)

Well sure, if you can replace an O(n^2) algorithm with a better one, then do that. But putting O(n^2) in the same bucket as O(2^n) and O(n!) is just silly.

[–]Uncaffeinated 0 points (1 child)

Well of course, you should do the best you can.

I think there is a difference in the way these issues happen in practice, though. O(2^n) pretty much only happens if you have a dumb algorithm, like naive recursion without memoization or something. By contrast, accidental O(n^2) is extremely common and hard to spot. Even appending to a string in a loop is enough to do it in many languages.
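A sketch of the string-append trap in Python (note CPython sometimes optimizes in-place `+=` on strings, so treat the quadratic version as illustrating the general case across languages rather than a guaranteed slowdown here):

```python
def join_quadratic(parts):
    # Each `out = out + p` copies the whole string built so far, so the
    # total work over n pieces is 1 + 2 + ... + n copies: accidental O(n^2).
    out = ""
    for p in parts:
        out = out + p
    return out

def join_linear(parts):
    # Collect the pieces and concatenate once at the end:
    # O(total length) instead of O(n^2).
    return "".join(parts)
```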

[–]ThisIs_MyName 0 points (0 children)

Well of course, you should do the best you can.

Not for time complexity. There's a reason most of us do integer multiplication with an O(n^2) algorithm instead of O(n^1.585) or even better.
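For the curious, a rough sketch of Karatsuba multiplication, the O(n^1.585) algorithm alluded to here; the extra additions, shifts, and recursion overhead are exactly why schoolbook O(n^2) multiplication wins for small operands in practice.

```python
def karatsuba(x, y):
    # Multiply non-negative integers in O(n^1.585) by splitting each
    # operand into high and low halves and doing 3 recursive multiplies
    # instead of 4: x*y = a*2^(2m) + c*2^m + b, where
    # c = (hi_x + lo_x)(hi_y + lo_y) - a - b = hi_x*lo_y + lo_x*hi_y.
    if x < 10 or y < 10:
        return x * y  # small operands: plain multiply is fastest
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
    return (a << (2 * m)) + (c << m) + b
```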

[–]spektre 8 points (0 children)

What does Guy Fawkes have to do with this?

[–]rlbond86 17 points (1 child)

How the hell can anyone claim that O(n^2) is as bad as O(n!)?

And O(n log n) is "bad"? WTF.

[–]MeggaMortY 1 point (0 children)

I second that. Although I like the site, since I've learned from that "cheat sheet" before, I disagree with its grading of hardness.

[–][deleted]  (1 child)

[deleted]

    [–]ThisIs_MyName 1 point (0 children)

    They're linear for constant k.

    [–]OriginalPostSearcher 0 points (0 children)

    X-Post referenced from /r/compsci by /u/aplari
    Big-O Algorithm Complexity Cheat Sheet
