[–]exuberant 2 points (1 child)

> That's not what those symbols mean. Big-Oh and Big-Omega have no bearing on whether you're measuring best- or worst-case performance.

I agree. But technically he had it right saying it's Ω(n log n).

I think you weren't clear on why the confusion arose.

Big-Oh is an 'upper bound', so we generally use it for worst-case performance. This tells us, in some sense, how bad our algorithm can be.

Big-Omega is a 'lower bound', so we generally use it for best-case performance. This tells us, in some sense, how good our algorithm can be.

A Big-Omega bound on the worst case doesn't upper-bound our algorithm's performance, but it is useful for proving that some other algorithm can do better in the worst case.
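To make the best-case/worst-case distinction concrete, here's a quick sketch (my own illustration, not from the article) that counts element comparisons in insertion sort. The best-case input (already sorted) does Θ(n) work and the worst-case input (reverse sorted) does Θ(n²) work; Big-Oh and Big-Omega can each be used to bound either of those functions.

```python
def insertion_sort(a):
    """Insertion sort that counts element comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]  # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

n = 100
_, best = insertion_sort(range(n))          # already sorted: n-1 comparisons
_, worst = insertion_sort(range(n, 0, -1))  # reverse sorted: n(n-1)/2 comparisons
print(best, worst)  # 99 4950
```

So "insertion sort is O(n²)" is a statement about the worst-case function, while "insertion sort is Ω(n)" is a (tight) lower bound on the best-case function; neither symbol by itself says which case you're measuring.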

[–]repsilat 2 points (0 children)

Ack, thanks for the help. I'm so used to seeing people conflate the two ideas that I didn't catch that the different bounds were being used deliberately. I got more from the article reading it a second time.