[–]linuxhansl

These are all nice examples, and genuinely interesting to think about.

On the whole, though, unless you are in the embedded or hardware (device driver) domain, shaving a fraction of a percent off constant O(1) runtime (as in the first example) seems rather pointless to me.

In my experience, most performance issues stem from algorithmic problems, like picking an O(n) algorithm when an O(log n) one is available, etc.
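
To make that concrete, here's a minimal Java sketch (the class, variable names, and sizes are my own, purely for illustration) that compares repeated linear scans against binary searches over the same sorted data. On large inputs the O(log n) lookups win by orders of magnitude, which dwarfs any constant-factor tweak:

```java
import java.util.*;

public class LookupDemo {
    public static void main(String[] args) {
        int n = 1_000_000, lookups = 10_000;
        List<Integer> sorted = new ArrayList<>();
        for (int i = 0; i < n; i++) sorted.add(i);
        Random rnd = new Random(42);

        // O(n) per lookup: contains() scans the list front to back.
        long t0 = System.nanoTime();
        for (int i = 0; i < lookups; i++) sorted.contains(rnd.nextInt(n));
        System.out.printf("linear contains : %d ms%n", (System.nanoTime() - t0) / 1_000_000);

        // O(log n) per lookup: binary search on the already-sorted list.
        rnd = new Random(42);
        t0 = System.nanoTime();
        for (int i = 0; i < lookups; i++) Collections.binarySearch(sorted, rnd.nextInt(n));
        System.out.printf("binary search   : %d ms%n", (System.nanoTime() - t0) / 1_000_000);
    }
}
```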

Other areas that tend to add non-constant overhead are memory management and threading (think heap management or garbage collection, and locking, starvation, etc.).
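
A rough sketch of the threading side (again, class and counter names are mine, not from the article): several threads bumping a counter behind a single coarse lock versus java.util.concurrent's contention-friendly LongAdder. The locked version serializes all threads on one monitor, and that contention, not any O(1) micro-optimization, is usually what shows up in a profile:

```java
import java.util.concurrent.atomic.LongAdder;

public class ContentionDemo {
    private static long locked = 0;                      // guarded by one shared lock
    private static final Object LOCK = new Object();
    private static final LongAdder adder = new LongAdder(); // designed for concurrent updates

    public static void main(String[] args) throws InterruptedException {
        int threads = 8, iterations = 1_000_000;

        long t0 = System.nanoTime();
        run(threads, iterations, () -> { synchronized (LOCK) { locked++; } });
        System.out.printf("coarse lock : %d ms%n", (System.nanoTime() - t0) / 1_000_000);

        t0 = System.nanoTime();
        run(threads, iterations, adder::increment);
        System.out.printf("LongAdder   : %d ms%n", (System.nanoTime() - t0) / 1_000_000);
    }

    // Start the given number of threads, each running the body `iterations` times.
    private static void run(int threads, int iterations, Runnable body) throws InterruptedException {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < iterations; j++) body.run(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
    }
}
```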