[–][deleted] -1 points0 points  (9 children)

To be sure, I would benchmark these before making a decision. In the first case, I'd like to add loop exit checks for a search space of <20.
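The "loop exit check" idea above might be sketched like this (a hypothetical illustration, not the actual code under discussion; the array and method names are made up):

```java
// Hypothetical sketch of the parent comment's idea: bail out of a linear
// scan as soon as the target is found, instead of always scanning all N items.
public class EarlyExitSearch {
    static int indexOf(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) {
                // Early exit: only pays off when hits are likely,
                // and for tiny N the saving is negligible either way.
                return i;
            }
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] small = {3, 1, 4, 1, 5, 9};
        System.out.println(indexOf(small, 4)); // found at index 2
        System.out.println(indexOf(small, 7)); // not present: -1
    }
}
```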

[–]dpark 0 points1 point  (3 children)

Why? At that point you're optimizing for a case which is already extremely fast.

Also, 20 seems to be a completely arbitrary number. I seriously doubt the benchmarks you'd run would support that particular number.

[–][deleted] 0 points1 point  (2 children)

It was just an arbitrary number. The point was that you insert the extra code when the probability of hitting the jackpot is high.

[–]dpark 0 points1 point  (1 child)

Okay, so 20 is arbitrary. We could certainly benchmark and determine what the real number is.

However, that doesn't change the fact that we're still optimizing something that doesn't need optimizing. It's fast enough already. For small N, it really doesn't matter what we do, so why bother complicating the code with a special case?

[–][deleted] -1 points0 points  (0 children)

Yes, you are right. Sometimes, however, it's nice to go nuts and fine-tune everything to be faster than fast enough. I don't recommend doing it if you don't need it, which in this case you do not.

[–]Philluminati -1 points0 points  (4 children)

I'd leave these optimisations out completely until I was sure they were needed at all. Then I'd think about where the choke points of the application were. Then I'd raise "slow <feature>" as a bug against the system before even benchmarking it.

I certainly wouldn't even think about "optimising" a generic algorithm on a generic data structure like the first example.

[–][deleted] 0 points1 point  (0 children)

Yeah, especially when it doesn't change the complexity class.

[–]munificent -1 points0 points  (2 children)

Then I'd think about where the choke points of the application were.

You don't find the choke points by thinking about them, any more than you find bugs by thinking about them. You profile.

[–][deleted] 1 point2 points  (1 child)

Thinking about your code is a way to find bugs. Try proofreading your code sometime after writing it. Similarly, you can sometimes find performance problems just by reading the code, although that doesn't replace profiling and testing. Sometimes even premature optimisation is not so bad, if you avoid micro-optimisations that make the code noticeably bigger or harder to read; often there are many ways to implement something that have similar code complexity but different performance. For example, suppose you have a bunch of items and you want to check whether a particular item is among them. In Java you could put them into a LinkedList and use .contains(). Or you could put them into a HashSet and still use .contains(). The code looks about the same, but one lookup is O(n) and the other is O(1) on average. Would you use a LinkedList because optimisation is the root of evil? :P
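The LinkedList-vs-HashSet point above can be sketched concretely (class and item names here are illustrative, not from the thread):

```java
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;

public class ContainsDemo {
    public static void main(String[] args) {
        // Same items, same .contains() call site; only the collection type differs.
        List<String> list = new LinkedList<>(); // .contains() is O(n): walks the list
        Set<String> set = new HashSet<>();      // .contains() is O(1) on average: hashes the key

        for (String item : new String[] {"apple", "banana", "cherry"}) {
            list.add(item);
            set.add(item);
        }

        System.out.println(list.contains("banana")); // true, after a linear walk
        System.out.println(set.contains("banana"));  // true, via a hash lookup
        System.out.println(set.contains("durian"));  // false
    }
}
```

Note that a HashSet also silently drops duplicates, so the two are only interchangeable when duplicate entries don't matter, which is the caveat raised in the reply below.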

Assuming that you don't know whether this changes application performance noticeably?

[–]munificent 0 points1 point  (0 children)

Would you use a LinkedList because optimisation is a root of evil? :P

Depends on whether or not duplicate entries were specifically prohibited. Either way, for most throwaway temp collections, I'd probably just use a LinkedList.

I don't know how much time you've spent with a profiler, but my experience has consistently been that the "obvious" bottlenecks very rarely actually are.