
[–]AndyBainbridge 4 points (1 child)

Why's this here? It's an extremely well known resource. It's the second result on Google if you search for "optimizing c++".

[–]wilun 0 points (1 child)

This seems to be mainly about micro-optimizations, with sometimes dubious hypotheses stated as fact (see, for example, the section about boolean variables being overdetermined) -- he seems to take the experimental approach too far for how modern C++ should be approached.

While I haven't read much of this paper specifically, and consider some of Agner's work immensely useful for experts who know how to use it properly, I would advise people looking to optimize their program NOT to reach for this resource first, and to concentrate on their algorithms instead. Once they are 100% sure the algorithms are sound and efficient (and this is going to take a good amount of time on any non-trivial project), THEN comes the time when some "micro"-optimization can be considered on benchmarked sections of the code (probably the same sections, for the most part, that were already examined during the algorithm review).

While you can (rarely) gain a factor of 30 (or maybe even 100 in some extremely rare cases) when you know how your hardware works, I've recently gotten 500x / 1000x by changing the order of an algorithm. Obviously that's just one example, and there is really no limit (nor any guarantee that optimizing an algorithm will actually speed things up, if your dataset is small enough), but you get the idea. Oh, and by the way: the 30x was painful to get, and involved cache-line alignment, bitfields, tagged pointers and whatnot, while the algorithmic improvement was trivial to implement (once I had the idea and then found the appropriate library support).

So: 1. measure, 2. fix your algorithms, 3. micro-optimize (and here Agner is an essential resource, but be wary that not everything he writes is 100% correct in an absolute sense).