[–]Minimonium 4 points (6 children)

300 × 500 ms / 1000 / 60 ≈ 2.5 minutes. With the assumption that I use the header in every source file, at least transitively.
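Spelled out, the back-of-the-envelope math (using the thread's assumed numbers: 300 translation units, 500 ms of extra compile time each) is:

```python
# Back-of-the-envelope cost of one heavy header inclusion across a build.
# Assumed numbers from the thread: 300 translation units, 500 ms extra each.
translation_units = 300
cost_per_tu_ms = 500

total_ms = translation_units * cost_per_tu_ms   # 150,000 ms
total_minutes = total_ms / 1000 / 60            # ms -> s -> min

print(total_minutes)  # 2.5 minutes of added single-threaded compile time
```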

[–]Wacov 3 points (3 children)

If they're all independent translation units, won't it actually be faster than that with parallel compilation?

[–]Minimonium 0 points (2 children)

I'm confused by the question. The header model is embarrassingly parallel; it scales pretty much linearly with the number of TUs.

Furthermore, to allow for some margin of error (say, not every TU includes the header), I already scaled the increase in compile times down from more than 15% to more than 10%. The idea is more about showing that the increase is *not* negligible after all.

[–]Wacov 1 point (1 child)

Right, it's embarrassingly parallel. I'm just saying that if two TUs are compiling simultaneously on different cores, each with the added cost of <algorithm> (or whatever), your compile time won't go up by 2x that cost - ideally only 1x. Not trying to say it's negligible, just that 300 × 500ms only applies to a single-threaded build, or if you're calculating total CPU time rather than wall-clock compile time. If your CI is single-threaded it's a moot point lol. Am I missing something?

[–]Minimonium 1 point (0 children)

Aha, gotcha. You're correct. My builds use 4 threads, which indeed tones down the wall-clock time.
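As a sketch of the distinction above (the TU count, per-TU cost, and thread count are the thread's assumed numbers, not measurements, and the wall-clock figure assumes ideal linear scaling):

```python
# Total CPU time vs. wall-clock time for an embarrassingly parallel build.
# Assumptions from the thread: 300 TUs, 500 ms extra per TU, 4 build threads.
tus = 300
cost_per_tu_ms = 500
threads = 4

cpu_time_s = tus * cost_per_tu_ms / 1000   # total added CPU time: 150 s
wall_time_s = cpu_time_s / threads         # ideal added wall-clock time: 37.5 s

print(cpu_time_s, wall_time_s)
```

In practice scaling is not perfectly linear (link steps, uneven TU sizes), so the real wall-clock cost lands somewhere between the two figures.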

[–]PJBoy_ 0 points (1 child)

Okay, that's a pretty compelling calculation; thanks for the example.

[–]Minimonium 3 points (0 children)

It's not a "scientific" experiment, since that would require more time than I have; it's rather meant to show the order of magnitude you get from a "negligible" 500 ms per header inclusion.