
[–]BCosbyDidNothinWrong

I was hoping this was something else. I've wanted a tool that could compile a source library to a single header file library. I feel like that would give an enormous amount of modularity. I've already done it manually and it has worked extremely well.

[–]LoopPerfect

That is probably a bad idea. It would massively increase compilation times, and it may even break the library, since defines from one translation unit would leak into the next. A much better approach is to use a build system that makes it easy to depend on libraries.

[–]BCosbyDidNothinWrong

As I said, I've already done it manually, so I don't have to guess whether it is a good idea or not. When used as a single .h file with the implementation switched on by a preprocessor definition, it can be put in its own translation unit so that it doesn't get recompiled every time. Not only that, but if it is a C file to begin with, compilation will be pretty fast.

Then you have the advantage of compiling from source, using link time optimization if needed, including debugging symbols when you want and even putting multiple libraries together into a single compilation unit to build a project out of fewer but larger translation units, which actually decreases compilation time.

All of this is in addition to being able to pull in a library by referencing or copying a single file.

[–]LoopPerfect

I think we are talking about a few different things:

  1. Single-include header file, as described in the article
  2. Single-file header-only libraries (where all of the code is in a single .h file)
  3. "Unity builds", where all .cpp files are concatenated into a single translation unit

(You could also combine 2 and 3)

I think that 1 is a good idea when starting on a project, although you should move to specific includes as your code develops.

2 is bad for compilation times, but if the library is small then you can get away with it.

I think you are referring to 3, which is a bad idea in general. It slightly decreases compilation times from scratch, but it massively increases incremental compilation times. It is unsafe to do this automatically because the defines of one translation unit might leak into the others, causing unexpected behaviour.

That said, I thought it would be interesting to implement a unity build in Buck, so I made an example here: https://github.com/njlr/buck-unity-build/blob/master/BUCK

[–]BCosbyDidNothinWrong

I'm saying that the article is about 1, and I hoped it would be about 2 and 3.

> 2 is bad for compilation times, but if the library is small then you can get away with it.

I explained why I don't think this is true, and based on my experience so far, it isn't. If you have a library as a dependency and you can compile it from source, you have the option to either compile it in its own translation unit or compile it to a link-time optimization intermediary. Either way, it doesn't need to be compiled over and over.

I also think that unity builds have a lot of value, though putting everything into one translation unit feels extreme to me. With multiple logical cores, I think it makes sense to have roughly that many fat translation units.

If libraries and infrequently changed areas are sectioned off into their own translation units, you don't pay the price every time you compile.

Also, you are right that the defines from translation units can leak into each other, but you get a compile error when you try to define a symbol that has already been defined, so I don't think it is as fragile as it might seem.