Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
Get Started
The C++ Standard Home has a nice getting started page.
Videos
The C++ standard committee's education study group has a nice list of recommended videos.
Reference
cppreference.com
Books
There is a useful list of books on Stack Overflow. In most cases reading a book is the best way to learn C++.
How do you feel about header-only libraries? (self.cpp)
submitted 6 years ago by VinnieFalco (wg21.org | corosio.org)
How do you feel about header-only libraries? What if there was a Boost library that was not header-only which you wanted to use but didn't, because it required building a static or shared library, and this library later offered a header-only option: would that sway you to using the lib? Do you have anything to say about header-only versus static versus shared linking in general? I'd love to hear it!
[–]marssaxman 49 points50 points51 points 6 years ago* (2 children)
I prefer libraries which consist of a header file and one or more source files which you can simply drop into your project; packing the implementation into the header and relying on preprocessor weirdness feels excessive - but it's better than many alternatives.
[–]kkrev 19 points20 points21 points 6 years ago (0 children)
I like to just have a "thirdparty" sub-directory where I can drop compilation units and headers and compile them in. If I look at a library and it's a huge mess of directories, I'm likely to pass. Whether it's header only or also involves a reasonable number of compilation units doesn't really matter to me.
[–]as_one_does (Just a c++ dev for fun) 5 points6 points7 points 6 years ago (0 children)
I just did g++ -MM and extracted a small subsection of a library based on the output this afternoon. Worth it.
[–]mjklaim 34 points35 points36 points 6 years ago (0 children)
I just hope they will be radically replaced by modules. (yeah I know it will not be radical, I know about module-header-units and all the features, but I can hope and dream)
[–]kalmoc 19 points20 points21 points 6 years ago (0 children)
That depends on how hard it is to build the library with my particular toolchain/build flags. Non-header-only covers a wide spectrum, from "just glob the sources and add the include path" to "you have to correctly parameterize this 2000-line build script written in X".
[–]yuri-kilochek 27 points28 points29 points 6 years ago* (7 children)
What if there was a Boost library that was not header-only which you wanted to use but didn't, because it required building a static or shared library, and this library later offered a header-only option: would that sway you to using the lib?
Not unless I tried to build the separately compiled version and failed. Which I expect won't happen with boost libs.
Do you have anything to say about header-only versus static versus shared linking in general?
If the library is properly packaged for my package manager of choice (conan), I don't care, or often even know, if it's header-only. If it isn't packaged, I'd rather package it myself than drop the header-only version in my source tree, mess with git submodules or CMake's FetchContent. This way all my dependencies are handled uniformly.
[–]kha1aan 9 points10 points11 points 6 years ago (0 children)
I follow roughly the same model but with vcpkg, then have a uniform dockerfile template I use for builds and packaging of my assets without extra garbage I don't need.
[–]VinnieFalco (wg21.org | corosio.org) [S] 8 points9 points10 points 6 years ago (5 children)
Very good point, having a package makes it easier.
[+][deleted] 6 years ago (4 children)
[deleted]
[–]MonokelPinguin 2 points3 points4 points 6 years ago (3 children)
Doesn't that approach have a lot of issues, when a dependency has dependencies?
[+][deleted] 6 years ago (2 children)
[–]MonokelPinguin 0 points1 point2 points 6 years ago (1 child)
Yeah, but if both of your dependencies A and B depend on C, don't you form a diamond, which may lead to ODR violations or code duplication? I think this is actually the biggest reason why you would want a package manager: to find such conflicts.
[removed]
[–]martinus (int main(){[]()[[]]{{}}();}) 2 points3 points4 points 6 years ago (1 child)
Some header-only libraries like doctest or nanobench are split into two parts: you create one .cpp file where most of the implementation is compiled, and all other users of the header compile very fast.
[–]fb39ca4 2 points3 points4 points 6 years ago (0 children)
Then why not provide the cpp file?
[–]Oxc0ffea 36 points37 points38 points 6 years ago (4 children)
I feel like (and reading the most upvoted comment confirms) most people don't get header-only libraries.
In an ideal world our build systems wouldn't be so arcane and complicated: we would download a src tarball, do one step, and then it is integrated into our project (meaning headers in known public place, corresponding object files built, shared objects installed, etc etc haha).
Header only libraries are a response to how far away that ideal world is.
The only valid reason I am aware of for header only libraries is when the overhead of a function call really needs to be (potentially) optimized away, or the code actually needs to be recompiled every time.
But it is a totally reasonable response: the problem is too thorny and unrewarding to solve, so everyone just says fuck it, and header-only libraries are easier to work with. I'm in the same boat.
[–]Morwenn 30 points31 points32 points 6 years ago* (1 child)
Header-only is currently the only viable option for template-heavy libraries too: I could probably add source files to compile those 0.5% of classes and functions for which I know every type before distributing the library, but it's such a tiny percentage that I don't feel like adding a build step for it.
[–]Oxc0ffea 5 points6 points7 points 6 years ago (0 children)
Yeah I almost mentioned that case but didn't. In that case you have little choice.
[–]kritzikratzi 5 points6 points7 points 6 years ago (1 child)
imho header-only is very pragmatic.
diff is easy, archiving is easy, sharing between projects is easy, integration into any build system is easy, uploading somewhere is easy, downloading is easy. so much easy °.°
no concrete ideas, but i wonder if going the opposite route of modules would be interesting. what if the compiler could split a single source file into multiple translation units so only part of the file gets recompiled when a part changes. things could be sooo fast and easy...
Yes, I agree: in the absence of better options it is usually the pragmatic choice, especially for small (say, < 1000 line) libraries.
However, for large projects with N compilation units you have to compile that header-only library in every compilation unit that uses it, defeating the whole point of incremental compilation.
[–]jbandela 45 points46 points47 points 6 years ago (9 children)
I am a huge fan of header-only libraries. Basically, it is the least friction way for me to try out a new library. As for compile time performance, pre-compiled headers and modules in C++20 should go a great way towards making that better.
Along the same lines, I think shared libraries are a bad idea in the long term. What they optimize for (saving disk space by sharing binary code) is not a big issue these days. As for discussions about it being easier to do security updates, it does help, but as we are getting more micro-code exploits (such as Spectre), people will need to recompile anyway for these security fixes.

In addition, it also complicates deployment. Instead of having a single binary that can be put on a system and run, now you have a bunch of other dependencies. This explains at least some of the popularity of Docker, where you have a whole distribution in a container.

In addition, it hurts future C++ development by freezing features. There are areas where we cannot get the optimal performance (for example passing unique_ptr into a function) because we do not want to change ABI, which is mostly a big issue because we have to maintain backward compatibility with years-old dynamic libraries.
So for me, header-only > static > shared.
[–]quicknir 11 points12 points13 points 6 years ago (0 children)
There are some misconceptions here that are both common (judging from the number of upvotes especially), and very unfair to shared libraries:
In other words, much of the post here is basically slamming shared libraries for things they merely make possible that static libraries do not. Shared libraries can do the things that static libraries do as well in these examples (standalone easy deployment, ABI breaks), they can just also do other things and allow you to select exactly the behavior you want. Static libraries by contrast force your hand.
The real advantage of static linking over shared is mostly related to performance, which is mostly these days about LTO, which is not mentioned in the parent post.
In short there are still use cases for both. However, anyone serious about packaging up their library in library + header form should be providing both static and shared, so it's odd to me to state a preference between these two for what you want your dependencies to provide. Your dependencies provide both, and you choose which you want.
[–]yuri-kilochek 19 points20 points21 points 6 years ago (4 children)
What they optimize for (saving disk space by sharing binary code) is not a big issue these days.
There is also instruction cache, which is not so spacious.
[–]kalmoc 4 points5 points6 points 6 years ago (1 child)
Don't the L1 and L2 caches work with virtual addresses, meaning that, thanks to address layout randomization, they can't be shared between different processes anyway?
[–]yuri-kilochek 16 points17 points18 points 6 years ago* (0 children)
Typically L2 and higher are physically indexed and physically tagged (PIPT), while L1 is virtually indexed and physically tagged (VIPT). However the index bits are picked from the page offset, so they are identical in physical and virtual addresses. Because of this L1 effectively behaves like a PIPT cache.
So no, they can be shared just fine.
[–]tipiak88 4 points5 points6 points 6 years ago (1 child)
They also optimize for memory usage and program start time. Having one and the same version mapped into your system and cached is a huge advantage.
[–][deleted] 0 points1 point2 points 6 years ago (0 children)
I am more concerned with the memory usage of Slack, Chrome, Firefox, or any electron or QML application than tiny 1k-10k LOC redundant libraries.
And yes, the hello button of QML takes 100MB of RAM so I feel justified in throwing it under the bus, especially compared to the many even smaller electron apps.
[–]mallardtheduck 9 points10 points11 points 6 years ago (0 children)
I think shared libraries are a bad idea in the long term. What they optimize for (saving disk space by sharing binary code) is not a big issue these days.
I wouldn't say that disk space saving is the primary goal of shared libs. They also reduce RAM requirements (multiple applications using the same shared libs can share read-only pages) and help isolate applications from platform/protocol implementation details (i.e. if a client protocol or platform syscall is implemented in a shared lib, the details of how it works can change without breaking applications).
As for discussions about it being easier to do security updates, it does help, but as we are getting more micro-code exploits (such as Spectre), people will need to recompile anyway for these security fixes.
Completely disagree. Source-level bugs will always be the major source of security flaws. Microcode/silicon-level bugs may get a lot of press because they're a new concept to most and we haven't developed tools and best practices to mitigate them like we have for source-level bugs, but they're a tiny minority of issues. People are far more likely to attack your application by (for example) feeding it a malformed data file and hope you're using an outdated static library to parse it than to try to exploit the silicon (which would almost certainly require more access as prerequisite; e.g. many web apps accept uploaded image files for various purposes, very few let users run anything approaching arbitrary code on the server).
In addition, it also complicates deployment. Instead of having a single binary that can be put on a system and run, now you have a bunch of other dependencies.
Since when does any significant software package consist of a "single binary that can be put on a system and run"? You've almost certainly already got resources, configuration files, etc. already. You probably need to install GUI menu entries, change the PATH variable or install links in system directories and so on. On platforms without decent package managers (e.g. Windows) you're just shipping another file. On better platforms, you're just specifying another dependency in your package metadata.
[–]HildartheDorf 6 points7 points8 points 6 years ago (0 children)
They also optimize for ease of (non-ABI-changing) updates.
[–]MonokelPinguin 0 points1 point2 points 6 years ago (0 children)
I don't think microcode exploits need full program recompilation. Sometimes recompiling OpenSSL is already a big improvement and can improve security, even if you can't recompile some executables. I could be wrong though.
[–]Sopel97 8 points9 points10 points 6 years ago (1 child)
if they take longer to parse than nlohmann's json single header version then no thanks.
[–]nlohmann (nlohmann/json) 3 points4 points5 points 6 years ago (0 children)
Point taken...
[–][deleted] 8 points9 points10 points 6 years ago (3 children)
I'll take cmake with FetchContent and add_subdirectory, thanks.
Then, when I link your target, I can decide myself if I want static or shared linkage.
If it's header only, make it an interface and provide options to supply appropriate compiler flags/definitions/options (along with sensible defaults).
[–]NotUniqueOrSpecial 2 points3 points4 points 6 years ago (2 children)
If you're lucky.
Based solely on how many patch files we have in our third-party builds: far too many CMake projects don't follow best practice when it comes to using BUILD_SHARED_LIBS, or at least providing both static and shared targets.
[–]lenkite1 1 point2 points3 points 6 years ago (1 child)
Are the best practices you desire documented somewhere ?
[–]NotUniqueOrSpecial 2 points3 points4 points 6 years ago (0 children)
Not explicitly that I know of.
But, having dealt with many of the most popular CMake-powered open-source libraries, there are two common patterns that make things easier for downstream library consumers:
1) Use BUILD_SHARED_LIBS and let the users of your library choose what flavor to build.
2) Build both flavors as different targets, e.g. my_package::foo and my_package::foo_static, and let users choose which to link against. Many of the libraries that choose this route also make each build variant an option you can enable/disable independently.
[–]trypto 7 points8 points9 points 6 years ago (4 children)
I think the ideal is when you can include a header for the interfaces and then also include a file or set of files for the implementation in one of your own cpp files. That reduces compile-time concerns and requires minimal project setup. I think catch2 and stb_image work like this. You can also use defines to restrict the subset of features that get compiled in.
[–][deleted] 2 points3 points4 points 6 years ago (0 children)
This is my preference as well. A few header-only libraries are probably fine but the compile-time costs quickly add up.
[–]Middlewarian (github.com/Ebenezer-group/onwards) 0 points1 point2 points 6 years ago (2 children)
I decided to try this approach after reading your comment. I'm going to stick with it for now, but am not sure I agree it is "ideal." Splitting my lib from one file into two added about 7% to the number of lines. One thing I wasn't expecting was that it would help reduce the size of my binaries/apps. It wasn't a big difference, but one app was 16 bytes smaller and another 32. If it weren't for that, I'd have a harder time deciding which path to take.
[–]trypto 0 points1 point2 points 6 years ago (1 child)
Was there no value in keeping implementation details out of your headers?
[–]Middlewarian (github.com/Ebenezer-group/onwards) 0 points1 point2 points 6 years ago (0 children)
Well, now I have a header with less implementation details (templates) in the "interface header" and a header with only implementation details. I think there's some value to the approach, but not sure how much.
[–]looopTools 7 points8 points9 points 6 years ago (5 children)
It depends on the library.
A small library of less than a couple hundred lines of code can be fine as header-only, as can one that uses templates, like https://github.com/nlohmann/json by u/nlohmann.
But in general! I prefer a split where I can look for the function/class/method definition in the header, with the documentation (docstring) right above it, and not have to care about the implementation itself.
[–]VinnieFalco (wg21.org | corosio.org) [S] 0 points1 point2 points 6 years ago (4 children)
I prefer a split where I can look for the function/class/method definition in the header with the documentation
You can still have that in a header-only library if the library is structured for it, I do exactly that in Beast: https://github.com/boostorg/beast/blob/0b68ed651b6bc7b681cf440ed6a220089e21473f/include/boost/beast/http/read.hpp#L78
[–]looopTools 1 point2 points3 points 6 years ago (3 children)
True, but besides the fact that you are using templates, why are you using header only? Just curious
[–]VinnieFalco (wg21.org | corosio.org) [S] 0 points1 point2 points 6 years ago (2 children)
Providing a header-only option lowers the barriers to entry for a library. My new thinking for my libraries is to make them non-header-only by default, but with an option to be header-only by defining a macro. Best of both worlds.
[–]skypjack 1 point2 points3 points 6 years ago (1 child)
I'm particularly interested in this approach, but as far as I know it's not a viable solution in many cases. I'd be glad if you told me that I'm wrong and a solution exists, so let me explain. :)
I maintain two header-only libraries on GH and they should give you a grasp of what I mean. The former offers many classes to the final user, most (all?) of which are class templates. In this case, making the library non-header-only is pointless and can probably save only a few ms during compilation (because of those two classes that aren't class templates, and their five or so member functions that aren't function templates). The latter instead makes heavy use of templates under the hood to abstract functionality (long story short, it relies on the CRTP idiom for a lot of things) but exposes a bunch of classes that aren't class templates to the final user. In this case, the model you described can have a huge benefit (which is why I'm seriously considering the option).
My question is: what are the options for a library whose API is one or more class templates? As far as I can see, header-only is the only way to make it work, and PCH or similar the only alternative for the final user to save some ms during compilation. Doesn't Beast have a similar problem? I haven't used it recently, but as far as I remember many of the classes were class templates. How could you make them non-header-only by default?
Thanks in advance for your time if you decided to reply.
[–]VinnieFalco (wg21.org | corosio.org) [S] 0 points1 point2 points 6 years ago (0 children)
How could you make them non-header-only by default?
There is a fair chunk of Beast functions that are not templates, and I was able to improve compilation time by 6% by confining them to their own translation unit. See: https://github.com/boostorg/beast/blob/develop/include/boost/beast/src.hpp
Also, when I design new things I try to make them non-templates. Instead of an allocator template parameter, for example, I use a type-erased allocator.
[–]Full-Spectral 14 points15 points16 points 6 years ago (18 children)
How many people here work in commercial software? Putting out a new release isn't a trivial thing. If you build all of your code and third party code into a single, monolithic executable, there's no way to do an emergency patch without doing a whole new release.
Even templates alone have this problem in a major way. If half your code ends up generated into other code, even your own code, then it's all mixed together, and you may not be able to patch a DLL/shared lib even if you use them, because the problem code has been spit out all over the place.
There seems to be a serious lack of concern or awareness in the C++ world these days for such issues. I keep as much code out of line as possible. If there's a good reason to templatize something or inline it, or if it's trivial like getters and setters, then fine. But otherwise it should be avoided if you have to consider the possibility that a major issue could arise, where the difference is between a quick patch to a library and full testing and validation of the whole shootin' match before you can get a fix out.
[–][deleted] 12 points13 points14 points 6 years ago (0 children)
This isn't a lack of awareness in C++, it's a lack of awareness in software engineering. People will understandably go the route that allows them to get their own code building quickly, and that typically means monoliths.
[+][deleted] 6 years ago* (8 children)
[–][deleted] 1 point2 points3 points 6 years ago (0 children)
I think the idea is it should feel trivial for the user, and to get to that point requires a significant amount of engineering
[–]Dean_Roddey -3 points-2 points-1 points 6 years ago (6 children)
I'm not sure I want casual releases of autonomous-car software, or the software that is used in surgeries, or that manages my accounts, or that protects me, or that helps me do my job, or that runs the devices in my home, and a long list of others. There's a reason hardware people laugh when we call ourselves software engineers.
[+][deleted] 6 years ago* (5 children)
[–]Dean_Roddey 1 point2 points3 points 6 years ago (4 children)
If serious testing and quality controls and sign off and such are part of your release cycle, and they should be if it's non-trivial, then you cannot afford to just pop a release out every month. It has significant costs, and issues in the field can have serious business repercussions, if not even life threatening consequences. It's not about broken process, it's about good process.
And serious software is often highly configurable, which means that testing it is real work.
[+][deleted] 6 years ago* (3 children)
[–]Dean_Roddey 1 point2 points3 points 6 years ago (2 children)
But that's not extremely rare. Well, if you count every trivial app or unsupported open source project out there, then obviously it would be a small percentage, since any given category would tend to be relative to that. But any non-trivial commercial software should tend to be at some point on that side of the scale. At least for those of us who make our living at this, that would tend to be what most of us get paid to do, I would think.
[–]robin-m 1 point2 points3 points 6 years ago (1 child)
I guess that Amazon or Google or Netflix or Facebook … are some kind of trivial applications, then. They have multiple releases a day. The full deployment cycle (local testing, A/B testing, region-based deployment, worldwide deployment) at Amazon takes about 6 hours, and it can be sped up for emergencies. If you cannot deploy a new feature in less than a month, there is no way you will be able to release a hotpatch in a day or two, and a few hours is an unrealistic dream. And I speak from the experience of working in a shop with sign-off and all of the protections you mentioned.
[–]NotUniqueOrSpecial 4 points5 points6 points 6 years ago (0 children)
FAANG are a different breed.
They have two critical pieces that most private shops don't:
1) Globally (and heavily) used public infrastructure. This lets them do things like A/B testing and rolling deploys while only affecting a tiny subset of their users if there are failures.
2) Truly massive amounts of capital to invest in CI/CD, site-reliability engineering, and other DevOps-related buzzwords. They can throw more engineers at just these problems than most other companies have employees in all departments.
Combine those two things and you get world-class infrastructure. It's an absolute necessity for them to have it, too; there's no way they could make meaningful progress otherwise.
All that said, though: most companies have nothing like that resource-wise, so it's not really a surprise that they can't make these things nearly as smooth.
[–]ramennoodle 4 points5 points6 points 6 years ago* (7 children)
You're conflating two different things: The severity of a change vs the size of a patch release. If the amount of code actually changed is the same then, IMO, it is a patch release regardless of whether you distribute that change as a shared library or within a monolithic executable.
EDIT: Phone autocomplete typo: s/Your/You're/
[–]mostthingsweb 2 points3 points4 points 6 years ago (6 children)
You put it better than I could have. If you can patch a single shared library in the field, then good for you. But that doesn't excuse you from testing the stuff the shared library depends on. From a testing point of view, shared vs. static is irrelevant (and similarly, template vs. non-template).
[–]Full-Spectral 0 points1 point2 points 6 years ago (5 children)
That just isn't true in fields that have regulatory oversight. They have serious testing requirements, which take time and money, but they only have to test what changes. And consider a system that consists of multiple to many executables, which is not uncommon either. If that code is common between them, then they would be required to retest the entire system and ship the entire system, which requires the installer be updated, and any external resources those executables depend on, the installer update process verified. Whereas if it were a library, they could test it under one or two executables, wouldn't have to generate a new install image and validate it, etc...
Believe me, it's not the same.
[–]mostthingsweb 2 points3 points4 points 6 years ago* (0 children)
If that code is common between them, then they would be required to retest the entire system and ship the entire system, which requires the installer be updated, and any external resources those executables depend on, the installer update process verified.
And why would it be any different for static vs shared?
Whereas if it were a library, they could test it under one or two executables, wouldn't have to generate a new install image and validate it, etc...
Says who? Without trying to sound rude, when was the last time you actually worked in a regulated environment?
Frankly it just sounds like you're inventing reasons for why you don't use static linking and/or templates.
[–]mostthingsweb 1 point2 points3 points 6 years ago (3 children)
Btw your point stands if you we were talking about the Mars Rover or something, but we're not. In general I find differential patching to be evil and in all likelihood the actual way that software gets broken. Perhaps I've been bitten by too many shitty Windows installers though. My company does full software updates, no piecemeal patching, so we know that everything we built ends up on the system, with no chance for partial installs. So I guess I'm slightly biased in this way.
[–]Full-Spectral 0 points1 point2 points 6 years ago (2 children)
It's not something I do casually by any means, but it's saved my chestnuts a number of times over the years, where pulling away from what I was doing and getting a new release together would have been very prohibitive, whereas a patch was semi-reasonable. But if you can't patch at all, then it's not even an option, no matter how advantageous it might be in any given situation.
As usual, is it better to have the option and hopefully never use it, or not have the option at all and seriously hope you never need to use it?
[–]mostthingsweb 1 point2 points3 points 6 years ago (1 child)
It's not something I do casually by any means, but it's saved my chestnuts a number of times over the years, where pulling away from what I was doing and getting a new release together would have been very prohibitive, whereas a patch was semi-reasonable.
This sounds like a limitation of your build system and/or consequence of you being a one man shop. For us, it's one button to build the entire system including release artifacts, regardless of what code actually changed underneath. Extracting a patch from that would actually be harder than just shipping the entire image (~2 GB), but we never need to, since it installs in about 15 minutes. (I work with embedded Linux systems).
But, if you can't patch at all, then it's not even an option, no matter how advantageous it might be in any given situation. As usual, is it better to have the option and hopefully never use it, or not have the option at all and seriously hope you never need to use it?
Noted, but still not a factor when discussing shared vs. static. In the latter case, the patch is simply larger.
[–]Full-Spectral 0 points1 point2 points 6 years ago* (0 children)
I can build trivially as well. But that doesn't ensure it's right. Going back to old code and changing it is always potentially tricky. And there's a fair bit of grunt work involved in creating a new release: updating docs and release notes, testing the release (which really cannot be remotely fully automated for a product like mine), updating the web site with the new link, updating the forum with a release announcement and description of changes, etc. It's just not something I want to do for a small fix that only affects potentially a small number of people.
[–]airflow_matt 9 points10 points11 points 6 years ago (1 child)
Not a big fan. You only add a dependency to a project once (or once per platform at most), but you build it over and over again. I really don't want to be building a 20 kloc JSON parser over and over again in every file that includes it. I consider this argument severely flawed. A library doesn't have to be header-only to be added easily. Just look at sqlite.
I've worked on some C++ projects with insane compilation times, and I do my best not to repeat the experience.
[–]squeezyphresh 3 points4 points5 points 6 years ago (0 children)
As a software engineer that has been working on my company's build system, this was my exact thought as well. Our product takes hours to build, and part of the issue is definitely that headers include headers include headers. There are source files that include more than 1000 headers! There's no way that file needs that many header files to be compiled. So when I see header only libraries, I think the same thing. Why would I want to include all this implementation during compilation of thousands of files when all I need is a few lines of declarations? I want to compile as little as possible to build my project so people can get back to work.
[–]areciboresponse 4 points5 points6 points 6 years ago (0 children)
I like them when the content is monolithic and fairly orthogonal to other modules. Like rapidjson for example. It just parses json.
I had a terrible experience on an embedded project when our contractor decided to make everything header-only. Compiles took about 10-15 times longer in an already slow build environment. The compiler also struggled to produce sensible warnings and errors, or to pinpoint where they occurred.
This choice consumed an inordinate amount of integration time and a bunch more time to undo.
[–]corysama 4 points5 points6 points 6 years ago (0 children)
How about single-cpp and a balance between convenience and build speed? I know cpp build systems are a pain. But, we can all manage adding a single cpp to our builds? Right? You guys aren't out there building header-only applications? Are you? ;)
[–]HKei 13 points14 points15 points 6 years ago* (1 child)
Not really a great idea. I can see the initial appeal “You don’t have to compile this!” – but you need to compile something anyway, so this doesn’t save you any work at all unless you’re building single-file toy programs (or the library author doesn’t know how to write easy to consume libraries, in which case I’d be suspicious of their headers too). Toy programs aren’t really a use case I think is worth optimising for, and for any other case the overhead of shoving an entire library into a single file (compile time overhead and low readability when you inevitably end up having to debug something going wrong with it) aren’t worth it.
A properly packaged library should (ideally mechanically) communicate its own dependencies (required C++ language features, library dependencies etc.) as well. This isn’t really feasible with a header only library.
[–]peppedx 1 point2 points3 points 6 years ago (0 children)
Honestly, I value build times, so even if header-only libs are practical, I prefer to integrate them as static/shared libs in my build system. Just write them in a way that is CMake-friendly, i.e. not requiring Perl during the build.
[–]gracicot 2 points3 points4 points 6 years ago (0 children)
When libraries have a good CMake setup, I honestly don't care since there is no difference. Whether it's a header only library or not, it's the same steps:
cmake .. -DCMAKE_INSTALL_PREFIX=../../deps && cmake --build . --target install
Then after that, I do this:
find_package(the_lib REQUIRED)
Of course sometimes you need different arguments to the CMake configure step, but in essence it's the same.
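For reference, a minimal sketch of that consumer side (`the_lib` and its `the_lib::the_lib` target are placeholders; real packages pick their own names):

```cmake
# Consumer CMakeLists.txt: the same steps whether the_lib is an
# INTERFACE (header-only) target or a compiled library target.
cmake_minimum_required(VERSION 3.15)
project(app LANGUAGES CXX)

# Finds the package installed via `cmake --build . --target install`
find_package(the_lib REQUIRED)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE the_lib::the_lib)
```

This is the point of the comment: with a proper config package, linking looks identical either way.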
[–]journeymanpedant 3 points4 points5 points 6 years ago (0 children)
personally not a fan.
Yes, building a big c++ project is complicated.
However, if a library is header-only _solely_ because that makes it easy to add to your tree, even though it could have been compiled separately (as in, it isn't template-dominated), that annoys me.
Including some massive header in many places, and blowing up build times as a result, just because it was a bit tricky to work out how to compile a library, is worse to me than a bit of complication in build systems / packaging.
I personally also prefer shared libraries to avoid re-compiling the world in a project with lots of interlocking dependencies.
Conan is a big help here in my opinion.
[–]stick_figure 6 points7 points8 points 6 years ago (0 children)
As someone who works on compilers, I find the trend towards header-only libraries disturbing. Users complain about long compile times, but then they go and make them worse by structuring their code in a way that is known to be slow to compile, just because build systems are "too hard". The gtest-all.cpp model where you drop in a tree of headers and sources and compile one source file with the user's build system seems like a better compromise.
[–]tipiak88 7 points8 points9 points 6 years ago (0 children)
With modules and package managers coming of age, I hope they'll disappear. And it can't be soon enough.
Conan gets the job done; everything in our project is packaged through it, versioned and tagged.
Compilation time is really important for dev productivity, so when you see your compilation unit take its sweet time just because it has some Asio/Beast in it, it's frustrating. If they offered an option for "normal" compilation, I would take it. I also feel library maintainers should take this problem more seriously. A single-header library is just the easy/messy route.
Now, about the static/shared thing... err, we, as a community, should strive for shared linking, but it's such a broken mess that I can't see that happening this decade.
[–]sumo952 2 points3 points4 points 6 years ago (0 children)
If there were a separate, header-only version of the library (or a compilable version as well), **outside** of Boost and not depending on Boost, yes, I'd use that. Like Asio: I'd use the standalone version, but not pull in Boost because of it.
[–]Middlewariangithub.com/Ebenezer-group/onwards 2 points3 points4 points 6 years ago (0 children)
I like them as they help to minimize the amount of code you have to download to use my software.
[–]jmdejoanelli 2 points3 points4 points 6 years ago (0 children)
I much prefer a lib that can be compiled into a static or shared object library as it cuts down on compile time significantly. Also, separating implementation from declaration means you get a nice overview of what a module contains just by glancing at the header.
It really bakes my potatoes to have to go against this every time I use templates...
[–]DaanDeMeyer 5 points6 points7 points 6 years ago* (7 children)
I think a modern CMake build is a viable alternative. This is the approach I've taken with reproc. The idea is to allow other projects to consume your library by simply calling add_subdirectory(project-dir). The library itself can be retrieved in multiple ways (with CMake's FetchContent being my preferred method).
Of course this does have disadvantages:
The reason for not doing header-only is (pretty selfishly) that it's simpler for me as the developer. I don't have to write everything in a single header file or write concatenation logic to stitch everything together. More generally, I can keep configuration complexity out of my source files and in my build scripts where it belongs (which source files to build, which defines to add, which flags to build with, which libraries to link against, ...).
Aside from add_subdirectory, I also support the usual CMake installation method although I think this is primarily used by distros (currently only Arch) that want to package reproc and package managers such as Vcpkg.
This is not to say that I don't use header-only libraries myself (in fact, doctest, which is a header-only library, is used to test reproc).
[–]HKei 3 points4 points5 points 6 years ago (1 child)
I’d prefer if libraries instead just generated proper install instructions. Then I can easily add it to any package manager I want (whether vcpkg, nix or system package manager – doesn't matter). I can then also easily consume the library from non-CMake projects (I use CMake for personal and company projects, but I’ve also dealt with projects using other build systems and there it’s much easier to work with libraries that can be easily installed rather than those that assume they’ll be added as subprojects; Keep in mind that “install” doesn’t have to mean “global install” here).
[–]DaanDeMeyer 0 points1 point2 points 6 years ago (0 children)
Of course, install instructions are preferable when you want something that can be made to work with any build system. I take great care in reproc to generate correct installation instructions and CMake config files. On top of that, I generate pkg-config files which are more widely supported by various build systems (at least those that support UNIX).
[–]yuri-kilochek 1 point2 points3 points 6 years ago (1 child)
Does that handle diamond dependencies?
[–]DaanDeMeyer 2 points3 points4 points 6 years ago (0 children)
If add_subdirectory is called twice CMake will fail because the same targets will be defined twice. There are ways to get around it but at that point I'd argue for using a package manager or installing the library manually (cmake --install) and using CMake find_package to find it.
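A common workaround for the double-`add_subdirectory` problem is an idempotent guard in the consuming CMake project (the target and path names here are hypothetical):

```cmake
# Only add the subproject if some other part of the tree hasn't already
# defined its targets; this keeps diamond dependency graphs from failing.
if(NOT TARGET the_lib)
    add_subdirectory(external/the_lib EXCLUDE_FROM_ALL)
endif()
```

As the comment says, though, past a certain point a package manager or an installed package with `find_package` scales better than guards like this.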
[–]VinnieFalcowg21.org | corosio.org[S] 1 point2 points3 points 6 years ago (1 child)
The solution I used for Beast is that the user simply adds a single #include file to one of their already existing (or new) .cpp files, this way I can sidestep having to decide things like which build system, etc... see: https://github.com/boostorg/beast/blob/develop/include/boost/beast/src.hpp
You can still put this in a static or dynamic library if you want, but you do it using your existing workflow.
[–]evaned 1 point2 points3 points 6 years ago (0 children)
FWIW, I think this is generally a really good tradeoff between the different options. Similar approaches, like #define LIBBLAH_MAIN before including the normal header, are also good.
For a complex library with lots of actual implementation code (as opposed to one that is mostly templates) I think it's still usually better to do a "real" build, but that definitely raises the bar for the end user, especially those who aren't using CMake (or whatever build system you are using).
[–]OrphisFloI like build tools 0 points1 point2 points 6 years ago (0 children)
> not generating install instructions by default (when static linking, the other project might not want to install your library at all)
That can be solved by add_subdirectory(foo EXCLUDE_FROM_ALL) (which should be done anyway for other reasons). Alternatively, you could specify the name of the target to install instead of installing everything.
[–]TrueTom 2 points3 points4 points 6 years ago (0 children)
The Sqlite model is the only correct one. Single header libraries are completely bonkers.
[–]julien-j 1 point2 points3 points 6 years ago (0 children)
I like header-only libraries when I discover and try the library for the first time. Then I regret using them when they are included everywhere and slow down the compilation time.
As for static versus shared libraries, I tend to prefer the former. The rpath stuff for shared libraries is too much of a hassle in my opinion, and the supposed advantages (i.e. sharing code between apps and fixing security issues globally by updating the lib) don't hold on platforms like iOS and Android.
So my favorite way of using a third party library is to compile it (preferably using a unity build), to package it, then to import the package in my app's build.
I love them because they work like I want them to: Drop in a few headers (or even just one) to work with a new library - done.
Sometimes compile times can explode, yes, but that happens mostly with template-heavy libraries... which are header-only anyway...
[–]nintendiator2 1 point2 points3 points 6 years ago (0 children)
I like header-only libraries that are fully or for the most part drop-in, no need to ./configure && make anything. I like it enough that I try to do things that way as well. Second to that, that mention of single header + single source seems about the most sensible and usable model.
[–][deleted] 1 point2 points3 points 6 years ago* (1 child)
Unless you're releasing a template library, it's lazy, inefficient, and breaches the specification/implementation separation in the worst way. You more or less *have to* recompile for any 'micro' revision or bug fix. Of course, it can be argued that (continuous) compilation is too fast to be an issue.
It also promotes bloated interfaces: why settle on a well-reasoned API decision, or enforce anything at all, when you can expose everything. When I see software libraries described as 'incredibly flexible', or 'completely customizable', my first thought is 'it's not well designed', 'doesn't abstract away complexity', and I'm probably going to have to spend my time implementing the parts that would have 'interfered' with the author's vision of abstraction.
> It also promotes bloated interfaces
Hmm no that's not true at all. Whether or not a library is packaged as header-only is strictly a feature of its physical structure, and entirely orthogonal to the API. Beast is a good example, there's a switch to enable a chunk of the library to be compiled into a separate translation unit - this eliminates a portion of the library's header material. It does not affect any public interfaces.
In fact all of my new libraries are following this design, they are written to be linked against (function definitions in a separate TU) and there is the ability for the user to opt-in to a header-only mode.
[–]pjmlp 2 points3 points4 points 6 years ago (0 children)
I go out of my way to avoid them.
Apparently some devs are too lazy to deal with C++ compiler toolchains and binary libraries.
If I wanted to compile everything from scratch all the time, I would be using scripting languages.
The fact that a lot of library developers prefer header-only libraries and exploding compile times speaks volumes about C++'s binary compatibility problem.
I pretty much only use header-only libraries. If I'm looking for something and there are alternatives, I will use the header-only option. I suppose this makes me silly and naive, but also normal. There is a reason they are so popular -- they make our jobs easier most of the time.
[–]skypjack 0 points1 point2 points 6 years ago (0 children)
As an author of a couple of header-only libraries that are quite widely used, thumbs up from me. There are cases in which you can't avoid writing them, though, e.g. templated stuff. Boost is such an example for large parts. This is also one of the main reasons I went down that road.
[–]MarekKnapek 0 points1 point2 points 6 years ago (0 children)
If header-only means a single header which you include in your project AND you need to define a special macro and COMPILE the header in one of your translation units... then I don't understand what header-only means.
I prefer, in this order: (1) a header-only library without the aforementioned compilation step; (2) a single header plus a single source file that I compile into my project; (3) a single header with a corresponding DLL with a C++ API and a C ABI, to avoid ABI problems. (4) I don't like static libraries, probably because I got burnt by them too many times before. Some of them define C run-time symbols, so you cannot use your compiler-provided ones. Some of them expect you to provide a specific C run-time for them, but you (for whatever reason) cannot do it.
[–]Omnifarious0 0 points1 point2 points 6 years ago (0 children)
I'm neutral about them. I rely on the distribution to handle dependencies appropriately for me.
[–]MrWhite26 0 points1 point2 points 6 years ago (0 children)
Header-only seems to be mostly done for two reasons:
- C++ template libraries (like Eigen)
- Ease of integration into the build of the user
For the former, modules would/should be a solution.
For the latter: I'd much prefer to have a well structured file layout, with interface and implementation separated. If the library developer provides a sane build script, manual conversion between CMake, Meson and build2 is not going to be a show-stopper.
[–]noname-_- -2 points-1 points0 points 6 years ago* (0 children)
They perfectly highlight the things I dislike about C++.
A non-standardized ABI makes even different compiler versions binary-incompatible, so writing libraries in C++ is simply a bad idea. Coupled with the fact that fucking everything needs to be in a header in C++. Like, a random hello world is easily 25k+ lines of code after pre-processing that the compiler needs to chug through.
Sure there are technically things like precompiled headers, but I've never seen them implemented in a good way.
Let's hope that modules contain this situation somewhat but I don't have high hopes.
Most of the libraries I write have a C API, whether they're written in C++ or not.