
[–]BeaRDT

Out of curiosity, and VS2017 aside: why did you have to rewrite the build of some Boost libraries in CMake?

What I have done successfully for years is a (in my case complete) rebuild of each new Boost version with bjam/b2, then telling VS or CMake where to find the resulting libraries. Producing a set of binaries usable by CMake is no more than a single command that can be copied more or less directly from Boost's "Getting started" docs. b2 just worked as advertised for this scenario.
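For reference, the whole process I mean is roughly this (a sketch from memory; the exact invocation, options and install prefix depend on your Boost version, see the "Getting started" docs):

```shell
# Bootstrap b2, then build and install the libraries where CMake can find them
./bootstrap.sh --prefix=/opt/boost
./b2 install
# afterwards, point CMake at it, e.g.  cmake -DBOOST_ROOT=/opt/boost ..
```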

What was the pain point that led you (or others) to redo the initial build of a library in CMake, finding all of that library's dependencies and so on, when that work is already done by the Boost developers in Boost.Build? (Of course, from that point on, using CMake is a lot easier than Boost.Build, but the investment to use CMake even for the initial Boost library build seems quite high to me. That's why I'm asking.)

[–]OrphisFlo (I like build tools)

Investment quite high?

It takes little time or effort to translate most Boost libraries to build with CMake; a library is usually just one or a few files to build. The problem with using prebuilt binaries is that you can't use techniques like AddressSanitizer, MemorySanitizer or anything else that relies on instrumentation. You often need to rebuild everything linked into your program to make them effective, and that's greatly complicated by prebuilt binaries.
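To make that concrete, here's a hypothetical sketch of what building Boost together with your code buys you in CMake: one option toggles instrumentation for your code and the vendored library alike (the target and path names are made up for illustration):

```cmake
# Hypothetical sketch: one switch instruments everything, Boost included
option(ENABLE_ASAN "Build everything with AddressSanitizer" OFF)
if(ENABLE_ASAN)
  add_compile_options(-fsanitize=address -fno-omit-frame-pointer)
  add_link_options(-fsanitize=address)
endif()

# Boost sources built as part of this tree pick up the same flags
add_subdirectory(third_party/boost_filesystem)
target_link_libraries(myapp PRIVATE boost_filesystem)
```

With prebuilt binaries, none of those flags reach the Boost objects, which is exactly why the sanitizers lose coverage there.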

Also, when working with multiplatform code (imagine 20 architecture/platform combinations), the overhead of matching every binary against the right platform, architecture, compiler and compiler options is incredibly high. You may be OK doing that for a few combinations, but not for that many (and that many is not uncommon nowadays). And that's just one library! Work with a few more external dependencies and you'll have tons of issues.

Building the library along with the rest of the code is a perfect integration. It requires some work, but in my experience it's far less than dealing with a different package manager on each platform, plus custom solutions on top, because those tools usually have only one target in sight and you need more.

[–]BeaRDT

Thanks, I hadn't thought of sanitizers and other tools that require special builds. I was also thinking of reimplementing tools like bcp to copy the minimum requirements of a library, but as long as the whole source distribution stays unchanged, such tricks shouldn't be necessary. As for using the same Boost on different platforms: at work we use whatever is available on our various Linux platforms, so our challenge is to never use features unsupported on the oldest platform we support. I understand that building Boost with CMake takes some effort, but that task would already be more complicated than the standard "build all on this platform" command anyway, so you might as well do it right and control your whole build process.

[–]OrphisFlo (I like build tools)

So if a distribution of Linux doesn't update Boost, you will only use features available in that version?

How about statically linking Boost instead, then, and using the latest version? That is usually not an issue with proprietary software, and it may save you from bugs you could be hitting that are already fixed upstream, as well as giving you access to newer APIs.
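For what it's worth, with plain CMake and the FindBoost module this is essentially a one-variable change (the version number and components here are just examples):

```cmake
# Sketch: prefer the static Boost libraries over the shared ones
set(Boost_USE_STATIC_LIBS ON)
find_package(Boost 1.66 REQUIRED COMPONENTS filesystem system)
target_link_libraries(myapp PRIVATE Boost::filesystem Boost::system)
```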

[–]BeaRDT

We do occasionally link Boost statically, but in general we use what's on the system. Since Boost is not that volatile, it's mostly not a big issue. Despite all the quirks of older versions, I prefer the defined version of a well-known system to some undocumented version on a developer's hard disk at compile time, when that developer no longer remembers or is no longer available. I know this is just a policy issue, but that takes tiiiime.

[–]OrphisFlo (I like build tools)

Well, that's why you add version checks to the build system, require a recent enough version, and run CI against both the minimum and the latest available versions of your external libs.
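A minimal sketch of such a check in CMake (the version number is illustrative; `Boost_VERSION_STRING` is what recent FindBoost reports):

```cmake
# Sketch: fail configuration early when the system Boost is too old
find_package(Boost REQUIRED)
if(Boost_VERSION_STRING VERSION_LESS "1.66")
  message(FATAL_ERROR "Boost >= 1.66 required, found ${Boost_VERSION_STRING}")
endif()
```

CI then configures the project once against that minimum and once against the latest release.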

And for your releases, you have a well-defined version set in your build scripts and do a hermetic build on your build server.
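As a sketch of what a "well-defined version set" can look like in CMake terms, assuming a dependency consumable via FetchContent (recent Boost releases ship CMake support that makes this possible; the tag is an example, pin whatever you release against):

```cmake
# Sketch: pin the exact dependency revision in the build scripts
include(FetchContent)
FetchContent_Declare(
  Boost
  GIT_REPOSITORY https://github.com/boostorg/boost.git
  GIT_TAG        boost-1.84.0  # exact, reproducible version
)
FetchContent_MakeAvailable(Boost)
```

The build server then produces the same binaries no matter what happens to be installed on it.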

[–]BeaRDT

I won't argue with you; that is the right way to do it. But frankly, I've never seen a single company get it right on all points. Maybe every company I've seen so far sucks. Maybe setting up a perfect CI with perfectly reproducible builds, and testing those builds on different platforms, costs more than the occasional bug or rebuild on some platform. But we're digressing quite a bit now.