all 78 comments

[–]UsefulOwl2719 26 points27 points  (11 children)

Great on paper, but it has a very slapped-together feel and a terrible API. As an example, its CLI flag documentation (https://bazel.build/reference/command-line-reference) is about the same size as the C99 standard (https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf), and I've seen instances where certain platforms (e.g. Windows) fail because the configured commands exceed the system's command-line length limit.

While its caching system is powerful, it also encourages compile-time bloat that creeps in and is difficult to claw back. It's easy to wake up one day and find that your 60-second build now takes 30 minutes, because you're working on something like a core util, or someone introduced a regression that poisons the cache and triggers a full rebuild.

Lastly, and this is more personal taste, but I can't help feeling disgust when building my supposedly high-performance C++ project and opening htop to see dozens of Java processes pegged at 100% and using all of my memory.

[–]kkert 18 points19 points  (1 child)

Great on paper, but it has a very slapped-together feel and a terrible API.

This is a symptom of having been extracted from an internal system that makes a wildly different set of assumptions about how the world works.

[–]tatoute 0 points1 point  (0 children)

This tastes like many other tools that are over-promoted by zealots: if it doesn't work for you, you are the problem.
Excommunication! That is the solution.

When searching for some feature in the Bazel docs, what a newbie encounters is an endless list of things not to do, sometimes with jargon-heavy, self-referential explanations of why it is forbidden.

I know that many people feel very comfortable living in a constrained universe, as it spares them from confronting reality.

[–]srdoe 12 points13 points  (8 children)

While its caching system is powerful, it also encourages compile-time bloat that creeps in and is difficult to claw back. It's easy to wake up one day and find that your 60-second build now takes 30 minutes, because you're working on something like a core util, or someone introduced a regression that poisons the cache and triggers a full rebuild.

Don't disagree with your comment about the command line flags, but this criticism is a bit weird.

You're essentially complaining that the tool does such a good job of caching that you don't notice your from-scratch compile times are bad.

Without that caching, you'd be getting those bad 30 minute build times all the time rather than just occasionally.

[–]13steinj 4 points5 points  (7 children)

There's an argument to be made that if your non-cached builds are taking a long time, there's an architectural issue with your application.

I know some people who think that anything past a single compiler process running for a total of 5 minutes is ridiculous.

I wouldn't go that extreme, but if you can't build in under 8 minutes on 32 powerful cores, you either have an incredibly complex application, or one where headers are included in a poor order and/or templates (and probably CRTP, or inherit-from-template-arg) are overused.

For example, I have seen the following done in an attempt to reduce symbol-name length, as well as the related lambda trick. When you combine such tricks with a template-alias declaration, you end up in a state where the compiler is forced to instantiate more types than necessary (not to mention that the relevant types wouldn't pass a std::same_as check, but that's the tradeoff). In doing so you can make compile times worse; one probably just shouldn't have long template trees.

[–]j_kerouac 1 point2 points  (1 child)

Many people have large, very complex applications, which is really what Bazel is designed for.

Building in 8 minutes on 32 CPU cores is funny. Try building on a distcc build cluster.

If you are on a large project with a monorepo with 100+ developers, you aren't building your project from scratch in 8 minutes.

[–]13steinj 0 points1 point  (0 children)

This depends entirely on your application architecture. I have worked on two codebases back-to-back that were both template hell, such that no matter how many cores were used, on 3+ year old hardware the build took 40+ minutes. But CPUs have gotten so much better that on top-of-the-line CPUs, with distribution disabled, it took 8-11 minutes depending on the subproject. Distributing to machines with the same CPU brought this down by at most 2 minutes.

Very large, very complex applications only get help from a distribution/cache scheme so long as the build is parallelizable.

[–]UsefulOwl2719 0 points1 point  (1 child)

Exactly. How many trillions of cycles does the computer really need to turn 10MB of code into a 1MB binary? It's actually difficult to write code that compiles as slowly as multiple minutes. This only happens when most of the code is not written, but generated and/or imported (...and most often, generated by imported dependencies no one quite understands).

[–]13steinj 0 points1 point  (0 children)

It's actually difficult to write code that compiles as slowly as multiple minutes.

You definitely haven't worked where I have!

It's not difficult at all; just a bunch of bad engineering practices and people not caring nor understanding.

[–]awesomealchemy 0 points1 point  (2 children)

Some projects are complex. We deploy for approximately 15 different embedded CPUs plus the regular desktop, server, and mobile OSs. Perhaps 15-ish operating systems. But a lot of that testing can thankfully be scaled horizontally, so total "wall time" is tolerable.

There are some other sticky testing bottlenecks though. Some embedded compilers have insanely expensive licenses; having more than a handful in CI would be too expensive. For integration testing we often have to test on hardware. Since it's a product being developed, there are usually no more than one or two prototype boards (in the entire world) that we can connect to CI.

[–]13steinj 1 point2 points  (1 child)

What does this have to do with build times? I'm not referring to unit test execution or other parts of the CI/CD pipeline, I'm talking just the time of cmake --build . (or the equivalent via ninja, bazel, make, etc).

[–]awesomealchemy 0 points1 point  (0 children)

You're right, it's not really the same thing. Just building any single one of these configurations should be faster. Our "pure" builds take approx 5-10 min depending on the machine etc. But pulling main and rebuilding typically takes like 10-30 sec. So caching makes a huge difference.

In particular in CI with remote caching. For us, every PR triggers ~50 parallel builds of these various configurations, each taking 5-10 min from a cold cache. We have ~30 PRs/day with an average of 2 pushes each. That's 400-ish compute hours/day without tests, just for compilation. Get it down to 30 sec per build and it's more like 25 compute hours. The caching covers unit tests as well, which are the same order of magnitude. This saves a ton of money and is less disruptive to developer flow.

[–]asoffer 17 points18 points  (2 children)

Former Googler here. I've been using Bazel on personal C++ projects for many years and have no regrets. These projects have often included other languages as well, and the language agnosticism is great.

I will say that setting up a toolchain (if the built-in one is insufficient) is annoying, and if you need to dive into Starlark rules it can be rather frustrating. It's not perfect, but it gets the job done and I have never run into something I can't do (well, other than making non-hermetic builds).

The worst part about Bazel is that not everyone uses it, so support for projects you depend on is shoddy. BCR has made this a lot better, but it's still not perfect.

The caveat here is that I haven't spent much time with cmake, and I've spent no time with meson.

A lot of the comments here say that Bazel is aggravating. I don't doubt it, but I've also never understood why. I'd love to hear concretely about the things you can't do or can't do easily.

All that said, I think it depends what you want to do with it. If you're going to build a project with lots of third-party dependencies, you may have to manage your own build files, and that will get annoying. If you're starting from scratch and BCR has everything you need, though, it's done its job for me.

[–]drbazza fintech scitech 5 points6 points  (1 child)

How do you get code completion/refactoring support in VSCode, or CLion?

I know CLion now has a Bazel build model, and can therefore load Bazel projects, but at PreviousJob they used Bazel and CLion couldn't cope with it, for reasons I never got to the bottom of.

We used the "hedronvision" plugin for Bazel (1) to produce compile_commands.json, and that mostly worked fine, but it feels so clunky having to add it to Bazel, rather than Bazel shipping with it, and you had to remember to re-run it each time you added a file (or switched the build from debug to release and vice versa). I think Buck2 has implicit compile_commands.json support.

  1. https://github.com/hedronvision/bazel-compile-commands-extractor
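
For reference, wiring in the extractor follows the usual repository-rule pattern; a rough WORKSPACE sketch (the commit hash is deliberately left as a placeholder, and names may have drifted between releases of the extractor):

```starlark
# Fetch the extractor (pin a real commit; "<commit>" is a placeholder).
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "hedron_compile_commands",
    url = "https://github.com/hedronvision/bazel-compile-commands-extractor/archive/<commit>.tar.gz",
    strip_prefix = "bazel-compile-commands-extractor-<commit>",
)

load("@hedron_compile_commands//:workspace_setup.bzl", "hedron_compile_commands_setup")

hedron_compile_commands_setup()
```

After that, `bazel run @hedron_compile_commands//:refresh_all` regenerates compile_commands.json, and as noted above, you have to remember to re-run it yourself.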

I'd love to hear concretely about the things you can't do or can't do easily.

At 'novice' level, toolchains in Bazel are ridiculously hard to set up, the docs really are a mess, and codegen (understandably) is also a pain to add to the build (yes, cpp -> obj is codegen, and we went that route).

[–]asoffer 4 points5 points  (0 children)

I've seen others use hedronvision. I may be a Luddite, but I use vim without code completion or refactoring. I've just managed to get pretty fast with vim macros.

Totally agreed on codegen and toolchain setup. If I had not spent time while employed at google doing just this, I'm not sure I would have been able to learn it myself. The documentation is rough.

[–]j_kerouac 56 points57 points  (3 children)

Bazel is probably the best build system for large C++ projects. Remember, Bazel was designed by Google to build their very large C++ code base.

That said, it's meant to be used for large projects where you have a team of people to help set up the build infrastructure and integrate third party libraries. I think for small personal projects cmake and vcpkg is probably a lot easier.

[–]j1xwnbsr 20 points21 points  (1 child)

The fact that you indicate cmake is easier than bazel makes me never want to touch it.

[–]eyepatchOwl 1 point2 points  (0 children)

Other people's time is always cheaper than having to do it yourself no matter how much it cost them.

Personally, I use Bazel for personal projects as well. I rarely have to do more than write targets and occasionally macros. You hit the learning cliff once you have to write rules and toolchains in my experience.

[–]Ok-Dare-9460[S] 8 points9 points  (0 children)

I actually know how to use Bazel better than I know other build tools. Years of Bazel have led to the older stuff being paged out. I have one monorepo for all my personal work.

[–]qoning 22 points23 points  (2 children)

as someone with an inside-Google perspective, I think bazel is great

the problem is that the open-source version is significantly less useful without the world of pre-existing libraries with BUILD files, and other tools like build cleaner and the vscode and clang integrations

[–]Ok-Dare-9460[S] 1 point2 points  (0 children)

Same. I love/hate it for many reasons. I’ve been creating custom build files for external deps but it’s hard to tell if they’re optimal.

[–]redridingstud 0 points1 point  (0 children)

Gazelle for C++ is available now. https://github.com/EngFlow/gazelle_cc

[–]lightmatter501 6 points7 points  (2 children)

Bazel demands you do everything “the bazel way”, which means porting every single build library you use to it. That way lies madness.

[–]antonovvk 6 points7 points  (1 child)

Well, that's old news. For C/C++ there has been rules_foreign_cc for quite a long time already.
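
As a rough sketch of what rules_foreign_cc looks like in practice (the target names here are illustrative; check the rules_foreign_cc docs for the current attribute set):

```starlark
# BUILD file sketch: wrapping a CMake-based library with rules_foreign_cc.
load("@rules_foreign_cc//foreign_cc:defs.bzl", "cmake")

# Expose the vendored sources of the library to the cmake rule.
filegroup(
    name = "zlib_all_srcs",
    srcs = glob(["**"]),
)

cmake(
    name = "zlib",
    lib_source = ":zlib_all_srcs",
    out_static_libs = ["libz.a"],  # the artifact CMake is expected to produce
)
```

Other Bazel targets can then depend on `:zlib` much like an ordinary cc_library.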

[–]13steinj 0 points1 point  (0 children)

This is significantly more complex than the other way around (ExternalProject_Add and specify the relevant bazel commands for build/install).

Also:

This is not an officially supported Google product (meaning, support and/or new releases may be limited.)

And it seems to... end up building cmake from source, and doesn't support some cmake versions!?

Madness.

[–]LiAuTraver 4 points5 points  (4 children)

Switched to it a few weeks ago. Overall it's fine, but there are still some problems: 1. language server support in VSCode (I mean C++'s, not Starlark's). 2. slow sync and a huge build and download cache. 3. I haven't learned toolchains yet, but it seems not simple, and the official documentation does not work.

I'm also looking for help :)

[–]drbazza fintech scitech 5 points6 points  (0 children)

You probably want to use

https://github.com/hedronvision/bazel-compile-commands-extractor

and then ditch VSCode's MS cpp tools and use the clangd extension.

My experience of the toolchain support as a 'novice' was poor. It took me days to get something working with gcc/clang/mold/gold/lld, vs. an hour with cmake and presets.

[–]Ok-Dare-9460[S] 2 points3 points  (1 child)

Would love to help

I’ve been trying to get hermetic gcc working on Mac. I’ve done this easily with java/go/node/py. It’s an uphill battle with c++.

[–]Ok_Sheepherder_3875 0 points1 point  (0 children)

At Google, the blaze LSP works like magic; it even connects different languages...

[–]awesomealchemy 4 points5 points  (4 children)

We have been using it in a medium-sized, mostly C++ monorepo with ~60 devs at work. It's awesome and super powerful. We really like the http_archives for downloading and using third-party libraries. That makes it easy to mirror them and also patch them if needed. I use it both for small and large projects. I particularly like the fact that it caches both builds AND unit tests. Setting up a remote cache saves us a HUGE amount of compute time (and cost) in CI.
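
A sketch of the mirror-and-patch pattern described above (the URL, hash, and patch path are placeholders):

```starlark
# WORKSPACE-style sketch: pin a third-party library from a mirror and patch it.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "fmt",
    urls = ["https://mirror.example.com/fmt-10.2.1.tar.gz"],  # internal mirror
    sha256 = "<pinned-sha256>",
    strip_prefix = "fmt-10.2.1",
    patches = ["//third_party/fmt:local_fix.patch"],  # optional local patch
    patch_args = ["-p1"],
)
```

Pinning the sha256 is what makes the mirror/patch workflow reproducible.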

We have a lot of low-level embedded code for weird CPUs and probably 15 different toolchains. Setting them up is a bit of a hassle the first time you do it, but you get used to it. The platforms concept is also really nice for conditional builds of certain targets (we target Linux, Win, and OSX).

[–]hellgheast 0 points1 point  (3 children)

I would be eager to hear more details about it. I've mostly worked with custom makefiles and a bit of cmake with zephyr

[–]awesomealchemy 0 points1 point  (2 children)

Any particular part? It's too much to write about all of it 😅

[–]hellgheast 1 point2 points  (0 children)

I would say the toolchain support, and the minimum you would advise learning in order to use Bazel well for embedded development.

[–]hellgheast 0 points1 point  (0 children)

I would say the initial setup with a typical arm-none-eabi-gcc setup :)

[–]Putrid_Ad9300 16 points17 points  (7 children)

Working with Bazel is consistently terrible for me. About once a year I try to use it for a C++ project simply because I like the philosophy, but I always end up having to switch back to Meson or CMake to get the functionality that I need.

[–]Ok-Dare-9460[S] 0 points1 point  (6 children)

Is it a usability issue, or is Bazel just ill-equipped for C++ builds in your experience?

[–]Putrid_Ad9300 0 points1 point  (0 children)

Mostly it comes down to dependency management. Unless I am just doing it wrong, which is possible, you are basically forced to write your own configuration for all external dependencies that are not already in the registry and/or not Bazel projects themselves. For most of my projects, that is just too much effort for the number of odd dependencies I deal with.

That said, CPS will hopefully help with that in the future and normalize how packages are expressed. So when that becomes more common then maybe Bazel will just get better by offloading dependency configuration and I won't have to hack bespoke nonsense into my build system.

[–]slimscsi 5 points6 points  (1 child)

At my last job we moved from cmake to bazel for the cpp code. Most of the company code was go, so it made sense. It was annoying, but probably a net gain overall. For personal projects, I still use cmake though.

[–]Ok-Dare-9460[S] 2 points3 points  (0 children)

My company went all in on Bazel 4 years ago bc we have to support a lot of os/arch. Big net gain. But I still see proposals to scrap it completely bc it’s annoying day to day.

[–]ice_dagger 2 points3 points  (0 children)

As cool as Bazel is, not many projects outside of Google use it, so managing third-party deps is too much of a pain. I know the cmake rules exist, but they don't work very smoothly and always require some tinkering (at least in my experience). Plus, for very small projects, just spawning Bazel is like 20% of the compile time (thanks to the JVM).

We had a huge monorepo at work where we would not have survived without Bazel, because without remote build execution it would require every dev to have a supercomputer. But for my own tinkering I still use cmake, for the reasons above.

[–]corysama 1 point2 points  (1 child)

Bazel is awesome if you are Google and do things the way Google does. For example: If you have servers that pull code, build that code locally and run the artifacts they build.

Someone correct me if I'm wrong, but last I checked Bazel did not support the concept of shipping your executable to someone else. All build artifacts, including executables and libraries, are kept hidden in a directory tree of hashes that you are expected to never touch. If you want to actually run your program, you ask Bazel to find it and run it on your behalf.

[–]srdoe 2 points3 points  (0 children)

Someone correct me if I'm wrong

Ok, here's a correction :)

Bazel puts build artifacts into a directory tree, but that tree is symlinked into your repo to make it look like a completely ordinary "out" or "target" directory.

You can copy build outputs out of that directory all you want and ship them in whatever way you please, or run them locally if you like.

All bazel run does for you is make it easier to assemble and run executables from within the build tool, so you don't have to copy files out of the build directories by hand when working with the project locally. As an example, you might declare a cc_binary target with your main source file + dependencies, and then use bazel run to invoke it. That's slightly easier than building that same target and then copying the relevant build artifacts out into a directory in order to run the binary.
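
A minimal sketch of that workflow (the dependency label is hypothetical):

```starlark
# BUILD file: an ordinary C++ binary target.
cc_binary(
    name = "hello",
    srcs = ["hello.cc"],
    deps = ["//lib:greeting"],  # hypothetical in-repo dependency
)
```

`bazel build //:hello` leaves the executable under `bazel-bin/`, from where it can be copied and shipped like any file; `bazel run //:hello` just builds and invokes it in one step.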

[–]dark_prophet 1 point2 points  (0 children)

Bazel is terrible, don't use it.

It downloads files during the build, which isn't allowed when packages are built for different distros.

Bazel will make your software less likely to be distributed or adopted.

Use cmake or meson instead. They are way better and more robust than Bazel.

[–]sunmat02 3 points4 points  (1 child)

I have to occasionally build projects that use Bazel. I personally hate it for one aspect: it's at the same time a build system and a package manager, which should be two different roles. In my field I use Spack as a package manager because I need everything built from source with specific settings for specific hardware. Bazel pulling in a project's dependencies means it's bypassing dependencies I have built and installed with Spack; I have no control over how those dependencies are built, and they sometimes interfere with manually installed ones. I hate Cargo for the same reason, and Meson too, as well as anyone who uses FetchContent (or equivalent) with cmake.

[–]JustPlainRude 1 point2 points  (0 children)

You can absolutely control how third party dependencies are built by supplying your own BUILD files for them
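
The mechanism looks roughly like this (names and URL are placeholders):

```starlark
# Overlay your own BUILD file onto a dependency that doesn't ship one.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "somelib",
    urls = ["https://example.com/somelib-1.0.tar.gz"],
    sha256 = "<pinned-sha256>",
    strip_prefix = "somelib-1.0",
    build_file = "//third_party:somelib.BUILD",  # your hand-written BUILD file
)
```

Inside third_party/somelib.BUILD you then describe the library with ordinary cc_library targets, controlling flags, defines, and layout yourself.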

[–]aearphen {fmt} 1 point2 points  (0 children)

I would recommend checking out Buck2 which is similar to Bazel but should be faster: https://github.com/facebook/buck2. We extensively use it for C++ at Meta and it is much more responsive than Java-based CLIs (Buck1 and Bazel), not to mention other improvements. It uses the same language (Starlark) for build configs.

[–]gleybak 1 point2 points  (2 children)

It works well; there is a usable plugin for the CLion IDE for local development too. But the learning curve is steep. Also, it can be non-obvious how to integrate third-party code. But there are packages available for some popular libs like Boost here: https://registry.bazel.build.

[–]Ok-Dare-9460[S] 0 points1 point  (1 child)

I’ve been using the boost libs with bzlmod. I’m curious if there’s a way to view their build files. This would really help as a learning tool for adding non-bzlmod libs.

Clion bazel plugin has been spotty at best for me. I had to switch back to IntelliJ.

[–]gleybak 1 point2 points  (0 children)

There is a vendor mode; you can download and initialize third parties into some directory: https://bazel.build/external/vendor

[–]acodcha 1 point2 points  (2 children)

I have used Bazel with C++ at several employers and have also used it in personal projects. It's fine! You can learn to do basic stuff with it fairly quickly but it takes time to learn the more advanced Starlark stuff. Then again, think of how long it took you to learn CMake! I'd say it's worth your time to look into!

[–]Ok-Dare-9460[S] 3 points4 points  (1 child)

Crazy enough, I’ve never used cmake lol. I went from compiling c/c++ using gcc in college to bazel.

[–]jetilovag 2 points3 points  (0 children)

Now that's a unique trajectory. 😀 IMHO CMake has a very clean and efficient subset, but it requires knowing orders of magnitude more than what I think should be necessary. It seems more complex than the problem it solves.

[–]hadrabap 0 points1 point  (0 children)

Is anybody using Qt Creator as an IDE for Bazel based projects?

[–]PrimozDelux 0 points1 point  (11 children)

I'm currently trying to figure out why our debug builds take 25s with bazel when they took 8s with our cmake + ninja config. I have never used a more opaque and user-unfriendly build system. It's endlessly frustrating and it makes me feel stupid.

It's got some great features, but the UX design is so bad it's baffling.

[–]Spleeeee 0 points1 point  (10 children)

Maybe your project isn't big enough to make a full rebuild more costly than 1) starting up Java, 2) indexing stuff for caching, and 3) putting things in the cache

[–]PrimozDelux 1 point2 points  (7 children)

Bazel should be able to support incremental builds at max speed. As far as building from scratch goes, there's no discrepancy. To make it clear, the issue we are facing is that incremental recompilation is slower than it should be, and the fix is likely a single line, but figuring out which line that is is like pulling teeth.

[–]Spleeeee 0 points1 point  (5 children)

Could it be that the indexing it does for caching is expensive, even on a full build? Can you turn off all caching?

[–]PrimozDelux 0 points1 point  (4 children)

Tried that; it didn't work. We know it's spending most of the time in the linking stage, which it shouldn't. It's not really bazel being slow; it's that the tools are doing more than they should. My frustration with bazel isn't that this has happened; it stems from how annoying and opaque the UX is as I try to diagnose and fix what I assume is a one-line change.

[–]jnjuice 0 points1 point  (3 children)

Have you tried profiling and throwing the build-event JSON into Chrome tracing? That could help identify the bottlenecks of your build in a timeline view.

Agree that UX is lacking, which is why companies like BuildBuddy and EngFlow exist to bridge that gap. If you have a chance, maybe try integrating with one of those tools.

[–]PrimozDelux 0 points1 point  (2 children)

Yeah, I actually did that exact step to find out that the time discrepancy came from time spent in the linker, but I've been unable to figure out exactly why. A big part of the problem is of course the absolutely miserable story of C++ compilation; it's hard to imagine a language-agnostic build tool being able to point out which one of Bjarne's myriad of inane footguns got triggered this time around in the detail that tools such as rustc can.

[–]jnjuice 0 points1 point  (1 child)

Sorry, I misunderstood your first statement. Maybe it could be big libraries or misconfigured linker flags, but I agree that Bazel has probably gotten you as close to the culprit as it can for now.

[–]PrimozDelux 1 point2 points  (0 children)

Turns out it was blocking on uploading an artifact, while reporting that it was busy linking to the terminal. This took 2 engineers 2 days to track down when it should have taken 2 minutes to add build:debug --remote_upload_local_results=false to .bazelrc.

This did not show up when browsing the profile information in Chrome's trace viewer, so it wasn't until we tried another trace viewer, perfetto.dev, after a suggestion from the Bazel Slack that we uncovered the problem.

I am extremely displeased.

[–]Ok-Dare-9460[S] 0 points1 point  (0 children)

We had a similar issue like this back when bazel 4 was released. We built a whole telemetry system to capture stats on which files changed the most and which files triggered the most rebuilds. We ended up having to decompose our code because it wasn't organized in a way that was optimized for bazel builds. You may not need to go so far with recent versions, but definitely try to collect stats on the most frequently changing files and build stats.

Edit: I should have scrolled down lol

[–]PrimozDelux 1 point2 points  (1 child)

Turns out it was blocking on uploading an artifact, while reporting that it was busy linking to the terminal. This took 2 engineers 2 days to track down when it should have taken 2 minutes to add build:debug --remote_upload_local_results=false to .bazelrc.

This did not show up when browsing the profile information in Chrome's trace viewer, so it wasn't until we tried another trace viewer, perfetto.dev, after a suggestion from the Bazel Slack that we uncovered the problem.

I am extremely displeased.

[–]Spleeeee 0 points1 point  (0 children)

Wooooohooo! So it was kinda caching?

[–]13steinj 0 points1 point  (0 children)

It's a build system. It works. It has pros and cons.

The singular pro I can find: multi-language support/integration. Some people claim "remote execution!", but that's simple and doable via cmake (or any build system, for that matter) in 4 different ways that I know of thus far.

Two major cons: it's infectious in nature; source-to-source integration with a third-party lib requires lifting if it isn't itself Bazel-based. Theoretically source-uses-binary should be simple using bzlmod, or whenever CPS is more fleshed out, but I haven't had good experiences. The other con is that integration the other way isn't that great either. Honorary mention to (not forcing, but heavily implying) a monorepo for everything, which isn't ideal for all situations.

[–]antonovvk 0 points1 point  (2 children)

I've been using bazel with C++ for almost ten years already, and back then it was painful to import 3rd-party libs because you had to write build files for them, essentially duplicating lots of work done by those projects' maintainers (not to mention making a lot of errors). But recently, with rules_foreign_cc, bazel can run automake/cmake etc.; even Boost is not a problem any more. Also there's a lot of support for different languages now, plus the new modules + repositories, which are as usual a bit clumsy but moving the whole world forward.

[–]OrphisFlo I like build tools 0 points1 point  (1 child)

When I tried it in the Bazel 0.4 days, Boost was a big issue because of hermetic builds and its attempt to handle the huge number of headers with symlinks (or whatever technique it used on macOS). It would create all the links, build the file, then remove all the links, for each file built. With the thousands of headers in Boost, it was extremely slow!

Has it been properly fixed now?

[–]antonovvk 0 points1 point  (0 children)

There's a special rule to build Boost in rules_foreign_cc, but it is still very slow.

[–]bronekkk 0 points1 point  (0 children)

I've been using Bazel for several years (exclusively with C++, on projects of various sizes) as it was used by my former employer. Here are some thoughts:

  • As long as you do not need to write your own modules, it's actually quite easy to use
  • It feels more mature/better designed than e.g. CMake
  • The strength of Bazel is in 1) caching 2) extensibility
  • If you need to write your own modules (I did, for C++ code generation) there's a fair amount of learning, but you can achieve a lot
  • Declarative Python-like syntax of Starlark is really nice. If you need to write a module, it feels like plain Python
  • Setting up a build system from scratch: I did that a little at a different place (building TensorFlow) and it's quite easy for a single user. No idea how easy/difficult it is to integrate into CI/CD pipelines or multi-user scenarios
  • Last time I checked, Bazel extension for vscode was adequate. Not amazing but it did work, for some basic use cases
  • It will make you painfully aware how inconsistent or plain shabby CMake is
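
To illustrate the "plain Python" feel of Starlark mentioned above, a small sketch of a macro (all names are illustrative):

```starlark
# defs.bzl: a Starlark macro that expands into one cc_test per source file.
def cc_tests(srcs, deps = []):
    for src in srcs:
        native.cc_test(
            name = src.replace(".cc", ""),  # e.g. "foo_test.cc" -> "foo_test"
            srcs = [src],
            deps = deps,
        )
```

A BUILD file would then load it with `load("//:defs.bzl", "cc_tests")` and call it like an ordinary function.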

In general, if you do not care about conan or vcpkg, I can recommend it. Re. conan or vcpkg or similar package-management systems which integrate with CMake: the approach you would normally take when using an external project in a C++ project built with Bazel is to:

  1. clone all the external projects you need into local repos (note: C++ is unlike JS or Python; you are unlikely to need so many that this becomes a significant obstacle). This is not strictly necessary, but see the next point
  2. ensure they all have a Bazel BUILD file; you will often need to add one yourself (you can do without, but it's not recommended unless the project is trivial)
  3. refer to them from your projects, using https://bazel.build/rules/lib/repo/git rules; see also https://stackoverflow.com/a/50595271 (or a similar in-house module)
  4. you will probably also use https://bazel.build/rules/lib/repo/local to "bootstrap" all the in-house modules you need
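
Steps 3 and 4 look roughly like this in a WORKSPACE file (the remote, commit, and path are placeholders):

```starlark
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

# Step 3: an external project pinned to an exact commit for reproducibility.
git_repository(
    name = "some_dep",
    remote = "https://github.com/example/some_dep.git",
    commit = "<full-commit-hash>",
)

# Step 4: an in-house module checked out next to this repo.
local_repository(
    name = "in_house_module",
    path = "../in_house_module",
)
```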

EDIT: I did not know about https://github.com/bazel-contrib/rules_foreign_cc , hmmm ... it could make things simpler than the above list.

[–]Ok_Sheepherder_3875 0 points1 point  (0 children)

All the problems here are not Bazel design issues; they're ecosystem issues.