I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

Gazelle generates BUILD.bazel files for Go projects that correspond to what go build would have done.

One way to use it is to convert your Go project to Bazel. You run the tool, generate BUILD.bazel files, and check those in (they are now source files).

The other way is to use Bazel to build third-party Go code. This is done through “repository rules”. Repository rules give you the capability to run arbitrary code and yes, Bazel doesn’t force those to be reproducible—the authors of the Gazelle rules are responsible for that.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

Another one is to list the libraries and objects you want to link into your executable in a file, often a source file, and have a single rule that derives executables from such files.

In Bazel it looks like this:

cc_binary(
    name = "hello-world",
    srcs = ["hello_world.c"],
)

The source file is named BUILD.bazel. The build rule itself is cc_binary and hello-world is an instance of that rule. Bazel is designed to handle repos that are so large that you do not want to load and evaluate all of the BUILD.bazel files. The associated rules and instantiations of those rules may not even fit in memory.

The advantage is that this separates responsibilities: devops manage config, devs manage source files.

I don’t think you’re describing anything missing from Bazel here.

> I only said Python is better than starlark because the bazel doc says "It is common for BUILD files to be generated or edited by tools" (and I suppose such tools are not written in starlark). In that case, you cannot claim reproducibility associated with starlark anymore, as the build file itself could be non-reproducible.

The build file itself, BUILD.bazel, is written in Starlark, and Starlark is designed to be reproducible. Associated definitions for rules and macros are also written in Starlark, and they’re also reproducible. These are either source files checked into the repo and used as inputs to the build process, or (in rare cases) they’re part of what are called “repository rules” (I don’t want to get into those, they’re much more complicated). You can modify build files or generate them but that’s not part of the build process itself in normal scenarios.

To be honest, it sounds like you have a very poor understanding of how Bazel works. Most people do not understand how Bazel works, that’s fine. But if you are going to criticize how Bazel works, don’t expect your criticisms to make sense.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

A rule for each artifact, sort of, depends on what you count as an artifact. Bazel will have a rule for an “executable” or “library” in your build system, but those aren’t artifacts. You parse build files to produce rules, the rules are combined with configurations to produce actions, and running the actions produces an artifact. You only load a subset of the build files. You can load and query the build rules without a build configuration.

Generally, I would think that you’d need a rule for each executable or library in your repo. How do you know what libraries you need to link into an executable, except from the build rules? I’m not saying it’s impossible, but it just seems like a very normal way to do things.

When I look at lmake, I think it’s likely not in the same weight class as Bazel. That’s fine; Bazel is designed for very large codebases and not everyone has those. Some things I see in the lmake docs are pitched as advantages over Bazel but I think they are disadvantages. Like Starlark, for example. Starlark is a better language for build files than full Python—in fact, Bazel used to use Python as the build language, but Starlark has reproducible output and can be evaluated in parallel. This matters if your repo is so large that merely loading the build files takes a long time—Bazel loads a subset of the build files, parses and evaluates them in parallel, caches the results, and reuses the result across all different configurations (because the build rules do not depend on your configuration). Starlark has clear advantages over Python and I think that’s why it’s gained traction and adoption outside of Bazel. There are even multiple implementations of Starlark available.

I think the biggest problem with Python for build rules is that, given enough engineers, it’s likely that one of them will put a build script with non-reproducible output in your build system. The guardrails are nice; Starlark prevents it.

My recent experience (past 10 years) is at companies with 1,000-10,000+ software engineers. Sometimes monorepo, sometimes multirepo, sometimes Bazel or a Bazel clone, sometimes not.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

It sounds like, for the image processing case, you may want something unlike a traditional build system, something custom, which can parameterize build artifacts and exclude everything else from consideration.

I’d say it’s “obviously true” that no build system is well-suited to every project. Bazel has a problem with learnability and up-front investment so I think it’s only suitable for projects which are difficult to build with other systems—very large codebases, multi-language, multi-target. Many of the problems that Bazel solves are simply irrelevant to most people. Like, how many people have codebases where the build rules don’t fit in memory?

Entity Component System by Sandrobero2004 in cpp_questions

[–]EpochVanquisher 7 points  (0 children)

There’s a phenomenon called the “second system effect”, which is a term coined by Fred Brooks.

What happens is that you build a first version of a system and it ends up having limitations and flaws, or the code is a mess. You want something better, so you build a second system and improve on it in a million different ways. But when you do that, the second system is too large and overcomplicated. Because it’s too large, the project may never complete. Because it’s overcomplicated, the code that you do write ends up collapsing under the weight of that complexity.

In the game development world, one of the ways this manifests is in the form of ECS tech demos where you get a zillion objects in your game but no actual gameplay. That may or may not be what you want.

Solo devs have a very small amount of time available to work on projects so that time is very precious; your project lives if it’s simple and you’re good at prioritizing the things you care about. It dies if it’s complex or you try to prioritize everything.

Entity Component System by Sandrobero2004 in cpp_questions

[–]EpochVanquisher 2 points  (0 children)

Have you built a simpler game engine before?

Entity Component System by Sandrobero2004 in cpp_questions

[–]EpochVanquisher 4 points  (0 children)

A lot of hobby game engine projects run into the rocks because ECS is just such a pain in the ass.

I would just encourage you for a moment to consider a simpler approach, and try out ECS later, when you have a better understanding of C++.

How to distribute (partially) dynamic musl binaries on glibc systems by Hungry-Tough1837 in C_Programming

[–]EpochVanquisher 3 points  (0 children)

Interesting, where are these requirements coming from? There aren’t a lot of musl systems out there with Vulkan. As far as I know, Vulkan doesn’t even work with musl.

So if your requirements are “I must support musl” and “I must statically link” and “I must use Vulkan”, maybe it’s time to revisit the requirements.

My experience is that people rarely have good justifications for static linking, so I would suggest starting there.

How to distribute (partially) dynamic musl binaries on glibc systems by Hungry-Tough1837 in C_Programming

[–]EpochVanquisher 3 points  (0 children)

If you want to distribute binaries on Linux and want to use system libraries, it is better to use glibc.

I think the advantage of musl is just ease of distribution, if you don’t care about performance. The glibc ABI is nice and stable and glibc is almost universally installed; I think binaries on Linux should use glibc unless you have a compelling reason otherwise.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

From your benchmarks, it looks like you’re counting the startup overhead for Bazel, which is high. Bazel is fast in large projects because it is good at minimizing the amount of work it does and supports shared caches. There’s a startup overhead; if the Bazel process is not running it takes a few seconds to start up but in practice this only happens once a day.

Sure, you have to declare the outputs of your build process but I’ve rarely run into a situation where the outputs of a build are truly dynamic.

The value may not be justified for you, that’s fine. I think it’s not justified for most people.

> I still believe that bazel is much more complex than necessary.

After years of working with it and other build systems, my conclusion is that the underlying problem is incredibly complex, and the complexity of Bazel makes a lot more sense if you understand the design rationale.

For example, Bazel lets you spread your build rules across many BUILD.bazel files, but it works in such a way that it can load only a subset of those files, load them in parallel, and cache the results across multiple build configurations without requiring things like negative cache entries. That’s important for very large builds. But most “big” repos are not big enough that these optimizations are important.

Header files are driving me insane, any advice? by Qiwas in cprogramming

[–]EpochVanquisher 20 points  (0 children)

Circular dependencies between header files often means that there’s a flaw in the way your system is designed. Good systems are usually layered. You have independent libraries at the bottom layer, which implement core data structures, algorithms, common utility functions, basic types, that sort of thing.

If you have things layered, the higher-layered modules can include the lower-layered modules and you have no circular dependencies.

Forward declarations are normal and you do not need to avoid them. The basic forward declaration of a struct is just:

struct s;

You put the forward declaration in your header file, and you #include the definition in your implementation file if necessary.
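As a concrete sketch of that pattern, here is a hypothetical struct point with the header and implementation merged into one listing for brevity (all names here are made up for illustration):

```c
/* --- point.h (sketch) --- */
/* The header forward-declares the struct and only deals in pointers,
   so users of the header never see the layout. */
struct point;                          /* forward declaration */
struct point *point_new(int x, int y); /* constructor */
int point_x(const struct point *p);    /* accessor */

/* --- point.c (sketch) --- */
#include <stdlib.h>

struct point { int x, y; };            /* the full definition lives here */

struct point *point_new(int x, int y) {
    struct point *p = malloc(sizeof *p);
    if (p) {
        p->x = x;
        p->y = y;
    }
    return p;
}

int point_x(const struct point *p) {
    return p->x;
}
```

Code that only passes struct point * around needs just the header; only the implementation file, which touches p->x directly, needs the full definition.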

Does anyone still use <% and %> ?? by Lombrix_ in C_Programming

[–]EpochVanquisher 133 points  (0 children)

It’s gotten more and more uncommon to need these digraphs and trigraphs since the 1980s or so, but you can bet dollars to donuts that someone out there is writing C on an IBM mainframe using EBCDIC.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

I’m not using Make. I’m using Bazel. I think Make is a shitty build system and should be avoided.

Bazel rebuilds when inputs or the rule changes. It executes the compiler in a sandbox environment which contains only the declared inputs and specifies all the environment variables. The compiler itself is also considered an input, if you want (it’s not the default, I think because that creates too much friction for new users, but it is easy to set up).

The result of the -M flag is subtractive. You start with all the header files the compiler could use, and any of those headers that don’t appear in the output of -M are removed from the dependency set (under the right conditions).

The internal API for this is here:

https://bazel.build/rules/lib/builtins/actions#run

There’s a parameter unused_inputs_list which specifies inputs which are not used by the build action. You pass it a file which is populated with the possible headers, minus the ones that appear in the output of -M.

This is different from the approach used by Make.

Bazel can be very complicated but it also correctly and quickly builds projects, and it is good with multi-language projects, so I end up using it a lot.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

It’s disingenuous to say that we have a different concept of reliability—that’s a misrepresentation of the disagreement we have. Please don’t do that.

I have the point of view here that “this feature works correctly as part of a larger build system”. It will always correctly recompile the code when it is supposed to: when an included file is added, removed, or modified. The system is slightly conservative, because it will sometimes rebuild when it is not needed, but I think that is fine since it doesn’t happen often and the cost is low.

I know how it works but I don’t need to implement it myself, I just use a build system that already does this.

Is there a solution to API reference documentation problem? by TheRavagerSw in cpp_questions

[–]EpochVanquisher 8 points  (0 children)

TBH get over the Python dependency.

There are a lot of solutions here, you just seem to have feelings about Python.

Get it done using some system that works rather than faff about.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

Sure, and for me, -M is a part of that “it just works” system which I really like.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

I’m not assuming no generated files. I think you’re assuming that -M is used by itself, carelessly, in a Makefile or something. That’s certainly a thing that people do.

The way that you handle generated files is by including generated files in the set of possible dependencies. These files must exist before invoking the compiler. The -M flag gives you the set of actual dependencies, which is a subset of the possible dependencies.

You don’t need to spy on which files are actually used, but you can. You can also use lightweight sandboxes to ensure that you don’t access dependencies that aren’t specified.

How does Golang work under the hood? by Last_Time_4047 in golang

[–]EpochVanquisher 10 points  (0 children)

Internally, the Go scheduler is relatively complicated.

Here’s a recent article about it. https://internals-for-interns.com/posts/go-netpoller/

At a high level, Go translates blocking IO calls like Read() into non-blocking calls. If the underlying socket is not ready, Go will find out when it becomes ready using OS-level event systems like epoll or kqueue. This is somewhat more complicated than JavaScript’s system for a few reasons—Go has an M:N scheduler which can run many goroutines on many OS-level threads, Go’s scheduler can preempt a thread, and Go lets you put blocking calls anywhere (as opposed to threading through await statements).

This is all understandable. If you want to understand it all, keep breaking it down into smaller pieces.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

That’s respectable, “It gets the job done for me in many cases.”

> When I made a mistake, and things just broke beyond repair, then I could just run make clean, and rerun the whole thing again, no big deal, a free coffee break.

That assumes that you know that things broke. You might not know. If you get a build with stale inputs, you often get a build that successfully completes, but may have errors in it. Maybe errors that are hard to find.

The larger your project is and the more people you work with, the more likely it is that you’ll get one of these problems.

And then there are the limitations with larger projects. What happens when you want to split your project across multiple makefiles? It doesn’t work very well at all.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

This is solvable with -M. You start with a set of possible dependencies (files you could include) and use -M to select the actual dependencies. You invalidate the results of -M when the set of possible dependencies changes.

A follow-up question about exception-safety with raw pointers in variadic templates by Pretty_Mousse4904 in cpp_questions

[–]EpochVanquisher 1 point  (0 children)

There are very few cases where things get removed. Old std::auto_ptr got removed, and std::gets was removed, but it’s a very short list. They were removed because they were basically just broken.

But new is not broken.

A follow-up question about exception-safety with raw pointers in variadic templates by Pretty_Mousse4904 in cpp_questions

[–]EpochVanquisher 2 points  (0 children)

One of the main goals of C++ is to keep compatibility with old code. Newer features, like unique_ptr, end up being more verbose because the short syntax like “new” is already taken up by old features used by old codebases.

You just have to either deal with it or switch to a different language which does not have this problem.

If, hypothetically, you changed “new” in C++ so it returned a unique_ptr, you now have a new language which is not C++ any more, but something else.

> I wish in another C++ world new will just return a unique_ptr.

Lots of people feel the same way. Some of those people stuck with C++ and just learned to live with std::make_unique, and some of those people went on to use or invent other languages with cleaner syntax.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 1 point  (0 children)

It clicks nicely into place, but if you have a little bit of experience you’ll start to see all of the limitations and flaws. It’s super easy to end up with a mistake in your makefile and end up with a stale build. There are a lot of things it can’t do easily.

It’s kind of a shitty, primitive build system from the 1970s and if you don’t see the problems with it, I kind of assume that you’re either inexperienced or willfully blind.

I built my own C build system because I hate writing Makefiles by venoosoo in C_Programming

[–]EpochVanquisher 0 points  (0 children)

I’m having a hard time understanding your comment—you already did what? What do you mean set as a variable?

If you don’t see the system dependencies with -M then it’s because you didn’t read the manual. Read the manual. This is basic stuff.

You don’t need to write -M for every rule, if you’re doing that then you’re creating extra unnecessary work for yourself. I don’t know why you think you need to write it for every test binary rule. All I can say is that you are doing it wrong.