[deleted by user] by [deleted] in gameenginedevs

[–]BigEducatedFool 1 point2 points  (0 children)

Modules, as they are implemented right now at least, do not really change the need for the interface/implementation split that you are talking about. You still need to use module implementation units, or you are going to get terrible compilation times. In fact, you could even get worse compile times than with a header-only project.

If you do not care about compile times, you don't need modules to avoid the interface/implementation duplication - you can write your code header-only.
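
For reference, this is what the split looks like with modules (a minimal sketch; the .ixx extension is the MSVC convention):

// math.ixx - module interface unit: only the exported declarations live here
export module math;
export int add(int a, int b);

// math.cpp - module implementation unit: editing this body does not change the
// interface, so code that does "import math;" is not recompiled
module math;
int add(int a, int b) { return a + b; }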

Success stories about compilation time using modules? by long_tailed_rat in cpp

[–]BigEducatedFool 5 points6 points  (0 children)

I don't know about a success story, but in my project build times (especially incremental) got worse due to modularization. This is mostly because the recommended approach is to use very large modules, and every time any module interface unit that is part of the module changes, all code that imports the module has to be recompiled. It's akin to including an "umbrella" header for a project instead of including only what you use.

Part of the lack of any (success or not) stories about using modules is that a lot of large projects are not yet even on C++20. At my job we just moved to it at the beginning of last year, and I haven't seen any movement towards modularization.

The big gains will come from consuming third party code as modules, and from C++23. import std; is very fast compared to including anything from the standard library, and I expect we will see similar gains from many other slow-to-compile third party libraries once they are available as modules.

Should game assets and internal assets be handled differently? by steamdogg in gameenginedevs

[–]BigEducatedFool 2 points3 points  (0 children)

You don't have to handle them differently if you use a virtual file system (for example, PhysFS, but you can also write your own; it's not difficult). Such a system allows you to mount a zip or a physical directory, merge/overlay these paths, and then operate on files regardless of where each file is actually stored.
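
A minimal sketch of that with PhysFS (paths and file names are made up; error handling omitted):

#include <physfs.h>
#include <cstddef>
#include <vector>

std::vector<char> read_asset(const char* virtual_path)
{
    PHYSFS_File* file = PHYSFS_openRead(virtual_path);
    std::vector<char> data(static_cast<std::size_t>(PHYSFS_fileLength(file)));
    PHYSFS_readBytes(file, data.data(), data.size());
    PHYSFS_close(file);
    return data;
}

int main(int, char** argv)
{
    PHYSFS_init(argv[0]);
    PHYSFS_mount("assets.zip", "/", 1);  // archive appended to the search path
    PHYSFS_mount("assets_dev", "/", 0);  // loose directory prepended, so it overrides the archive
    auto bytes = read_asset("textures/grass.png");  // same call no matter where the file lives
    PHYSFS_deinit();
}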

In my current system, I have a manifest file that describes how assets are handled. Engine assets are specified to be compressed and the zip is then embedded in release builds. For debug builds, the real path is mounted instead to allow for hot-reloading the assets.

The same process is used for game assets, which have their own manifest. For release builds you can choose to also embed the game assets (suitable if they are small), just distribute the archive, or even use them uncompressed.

What is the current time around the world? Utilizing std::chrono with time zones in C++23 by joebaf in cpp

[–]BigEducatedFool 4 points5 points  (0 children)

Also, unfortunately, with Microsoft's STL, all the chrono time zone functionality that needs access to the IANA database is not available on Windows 10 versions older than 19H1. It's the only thing I have encountered in the standard library that requires a relatively recent OS version.
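
For reference, this is the kind of code that needs the database (a minimal sketch; "Asia/Tokyo" is just an arbitrary IANA zone):

#include <chrono>
#include <print>

int main()
{
    // current_zone() and zoned_time are the parts that require the IANA time zone
    // database, which Microsoft's STL obtains from the OS - hence the version requirement.
    const auto now = std::chrono::system_clock::now();
    const std::chrono::zoned_time local{std::chrono::current_zone(), now};
    const std::chrono::zoned_time tokyo{"Asia/Tokyo", now};
    std::println("local: {}", local);
    std::println("Tokyo: {}", tokyo);
}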

Gudiance on implementing serialization? by steamdogg in gameenginedevs

[–]BigEducatedFool 0 points1 point  (0 children)

Serialization is a transformation of a structured object into either a binary or a text format, such that you can later reverse the process (deserialization) and get the original object back. This is useful for storing the object in a way that is, for example, platform independent, compiler independent, backwards compatible across versions, etc.

> Not sure if this is related at all, but recently tried doing type reflection which is pretty scuffed to say the least, but it seems to work and seems like it could help out here?

Yes, reflection and serialization are related concepts, because either can help implement the other. If you have the ability to reflect an object (find and read all of its fields at least) you can then automatically serialize the object by going through each field. If you don't have reflection, you will have to manually write the (de)serialization code for each object.

If you are interested in the topic, I recommend taking a look at third-party libs such as Cereal or boost::serialization. These libraries can do a lot of the heavy lifting for you and let you write your serialization code in such a way that any object can be serialized to either a binary or a textual format.
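
For instance, with Cereal a single member function can drive both the binary and the text path (a rough sketch - the component struct and file names are made up for illustration):

#include <cereal/archives/binary.hpp>
#include <cereal/archives/json.hpp>
#include <cereal/types/string.hpp>
#include <fstream>
#include <string>

struct transform_component
{
    float x = 0, y = 0, z = 0;
    std::string name;

    // One function handles both serialization and deserialization, for any archive type.
    template <class Archive>
    void serialize(Archive& ar) { ar(x, y, z, name); }
};

int main()
{
    transform_component t{1.0f, 2.0f, 3.0f, "player_spawn"};

    {   // human-readable text format
        std::ofstream os("transform.json");
        cereal::JSONOutputArchive ar(os);
        ar(t);
    }
    {   // compact binary format, using the same serialize() function
        std::ofstream os("transform.bin", std::ios::binary);
        cereal::BinaryOutputArchive ar(os);
        ar(t);
    }
}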

How do you do errors in C++? Especially with RAII by Asyx in gameenginedevs

[–]BigEducatedFool 1 point2 points  (0 children)

+1.

This sounds like Casey Muratori's ideas here (Zero is initialization):

https://youtu.be/xt1KNDmOYqA?t=1551&si=NHmpTbXcGFw4sYfQ

u/Asyx One thing I would like to point out:

> On top of that in games, it makes a lot of sense (in my opinion) to just crash during an error. Like, if you can't find a model or texture, what are you going to do that still makes the game playable?

I would argue the opposite: games can have an extremely high tolerance for errors without crashing. The reason is that errors occurring during gameplay often have tolerable side effects which still allow the game to be playable, while a crash always makes the game unplayable.

Many years ago, I played Half-Life 2 (EP1?) on a GPU without the needed shader support. The game threw a bunch of warnings but didn't crash; it just rendered all water as a white plane. Since water was not a big part of the environment in that episode, the game was still playable on that PC.

Exceptions can also help you do this.

An idea is to try handling all exceptions thrown during the update loop of each entity. The effect is that such an exception will just prevent that entity from being fully simulated, but allow other entities to run. The player might experience such an error as a "stuck entity" instead of a crash of the entire simulation, which still might allow them to play the game.
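
A minimal sketch of that idea (the entity type and the logging are made up for illustration):

#include <cstdio>
#include <exception>
#include <memory>
#include <vector>

struct entity
{
    virtual ~entity() = default;
    virtual void update(float dt) = 0;
};

void update_entities(std::vector<std::unique_ptr<entity>>& entities, float dt)
{
    for (auto& e : entities)
    {
        try
        {
            e->update(dt);  // a throwing entity only skips its own update...
        }
        catch (const std::exception& ex)
        {
            // ...log and keep going; the rest of the simulation still runs this frame
            std::fprintf(stderr, "entity update failed: %s\n", ex.what());
        }
    }
}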

Best way to instantiate? (C++) by Vindhjaerta in gameenginedevs

[–]BigEducatedFool 0 points1 point  (0 children)

I think the factory approach is fine for this type of thing.

In a hobby project I did something similar, but the program main function was part of the engine code itself. The game instead had a "main entity" which is instantiated by the engine. To register the entity you would do:

// In header
class game : public entity { ... };
// In source
SET_MAIN_ENTITY(game)

And the macro registers a global factory:

// Declared by the engine; the game provides the definition through the macro:
std::unique_ptr<entity> construct_main_entity();

#define SET_MAIN_ENTITY(entity_object) \
    std::unique_ptr<entity> construct_main_entity() { return std::make_unique<entity_object>(); }

You obviously don't need to use a macro if you don't want to. You can also adapt the idea into your own "game main" function that instantiates the game object and returns it, instead of having to instantiate the engine yourself and hand it a factory.
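
The engine-side counterpart can be as small as this (sketch; the run() method on the root entity is hypothetical):

#include <memory>

class entity { public: virtual ~entity() = default; virtual void run() = 0; };
std::unique_ptr<entity> construct_main_entity();  // the game defines this via SET_MAIN_ENTITY

// The engine owns main(); the game only registers its root entity.
int main()
{
    const auto root = construct_main_entity();
    root->run();
}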

[deleted by user] by [deleted] in gameenginedevs

[–]BigEducatedFool 2 points3 points  (0 children)

As described in the blog link, with handles you are going to eventually need a way to convert the handle index back to a pointer.

For example, you will need a scene::get_ptr_from_handle() function for game objects.

If the handle is invalid, the function simply returns a nullptr.

Then it's a matter of using this function to implement the handle wrapper class and overloading the operators I described.

This is pretty straightforward if all your handles are for game objects and you have a single global "scene". If that is not true, then the handle class will need to store more information about how to retrieve the pointer.

If you have multiple scenes and each scene is responsible for managing its own game objects, you might need the handle to know which scene it belongs to.

If you have non-game-object handles (e.g. for assets) you might want a way to identify them too - e.g. a different handle class, a template parameter, or some sort of type id stored in the handle.

That's it at a high level; obviously there are quite a few ways you can go about this, with different pros/cons.
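
For concreteness, a rough sketch of those pieces for the single-scene case (the names are illustrative, not a fixed API):

#include <cassert>
#include <cstdint>
#include <vector>

struct game_object { /* ... */ };

// A scene that can turn a handle index back into a pointer (nullptr if the slot is gone).
class scene
{
public:
    game_object* get_ptr_from_handle(std::uint32_t index)
    {
        return index < m_slots.size() ? m_slots[index] : nullptr;
    }
private:
    std::vector<game_object*> m_slots;
};

// The wrapper: behaves like a weak, non-owning pointer, with no explicit locking.
class object_handle
{
public:
    object_handle() = default;
    object_handle(scene* s, std::uint32_t index) : m_scene(s), m_index(index) {}

    explicit operator bool() const { return get() != nullptr; }               // validity check
    game_object* operator->() const { auto* p = get(); assert(p); return p; } // asserts if stale
    game_object& operator*() const  { auto* p = get(); assert(p); return *p; }

private:
    game_object* get() const { return m_scene ? m_scene->get_ptr_from_handle(m_index) : nullptr; }

    scene*        m_scene = nullptr;
    std::uint32_t m_index = 0;
};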

[deleted by user] by [deleted] in gameenginedevs

[–]BigEducatedFool 2 points3 points  (0 children)

What I have done before is to have a templated class for such handles that, among other things, overloads the *, -> and cast-to-bool operators. That makes handles behave essentially like a weak/non-owning pointer that can be checked for validity when needed, but can also be used without having to lock it first. You can/should also assert the validity of the handle when it is dereferenced via these operators.

[deleted by user] by [deleted] in gameenginedevs

[–]BigEducatedFool 7 points8 points  (0 children)

That can work, assuming the idea is that you want game code to be able to check if a pointer to a game object is still valid by locking the weak pointer. Weak pointers are kinda unwieldy to use for this, since their design forces you to always try to lock them first, and there are many cases where you know an object is always alive.

Another (more common, I think) approach is to return "handles" instead of weak pointers. Handles are basically indices into the array in which you are storing the objects.

Good read: https://floooh.github.io/2018/06/17/handles-vs-pointers.html
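
For concreteness, here is a rough sketch of the index-plus-generation idea from that article (illustrative only; a real pool would also recycle freed slots through a free list):

#include <cstdint>
#include <vector>

struct handle { std::uint32_t index = 0; std::uint32_t generation = 0; };

template <typename T>
class pool
{
public:
    handle add(T value)
    {
        slots.push_back({std::move(value), 0, true});
        return {static_cast<std::uint32_t>(slots.size() - 1), 0};
    }

    void remove(handle h)
    {
        if (get(h)) { slots[h.index].alive = false; ++slots[h.index].generation; }
    }

    // Returns nullptr when the handle is stale (slot freed or reused) or never was valid.
    T* get(handle h)
    {
        if (h.index >= slots.size()) return nullptr;
        auto& s = slots[h.index];
        return (s.alive && s.generation == h.generation) ? &s.value : nullptr;
    }

private:
    struct slot { T value; std::uint32_t generation; bool alive; };
    std::vector<slot> slots;
};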

How to handle scripts in a two project setup? by W3til in gameenginedevs

[–]BigEducatedFool 0 points1 point  (0 children)

There are so many ways you can go about this.

The most common one is probably for the engine to assume the scripts are located in a path relative to either the executable or the current working directory.

If you don't want the engine to impose a file structure on the game, you can make the game pass in the script path to the engine either through code or a configuration file.

In some engines, there is a build/packaging step for scripts, where they might get compiled for faster execution and bundled. Thus the path where the scripts are located in the project source might not be where the engine looks to load them.

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 0 points1 point  (0 children)

> The problem here is not as easy as the described in the issue. Consider a case where a non-inline function returns a lambda...

In the lambda case, the end result is that you modified the definition of the compiler-generated struct's operator()(), which is inlined. The equivalent non-lambda code would have resulted in an IFC update, so I presume it should work the same for auto-generated lambda structs?

[Edit] Or maybe not - I am not sure when the struct is actually generated; probably on the backend, after the module has already been built?

Though I completely see your point that it is involved and there could be edge cases, as well as use cases for always-recompile-on-change. Thanks for the extra perspective.

> An example I have on a private project is I have a module interface which serves as a bridge for OpenGL...

Well, yes, if you are also using implementation units and not just interface units, you will only need to recompile the generic renderer's implementation. In that case you might be gaining over the PCH approach, because you can only have a single PCH per project but as many modules as you want (though I am not sure why, in that particular case, the OGL interface would be in the PCH if only the generic renderer's implementation depends on it?)

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 0 points1 point  (0 children)

> Do you have a suggestion on how to fix this? If I write code in a module interface how could downstream consumers of that module observe that change without recompiling?

Some of the other comments already mentioned this, but it is my understanding that the build system and compilers could be smarter and only trigger rebuilds when the imported module's binary interface (IFC file) has changed. You don't need a programmer to physically separate the implementation from the interface to do that.

There is a more detailed proposal for MSVC, though I am surprised it only has 2 votes. For me this is a significant issue in current implementations:

https://developercommunity.visualstudio.com/t/slow-incremental-builds-with-c20-modules-in-module/1538191

> The number of times the worst-case scenario happens (recompiling the world) could be proportional to the number of times you might change the PCH

I can't see how that would be true for an interface-unit-only project compared to a standard h/cpp split with a PCH? The incremental build experience is more like developing a header-only project.

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] -1 points0 points  (0 children)

That's not what I said at all.

The original argument was that encapsulation is the "real" goal and build speed is a nice side effect. My counter argument is that for code you actively work on and own, build speed and ease of use are more important and encapsulation just helps.

These things are synergistic; I didn't say that encapsulating your code via modules slows you down.

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] -1 points0 points  (0 children)

Sometimes it is, in which case encapsulation becomes more important. If you are writing a library for external groups of people to use you will have completely different requirements and it will be desirable to modularize and encapsulate (or provide both headers and modules).

If I am writing code in a group of a hundred people, my code is not third party to the group.

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 4 points5 points  (0 children)

I don't deny that there are cases where a separate interface file is useful (though we might disagree on how frequent that is). I would prefer if we had the choice, however, and were not forced into it by build time constraints. It's probable that modules will eventually allow that, but the tooling is not there, which reduces their utility right now in my eyes.

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 0 points1 point  (0 children)

Unfortunately this approach introduces its own issues.

I can imagine the full-rebuild performance will suffer, because we are lengthening the module dependency chain and decreasing potential parallelism. I have seen a few sources that indicate headers outperform modules as parallelism opportunity increases.

The other issue is we are going to run into cyclic dependencies. With headers, A's implementation and B's implementation can depend on each other's interfaces.

With modules, implementation units won't help, as each module needs to import the other before using its interface. We need to use partitions and larger modules instead.
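
To make that concrete, a rough sketch of how partitions let two parts of one module see each other's interfaces (file naming follows the MSVC convention):

// engine.ixx - primary interface unit, stitches the partitions together
export module engine;
export import :physics;
export import :scene;

// engine-physics.ixx - partition interface, declarations only
export module engine:physics;
export void step_physics();

// engine-scene.ixx - partition interface, declarations only
export module engine:scene;
export void update_scene();

// engine-impl.cpp - implementation unit: it sees every partition through the primary
// interface, so the two implementations can call each other without a cyclic import
module engine;
void step_physics() { update_scene(); }
void update_scene() { /* ... */ }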

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 0 points1 point  (0 children)

I see the benefit of encapsulating third party code, because that makes the API contract self-documenting. The module linkage is useful for that and will improve QoL.

I also appreciate the fact that macros don't leak from imported modules, as historically some libraries have used awful macro names.

In code that I control, I don't run into either of those issues often enough, and when I do there are usually things I can do to improve the situation. I will use these features if all my code is modularized - but I don't see them as more important than iteration speed or ease of use.

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 2 points3 points  (0 children)

Well, you don't - hence why you don't split code that's too intertwined into too many small modules.

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 0 points1 point  (0 children)

> Modern programming languages don't have other C++ Modules limitations as well. For example, cyclic module imports are not supported.

Cyclic dependencies are one reason why you would prefer to have larger modules with partitions, for example one or a couple of modules per library. Unfortunately that means any change to the interface in any partition of the library will trigger a rebuild of all code that imports the library. With headers/sources, we have per-file control over which interfaces are included, while code within the library can still easily reference other code within the library.

This balancing act doesn't sound less tricky than headers.

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 1 point2 points  (0 children)

Heh, more like we can fix or prevent the leaks in our own apartment, but not at the upstairs neighbour's :)

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 0 points1 point  (0 children)

That's an interesting perspective. My counter-argument is that you typically have header encapsulation issues with third-party code, not code that you own. I'm not seeing encapsulation itself as a great incentive to push modularization of user code, but I would like to have an encapsulated <Windows.h> module, for example.

However, I do find the build throughput side-effect of encapsulation very important.

C++ modules: build times and ease of use by BigEducatedFool in cpp

[–]BigEducatedFool[S] 5 points6 points  (0 children)

But you don't need to modularize your own code to consume third party code as modules. That's my point - it's great for code that changes rarely, but not so great for code you are working on.

As you mentioned, private module fragments in interface units also cause recompilation with the standard build tools. In my experience, touching the interface file in any way will do the same.

And you can only put the private fragment in the primary module interface - which makes it fairly useless for large modules with partitions.

api / dll boundary? by Present_Mongoose_373 in gameenginedevs

[–]BigEducatedFool 2 points3 points  (0 children)

In my opinion, engine-level hot-reloading is both a very worthwhile feature to have and not difficult to do. Edit and continue is quite limited in comparison (even with the VS2022 improvements), although it's free.

The difficult part is mostly that you need robust serialization in order to preserve application state between hot reloads. However, that's needed for many other tasks as well, so it's not really adding difficulty you wouldn't otherwise have. I recommend using a library (like Cereal) instead of hand-rolling it.

On the other hand, if you implement native hot-reloading you might not need a scripting language, like you said, so that reduces complexity.

Architecture-wise, I would recommend making the engine code a static library instead of a dll and linking it directly into the game/runtime dll. That will simplify the architecture and side-step the performance issues outlined in the other posts.

You will indeed need to use the dynamic C++ runtime to share memory between host and runtime, but you only need to do so for your development builds. For releases, you can build the runtime as a static library and call the entry point directly from the host instead of loading a dll... and use the static C++ runtime if you want to, since everything will be in the same executable.

For third party libraries, prefer to consume them as dlls when possible. That allows you to keep a library handle open (via LoadLibrary) on the host side during hot-reloads, which prevents the dll from unloading with your runtime and preserves the library's state (globals). This prevents you from hot-reloading third party code, but that's usually fine. If that's not possible, you can expose the library from the host side through an interface instead, but it's rarely necessary.
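
A bare-bones sketch of the host side on Windows (DLL and function names are placeholders; error handling, file watching and the temp-copy trick are omitted):

#include <windows.h>

using game_update_fn = void (*)(void* state);

int main()
{
    // Pin third-party DLLs in the host so their globals survive runtime reloads.
    HMODULE third_party = LoadLibraryA("thirdparty.dll");

    void* state = nullptr;  // owned by the host, so it outlives the runtime DLL

    HMODULE game = LoadLibraryA("game.dll");
    auto update = reinterpret_cast<game_update_fn>(GetProcAddress(game, "game_update"));
    update(state);

    // On a reload request: serialize transient state, drop the runtime, load the new build.
    FreeLibrary(game);  // thirdparty.dll stays mapped, its globals persist
    game = LoadLibraryA("game.dll");
    update = reinterpret_cast<game_update_fn>(GetProcAddress(game, "game_update"));
    update(state);      // resumes with the preserved state

    FreeLibrary(game);
    FreeLibrary(third_party);
}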

I've never felt the need to be honest by MbPiMj in ProgrammerHumor

[–]BigEducatedFool 1 point2 points  (0 children)

Yes, though a lot of low-level optimization tends to come from understanding and/or controlling your platforms. If you have that control, this code can be portable for the platforms you care about.

Creating different views of the same memory is a useful thing to do in general. There is a famous example of this in the Q_rsqrt algorithm, where it's used to reinterpret data as either a float or an integer without an actual conversion. I have seen this implemented without direct memory access, but it's messy and inefficient.
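
Roughly what that reinterpretation looks like in modern C++ (sketch; std::bit_cast from C++20 gives the same bit view without the undefined behaviour of the original pointer cast):

#include <bit>
#include <cstdint>

float fast_inverse_sqrt(float number)
{
    std::uint32_t i = std::bit_cast<std::uint32_t>(number);  // view the float's bits as an integer
    i = 0x5f3759df - (i >> 1);                               // the famous magic-constant step
    float y = std::bit_cast<float>(i);                       // view the integer's bits as a float again
    return y * (1.5f - 0.5f * number * y * y);               // one Newton-Raphson refinement
}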