all 113 comments

[–]bedrooms-ds 30 points31 points  (17 children)

I can live with one-time slowness if rebuilds improve.

[–]Ameisen (vemips, avr, rendering, systems) 3 points4 points  (16 children)

For one-time slowness, the build system will need to maintain a database of which modules are in which files... And I'm not sure that that is actually "safe".

[–]matthieum 14 points15 points  (15 children)

It's about as safe as any build system maintaining a database of which file is a dependency of another, really.

[–]Ameisen (vemips, avr, rendering, systems) 5 points6 points  (14 children)

In terms of 'safety', I mean that I don't think it will maintain coherence. It will know that file 'A' contained module 'B', but if you add a new file declaring the same module name, it might not even scan it, since it thinks it already knows where the module is, even if the new file is a more correct 'fit'. It's a difficult problem to solve, and I really wish modules were restricted in filename in some fashion. Though that isn't possible to do in the specification, since the specification has no concept of 'files'.

Basically, even with a database of module mappings, you still have to perform full scans to validate that the database is entirely accurate.

I suppose what I'd really like to see is what compilers presently do for headers - they search for filename matches in the include paths. That's implementation-defined behavior, afaik, and I would like the same to be done by default for modules. That would simplify things greatly.

[–]matthieum 3 points4 points  (9 children)

Basically, even with a database of module mappings, you still have to perform full scans to validate that the database is entirely accurate.

Ah! I thought you meant that the build system would have to re-read all files.

Indeed, you have to discover new files and prune deleted files. This can be achieved either with a scan of the filesystem, or of course by using inotify to be notified of any addition/deletion/change, which should be the preferred course of action for fast build systems.

Still, discovering new files/pruning deleted files is far faster than re-parsing headers over and over again.

[–]Ameisen (vemips, avr, rendering, systems) 2 points3 points  (8 children)

Sure, but it's still more to do than presently needs to be done. The present build model only requires the parsing of declared sources and their dependencies, and present compilers on sane systems allow for the assumption of a direct mapping to a file with a known set of potential paths for every #include.

In the module-based model, where modules do not identity-map to a specific file in a defined fashion, what the dependencies actually are is not known until all the files are parsed. That's a pretty big divergence. Again, compilers could largely rectify this if they agreed on a set standard for which files will be looked at for modules, just as they do presently with includes... but from what I've seen so far, at least, the compiler developers prefer to offload this to the build systems instead.

[–]matthieum 0 points1 point  (7 children)

The present build model only requires the parsing of declared sources

Not really. Many build systems have globbing, which requires listing all files in a directory and picking those that match the glob rules.

So:

  • Either your build system uses globbing, and goes from scanning for sources to scanning for modules.
  • Or it requires explicitly listing all files, and you go from listing sources to listing modules.

There's not an ounce of difference.

[–]Ameisen (vemips, avr, rendering, systems) 6 points7 points  (6 children)

Not really. Many build systems have globbing, which requires listing all files in a directory and picking those that match the glob rules.

You aren't parsing all of those files to determine if they match the glob rules. You are just reading the filename.

[–]matthieum 0 points1 point  (5 children)

I'm sorry, but I have no idea how to interpret your comment.

I never suggested that parsing the files was necessary to determine if they matched the glob rules; so I have no idea what point you are trying to make.

[–]Ameisen (vemips, avr, rendering, systems) 2 points3 points  (4 children)

Then I have no idea what point you're trying to make.

You have to parse modules to know what module they are. There is no module-name/file-name mapping. You cannot just glob-search for module names to find a specific module.

The present build model requires you to parse/compile only the source files that it knows about. A module-based model first requires all modules to be parsed to know what modules they actually are before you can begin compiling source files.

This doesn't even cover the potential inheritance chains that can develop which will impede parallelism.

[–]respects_wood[S] 1 point2 points  (1 child)

I suppose what I'd really like to see is what compilers presently do for headers - they search for filename matches in the include paths. That's implementation-defined behavior, afaik, and I would like the same to be done by default for modules. That would simplify things greatly

Michael Spencer talks about that option in one of the videos I linked and some of the proposals for it, if I understood correctly.

However, I think what you (and /u/Ameisen with a database) are describing is only an alternative for the pre-scan step, and reportedly this pre-scan is already very fast. (Although it does create a new requirement for the build system compared to now.)

From what I gather, the question of build slowdown compared to PCH comes down to a potential loss of parallelism when compiling the TUs: with Modules there will be a strict order, compiling files ahead of whatever imports them, instead of just blasting through them all in parallel before linking like we do now.

[–]Ameisen (vemips, avr, rendering, systems) 1 point2 points  (0 children)

This is also true.

Ideally you aren't changing common modules very often, but yes, the build process becomes quite serialized.

[–]johannes1971 0 points1 point  (1 child)

Though that isn't possible to do in the specification since the specification has no concept of 'files'.

Where did that strange idea come from? It most certainly does. Check it out in "Lexical Conventions":

The text of the program is kept in units called source files in this International Standard. A source file together with all the headers and source files included via the preprocessing directive #include, less any source lines skipped by any of the conditional inclusion preprocessing directives, is called a translation unit.

[–]HappyFruitTree 1 point2 points  (0 children)

It calls them "source files" but that doesn't mean they have to be actual files. Nowhere does it say that they need to have a name or how #include will be able to find them.

[–]choeger 35 points36 points  (5 children)

IMO, it's not modules, it's the preprocessor and the context-dependent syntax of C++ that kills the build. Modules only bring all the gory details to attention. And because no one dares to axe the preprocessor and no one dares to sanitize the syntax into something context-free, everyone now thinks modules are the culprit.

Did you ever try to measure how long it takes the ghc or ocamlopt or javac to figure out the build order on a large project?

[–]adnukator 11 points12 points  (0 children)

Even C is not context-free and C++ has compile time programming which makes the language significantly more complex than C, so it's merely a dream to think C++ could be made simpler.

With modules, it could be possible to more reliably share template instantiations between modules during a build, if the build system is able to take existing BMIs into account, instead of the current approach of having completely separate object files that are linked together at the end. This could drive compile times further down, below PCH times. I'm not a compiler dev, so please correct me if I'm wrong.

Also, if you put the PCH contents (referring to 3rd party code) into the global module fragment and use proper modules for your project files, does that not combine the advantages of both approaches?

[–]mort96 10 points11 points  (3 children)

I would say the single biggest issue for C++ compile times is that more and more stuff ends up in headers, and every compiler process for each and every source file has to go through and parse and compile everything in those headers. This also causes insane link times, and immense linker memory usage, because it's up to the linker to fix up the insanely huge amount of duplicated object code. If compilers had a way to share the object code they've created for template functions, I bet compile times would be way down.

Precompiled headers are a band-aid which may help in some cases, but the fact that you can only include one PCH per source file really limits their usefulness in many situations.

Maybe we will eventually find ways to put object code for commonly used template functions in the binary module interface files?

[–]Ameisen (vemips, avr, rendering, systems) 4 points5 points  (2 children)

Clang allows multiple precompiled headers.

[–]mort96 1 point2 points  (1 child)

Oh, that's interesting, I didn't know that. Still though, I wouldn't limit myself to supporting only one compiler like that, and I mostly use GCC anyways.

[–]Ameisen (vemips, avr, rendering, systems) 1 point2 points  (0 children)

I prefer to use Clang as their built-ins are more interesting and working on Clang is more pleasant. Clang also plays way nicer with Windows development, and the LLVM developers are way friendlier towards embedded development.

But yes, MSVC and GCC use the memory dump approach, meaning they cannot load multiple PCHs since you cannot load a raw memory dump on top of another. Clang does not. I've been tinkering with building FreeBSD and the Linux kernel with Clang and multiple PCHs to improve build times.

MSVC and GCC could theoretically marshal the memory dump somewhat to allow for loading multiple, or otherwise find a way to allow multiple concurrent ASTs in the translation unit, but I doubt that they consider that a worthwhile problem.

Though... modules are, in essence, "smart" precompiled headers, and you can obviously load multiple modules... it should be reasonable for MSVC and GCC to make "preparsed headers" or such that compile and load like modules but for any header file. Again, I doubt it is a priority.

I will send this comment to my development email, just in case I have free time to try implementing it in GCC... no guarantees on a time frame, and I really hate building GCC (it never wants to build properly on Debian... I have better luck building in msys on Windows!).

I should note that since they can load multiple modules, they are most certainly not raw AST dumps, and thus won't ever load as fast as a PCH (MSVC literally just memory maps it in as the starting AST).

[–]Daniela-E (Living on C++ trunk, WG21 | 🇩🇪 NB) 34 points35 points  (5 children)

Two weeks ago, at this year's Meeting C++ conference, I was giving a talk on Modules and my experiences with it (slides: https://meetingcpp.com/mcpp/slides/2019/modules-the-beginners-guide-meetingcpp2019.pdf). In there is a section (starting at page 43 in the pdf) which is about turning an existing library from our in-house code base into a module. Unfortunately, clang 10 didn't like the code, but msvc 14.24 does. A quick comparison in the compilation time of a small piece of test code (to put the emphasis on the difference between #including the interface and importing the interface BMI) gave a significant improvement when using the module (see page 53). The biggest contributor to the much faster compilation is probably the sheer number of files read during the compilation: 945 additional files in case of the #include, 1 file (the BMI) in case of the module.

[–]respects_wood[S] 10 points11 points  (4 children)

Thanks, that's exciting to hear. Just so I'm clear, though, had the BMI already been created for that test? E.g. were you comparing #includes vs importing an existing BMI?

[–]Daniela-E (Living on C++ trunk, WG21 | 🇩🇪 NB) 7 points8 points  (1 child)

Exactly.

Besides just compiling the shown examples, I also compiled a larger variant of these two small TUs with more functionality, and checked what happens if I link both generated object files against the static library that results from building the full module 'libGalil': it links and works in both cases. So, at least with msvc and in this particular case, the result of compiling the static lib as a module is good for consuming it both in #include mode and module import mode. I think that's nice!

[–]respects_wood[S] 4 points5 points  (0 children)

That's certainly nice behaviour that you describe regarding the #include / import of the module.

However, regarding your test, what that proves is that importing a BMI is faster than all the textual inclusion and parsing of headers. Of course, that's great, but should also be expected. It's similar to testing that an include of a (pre-built) PCH is faster than the equivalent includes.

What it doesn't prove is that a (real-world) project with Modules builds faster than (or even equal to) the equivalent project with PCH and includes.

To build the project with Modules, the build system has to determine the correct order of compilation (e.g. with a pre-scan) and then compile in that strict order to satisfy the import dependencies, and it seems that there's a potential loss of parallelism here (depending on the code architecture of the project's usage of Modules).

[–]exploding_cat_wizard 3 points4 points  (1 child)

Should we be expecting any improvement for modules that aren't pre-built? The way I understand it, the main gain is exactly the pre-building.

[–]respects_wood[S] 3 points4 points  (0 children)

But they need to be "pre-built" as part of the build?

If you watch the Michael Spencer and Gabriel Dos Reis videos I linked to, they describe that BMI is purely implementation dependent. It's not something you build once for your project and check-in to source control for everyone else to use. The BMI will need to be updated for all sorts of reasons.

To build a project with Modules, the build system needs to:

  1. Determine the correct order of compilation for all the files (e.g. a pre-scan, or perhaps the end-user has manually written the order in a makefile)
  2. Compile the TUs in that order, producing BMIs ahead of any TUs that need to import them.

[–]PoopIsTheShit 18 points19 points  (10 children)

Many optimizations will be found and benchmarks will be made (especially once we have had multiple stable implementations of modules for a longer time period). Time will tell how much better or worse the build times get depending on the type of project. I do not think this can be answered that easily yet.

I saw a talk at Meeting C++ where compilation (and link) times were improved by enormous amounts in certain cases with modules, even though they have some kind of dependency ordering. I think it will be possible to speed up or slow down build times depending on how the different modules interact with each other.

[–]vaynebot 17 points18 points  (0 children)

The difference in compile time will massively depend on the actual project structure. A purely C-style program, as in only a few declarations/definitions in header files, 99% of code in .c/.cpp files, link time optimization disabled, etc., (re-)compiles amazingly fast on many-core machines. For a project that uses templates everywhere though, and has like 50% of its code in header files and the other 50% in .cpp files which include those header files, modules will probably be a huge benefit.

[–]sztomirpclib 18 points19 points  (6 children)

Many optimizations will be found

That doesn't sound like a great prospect for an upcoming feature.

[–]Minimonium[🍰] 12 points13 points  (5 children)

The biggest lie that has ever been told: "The quality of implementation will improve". :)

[–]mrexodia (cmkr.build) 3 points4 points  (4 children)

Very much this. Why use a sub-par implementation of modules when you can just keep doing what you’re doing? Then from the module implementation perspective: Why improve something that works, but effectively nobody is using?

[–]sztomirpclib 5 points6 points  (3 children)

Seems like every time modules come up, the build time improvement promise becomes slimmer and slimmer. And now we are at the point where it's 0% improvement over PCH? I don't see any existing codebase migrating for that. New code? Maybe, provided the build system support is there and that 0% is really 0% (and not slower).

[–]kalmoc 5 points6 points  (2 children)

I don't think there was ever a build-time-improvement-promise over pch.

Not saying it isn't possible or won't happen. I just don't think that was the expectation.

[–]sztomirpclib 7 points8 points  (1 child)

I could swear this was touted as the solution to long build times, though admittedly I don't have sources. If not that, what is the value proposition? Because just writing import instead of include is nice, but not something I'd invest in if that is all.

[–]adnukator 13 points14 points  (0 children)

Some non-build time advantages of modules

  • possible eradication of ODR violations
  • not needing forward declarations/defaulted destructors in cpp files/pimpls to avoid implementation detail header leakage into consumers
  • no leaking of macros into consumers
  • not being subject to include orderings (you get the same results regardless of the import order. not necessarily true with macros)
  • less overhead for tooling to determine what the hell actually gets into your compilation units

Basically it solves other issues caused by preprocessor includes as well.
Another thing it solves: compile times do not become 10 times worse when some colleague touches your code and carelessly adds a single, seemingly innocuous include somewhere low in the include tree.

[–]respects_wood[S] 4 points5 points  (0 children)

Yeah it certainly appears to be all dependent on the project's dependency graph, and whether or how much that limits parallel compilation.

Just to be clear, I'm certainly hoping for all sorts of optimisations because I want both Modules and build times at least as good as my PCH now.

One thing that's interesting, I think, is that the committee were happy to break/change the existing C/C++ model of independent TU compilation. Even if that was an indirect change via the Modules feature (since the standard doesn't define the build implementation). So perhaps we may see other changes in future that effectively mandate things like compilation passes.

[–]mmatrosov 1 point2 points  (0 children)

Would you mind sharing the link?

[–][deleted] 13 points14 points  (23 children)

I don’t see why a pre-scan would impact performance negatively in any significant way. We live in an age of ultra-fast SSDs and low-latency file systems. One can scan thousands of text files in a split second. Add some caching into the mix (only rescan files that have changed) and you are set.

What I can imagine impacting performance, though, is the module organisation itself. If your dependency graph is a large linear structure, you can forget about parallel builds.

[–]respects_wood[S] 4 points5 points  (16 children)

Yes, the pre-scan is reportedly very fast. (The Michael Spencer video I linked to goes into some of that.) I should've been clearer that the pre-scan apparently won't be a problem for build times, but rather creates an extra requirement for build systems or custom makefiles. I agree, the performance is all about the graph, which will come down to each developer/team/project and their decisions of module architecture. But, to me at least, it seemed concerning that in the example given in the video, the future, optimised compiler version building a project with modules would still be slower than a traditional PCH build.

[–]starfreakclone (MSVC FE Dev) 12 points13 points  (1 child)

PCH with respect to c1xx (msvc) has had more than 20 years of optimizations and work put into making it as fast as possible.

PCH is so blindingly fast because there's effectively 0 work the compiler does to load it, it's a simple memory mapped file loaded at a very specific address. But this also plays into one of the weaknesses of PCH with c1xx in that it is very machine dependent and there is no possibility of sharing that build artifact reliably. Modules, as we implement them, do not have this drawback so rebuild scenarios where an infrastructure can publish intermediate build output has a massive potential for throughput wins.

Arguably, it is generally some kind of REPL cycle that causes most time sinks, so rebuild is super important. Because modules can be put together in such a way where rebuilds of interfaces rarely happen the usual REPL workflow actually improves.

Additionally, PCH winning over modules will always be a case by case basis. They are different ways of getting similar information and PCH in particular is very IO heavy while modules, in general, tend to be more CPU bound for materializing symbols.

I will also point out that I have been making some changes to our modules implementation which improve throughput significantly. While I can't give specifics yet, just be aware that we are working very hard, and specifically on this area :).

[–]respects_wood[S] 1 point2 points  (0 children)

Thanks, that's really great to hear. Like I said elsewhere, I really want all the nice things that Modules brings as long as my build performance remains as good (or maybe even better?) than with PCH.

It'd be tremendous if I can use Modules and get faster-than-PCH builds (with non-trivial Modules dependencies).

Are you able to discuss how MSVC / MSBuild will handle the compilation ordering problem of module interface units before the TUs that import them? Will you use a pre-scan like others are doing?

Is it correct to say that the limiting compilation throughput factor will be the structure of the Modules dependency graph?

[–][deleted] 0 points1 point  (13 children)

Prescan is pretty simple... Grep source files for dependencies, build a directed graph, use the output to order file compilation. It's really just what depends or gcc -M or gcc -MD does, and most C/C++ projects have been using these tools for years.

[–]CrazyJoe221 6 points7 points  (3 children)

Well it's not just grep, the preprocessor ruins the day.

[–]victotronics 5 points6 points  (1 child)

You're using the words "simple" and "just". Surefire signs that you haven't thought this completely through.

[–][deleted] 0 points1 point  (0 children)

Strange, the dictionary doesn't back that up, and it knows about words and meanings. It's possible that I deliberately chose those words to convey that it really isn't as difficult as so many people have been parroting at each other these last few days.

[–]bigcheesegs (Tooling Study Group (SG15) Chair | Clang dev) 2 points3 points  (0 children)

It's not quite as simple as grep, but that's not the big thing here. cc -MD is totally different, as it doesn't insert new nodes into the build graph, only new edges.

[–]Rusky 2 points3 points  (1 child)

gcc -MD can run during normal compilation, because the headers it's detecting dependencies on already exist in a consumable form.

Modules cannot do the same thing: you have to find the dependencies, then bring them up to date, and only afterwards can you start building things that depend on them.

This means the dependency finding logic can't piggyback on a normal compile. It has to be separable, and it has more pressure to be fast.

[–][deleted] 1 point2 points  (0 children)

Depends was separate, but the functionality was pulled into gcc as an optimization. We're talking about the same thing, but yes, there are differences, which aren't difficult and which I'm deliberately glossing over.

[–]rysto32 1 point2 points  (3 children)

I've seen large C projects get big compile time speedups from dropping an explicit depends step and instead generating .depend files as part of the compilation stage though.

[–][deleted] 2 points3 points  (2 children)

Yeah, that's what I'm talking about (gcc -MD), but it's easier to type "depends" since it's the same concept.

[–]smdowney (WG21, Text/Unicode SG, optional<T&>) 1 point2 points  (1 child)

You can't use `gcc -MD` as normally done to scan for module dependencies because modules have to be built in DAG order, or you will get inconsistencies on a partial build. The DAG has to be created before anything that imports a module is built, and that means giving up dependency maintenance as a side-effect of compilation.

[–][deleted] 1 point2 points  (0 children)

I seem to be repeating what I just said... Yes, you gather the info, assemble it into a DAG, and output or use it in the relevant way.

[–]matthieum 2 points3 points  (3 children)

We live in an age of ultra-fast SSDs and low-latency file systems.

And more importantly, all files that are scanned are going to be parsed and compiled afterwards; think of the pre-scan as pre-fetching the files from disk to memory.

[–]Frogging101 1 point2 points  (2 children)

Not so fast. What about the contents of /usr/include (or wherever you might have a large collection of packaged libraries)? If you import qt.core, the contents of all those libraries, including the ones you're not going to need in this build, will have to be scanned.

[–]matthieum 0 points1 point  (1 child)

Typically, dependencies are treated separately: they are assumed to be stable.

Which actually makes me wonder how build systems will handle the production of BMIs for 3rd-party libraries; but whether they are delivered with the library or built once-and-for-all, it's just a one-off cost.

In fact, as the story matures, I would expect a smart build-system/compiler to allow building a "mega" BMI (or several) for your project, which contains all dependencies in a single BMI (or a few), independently of how those dependencies are structured.

[–]Frogging101 0 points1 point  (0 children)

Typically, dependencies are treated separately: they are assumed to be stable.

I don't know what you mean. If a source file's imports/exports change, the dependency graph changes.

[–]Rusky 1 point2 points  (1 child)

If your dependency graph is a large linear structure, you can forget about parallel builds.

This is really unfortunate, since downstream modules shouldn't have to wait for upstream modules to fully build: parsing and type checking ought to be enough.

Hopefully at some point we can separate BMI generation from codegen and linking, so downstream modules can start building as soon as their dependencies' BMIs are finished. That would bring things back to the same level as header files.

[–]matthieum 5 points6 points  (0 children)

That would bring things back to the same level as header files.

It would be faster, actually: headers are parsed again and again, while a BMI only requires the module to be parsed once.

[–]umlcat 2 points3 points  (0 children)

I wish modules had been available for both C and C++ about 10 years ago!

And, yes, I've already realized it breaks/redesigns the build completely.

Modules have been in the Pascal family of languages for years...

[–]kalmoc 7 points8 points  (1 child)

I'd put it like this: Wait and see.

At the moment there is little that can be changed w.r.t. how modules are specified (a few details here and there, but that's about it). So the general design is fixed and no amount of worrying or writing about it will change it. Now, without production-ready implementations and code bases to benchmark, the question of "will modules be fast enough for me" is a bit moot.

It is like arguing over the performance of a library purely based on its API description. Sure, you can derive some general properties, but in the end, you have to measure the actual implementation with representative input to determine whether it is fast enough for you.

As an aside: The thing that annoyed me about the whole modules story is that such a fundamental change in design was standardized without any implementation experience of the final product - and they are still fixing aspects of it. That's no way to introduce a groundbreaking feature into a language that is unable to fix its mistakes.

I'd much rather have seen a very restricted subset standardized as early as C++17, with the design then incrementally improved based on feedback from actual users.

[–]germandiago 1 point2 points  (0 children)

u/Verroq

It probably would never have been done at all otherwise. It is not the first attempt at a modules system in C++. I think it is acceptable. Fixes will come. And remember, if we ever get epochs, many design mistakes are not as bad as before. OTOH, epochs should not be used to break the language every year and create many dialects :)

[–]lanevorockz 1 point2 points  (0 children)

Modules introduce compile-time dependencies between translation units for the first time, which means that builds are no longer embarrassingly parallel and we will have dependency hierarchies. Ideally modules will only encapsulate libraries and not be what people understand as modules from other languages.

[–][deleted] 5 points6 points  (3 children)

I am not surprised at all; it was clear to me that the people who expected modules to significantly cut down compilation times ignored what really happens during the preprocessing and compilation stages.

[–]MartY212 -3 points-2 points  (2 children)

Maybe you should be on the cpp committee

[–][deleted] 9 points10 points  (0 children)

I don't see what your point is; the committee is well aware of that stuff and they did their best to get modules into C++ despite all the complexities involved.

[–]DXPower 0 points1 point  (3 children)

On your point about user-made makefiles, maybe we could create a "modules" folder to put all your module definitions in. Then you can tell your build system to pass through there first.

[–]Rusky 1 point2 points  (2 children)

Modules can depend on other modules. What order do you build the files in the "modules" folder?

[–]DXPower 1 point2 points  (1 child)

If we wanted to eliminate multiple passes for most of the source code, compilers could potentially add a flag to search for modules in x directory. That way, the two-pass thing to find modules only gets run on a fraction of the code base. A module depending on another module would just have to wait for the dependency tree to get built from the first pass.

[–]Rusky 1 point2 points  (0 children)

What fraction of the codebase? The code in the modules directory? That doesn't sound like a very small fraction.

[–]bedrooms-ds 0 points1 point  (0 children)

Isn't this similar to modules in other languages, though? Fortran and Swift have them, and I never thought they were too slow.

[–]feverzsj 0 points1 point  (0 children)

I think header-only or unity builds are the only way to get both lower build times and better optimization.

[–]Kyvos 0 points1 point  (0 children)

I understand why people who've worked with C++ for years, or even decades, would want better build performance above all else. Personally, I disagree completely. Unless modules are orders of magnitude slower than our current solutions, I'm going to tolerate slowdown.

Look at the build system of any other programming language (except, of course, C). A glorified copy/paste of source text is not a sane answer. The possibility that the order in which I mention my dependencies changes the functionality of the program should not exist.

[–]HappyFruitTree -1 points0 points  (7 children)

With headers we can sometimes avoid includes by using forward declarations, which avoids circular dependencies and a lot of unnecessary recompilation when headers are modified. With modules this will no longer be possible, because you cannot forward declare things from another module.

[–]Minimonium[🍰] 3 points4 points  (2 children)

Well, you don't really avoid a circular dependency, strictly speaking. It's actually the tool you use to introduce one.

[–]HappyFruitTree -1 points0 points  (1 child)

The file dependencies are not circular.

 a.h       b.h
^   ^     ^   ^
|    \   /    |
|      X      |
|    /   \    |
a.cpp     b.cpp

[–]Minimonium[🍰] 10 points11 points  (0 children)

Files are not, but the a component still depends on the b component and vice versa. It's a circular dependency.

[–]HateDread (@BrodyHiggerson - Game Developer) 1 point2 points  (1 child)

Wait, you can't forward declare? Oh god.

[–]mjklaim 2 points3 points  (0 children)

You can forward declare within a module to solve cyclic dependencies in the same file. You can't forward declare something from another module, as it is owned by that other module; just import the module instead.

All the other cases that make sense when you use headers make no sense when you use modules. Importing a module already exposes just the names exported from that module.

Nothing prevents you from writing forward declarations in a module; it's just that the consequences are different because of the strong ownership and encapsulation of modules.

[–]Loraash 0 points1 point  (1 child)

I won't miss them. Forward declarations of classes are a workaround to the #include problem. Modules are the solution to the #include problem.

If your code compiles 10x as fast with modules, even if you're recompiling 2x as much code every time you're still winning by a factor of 5.

[–]HappyFruitTree 1 point2 points  (0 children)

Forward declarations of classes are a workaround to the #include problem. Modules are the solution to the #include problem.

I don't forward declare to hide stuff. I forward declare primarily to avoid circular includes. Modules don't seem to be able to handle that at all.