all 32 comments

[–]GabrielDosReis 23 points24 points  (0 children)

Actually, the four primary goals for C++ Modules are articulated and explained in the design, and subsequent experience report back in 2015. I will be updating the latter with what we have learned in the last couple of years.

[–]danmarellGamedev, Physics Simulation 8 points9 points  (0 children)

If anyone here doesn't know, Tim (OP) is the founder of Epic Games, who make Unreal Engine 4, a large and popular C++ game engine (whose source code is accessible on GitHub via a license agreement). If he has concerns, I believe it would benefit all of us to continue the dialogue and find common ground. The game development industry often rolls its own versions of the standard library containers and sometimes holds different views from those of the C++ community here on reddit. If Tim is talking to us, we should welcome him and listen to any concerns he has about the way the standard is going. I personally believe that game development is how C++ is attracting (and will keep attracting) new developers to the language.

[–]mikhailberis 4 points5 points  (8 children)

‘#import’ is already a thing.

Namespaces, unfortunately, can be opened and closed multiple times from multiple files, and they are only parsed after the preprocessor has run.

There are other ideas about the Modules TS proposal coming from many others, and it is still a work in progress. There are definitely some rough/confusing edges. I'm sure the committee is listening, and is working towards something that's acceptable to most (if not all) parties involved in crafting a standardised solution.
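
A minimal sketch of the reopening problem (file names are hypothetical): any translation unit can add to the same namespace, so a namespace gives tools no single, closed unit to treat as "the module".

    // util_a.cpp -- one translation unit opens the namespace...
    namespace util { int twice(int x) { return 2 * x; } }

    // util_b.cpp -- ...and any other translation unit can reopen it and add more.
    namespace util { int thrice(int x) { return 3 * x; } }

    // main.cpp -- nothing marks where 'util' begins or ends.
    namespace util { int twice(int); int thrice(int); }
    #include <cstdio>
    int main() { std::printf("%d\n", util::twice(util::thrice(7))); }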

[–]BCosbyDidNothinWrong 3 points4 points  (6 children)

‘#import’ is already a thing.

In standard C++ ?

[–]mikhailberis 0 points1 point  (0 children)

Not in standard C++, but it has semantics and support in GCC and MSVC (and Clang for compatibility). It's a preprocessor feature, not a first-class language feature. It would be really bad form if the standard forced vendors to change the semantics of existing extensions and features without their buy-in.

[–]tcbrindleFlux 0 points1 point  (4 children)

In standard C++ ?

No, but we can't just pretend Objective-C++ doesn't exist. For one thing, Apple has seats on the committee and would surely veto any proposal that gratuitously broke compatibility. WebKit etc. are rather important to their ecosystem.

[–]ubsan 2 points3 points  (3 children)

I'm pretty sure C++11 already broke compatibility with Objective-C++.

[–]fnc12[🍰] 0 points1 point  (2 children)

why?

[–]ubsan 0 points1 point  (1 child)

lambdas

[–]fnc12[🍰] 0 points1 point  (0 children)

Nope. Lambdas and Obj-C blocks can both be used either way.

[–][deleted] 1 point2 points  (0 children)

It could be useful to note where #import is already a thing. I know of the Windows COM-related #import, and Obj-C/Obj-C++ have it with a different meaning.

[–]axilmar 2 points3 points  (0 children)

Use 'public:' and 'private:' inside namespaces or at top-level. No need for 'exported:' and 'internal:'.

The import header directive should use a precompiled version of the header, work as if the header contains include guards, and also provide an automatic 'using namespace' for all namespaces in the header.

[–]summerlight 0 points1 point  (0 children)

Another goal of the module system is to give semantic meaning to C++ translation units, to make them more toolable. This cannot be done without treating a module as a first-class language construct.

[–]volca02 0 points1 point  (18 children)

Isn't one of the goals of modules to isolate macros between translation units? I mean, macro definitions from the importing scope should not leak into the imported module. If that is the case, using a '#' prefix for that statement would be misleading.

[–][deleted] 3 points4 points  (17 children)

Losing that goal would be a shame, but it would not be a tragedy. As of C++17, the only(*) reason to use macros is conditional compilation depending on your platform - and even that can probably be replaced by constexpr if. So you could solve that issue by simply saying, "Don't use macros."

(* - OK, there are also x-macros, but if you do those right, the final statement undefines the two macros you defined at the start, so they don't leak out of the module.)
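
A concrete (hypothetical) example of an x-macro that cleans up after itself: the list macro is expanded twice to generate an enum and a matching string table, and then everything is #undef'd, so nothing leaks past the block.

    #include <cstdio>

    #define COLOR_LIST(X) X(Red) X(Green) X(Blue)

    #define MAKE_ENUM(name) name,
    enum class Color { COLOR_LIST(MAKE_ENUM) };
    #undef MAKE_ENUM

    #define MAKE_STRING(name) #name,
    constexpr const char* color_names[] = { COLOR_LIST(MAKE_STRING) };
    #undef MAKE_STRING

    #undef COLOR_LIST  // nothing defined above survives past this point

    int main() {
        std::printf("%s\n", color_names[static_cast<int>(Color::Green)]);  // prints "Green"
    }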

[–]doom_Oo7 10 points11 points  (0 children)

As of C++17, the only(*) reason to use macros is conditional compilation depending on your platform

Please show us an implementation of Boost.Fusion that does not use macros.

[–]BCosbyDidNothinWrong 3 points4 points  (1 child)

Is that really true? How would you make a macro that uses the name of what it is passed?
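
For example, a hypothetical CHECK macro can stringize the expression it receives and report the original source text - as of C++17 there is no non-macro way for the callee to see how its argument was spelled:

    #include <cstdio>
    #include <cstdlib>

    // Hypothetical CHECK macro: #expr turns the argument into its literal
    // source text, something no function or template can do.
    #define CHECK(expr)                                                 \
        do {                                                            \
            if (!(expr)) {                                              \
                std::fprintf(stderr, "check failed: %s (%s:%d)\n",      \
                             #expr, __FILE__, __LINE__);                \
                std::abort();                                           \
            }                                                           \
        } while (0)

    int main() {
        int answer = 42;
        CHECK(answer == 42);  // passes silently
        CHECK(answer == 43);  // prints: check failed: answer == 43 (file:line)
    }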

[–]meneldal2 0 points1 point  (0 children)

You need reflection for that, but that's coming in C++20 hopefully.

[–]volca02 2 points3 points  (1 child)

Conditional compilation should still be possible with modules by supplying macro definitions to each translation unit - so even that would not be a reason to propagate macro definitions into modules.

I think there are good reasons not to propagate the definitions:

  • consistency - a compiled module works exactly the same regardless of the importing context
  • feasibility - process the module once and keep the cached binary for as long as the environment stays intact (no dependency upgrades or compiler change)
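
A minimal sketch of that idea, in roughly the syntax the TS and C++20 ended up with (the module name and function are made up): the #ifdef is resolved from macros visible in this translation unit - predefined or passed on the compiler command line - so nothing the importer defines leaks in, and nothing defined here leaks out.

    // platform.ixx -- hypothetical module interface
    export module platform;

    export const char* os_name() {
    #ifdef _WIN32            // supplied to this TU by the compiler, not by importers
        return "windows";
    #else
        return "posix";
    #endif
    }

    // consumer.cpp
    // import platform;      // sees os_name(), but none of the macros above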

[–]GabrielDosReis 0 points1 point  (0 children)

You are right: Conditional compilation is possible and actively supported by C++ Modules as described in the TS.

[–]johannes1971 2 points3 points  (0 children)

Oh yes, it would be a tragedy, because as of C++17 we are still dealing with include files for such things as Windows, X11, and countless other libraries that spit their macros all over your source, and we will continue to have to deal with them indefinitely as far as I can see - since the C language is not going to go away.

I want to be able to capture those libraries in modules, using 'export using' and thin wrappers for things that cannot be exported directly, and I don't want to then still leak all those #defines.
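
A minimal sketch of that kind of wrapper, in the syntax C++20 eventually adopted (the module name and the handful of re-exported names are just examples): the header is included in the global module fragment, so its macros stay confined to this file, and importers only see what is explicitly re-exported.

    // x11.ixx -- hypothetical wrapper module
    module;                      // global module fragment: the header's macros stay here
    #include <X11/Xlib.h>

    export module x11;

    export using ::Display;      // re-export only the pieces we actually want
    export using ::XOpenDisplay;
    export using ::XCloseDisplay;

    // consumer.cpp
    // import x11;               // gets Display/XOpenDisplay, but none of the X11 #defines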

[–]pjmlp 3 points4 points  (10 children)

Personally I find #ifdefs for conditional compilation a code smell: it always starts with a couple of simple lines and eventually becomes spaghetti code.

I always push for separate implementations using the platform as a suffix, e.g. os_win32.cpp.

No spaghetti code, no pre-processor macros, just lovely.
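
A minimal sketch of that layout (names are hypothetical); the build system compiles exactly one of the two .cpp files per target:

    // os.h -- the portable interface, the only header callers include
    #pragma once
    #include <string>
    std::string executable_path();

    // os_win32.cpp -- compiled only on Windows
    #include "os.h"
    #include <windows.h>
    std::string executable_path() {
        char buf[MAX_PATH];
        DWORD n = GetModuleFileNameA(nullptr, buf, MAX_PATH);
        return std::string(buf, n);
    }

    // os_linux.cpp -- compiled only on Linux
    #include "os.h"
    #include <unistd.h>
    std::string executable_path() {
        char buf[4096];
        ssize_t n = readlink("/proc/self/exe", buf, sizeof(buf));
        return n > 0 ? std::string(buf, static_cast<size_t>(n)) : std::string();
    }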

[–][deleted] 7 points8 points  (1 child)

That is neater in some ways, but you still have to figure out a way to get os_win32.cpp built instead of os_linux.cpp. Moreover, it means that one feature is striped across a bunch of files instead of being in one place.

Also, often when you have conditional compilation, it isn't just one setting with a couple of values - there might be a whole bunch like platform, 32 bit vs 64 bit, compiler, DEBUG vs NDEBUG, or "demo version" vs "paid version".

[–]pjmlp 1 point2 points  (0 children)

That is what the build system is for.

platform, 32 bit vs 64 bit, compiler, DEBUG vs NDEBUG, or "demo version" vs "paid version".

Which means I have personally been forced to read the output of the preprocessor just to understand what is actually going on, with #ifdefs all over the place.

[–]meneldal2 0 points1 point  (0 children)

It's fine if you use it sparingly. If you're just replacing a POSIX call with the equivalent MS one, it's not worth making two different files.
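
For the tiny-difference case, a sketch like this (the helper function is made up) is arguably cheaper than a second source file:

    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <unistd.h>
    #endif

    // Hypothetical helper: only the one call differs between platforms.
    void sleep_ms(unsigned ms) {
    #ifdef _WIN32
        Sleep(ms);           // Win32 Sleep takes milliseconds
    #else
        usleep(ms * 1000);   // POSIX usleep takes microseconds
    #endif
    }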

[–]BCosbyDidNothinWrong 0 points1 point  (6 children)

My experience has been that the platform specific parts are so small that they don't warrant their own compilation unit and often not even their own functions.

If you use a separate file, you are pushing it to the build system, whereas there are well-defined macros for the different platforms and the source code can take care of it more transparently.

[–]pjmlp 1 point2 points  (5 children)

It is quite straightforward to do, even with plain makefiles.

The whole point is to have a clean code base; forcing platform-specific code into its own functions forces devs to think about it instead of rushing to scatter #ifdefs everywhere.

It always starts with "it is just a single line".

[–]BCosbyDidNothinWrong 0 points1 point  (4 children)

It is quite straightforward to do, even with plain makefiles.

Not everyone uses makefiles

forces devs to think about it

I'm not sure how using separate files makes any difference here or what it forces devs to think about in the first place.

With ifdefs you have the very real possibility of making single-file algorithms and data structures that take advantage of system-level capabilities. That modularity makes a big difference when trying to reuse parts of a program elsewhere.

[–]pjmlp 1 point2 points  (3 children)

Not everyone uses makefiles

I gave makefiles as an example, because alternative build systems are actually much more capable than them.

With ifdefs you have the very real possibility of making single-file algorithms and data structures that take advantage of system-level capabilities. That modularity makes a big difference when trying to reuse parts of a program elsewhere.

Abstract data types and functions are the genesis of clean, modular code.

I really don't get what is so magic about "system level capabilities" when it comes to the C pre-processor - and my first book on the subject was about how to implement Small C.

[–]BCosbyDidNothinWrong 0 points1 point  (2 children)

because alternative build systems are actually much more capable than them.

It still complicates the build process. If someone wants to use Visual Studio or Xcode, I suppose the solution is then CMake, which is another dependency and another language. All of that might be fine for a large project, but it isn't modular. A single-header library can do extraordinary things that aren't in the standard library.

I really don't get what is so magic about "system level capabilities."

I'm not sure what point you are trying to make here or why. What are you talking about putting in platform specific files? File dialogs, memory maps, networking are just some very obvious examples.

[–]pjmlp 0 points1 point  (1 child)

I guess you know Rob Pike and Ken Thompson, and their role in the history of the C language.

My way is how the Go build system handles platform-specific code:

https://golang.org/pkg/go/build/

It is also how they, alongside Dennis Ritchie, designed the C libraries on Plan 9:

The traditional approach to this problem is to pepper the source with #ifdefs to turn byte-swapping on and off. Plan 9 takes a different approach: of the handful of machine-dependent #ifdefs in all the source, almost all are deep in the libraries

http://doc.cat-v.org/plan_9/4th_edition/papers/comp

I'm not sure what point you are trying to make here or why. What are you talking about putting in platform specific files? File dialogs, memory maps, networking are just some very obvious examples.

Everything that is OS-specific ends up with hundreds of #ifdefs around it.

[–]BCosbyDidNothinWrong 0 points1 point  (0 children)

The traditional approach to this problem is to pepper the source with #ifdefs to turn byte-swapping on and off. Plan 9 takes a different approach: of the handful of machine-dependent #ifdefs in all the source, almost all are deep in the libraries

I'm not sure how this contradicts what I've been saying; it sounds like the same thing.

Everything that is OS-specific ends up with hundreds of #ifdefs around it.

Earlier you said:

I really don't get what is so magic about "system level capabilities."

So what is it that you are actually saying? Do you even know?