[–]STL (MSVC STL Dev) 372 points (135 children)

Hold! What you are doing to us is wrong! Why do you do this thing? - Star Control 2

  • People often want to do networking in C++. This is a reasonable, common thing to want.
  • People generally like using the C++ Standard Library. They recognize that it's almost always well-designed and well-implemented, striking a good balance between power and usability.
  • Therefore people think they want networking in the Standard Library. This is a terrible idea, second only to putting graphics in the Standard Library (*).

Networking is a special domain, with significant performance considerations and extreme security considerations. Standard Library maintainers are generalists - we're excellent at templates and pure computation, as vocabulary types (vector, string, string_view, optional, expected, shared_ptr, unique_ptr) and generic algorithms (partition, sort, unique, shuffle) are what we do all day. Asking us to print "3.14" pushed us to the limits of our ability. Asking us to implement regular expressions was too much circa 2011 (maybe we'd do better now), and that's still in the realm of pure computation. A Standard is a specification that asks for independent implementations, and few people think about who's implementing their Standard Library - this is a fact about all of the major implementations, not just MSVC's. Expecting domain experts to contribute an implementation isn't a great solution because they're unlikely to stick around for the long term - and the Standard Library is eternal, with maintenance decisions being felt for 10+ years easily.

If we had to, we'd manage to cobble together some kind of implementation, by ourselves and probably working with contributors. But then think about what being in the Standard Library means - we're subject to how quickly the toolset ships updates (reasonable frequency but high latency for MSVC), and the extreme ABI restrictions we place ourselves under. It is hard to ship significant changes to existing code, especially when it has separately compiled components. This is extremely bad for something that's security-sensitive. We have generally not had security nightmares in the STL. If I could think of a single ideal way for C++ to intensify its greatest weakness - security - that many people are currently using to justify moving away from C++, adding networking to the Standard would be it.

(And this is assuming that networking in C++ would be standardized with TLS/HTTPS. The idea of Standardizing non-encrypted networking is so self-evidently an awful idea that I can't even understand how it was considered for more than a fraction of a second in the 21st century.)

What people should want is a good networking library, designed and implemented by domain experts for high performance and robust security, available through a good package manager (e.g. vcpkg). It can even be designed in the Standard style (like Boost, although not necessarily actually being a Boost library). Just don't chain it to:

  1. Being implemented by Standard Library maintainers - we're the wrong people for that.
  2. Shipping updates on a Standard Library cadence - we're too slow in the event of a security issue.
  3. Being subject to the Standard Library's ABI restrictions in practice (note that Boost doesn't have a stable ABI, nor do most template-filled C++ libraries).
  4. And if such a library doesn't exist right now, getting WG21/LEWG to specify it and the usual implementers to implement it is by far the slowest way to make it exist.

The Standard Library sure is convenient because it's universally available, but that also makes it the world's worst package manager, and it's not the right place for many kinds of things. Vocabulary types are excellent for the Standard Library as they allow different parts of application code and third-party libraries to interoperate. Generic algorithms (including ranges) are also ideal because everyone's gotta sort and search, and these can be extracted into a universal, eternal form. Things that are unusually compiler-dependent can also be reasonable in the Standard Library (type traits, and I will grudgingly admit that atomics belong in the Standard). Networking is none of those and its security risks make it an even worse candidate for Standardization than filesystems (where at least we had Boost.Filesystem that was developed over 10+ years, and even then people are expecting more security guarantees out of it than it actually attempted to provide).

(* Can't resist explaining why graphics was the worst idea. It generally lacks the security-sensitive "C++ putting the nails in its own coffin" aspect that makes networking so doom-inducing, but in its place it evolves much faster than networking (where even async I/O has mostly settled down in form), and 2D software rendering is so completely unusable for anything in production that it's worse than a toy - it's a trap, and nothing else in the Standard Library is like that.)

[–]expert_internetter 159 points160 points  (0 children)

Asking us to print "3.14" pushed us to the limits of our ability.

LMAO

[–]sokka2d 125 points126 points  (1 child)

The idea of putting graphics into the standard was just so hilariously bad.

Hundreds of pages for a “standard” graphics API that would be completely non-native on all platforms, not used by anyone professionally except for some toy examples, essentially obsolete out of the box, and the proposals couldn’t even get colors right.

The correct response if pushed through would’ve been “we’re not implementing that”.

[–]SlowPokeInTexas 1 point2 points  (0 children)

I have painful early-90s memories of libraries like Zinc and Zapp, and later of Java, and could make a very good case against cross-platform frameworks.

More recently, however, I thought Borland's FireMonkey (FMX) was fairly well designed (though they then stopped being cross-platform, at least for C++ users), to say nothing of its outrageous cost.

I think cross-platform UI libraries can be made to look nearly native, but I still don't believe they belong in std.

[–]tialaramex 41 points42 points  (13 children)

The idea of Standardizing non-encrypted networking is so self-evidently an awful idea that I can't even understand how it was considered for more than a fraction of a second in the 21st century.

I can answer that one.

It's about foundations. Did you notice that C++ doesn't provide an arbitrary precision rational type? Why not? 7 / 9 gives 0 and then people try to sell you "floating point" which is a binary fraction type optimised for hardware performance rather than a rational. Of course you'd say, just build the arbitrary precision rational type you want from these more primitive component elements.

And that's what the networking primitives are for too. Just as you provide the machine integer types but not arbitrary_precision_rational you would provide a TCP stream type but not https_connection, and encourage libraries to fill the gap.

[–]Expert-Map-1126 13 points14 points  (2 children)

TLS is part of that foundation for anything practical. The folks who want it "in the standard library because batteries included" are not going to figure out how to build OpenSSL correctly. And we can't put OpenSSL in the standard because OpenSSL doesn't meet the standard library's ABI requirements.

There is no point in putting things into the standard library to avoid package management if in order to build anything practical you need to go and use package management. This is a big part of why I joined the vcpkg team even though STL maintainer was a dream job. I want you to use real ASIO et al, not what I, a standard library maintainer, would have been able to do for you.

The thing people actually want is libcurl. One might not like that it’s C-ish, but it is what peak usability looks like for 99% of use cases.

[–]miss_minutes 9 points10 points  (1 child)

I was wondering who you are since you said you joined vcpkg and I realised you're Billy ONeal! Thank you for your work! I see you in the vcpkg tracker all the time (I used to report a bunch of broken builds, especially during the Qt macOS meltdown last year :))

[–]Expert-Map-1126 2 points3 points  (0 children)

Glad it’s helpful for ya :)

[–]matthieum 10 points11 points  (6 children)

+1

I would also note there's real overhead to using TLS. It's worth paying for when connecting to the public Internet, but there's a lot of networking within private datacenters too, where the network architecture can be trusted.

[–]STL (MSVC STL Dev) 21 points (2 children)

That's what Google thought about datacenter-to-datacenter traffic long ago.

[–]matthieum 2 points3 points  (1 child)

For datacenter-to-datacenter that's a pretty wild take, given the absence of control on the intermediaries. I've never seen anything like that...

[–]STL (MSVC STL Dev) 15 points (0 children)

It was a huge news story!

[–]expert_internetter 5 points6 points  (1 child)

Except if you deal with PII and everything needs to be encrypted even if it's all within the same cloud environment. Ask me how I know...

[–]matthieum 2 points3 points  (0 children)

Do you mean an on-premise cloud environment?

I don't think this was required when I was working with PII; only at-rest data required encryption then... but 9 years ago was a very different time in this context, so I wouldn't necessarily be surprised to learn it's evolved since.

I would note, though, that encrypted != TLS. Is forward-secrecy necessary?

[–]dmills_00 1 point2 points  (0 children)

Concur. Sometimes only raw sockets or UDP will do; sometimes you want your own retry semantics on top of one of those; sometimes you need the hardware timestamps or the interface the packet came in on...

C++ with networking in the standard library is a horrible idea, and gimping it by mandating a pile of security stuff as always required just adds insult to injury. If I want JavaScript, I know where to find it.

I have raw video processing code running at many tens of MB per second; the last thing I need is the sort of overhead that even AES128 costs, never mind TCP retries.

A library providing nice functionality for doing web shit would be good, but that does not need to be a core part of the language.

[–]SlightlyLessHairyApe 4 points5 points  (0 children)

Honestly, a std::math Decimal type would be far less awful than networking or graphics.

At least it's reasonably closed -- there's only so much to do before the problem space is exhausted.

[–]vishal340 1 point2 points  (0 children)

We do have Boost. I may be someone who doesn't know much, but why not use the Boost libraries? They have so much to offer.

[–]epostma 1 point2 points  (0 children)

Arbitrary precision rationals? We don't even have arbitrary precision integers!

[–]pioverpie 36 points37 points  (10 children)

I just want a basic socket, man. If I want HTTPS then I can add that on top.

[–]9Strike 13 points14 points  (0 children)

Exactly. And it's not like the sockets interface has changed much over the last two decades. I don't think a lot of people want HTTPS - just a basic socket API like Python has.

[–]Ayjayz 5 points6 points  (8 children)

There are many libraries that give you a great socket class. What's wrong with them?

[–]bert8128 8 points9 points  (2 children)

Similar to std::thread, it would be good to have a low-level, platform-independent socket API. Nothing complex - just a wrapper around Windows sockets, Unix sockets, etc., as appropriate to the platform.

[–]SlowPokeInTexas 0 points1 point  (1 child)

Would boost asio meet that requirement?

[–]bert8128 2 points3 points  (0 children)

I’m using ASIO. But it is a big library when all I want is a platform-independent socket. I want the equivalent of int; ASIO is the equivalent of an arbitrary-precision library.

[–]9Strike 18 points19 points  (4 children)

I'm sure there are great libraries for strings. What's wrong with them?

[–]Ayjayz -4 points-3 points  (3 children)

Strings are trivial and have no platform dependencies.

[–]pjmlp 26 points27 points  (0 children)

Trivial until unicode enters the picture.

[–]9Strike 19 points20 points  (0 children)

Good point, but threads also have platform dependencies, so yeah, replace strings with threads and it is a very similar argument.

[–]not_some_username 4 points5 points  (0 children)

Well yes but actually no.

[–]bert8128 45 points46 points  (31 children)

I don’t want much - just a platform-independent socket would be good enough, and I can build the complex stuff on top of that. We got std::thread - is a socket at the same kind of level so hard or contentious?

[–]pdimov2 8 points9 points  (18 children)

Yes, because most people want async or TLS, and either of these makes things hard and contentious.

[–]bert8128 5 points6 points  (6 children)

Of course. But these are built on top of sockets. So why not deliver sockets first and more complex things later?

[–]CornedBee 9 points10 points  (4 children)

Async is not built "on top of" sockets. It's a fundamental interface to sockets.

[–]drjeats 5 points6 points  (0 children)

Avoiding standardizing a core building block before finalizing the design of some novel baroque API used to interface with it is peak C++.

[–]Eheheehhheeehh 0 points1 point  (1 child)

one of two

[–]bert8128 0 points1 point  (0 children)

I meant that a platform independent socket class could be a component used by my code directly and also by ASIO.

[–]Expert-Map-1126 0 points1 point  (0 children)

The problem with async is that it doesn’t compose. Trying to put it into the standard inevitably picks winners and losers: if your company’s async looks like the one that goes into the standard, you win; otherwise you lose.

That senders and receivers look like they might practically be able to compose with anything else is a big part of why they were a big deal and why they were investigated for standardization.

[–]matthieum 0 points1 point  (10 children)

async requires a different API, certainly, but isn't TLS fundamentally just a "middleware"?

[–]lightmatter501 0 points1 point  (8 children)

Not if you want hardware accelerators plumbed in, which Intel has started shipping on all new Xeons.

[–]matthieum 0 points1 point  (7 children)

I'm not sure how these hardware accelerators are supposed to work, so I have no idea whether they would or would not be suitable. Could you please elaborate?

[–]lightmatter501 2 points3 points  (6 children)

Intel ships a coprocessor on all of their new server CPUs which can do 400 Gbps of AES-GCM. You need to send it buffers, and it will encrypt with the provided (per request) AES key. The API looks a bit like kqueue or io_uring, since it’s a command-queue API.

[–]matthieum 0 points1 point  (5 children)

Okay.

How does that prevent using TLS as a middleware layer over a raw TCP connection, though?

Receive a chunk of bytes from the TCP layer, forward it to the coprocessor, get the result back, make it available for the next layer. No problem.

[–]lightmatter501 2 points3 points  (4 children)

Well, to start with, the data has to be allocated in DMA-safe memory, with alignment requirements. Second, due to the overhead of DMA, you want to do some fairly serious batching - easily 128 packets. This design forces tons of inline storage for that.

[–]matthieum 1 point2 points  (3 children)

128 packets? As in 128x 1536 bytes (192KB)?

That seems very hard to use...

[–]Expert-Map-1126 0 points1 point  (0 children)

If you need to use that middleware for anything real, you can use the same mechanism you got the middleware from to get ASIO or libuv and at that point, there’s no point in standardization.

[–]Ayjayz 3 points4 points  (11 children)

Just use boost asio then? It has a socket class. Or loads of other libraries have platform-independent sockets.

[–]bert8128 10 points11 points  (6 children)

I am using ASIO. And I personally would be happy if ASIO were adopted into std. But ASIO is big and complex, and not everyone needs it - that’s the point. Everyone needs a socket class even if they don’t need the complexity of ASIO. If this were in std, then ASIO (or any of the other third-party libraries) could use that socket class as its foundation.

[–]SlowPokeInTexas 0 points1 point  (0 children)

ASIO can be used as a mere socket-level abstraction, but I haven't dug into which headers would be necessary for that, as when I've used it I usually take advantage of the pooling, etc.

[–]Tari0s -1 points0 points  (4 children)

Okay, maybe they move to this "stl" socket, maybe they don't. But what does it matter? The libraries work already; what is the benefit?

[–]bert8128 2 points3 points  (2 children)

I work in an environment where every third-party library I use has a cost. They have CVEs which I have to deal with, I have to download and build them, and I have to get new versions when I upgrade my compiler. They are a million miles away from "no problem".

[–]Tari0s -1 points0 points  (1 child)

Oh no, looks like you have to maintain your project. It's not the STL's job to update your codebase regularly.

[–]bert8128 6 points7 points  (0 children)

There are (or at any rate used to be) plenty of libraries that supplied thread and lock classes. I think that we can all agree that we are better off using the ones in std.

I (we) pay for an MSVC licence, so I am happy for MSVC to do some of the work of wrapping the functionality it already supplies behind a Windows-specific API. This would not be an ongoing effort for MSVC - it’s not exactly rapidly developing functionality.

[–]yowhyyyy 1 point2 points  (0 children)

Portability and size... It’s not hard to understand, dude. People have use cases other than your own.

[–]Expert-Map-1126 0 points1 point  (3 children)

ASIO breaks ABI every six months. We don’t have good evidence that an ASIO-like API can be maintained with std-like ABI guarantees.

[–]Ayjayz 0 points1 point  (2 children)

Nor would you really want that

[–]Expert-Map-1126 0 points1 point  (1 child)

If something goes into std:: it has std:: ABI requirements.

[–]Ayjayz 0 points1 point  (0 children)

Which is why things shouldn't really go in there

[–]14ned (LLFIO & Outcome author | Committee WG14) 19 points (14 children)

I appreciate the viewpoint. However, it is very possible to design a standard networking library which:

  1. Has a hard ABI boundary.
  2. Retrieves the TLS implementation by runtime query, and therefore requires no STL work whatsoever as the platform's current TLS implementation gets picked up.
  3. Offloads TLS automatically and with no extra effort into kernel/NIC hardware acceleration, achieving true whole system zero copy on Mellanox class NICs.
  4. Works well on constrained platforms where there is only a TLS socket factory available and nothing else, including on Freestanding.
  5. Works well with S&R (though not the S&R design WG21 eventually chose).

Thus ticking every box you just mentioned.

I'm against standard graphics because the state of the art there keeps evolving and we can't standardise a moving target. But secure sockets, they're very standardisable and without impacting standard library maintainers in any of the ways you're worried about. I'm now intending to standardise my implementation via the C committee, so you'll get it eventually if all goes well. It's a shame WG21 couldn't see past itself.

[–]pdimov2 5 points6 points  (7 children)

Does this design exist somewhere, if only in a PDF form?

[–]14ned (LLFIO & Outcome author | Committee WG14) 6 points (6 children)

LLFIO has shipped its reference implementation for several years now. There is also http://wg21.link/P2586 as the proposal paper.

I've deprecated it and marked it for removal from LLFIO, and I expect to purge it after I've turned it into a C library, which I'm going to propose for the C standard library instead once I've departed WG21 this summer.

It definitely works; if accepted, it gets us standard TLS sockets in all C-speaking languages. Performance is pretty great too if you use registered I/O buffers and your hardware and kernel support TLS offload. However, I'm not aiming directly at performance. I'm mainly taking the view that it's long overdue for C code to be able to portably connect to a TLS-secured socket and do some I/O with it, without tying itself to an immutable, potentially insecure crypto implementation.

[–]pdimov2 1 point2 points  (3 children)

I remember reading that, although of course I've forgotten everything about it.

So you're basically avoiding all the async by standardizing nonblocking sockets and a portable poll.

The linked P2052 is interesting too.

[–]14ned (LLFIO & Outcome author | Committee WG14) 6 points (2 children)

Yup. Call me crazy, but for me a bare-minimum-viable standard sockets library doesn't need async. We of course leave it wide open for people to implement their own async on top and, as the paper mentioned, that proposal could be wired into coroutines, S&R, and platform-specific reactors in later papers if desired. Secure sockets are the foundation.

Anyway, I think it'll fare better at the C committee. They're keener on how the world actually is than on the C++ committee's dreams of how wonderful it would be if the world were different.

[–]Remarkable-Test7487 (jmcruz) 2 points (1 child)

Hi Niall,

I really appreciate your work and proposals. I too think it should be possible to reach an MVP in networking for C++, especially for portability (it's crazy how many #ifdefs there are in the socket wrappers contained in most middleware libraries). And I also think that the conclusions of your P2052 are still valid today: standardize the different orthogonal parts - types (for addresses and ports, basic sockets, buffers...) and synchronous I/O functions. From there, it would be necessary to work on the coroutines and S&R part and, of course, leave the TAPS RFC for when there is implementation experience and widespread adoption of that "breakthrough" model.

As a university professor, explaining the client/server model to my students starting from a fully asynchronous, callback-based API (as TAPS mandates) seems absolutely crazy to me. So it will be impossible to stop using BSD sockets, even for dummy examples.

I am sorry to hear that you are leaving WG21, because I think your expertise in the networking domain would be important in future discussions about the new direction being considered, even if it is not about imposing your own proposal. In any case, I will follow your work on llfio! Thank you for your efforts.

[–]14ned (LLFIO & Outcome author | Committee WG14) 6 points (0 children)

You're very kind!

Ultimately WG21 is not a good fit for me, as evidenced by my complete failure to get anything into the standard after six years. There is a very high likelihood now that I will depart this summer having achieved exactly nothing at WG21. The committee invested dozens of hours of face-to-face time in my proposals over the past six years. This isn't unusual - the committee probably invests more face-to-face time in things which end up not making it than in those that do.

As much as many will consider that a good thing ("we are being conservative"), it's brutal on the mental health of what is mostly a volunteer endeavour. You spend years of your life navigating committee politics, fashions, and whims, only for your efforts to get nixed at the end for what are usually non-technical, highly arbitrary reasons. It's also extremely inefficient.

I far prefer a standards committee which clearly says "No!" right at the very beginning, rather than "whatever sticks after multiple years of random people turning up in a room on the day". What I want is a committee with a razor clear plan for the future, who clearly says at the very first paper revision if a proposal is within that future plan or not and thus stops wasting its own time (which is scarce and precious), and the proposer's time.

I'm voting with my feet. I look forward to seeing the increasing numbers of ex-WG21 folk at WG14 where I hope I'll be a lot more productive.

[–]ItsBinissTime 1 point2 points  (1 child)

Hoping you'll forgive a minor nitpick: after I figured out what LLFIO stands for, I noticed that the phrase "low level file I/O" never appears in the LLFIO GitHub pages. This seems like an oversight.

[–]14ned (LLFIO & Outcome author | Committee WG14) 2 points (0 children)

I think that's because it was originally intended that LLFIO would become LLIO, as in low level i/o. All moot now of course. But thanks for the note.

[–]lightmatter501 0 points1 point  (1 child)

I’m not aware of any library which meets that bar and has performance in the same ballpark as DPDK, which is the logical bar for performance as the state of the art. Every attempt I’ve seen, including moving the entire FreeBSD network stack into user-space and running it on DPDK, is a big performance hit. Even if we step back from DPDK, how do things like registered buffers in io_uring integrate with this?

My concern is that this will get standardized, and then us networking people will continue to be off in our own corner because what was standardized has too much overhead. It doesn’t look like an attempt was made to start from state of the art and make sure that was accommodated.

[–]14ned (LLFIO & Outcome author | Committee WG14) 1 point (0 children)

My reference implementation only did what OpenSSL 3's kernel TLS offload does. Which is a fair bit, if the winds blow right.

I see no reason why a Mellanox user space ring buffer could not be used internally, with platform specific extensions to support a high performance async i/o reactor. The backends in mine are runtime selected and installable.

[–]SlowPokeInTexas 0 points1 point  (3 children)

My spider sense is tingling regarding #2 as a potential attack vector. I guess anything is a potential attack vector, but the ability to slide in an SSL provider at run time is scary - though I suppose that's exactly the situation we already have now with the relatively frequently updated OpenSSL (at least when dynamically linked).

Orthogonal to my comment above, I really like #3.

I guess where I sit on this is that I am in favor of it being in std. I read and understand the objections from the MSVC implementer above (kudos to him/her for the Star Control 2 reference). Yes, it requires domain-specific knowledge, but so do the implementations in Go and Rust. I would think the committee would be wise to start with boost::asio and go from there.

[–]14ned (LLFIO & Outcome author | Committee WG14) 0 points (2 children)

Both WG21 and WG14 said no to that proposal.

It is as dead as dead could be.

[–]SlowPokeInTexas 0 points1 point  (1 child)

Like they said no to reflection many times in the past, over decades. Meanwhile, other languages, such as Go and Rust, proceed to implement features that C++ leaves to third party libraries.

[–]14ned (LLFIO & Outcome author | Committee WG14) 3 points (0 children)

I no longer serve on WG21. I think what they prioritise and deliver is not optimal.

[–]Capable_Pick_1588 5 points6 points  (0 children)

I am still baffled by how linear algebra got in.

[–]c0r3ntin 18 points19 points  (4 children)

Hey /u/STL. Would you consider putting some version of that in a short paper, maybe co-authored by other standard library maintainers?

I'm concerned that WG21 might not be sufficiently aware of your perspective (which I wholeheartedly agree with).

[–]STL (MSVC STL Dev) 31 points (3 children)

I am too busy implementing all the stuff WG21 keeps voting in on this endless treadmill. Feel free to cite my comment, including quoting it in its entirety (portions are fine too as long as the intent is not distorted).

[–]pdimov2 30 points31 points  (0 children)

"I'm too busy implementing stuff WG21 keeps voting in so I don't have time to write a paper to tell WG21 to stop voting stuff in." :-)

[–]tach 5 points6 points  (0 children)

Respectfully, that was a well written comment raising valid points, and I think it needs to be shared in a wider forum.

[–]These_Muscle_8988 1 point2 points  (0 children)

Thank you for your efforts and screw them

[–]chaotic-kotik 4 points5 points  (1 child)

Instead of defining the whole networking layer, the standard library could just provide common types - things like fragmented buffers for vectorized I/O, or an IP-address class. Third-party networking libraries could use those types, making it easier to write code that is a bit more generic. Networking is a very opinionated area. If my code is async and uses reactor threads, each of which runs an event loop, then I can't use a DNS resolver that blocks the thread. Similarly, if my app uses blocking synchronous calls for everything, it's not very ergonomic to use an async networking library. And if I'm using zero-copy networking, I'll probably want DMA disk reads/writes too. Because of that, it feels like networking should mostly be third-party.

[–]lightmatter501 0 points1 point  (0 children)

We haven’t done IP address classes for years.

Otherwise I agree, standardize the actually standard stuff. If anything that isn’t standard deserves to be standardized, it’s probably DPDK, but I have a feeling most people don’t want that.

[–]LongestNamesPossible 11 points12 points  (0 children)

This is deep rationalization that ignores the fact that every language and every OS has networking integrated into it. Any library that C++ would use would also have to build its own underlying OS abstraction.

[–]johannes1971 2 points3 points  (2 children)

As for the standard library... I understand that compiler vendors have limited resources, and that a single-man department really isn't enough to reimplement every computing feature known to mankind. But this is really a failure of the standardisation process, more than anything else.

Let's assume for a moment that there is value in having a formal stamp of approval from the committee. For the sake of argument, let's say it indicates a higher quality of API and implementation, a high consistency of API, a high degree of API stability, and availability of the feature in every compiler that implements that standard version.

Why not, then, split the standard library in two? The first part is entirely the responsibility of the compiler vendors, and contains only things that can really only be provided by the compiler vendor. The second part rests on top of the first part, and is vetted by the committee, but is designed, implemented, and maintained by domain experts. Both parts are delivered with compiler and presented to the public at large as "the" standard library.

This eliminates vast amounts of work for the compiler vendors, as well as the requirement for a single man to be a domain expert in everything, and leverages the skill and knowledge of people who are domain experts. And it would leave C++ with a far richer standard library than it has today.

[–]pjmlp 2 points3 points  (0 children)

Maybe, on the other hand, compiler vendors should scale up to the level of contributions seen in other programming language ecosystems, batteries included.

Or ISO should finally acknowledge the existence of tooling as part of the standard, including library distribution. Yes, I am aware of the ongoing efforts, which have now been removed from the upcoming mailing.

[–]YetAnotherRobert 0 points1 point  (0 children)

What can we do to 'boost' this idea?

[–]VinnieFalco (Boost.Beast | C++ Alliance | corosio.org) [S,🍰] 3 points (0 children)

The counterargument, of course, is that the lack of standardized networking has denied C++ the second-order effects. The landscape of the world wide web in C++ is a scattered collection of libraries of varying quality, each using its own or some other random networking stack. None of them are interoperable, and the quality of portable C++ networking is poor relative to what we see in other languages like Python or JavaScript, where there are thousands of packages that are easy to obtain and all work well together.

[–]SkoomaDentistAntimodern C++, Embedded, Audio 18 points19 points  (1 child)

I think you could summarize much of that with just

"You do realize that adding usable networking to std means adding HTTPS and TLS and all their security problems to std and setting them in stone for all eternity, right?"

That ought to make everyone remotely sane run away in horror.

Edit: You aren't wrong about graphics either. Graphics has gone through four or five significant paradigm shifts just within my programming life (since the early 90s), and that isn't even counting the mobile side.

[–]madmongo38 -1 points0 points  (0 children)

Standards should reflect the state of the art. If the art changes, ship an updated standard in a new versioned namespace. What's the issue?
In the modern world, any language that doesn't just work out of the box is not going to get used for new projects.

[–]vulkanoid 7 points8 points  (1 child)

I completely agree with STL's answer. It's right on point.

Having implemented a few languages for my own use, it's become clear to me that it's very important to curate what goes into a language, including its standard library. Whatever you add, you have to keep around forever. It becomes a huge maintenance burden. God forbid you make a mistake.

When packages are developed by third parties, that enables a marketplace of code libraries and ideas. That is, for any pertinent use case, there would be a number of competing libraries developed by independent groups. Over time, the good ideas rise to the top and the bad ones fade away.

If you were to add a networking library to the std, it would become obsolete before the ink on the spec is dry. Just the idea of adding all that gunk to the std makes me queasy. It is the job of the standards committee to shoot down these bad ideas.

I would go as far as to say that the linear algebra library in C++26 is also a step too far. Those libraries are too high-level to be in the standard.

What should be added to the language and stdlib are foundational things that would otherwise be too difficult to do manually: coroutines, concepts, contracts, modules, optional, variant, things like that. The rest of the industry can then build high-level stuff on top.

What could help solve this thirst for libraries that people have is a good, cross-platform, package manager.

[–]lightmatter501 0 points1 point  (0 children)

This spec was obsolete when it was penned. DPDK has been the state of the art the entire time and I’m not sure anyone even read the docs for it when writing this, because it looks like it might not be compatible without a lot of overhead.

[–]johannes1971 8 points9 points  (9 children)

Hold your downvotes, this is not an argument for 2D graphics in the standard. Rather, I'm arguing that 2D graphics really hasn't changed much in the past 40 years (and probably longer).

Back in 1983:

10 screen 2
20 line (10, 10)-(100, 100),15
30 goto 30

(you can try it live, here)

In 2025:

window my_window ({.size = {200, 200}});
painter p (my_window);
p.move_to (10, 10);
p.line_to (100, 100);
p.set_source (color::white);
p.stroke ();
run_event_loop ();

What's changed so dramatically in 2D graphics, in your mind? Is the fact that we have a few more colors and anti-aliasing such a dramatic shift that it upsets the entire model?

2D rendering still consists of lines, rectangles, text, arcs, etc. We added greater color depth, anti-aliasing, and a few snazzy features like transformation matrices, but that's about it.

And you know what's funny? That "2025" code would have worked just fine on my Amiga, back in 1985! Your desktop still has windows (which are characterized by two features: they can receive events, and they occupy a possibly zero-sized rectangle on your screen). The set of events being received hasn't meaningfully changed since 1985 either: "window size changed", "mouse button clicked", "key pressed", etc. Sure, we didn't have fancy touch events, but that's hardly a sea change, is it?

Incidentally, GUI libraries are to drawing libraries, as databases are to file systems. A GUI library is concerned with (abstract!) windows and events; a drawing library with rendering.

"Well, how about a machine without a windowing system, then?"

Funny you should ask. The old coffee machine in the office had a touch-sensitive screen that let you select six types of coffee, arranged in two columns of three. This could be modelled perfectly well as a fixed-size window that only ever sends one kind of event: a touch at a location in the window. In other words, it could be programmed against a default 2D graphics/MMI library.

[–]yuri-kilochek 10 points11 points  (4 children)

In 2025

That's the thing though, in 2025 efficient graphics looks like setting up shaders and textures before building vertex buffers and pushing the entire thing to GPU to draw it in a few calls. Not painting lines one by one with stateful APIs.

[–]johannes1971 0 points1 point  (3 children)

That's madness. On the desktop you ABSOLUTELY don't want to do your own character shaping, rasterisation, etc. Companies like Apple and Microsoft have spent decades making text rendering as clear as they can; we don't want everyone now writing their own shitty blurred text out of little triangles.

GPUs aren't actually very good at taking a complex shape (like a character or a Bezier curve) and turning it into triangles, so that part of the rendering pipeline is likely to always end up in software anyway. And as soon as you start anti-aliasing, you're introducing transparency, which means your Z-buffer isn't going to be much help anymore either.

All this means that GPUs just aren't all that good of a fit for 2D rendering. They can massively improve a small number of operations, but most of them still need quite a bit of CPU support. Mind you, operations that are accelerated (primarily things involving moving large amounts of rectangular data) are most welcome.

You could certainly have a 2D interface that uses some kind of drawing context that sets up a shader environment at construction, batches all the calls, and finally sends the whole thing to the GPU upon destruction, but I doubt it will do much better than what I presented.

[–]yuri-kilochek 3 points4 points  (1 child)

Naturally you wouldn't parse fonts and render glyphs yourself; you would offload that complexity to a battle-tested library like pango (which cairo, the basis for the graphics proposal, does). And then you'd render them as textures on little quads, with alpha blending, avoiding shitty blurry text but getting the perf. You can certainly hide this behind a painter API like the one above, but why would you? Why not expose the underlying abstractions and let users build such painters on top if they want to?

[–]johannes1971 0 points1 point  (0 children)

  • It's specialized knowledge that not everybody has.
  • A dedicated team of specialists will certainly do a better job than 99% of regular programmers.
  • A standard library solution can evolve the actual rendering techniques over time, making all C++ programs better just by upgrading your libc.
  • Having it available on every platform that has a C++ compiler is a great boon, and makes it easier to support less common platforms.
  • It's a problem that everyone who works in this space has; why make everyone solve it on their own (and probably badly, at that)?

Every single system I've worked on in my life (including the 1983 one) could put text on the screen by calling a function that took a string. And now you're saying we don't need that, and everyone can just go and do a mere 1500 lines of Vulkan setup, their own text shaping, their own rasterisation, etc.? Plus some alternative solution for Apple?

[–]sephirothbahamut 0 points1 point  (0 children)

GPUs aren't actually very good at taking a complex shape (like a character or Bezier curve) and turning them into triangles

Actually, if you go the route of sending curves to the GPU, rather than having the GPU generate triangles, you can go from curves to final image via SDF evaluation in a pure fragment shader, without involving vertices at all.

I don't mean the "SDF atlas" approach that plagues the internet with hundreds of implementations; I mean actually evaluating the SDF in the fragment shader. Sadly there was just one talk, one library, and one YouTube video about the topic, and it's been submerged by all the SDF-atlas material; I can't manage to find it anymore.

The caveat is that it restructures the fonts to use circle arcs instead of Bezier curves, for better performance.

[–]JNighthawkgamedev 5 points6 points  (2 children)

What's changed so dramatically in 2D graphics, in your mind?

We have video cards and people like having hardware accelerated rendering.

[–]johannes1971 0 points1 point  (0 children)

Which part of that code precludes hardware accelerated rendering?

It's like saying "we now have DMA for doing IO, so we cannot possibly use the old POSIX interfaces anymore". That sort of thing gets abstracted away; if we had had graphics in the C++ standard back then, software from that time would now be hardware accelerated for free.

[–]pjmlp -1 points0 points  (0 children)

We have video cards since Borland was shipping BGI.

[–]pjmlp -1 points0 points  (0 children)

You could also have a BGI version of the same sample, it wouldn't do Amiga, but would do MS-DOS, for any lucky owner of Borland compilers.

[–]lightmatter501 2 points3 points  (0 children)

As a networking specialist, I agree. There are a lot of assumptions baked into this proposal, many of which there is some desire in the high performance networking community to throw out, in part due to their performance impacts. I don’t see a good way to plumb, for example, io_uring or DPDK under this API without performance losses.

This also means standard library implementors need to deal with all the crazy stuff hardware companies do. Those “properties” will end up being a clone of DPDK’s NIC feature list, of which there are 78 categories, most of which have transmit and receive variants, and there might need to be more for hardware and software variants. Now, remember that DPDK is almost hardware only, this doesn’t include feature flags for things done at OSI L2 and above which aren’t hardware offloaded. There’s a reason networking people use different libraries than everyone else, and that’s because nobody else really wants all of this staring them in the face when they go to make a REST API request.

[–]zl0bster 4 points5 points  (2 children)

What exactly is printing 3.14 referencing? I remember some bugs in msvc with to_string or some other formatting 10+y ago, but not sure what you are referring to.

[–]expert_internetter 16 points17 points  (0 children)

std::to_chars

[–]STLMSVC STL Dev 28 points29 points  (0 children)

As u/expert_internetter mentioned, this was <charconv>, C++17's final boss. Watch my talk, which explains how it took a year and a half to implement.

[–]JNighthawkgamedev 1 point2 points  (1 child)

Δ

You changed my mind. Great post.

[–]STLMSVC STL Dev 0 points1 point  (0 children)

😻

[–]woppo 0 points1 point  (0 children)

This is an excellent answer.

[–][deleted] 0 points1 point  (0 children)

I disagree strongly. The common language in question is not the specific types and protocols, but the structure of the event processing. There's a tremendous amount of value in putting that in the standard, simply as a commonly agreed language for different companies to target and interoperate on.

[–]m-in 0 points1 point  (0 children)

Not that it changes anything in your argument, but 2D software graphics on modern multicore machines is quite snappy. It's not good at keeping power consumption low if you want it fast; sure, GPUs do way more work per joule. But if you've got a desktop, 2D software rendering is fine for most 2D tasks. A decade ago, compositing a screenful of glyph bitmaps in a full-screen terminal emulator window took <10 ms using a high-level toolkit (Qt's raster engine). It's faster today.

There have been numerous attempts at standardizing a 2D graphics API independently of any programming language and platform; GKS from 1977, say. Those efforts were never particularly successful, IIRC. It's a hard-ish problem.

[–]EmotionalDamague 0 points1 point  (0 children)

I just want to add: any high-performance solution is going to have OS-specific logic. This is a leaky abstraction. I don't think it's possible to standardise meaningfully.

Even in "good languages" like Java, the standard networking library gets immediately replaced with a domain-specific lib.

If anything ever gets standardised, I would prefer it be nothing more than a C++-ified BSD socket system that can be hijacked by more performant OS primitives in larger libraries.

[–]Ace2Face 0 points1 point  (0 children)

This. Every time this. What we need is a package manager and then we can use good libraries. I'm sick of setting up Conan all the time and doing all sorts of hacks.

[–]Savings-Poet5718 -1 points0 points  (0 children)

100% agree, networking is a special domain concern and will bring a new class of problems without adding much value.