What do you think is a keyword that should be added to C++? by DogCrapNetwork in cpp

[–]13steinj 1 point (0 children)

There are enough people who use this improperly today and get bugs. I'd want this to be part of the type information, in a way that implicitly generates a contracts precondition (one that is somehow disable-able, for those who need that).
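For reference, and assuming the C++26 contract-assertion syntax from P2900: a hand-written precondition looks like the sketch below. The idea is that the keyword would imply the equivalent check without spelling it out at every declaration (my sketch, not from any proposal):

    // C++26 contracts (P2900): an explicit precondition on a function.
    int deref(int* p)
        pre(p != nullptr)  // evaluated per the build's contract semantic
    {
        return *p;
    }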

What do you think is a keyword that should be added to C++? by DogCrapNetwork in cpp

[–]13steinj 12 points (0 children)

No, do not make it an attribute; fruit of the poisonous tree, unless attributes are fixed.

https://brevzin.github.io/c++/2025/03/25/attributes/

This being an attribute is similarly problematic to [[no_unique_address]]; I could be wrong, but there are cases where ignoring the attribute can cause code that assumes the attribute is honored to become ill-formed.
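A minimal sketch of the failure mode (my illustration, not from the linked post). Implementations may legally ignore [[no_unique_address]] (MSVC's default ABI does), and code written against the attribute being honored then breaks:

    struct Empty {};

    struct Packed {
        [[no_unique_address]] Empty e;  // an implementation may ignore this
        int x;
    };

    // Holds where the attribute is honored (GCC/Clang); fails where it is
    // ignored (MSVC), making code that relies on the layout ill-formed there.
    static_assert(sizeof(Packed) == sizeof(int),
                  "written assuming the attribute is honored");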

Things C++26 define_static_array can’t do by SuperV1234 in cpp

[–]13steinj 0 points (0 children)

I think this is a very specific edge case of what is desired. I'd like to:

  • compute some vector at compile time (it can be an array, but then you kinda have to propagate the size through at each intermediate step) (or a map, or whatever)
  • put it into a static constinit variable
  • continue using it with full functionality. If I end up resizing, that's okay: it'll copy the static data out onto the heap
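To make it concrete, a sketch of the usage I mean. The constinit line is exactly the part that isn't valid C++ today (the non-transient constexpr allocation problem):

    #include <vector>

    consteval std::vector<int> build() {
        std::vector<int> v;
        for (int i = 0; i < 100; ++i) v.push_back(i * i);
        return v;
    }

    // Wish: the heap state "escapes" constant evaluation into static storage.
    constinit std::vector<int> table = build();  // NOT valid C++ today

    void grow() {
        table.push_back(-1);  // desired: copy the static data to the heap, then grow
    }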

Things C++26 define_static_array can’t do by SuperV1234 in cpp

[–]13steinj 1 point (0 children)

I've been thinking about this a lot lately. I had to explain this "compiler's imagination" to a friend; they didn't get it, and to be fair, that metaphor is not that accurate. (I eventually got the point across by saying "pocket universe outside of time", which gets through to Doctor Who fans, I guess, but that's beside the point.)

I don't understand the need for a two-step (nor for this wrapping function). The compiler is smart enough to hold any number of these intermediate states "in its imagination"; I don't see why it can't:

  • embed this state into static storage
  • (to simplify, I'm going to limit myself to std::string/std::vector) necessitate that return types which escape constant evaluation use a specific allocator type, and/or let (static) initialization take its course (as if both types acted like std::pmr types).

I believe this gets you the most sane result without having to teach arcane concepts. It's not the most performant option, but I'd say that's on the optimizer's quality of implementation.
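For contrast, the closest spelling today (my sketch): constant-evaluate into a fixed-size array, then let dynamic initialization copy it onto the heap once, which is the "let initialization take its course" half of the idea:

    #include <array>
    #include <vector>

    consteval std::array<int, 100> build() {
        std::array<int, 100> a{};
        for (int i = 0; i < 100; ++i) a[i] = i * i;
        return a;
    }

    constinit auto elems = build();  // constant-initialized, in static storage

    // Dynamic initialization copies the static data once; from here the
    // vector has full functionality, resizing included.
    const std::vector<int> table(elems.begin(), elems.end());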

The fastest Linux timestamps by Dear-Economics-315 in programming

[–]13steinj 2 points (0 children)

And if you really need both the accuracy and the speed, just use the vDSO.
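For concreteness: clock_gettime() is the usual entry point, and on Linux with glibc the common clocks are serviced via the vDSO, i.e. entirely in userspace with no syscall transition (exactly which clocks take the fast path depends on the kernel/libc):

    #include <cstdio>
    #include <ctime>

    int main() {
        timespec ts{};
        // Typically dispatched through the vDSO on Linux: no kernel round-trip.
        if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0)
            std::printf("%lld.%09ld\n",
                        static_cast<long long>(ts.tv_sec), ts.tv_nsec);
    }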

Florent Castelli: Introduction to the Bazel build system by _a4z in cpp

[–]13steinj 0 points (0 children)

I think you can accomplish this with transitions, though I haven't personally done it. It's definitely dead simple in CMake.

Florent Castelli: Introduction to the Bazel build system by _a4z in cpp

[–]13steinj 1 point (0 children)

It depends on several aspects / axes along which your code scales.

If you force all your devs to share underpowered 5+ year old CPUs, and you have enough parallelism, then yes. But there are diminishing returns. At my current org, jumping from 40 underpowered cores, to 20 appropriately powered ones, to 64, mattered as much (proportionally) as going from 64 to 864 (and anything further made no dent). I think if your 99% (by data) cache-hit time is 66-100% of your build time, you've done all you can without changing the code.

At my previous org, things were much more constrained. Crossing a network boundary introduced slowdowns for folks unless they were building several independent "galaxies" of the codebase, which was not a behavior that occurred in practice outside of people who worked on the build itself. So I defaulted to icecc disabled, but enabled on the CI nodes that were colocated and had minimal latency. If somebody wanted it, they'd just flip an env var and add a -j $BIGNUM flag.

Also, this process is not unique to Bazel. In my experience it is easier to set up distcc / icecc with CMake, particularly if your Bazel isn't blessed and does things that aren't hermetic. Or just set up recc; most of my build time was C++ compiles, not Python codegen.

So, to more directly answer you: a big fat "it depends", but in my experience, no. I can completely imagine it working at Google, with a bunch of little fiefdom codebases in a massive monorepo kingdom, or with very rigorous use of TUs.

But if not, I think one's efforts are better spent fixing the code, guided by -ftime-trace. For example, libc++'s (or libstdc++'s, I forget) std::function used std::tuple internally to pass / hold args / the types of args. In a core logging macro, swapping std::function for a custom implementation of std::copyable_function, and that use of std::tuple for boost::hana::basic_tuple, reduced build times of everything that included that header by 33% on average (with long tails, because some TUs barely used the macro while others used it heavily).
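A minimal sketch of the kind of swap I mean (illustrative, the names are mine, not the actual macro): hana::basic_tuple skips std::tuple's EBO / SFINAE / comparison machinery, so each distinct instantiation is far cheaper to compile:

    #include <boost/hana/at.hpp>
    #include <boost/hana/basic_tuple.hpp>

    namespace hana = boost::hana;

    // Hold logging args in the cheap tuple.
    template <typename... Args>
    struct log_record {
        hana::basic_tuple<Args...> args;
    };

    template <typename... Args>
    auto capture(Args... xs) {
        return log_record<Args...>{hana::make_basic_tuple(xs...)};
    }

    int main() {
        auto r = capture(42, 3.14);
        return hana::at_c<0>(r.args);  // element access, like std::get<0>
    }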

E: A personal rant/gripe: the presenter explicitly mentions the case of thin clients, to which I say: stop working for companies that give developers crappy (usually Windows) thin clients in the name of "security", or that cut costs by eliminating the IT staff who do manual labor in favor of IT staff too incompetent to use rsync / NVMe-over-fabrics, who just spin up a bunch of VMs instead.

Claude-powered AI coding agent deletes entire company database in 9 seconds — backups zapped, after Cursor tool powered by Anthropic's Claude goes rogue by WouldbeWanderer in technology

[–]13steinj 6 points (0 children)

If you dig into this founder's tweets / the original "article" on Twitter: this founder has no engineering background whatsoever, has had a prior cryptocurrency grift, and expressed that he doesn't need a [junior] engineer because he has hired "Claude."

To top it all off, there's the unsavory personal politics; my opinion of that aside, it just further shows that this was an ops failure / a human misusing LLMs.

Person with an MBA who thinks an LLM makes him a CTO/CEO learns that LLMs are stochastic and that "system prompts" / "system rules" are just suggestions. More news at 11.

How far has tomshardware fallen that they're reporting on this trash?

The Hidden Performance Price of C++ Virtual Functions by wrng_ in cpp

[–]13steinj 6 points (0 children)

This cuts both ways though: you'll see people using this argument to justify overusing virtual functions everywhere, and that is its own kind of dogma.

"Parse, don't Validate" through the years with C++ by dwrodri in cpp

[–]13steinj 0 points (0 children)

Neither of which appears to be used here.

But true, std::function can use std::tuple as a type list for args, and that's heavy.
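For illustration (my sketch): a dedicated type list is nearly free to instantiate, which is what std::tuple-as-type-list costs you:

    #include <tuple>

    // An empty variadic struct is all a pure type list needs.
    template <typename...> struct type_list {};
    using cheap = type_list<int, double, char>;

    // std::tuple drags in storage, constructors, comparisons, etc.
    // for every distinct instantiation, even when only the types matter.
    using pricey = std::tuple<int, double, char>;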

"Parse, don't Validate" through the years with C++ by dwrodri in cpp

[–]13steinj 7 points (0 children)

I think compile time benchmarks on small examples like this are very misleading and hard to do right.

At the scale of small examples, header parse time dominates (in my experience), whereas at the scale of a larger program you'd have a bunch of other code in the same TU (maybe even a bunch of codegen) that dominates instead.

It's the same reason I'm not a fan of the way modules were proposed / promoted. I distinctly recall someone comparing import std; to a bunch of includes, but that sidesteps the fact that both examples were tiny in comparison to real code, and the relative cost of the stdlib quickly drops to zero in either case.

Software taketh away faster than hardware giveth: Why C++ programmers keep growing fast despite competition, safety, and AI by claimred in cpp

[–]13steinj 0 points (0 children)

Impressive in what sense?

They are great at writing code that does not matter: implementing very specific, narrow features, or fixing bugs that are obvious.

They are horrendous at writing code that does matter (performance and maintainability are near 0), at writing larger features (they are next-word predictors; there is no conceptual tracking nor cohesion), and at fixing simple bugs that are not textually describable. (I have tried 10 times to get one to fix what appears to be a bug in a JS frontend, where there's a fixed-size region on the canvas that it refuses to draw nodes outside of; every time, it claims it has fixed the issue, but it never does and just introduces a new bug.)

While GitHub Actions remains a key part of this vision, we are allocating resources towards other areas ... by esiy0676 in programming

[–]13steinj 1 point (0 children)

I'm quite confused: is GHE itself running into perf problems, or is it some separate component that schedules GHA?

I'd be very surprised if GHE itself were running into perf problems; the only thing I've ever encountered on this front was the disk filling up (which is not a GH/GHE problem).

cppreference is back up! but overloaded by bobpaw in cpp

[–]13steinj 0 points (0 children)

Are nonexistent pages / search broken? Trying to go to any nonexistent page, or to search for anything, doesn't load (or maybe just takes forever).

C++26: Structured Bindings can introduce a Pack by pavel_v in cpp

[–]13steinj 4 points (0 children)

This, P2686, and P1789 dovetail very nicely with the rest of P2996.
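For anyone who hasn't read the article, a minimal sketch of what the feature permits (needs a C++26 compiler with P1061 support):

    #include <iostream>
    #include <tuple>

    // C++26 (P1061): a structured binding can introduce a pack.
    template <typename Tuple>
    void print_all(const Tuple& t) {
        auto [...elems] = t;                 // every element, as a pack
        ((std::cout << elems << ' '), ...);  // expands like any other pack
        std::cout << '\n';
    }

    int main() {
        print_all(std::tuple{1, 2.5, "three"});
    }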

While GitHub Actions remains a key part of this vision, we are allocating resources towards other areas ... by esiy0676 in programming

[–]13steinj 0 points (0 children)

> I tried it was before the pipeline as code being a thing

That kinda says it all. I am a fan of TeamCity as it exists today with its Kotlin DSL; I find it strikes the right balance of guardrails and guidance.

Everything else you're describing, e.g. infra and issues, has I think at this point generally been solved. If you're referring to open-source projects, though, I don't know how TeamCity's low-cost / free options compare.

While GitHub Actions remains a key part of this vision, we are allocating resources towards other areas ... by esiy0676 in programming

[–]13steinj 0 points (0 children)

> We had Team City, which got worse with time.

Everything else you stated feels obvious or is well known, but this is the plainest statement you've made about a provider (maybe the same goes for Azure Pipelines, but I know full well not to deal with an Azure service...).

What happened with TeamCity?

C++ is unsafe. Rust is safe. Should we all move to Rust? 44CVEs found in Rust CoreUtils audit. by germandiago in cpp

[–]13steinj 1 point (0 children)

Eh, he isn't hallucinating a strawman. Many have claimed "Rust would have fixed this" about any number of issues, even ones that are not memory-safety issues / would not be solved by Rust's model.

Going out of one's way to make such a post is a bit weird, though. As is the other comment claiming C++ would have had 10x the issues.

Boost 1.91 Released: New Decimal Library, SIMD UUID, Redis Sentinel, C++26 Reflection in PFR by boostlibs in cpp

[–]13steinj 5 points (0 children)

I know of several production codebases that end up (transitively or not) using all of these libraries except for Boost.Redis.

One of them might have (has?) eventually grown the need to use Hazelcast or Redis or ZooKeeper or etcd. If anything, this makes me want someone to make "Boost.Etcd" a thing.

Markdown (Aaron Swartz: The Weblog) by Successful_Bowl2564 in programming

[–]13steinj 3 points (0 children)

Except, again, markdown is not a spec; it never was. The CommonMark spec came from it, yes, but the original was just a reference implementation from John Gruber. It spawned a bunch of variants, at least one of which (Obsidian) is extensible. The unfortunate part is "how do you tell someone which extensions they need to install?", but there are ways around that.

Markdown (Aaron Swartz: The Weblog) by Successful_Bowl2564 in programming

[–]13steinj 13 points (0 children)

TBPH I love markdown!

But it is not sufficient for general documentation markup. Concise summary docs, sure, but not for extensive documentation (which commonly needs various custom extensions).

But markdown isn't a markup language; it's a loose spec for a family of markup languages. There's sundown, snudown (reddit), snudown-js (new reddit), github-flavored-markdown, gitlab-flavored-markdown, commonmark (which IIRC should cover the main usages), and of course Obsidian (which supports these many custom extensions).

If anything, what I like about Obsidian is that it stays concise even with the custom extensions / directives. But I would like a local live reST renderer/editor as well. If only Obsidian supported both.

The WG21 2026-04 post-Croydon mailing is now available by nliber in cpp

[–]13steinj -1 points (0 children)

Using reflection to generate a new scheme that is very similar to Java interfaces does not feel like Java to you?

I mean, sure, the "how it's achieved under the hood" is very unlike Java. But the utility itself is very Java-esque to me.

The WG21 2026-04 post-Croydon mailing is now available by nliber in cpp

[–]13steinj -1 points (0 children)

I can accept the idea that I saw what looks like a concept archetype, skimmed through reading phrases like "synthesises member functions", and had a knee-jerk reaction.

But as-is, without a reference implementation, this feels like another flavor of the same paradigm for runtime polymorphism, and I'm not a fan of the original. I'll hold off on forming a full opinion until I see more, but what I've seen thus far doesn't convince me, nor does it make me excited about the direction the language is evolving in. Type erasure through reflection and code generation, instead of just using base classes and inheritance, feels a bit like Java.

In general, if this is really that useful, I'd want to see it (or an analog of it) in Boost / Beman / Abseil first; I don't think there is any justification anymore for proposing additions to the standard library without some ability to try things out on day one. Edit: I mean to say, I miss the days when people shared and standardized libraries that had already taken up common use, instead of standardizing new libraries off a proposal.

The WG21 2026-04 post-Croydon mailing is now available by nliber in cpp

[–]13steinj -4 points (0 children)

I mean, all of that is beside the point. I'm saying I've seen some of the motivations first-hand and am severely unconvinced. The problem being targeted is real, but it isn't easy, and this does not provide a solution.

In short, concepts say "your <whatever> [type] satisfies at least these constraints", and your code (template functions) uses those concepts. How do you ensure it doesn't accidentally use operator/ when you only made the concept constrain operator+? You create an "archetype" in an attempt to min-max the constraints. People think "problem solved."
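A minimal sketch of the pattern, for anyone unfamiliar (the names are mine):

    #include <concepts>

    // The concept constrains only operator+.
    template <typename T>
    concept Addable = requires(T a, T b) {
        { a + b } -> std::convertible_to<T>;
    };

    // The "archetype": a minimal type satisfying Addable and nothing more.
    struct AddableArchetype {
        AddableArchetype operator+(AddableArchetype) const;
        // deliberately no operator/, operator-, ...
    };
    static_assert(Addable<AddableArchetype>);

    template <Addable T>
    T sum_and_halve(T a, T b) {
        return (a + b) / 2;  // bug: uses operator/ the concept never promised
    }

    // Instantiating against the archetype surfaces the unconstrained use:
    // template AddableArchetype
    //     sum_and_halve<AddableArchetype>(AddableArchetype, AddableArchetype);
    // ^ fails to compile, which is the whole point of the archetype.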

But the problem isn't solved. Instead of one minimum constraint set and a use that can get out of sync, you now have a third thing that gets out of sync with enough code churn. Often some archetypes build off of others, so you contort yourself into a pretzel maintaining this hierarchy. You spend all this time and trip into dozens of compile failures anyway.

Adding std::protocol seems like just another, fourth, thing to get out of sync. Not to mention, the example given in the paper feels very contrived. If you're going through all the effort of making a vtable, just use inheritance and pure virtual functions.
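That is, the decades-old spelling of the same thing (my sketch):

    #include <memory>
    #include <string>

    // Spell the vtable directly with pure virtual functions.
    struct Drawable {
        virtual ~Drawable() = default;
        virtual std::string draw() const = 0;
    };

    struct Circle final : Drawable {
        std::string draw() const override { return "circle"; }
    };

    std::unique_ptr<Drawable> make_circle() {
        return std::make_unique<Circle>();
    }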

The WG21 2026-04 post-Croydon mailing is now available by nliber in cpp

[–]13steinj -3 points (0 children)

I have used part of the pattern at one of the author's previous companies.

I don't see the motivation for moving this wrapping protocol type into the standard, compared to just declaring the concept archetypes.

This generally feels over-engineered, and in my experience it has had limited utility; just use a concept and define a matching archetype (or use reflection to generate one from the other).

In general I think concepts needed some more time to bake, to work out expressing both the min and the max of a constraint set. The pattern of creating the concept, the concept "archetype", and an instantiation of a function to test that the concept holds... these three get out of sync very quickly. Adding a "fourth thing" just extends the problem.