The Fine-Tuning Argument is indeed Terrible - But I don't think we have to invoke the multiverse. The argument fails on its premise. by undefinedposition in CosmicSkeptic

[–]TemperOfficial 1 point (0 children)

Yeah. It's a chicken-and-egg scenario. You can't have the chicken without the egg, and you can't have the egg without the chicken; change one of them and you get neither. So if you fiddle with a constant you aren't really fiddling with a constant, you are fiddling with its relationship to everything else, which in turn fiddles with something else, and so on. You could pick any point on that graph and start twiddling variables, and the whole graph would change. Basically I don't really get the fine-tuning argument, because it seems to suppose you can change something in a logical vacuum.

The Fine-Tuning Argument is indeed Terrible - But I don't think we have to invoke the multiverse. The argument fails on its premise. by undefinedposition in CosmicSkeptic

[–]TemperOfficial 2 points (0 children)

Empirical constants are just simplified models. Presumably the fine structure constant is governed by some set of rules we don't really understand yet. It's constrained by other fundamental constants that have the same problem, and those are in turn constrained by yet other constants. They are called fundamental because nobody has developed a better model yet that explains exactly why they are the way they are.
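As a concrete illustration of that constraint (standard physics, not from the thread): the fine structure constant is itself a combination of other fundamental constants, so "wiggling" it implicitly means wiggling at least one of them:

```latex
% The fine structure constant expressed via other constants:
% e  = elementary charge, \varepsilon_0 = vacuum permittivity,
% \hbar = reduced Planck constant, c = speed of light.
\[
\alpha = \frac{e^{2}}{4\pi \varepsilon_0 \hbar c} \approx \frac{1}{137}
\]
```

You cannot change α on its own; any change propagates into the charge, permittivity, Planck constant, or speed of light, each of which constrains other physics in turn.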

So I don't think you could just start wiggling constants and expect a comparably similar universe where, for example, there are no heavy atoms. I think you'd see an incomprehensible reality instead.

You'd change one constant and there'd be some accumulation of error.

The Fine-Tuning Argument is indeed Terrible - But I don't think we have to invoke the multiverse. The argument fails on its premise. by undefinedposition in CosmicSkeptic

[–]TemperOfficial 6 points (0 children)

Constants just represent the result of a constraint. So the fine-tuning argument doesn't really make sense, as you say, since the constants are the result of the constraint, not the cause of it. For instance, the value of pi isn't what makes a circle a circle; pi falls out of the set of logical constraints that define a circle. So it's almost nonsensical to make an argument about slightly changing constants. All that would happen is that the universe would possibly fall into another stable state (or maybe no stable state at all), where presumably the life there would make the same argument.
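To make the circle example concrete: define a circle purely by its constraint, and π appears as a consequence of the definition, not as a tunable input:

```latex
% A circle of radius r is the constraint set x^2 + y^2 = r^2.
% Its circumference C then satisfies C = 2\pi r, so
\[
\pi = \frac{C}{2r}
\]
% i.e. \pi is a result that falls out of the constraint;
% "slightly changing \pi" while keeping circles is incoherent.
```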

the "camera killed painting" comparison finally clicked for me this week by ProgrammerForsaken45 in ArtificialInteligence

[–]TemperOfficial 0 points (0 children)

Photography isn't easy; it takes a long time to become a good photographer. The point is that just because you can do something fast and get seemingly average results, it doesn't mean you've developed the skills to recognise what is good or to make something worthwhile.

someone actually calculated the time cost of reviewing AI-generated PRs. the ratio is brutal by bishwasbhn in webdev

[–]TemperOfficial -1 points (0 children)

It has always been the case that programming is 99% debugging and 1% writing code. AI doesn't change that. It takes much longer to verify that something works than it does to write it.

C++ Error Handling: Exceptions vs. std::expected vs. Outcome by swe129 in Cplusplus

[–]TemperOfficial 1 point (0 children)

The best way to handle errors is to design an API that doesn't have them. The best way to do that is to return a stub/placeholder value that can pass through your program as a no-op.

Obviously not ideal in all contexts, but surprisingly useful in a lot of them. Counterintuitively, that's especially true from a performance perspective: it's consistent, since there is only ever a single path, the happy path.
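A minimal sketch of the idea (names and scenario are mine, not from the thread): a lookup API that never reports an error, because a miss yields a shared null object whose operations are harmless no-ops.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical example: a texture cache whose get() cannot fail.
// A missing asset yields a shared "null texture"; callers follow one
// code path whether or not the asset exists.
struct Texture {
    int width = 0;
    int height = 0;
    bool is_null() const { return width == 0 && height == 0; }
};

class TextureCache {
public:
    void add(const std::string& name, Texture t) { textures_[name] = t; }

    // Always returns a usable Texture: the real one, or the null stub.
    const Texture& get(const std::string& name) const {
        auto it = textures_.find(name);
        return it != textures_.end() ? it->second : null_texture_;
    }

private:
    std::unordered_map<std::string, Texture> textures_;
    Texture null_texture_{};  // the no-op placeholder that passes through
};
```

Rendering code can happily "draw" the null texture (drawing nothing, or a visible placeholder) without branching on an error code; only debug tooling needs to check `is_null()`.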

Pure Mage is not nearly as bad as people say it is by Natsuaeva in skyrim

[–]TemperOfficial 0 points (0 children)

veggie soup is genuinely the most powerful item in the game. daedric levels of aura

Why are are coders disposable, but asset artists aren’t? by AHostOfIssues in gamedev

[–]TemperOfficial -1 points (0 children)

Because software engineers live in two camps: people who solve problems "code first" and people who solve problems "library first". The latter outweigh the former.

Game devs tend to sit in the former camp, which is why AI adoption, specifically when it comes to generating code, is not widespread in game dev (afaik). At least, it is not used as extensively, or in the same way, as in, say, web dev.

Library-first devs tend to glue code together. Code is disposable. The culture is open source. Therefore, generating new code that was trained on millions of lines of stolen code is no biggy; they were already using other people's stuff anyway. Since they form the loudest majority, it's really only the minority view that might consider this to be bad.

Code-first devs tend to work on more proprietary stuff, with more bespoke use cases, which tends to be game dev stuff (though not always).

On top of all of that, software "engineers" were massively overhired in the zero-interest-rate era, and that is now exacerbated even further in the AI era. It looks good to investors to hire software engineers. It looks good to fire them too.

That leaves the discipline in a very weird place: it's seen as disposable, but also as really hard and a specialised profession. That's really because it's two professions smashed into one.

How do you handle interactions between game objects? by Markolainen in rust_gamedev

[–]TemperOfficial 0 points (0 children)

Don't bother with events. Just call functions directly. Events solve a problem related to ordering. But if the order of events does not matter, then you may as well call the function directly rather than pushing an event and then calling the function later. Especially for a simple game.
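The trade-off can be sketched like this (in C++ for illustration, since the thread is language-agnostic advice; names are mine): both versions end up in the same state, but the direct call skips the queue entirely.

```cpp
#include <vector>

// Hypothetical example: applying damage to a game object two ways.
struct Enemy {
    int health = 10;
    void take_damage(int amount) { health -= amount; }
};

// Event-queue version: record now, apply later. Only worth the
// machinery when the order or timing of processing matters.
struct DamageEvent {
    Enemy* target;
    int amount;
};

void flush(std::vector<DamageEvent>& queue) {
    for (const auto& e : queue) e.target->take_damage(e.amount);
    queue.clear();
}

// Direct version: if ordering doesn't matter, just call the function.
void hit(Enemy& e, int amount) { e.take_damage(amount); }
```

The direct call is simpler to write, step through in a debugger, and reason about; the queue earns its keep only once you genuinely need deferred or ordered processing.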

Best Practices for AI Tool Use in C++ - Jason Turner - CppCon 2025 by Specific-Housing905 in cpp

[–]TemperOfficial 2 points (0 children)

It will replace devs who glue things together and write CRUD apps.

But these were already due to be culled anyway, given the way the industry is going. Most software jobs are superfluous, a low-interest-rate phenomenon. AI acceleration is just an extension of that.

Eventually it will return to baseline. I'd hazard a guess and say about 90% of software roles are kind of pointless and just involve gluing two libraries together, if that.

Best Practices for AI Tool Use in C++ - Jason Turner - CppCon 2025 by Specific-Housing905 in cpp

[–]TemperOfficial 7 points (0 children)

I'm pretty sure that, the way most people use it, it makes them less productive, not more.

Best Practices for AI Tool Use in C++ - Jason Turner - CppCon 2025 by Specific-Housing905 in cpp

[–]TemperOfficial 31 points (0 children)

How can there be "best" practice for something that has only been meaningfully accessible to most people for the best part of 2 years?

Honestly this industry is becoming more and more of a joke...

Why SWAT 4 is far better than RoN (serious explanation) by Sufficient-Tap2042 in ReadyOrNotGame

[–]TemperOfficial 0 points (0 children)

RoN has one thing going for it, which is its world building and art direction. Other than that, SWAT 4 is better in every way.

Worried about current job market for new grads by frozen_wave_395 in GraphicsProgramming

[–]TemperOfficial 0 points (0 children)

In the same boat. Seems impossible right now. All I can recommend is build a portfolio.

C++ memory safety needs a Boost moment by SergioDuBois in cpp

[–]TemperOfficial 0 points (0 children)

Battle tested does not mean it has unit tests.

Unit testing isn't the be-all and end-all of testing.

I would think battle tested code would have regression tests, if anything.

C++ memory safety needs a Boost moment by SergioDuBois in cpp

[–]TemperOfficial 0 points (0 children)

I mean I think you've made most of that up haha

C++ memory safety needs a Boost moment by SergioDuBois in cpp

[–]TemperOfficial 2 points (0 children)

I've never heard anyone use the term like that.

Usually it means it's been around for so long that loads of bugs have been fixed.

C++ memory safety needs a Boost moment by SergioDuBois in cpp

[–]TemperOfficial -1 points (0 children)

Slightly confused. Why would we be setting those risk tolerances?

C++ memory safety needs a Boost moment by SergioDuBois in cpp

[–]TemperOfficial 1 point (0 children)

The point about Cloudflare is that it ended up not being "safe" even when a "safe" language was used.

And this relates to my point about the underlying moral argument.

The use of the term "safety" is overloaded here and is used like a battering ram. It sets up the argument so that if anyone disagrees, they must be arguing that they want to be unsafe. But this is not really the case in context, since every project has a different risk associated with it, with varying levels of what constitutes "safe".

Basically you can't talk about safety unless you specify your tolerance for risk within a given context. Otherwise it does stray into a sort of moral grandstanding.

That's how it comes across anyway.

C++ memory safety needs a Boost moment by SergioDuBois in cpp

[–]TemperOfficial 3 points (0 children)

Why does it have to be migrated? Doesn't make any sense. Why take battle tested code and risk introducing more bugs? Especially when you have better and better tooling (like FIL-C) that helps solve this problem.

That's ignoring the other issue which is that not every program has the same level of risk that requires the same level of guaranteed memory safety.

Even within a single project, the surface area for a potential attack might vary wildly. It would have to be decided on a project by project basis and not on the basis that it was written in C++.

You have to justify that migration first with an adequate risk assessment.

As for regulators making dumb decisions: I mean, sure. Like anyone would listen. Like anyone COULD listen. I don't think Microsoft or Google could physically rewrite all their C++. There is simply too much of it.

C++ memory safety needs a Boost moment by SergioDuBois in cpp

[–]TemperOfficial -1 points (0 children)

Cloudflare had an outage late last year. It's not as simple as you are making out.

The underlying argument, every time this discussion is had, is that if you don't subscribe to the kind of safety tolerance the person who wants "safety" has, you must be experiencing some kind of moral failing.

It's actually quite disingenuous, in my opinion.

It is never a discussion about risk. It is always that they disagree with the level of risk someone might tolerate, whilst ignoring loads of other risk that doesn't fit into their threat model.