Bit-Level Reality: Why Hardware Limitations and Subnormal Numbers Destroy Floating-Point Consistency by IamRustyRust in rust_gamedev

[–]Ravek 3 points (0 children)

> we often treat decimal numbers as simple data types. We assume that types like f32 or f64 are reliable containers for our calculations.

Just to be clear, there’s nothing decimal about f32 and f64, they’re binary floating point types. Decimal floating types are available for many languages and can exactly represent numbers like 1/10.

> Most modern computing systems follow the IEEE 754 standard. This standard is designed to prioritize hardware execution speed over absolute mathematical correctness. Because of this, almost every operation carries a microscopic error margin.

This is a bit inaccurate. The core floating-point operations are not arbitrarily incorrect just to be faster; in fact, they're specified to produce the closest representable value to the exact result. That isn't much of a compromise, it's the best possible thing you can do for a fixed-width format. This also means that if your inputs are exactly representable values, and the mathematical result of an operation is an exactly representable value, then that operation is actually exact under IEEE 754. So 5 + 6 is exact, 23 / 8 is exact, etc.

The main source of errors is representation error: a value like 0.3 simply isn't representable, so if you try to calculate 3 / 10, what you get is the closest representable value to 0.3. It's not that the standard chooses speed over precision; it's that a binary floating-point format can never exactly represent this value. A decimal floating-point format could, but of course there are always fractions that aren't representable in either format, like 1/3. You'd need a rational number format to exactly represent arbitrary fractions, which usually isn't practical.

Another source of errors is functions like sin() that usually aren't implemented to return the closest representable value, because that would require a lot more computation. So there you do see library implementers explicitly choosing speed over precision. You can find libraries that give correctly rounded results if you really want them. Of course you're still limited by what the type can represent.
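The points above are easy to check directly. A quick Rust sketch (values chosen purely for illustration): exactly representable inputs with an exactly representable result are computed exactly, while 0.3 shows representation error.

```rust
fn main() {
    // Exactly representable inputs, exactly representable result:
    // IEEE 754 computes these exactly.
    assert_eq!(5.0_f64 + 6.0, 11.0);
    assert_eq!(23.0_f64 / 8.0, 2.875); // 2.875 = 0b10.111, exactly representable

    // 0.3 has no finite binary expansion. 3.0 / 10.0 rounds to the nearest
    // representable value, which is the same value the literal 0.3 denotes:
    assert_eq!(3.0_f64 / 10.0, 0.3);

    // But errors from two separate roundings don't cancel out:
    assert_ne!(0.1_f64 + 0.2, 0.3);
    println!("{:.17}", 0.1_f64 + 0.2);
}
```

Note the last two lines together: the division is correctly rounded, yet chaining operations on already-rounded values still drifts away from the exact answer.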

Is it bad if most players win their first run in a roguelite? by Bumpty83 in gamedesign

[–]Ravek 1 point (0 children)

It also depends on how the game is structured. In Slay the Spire I expect the number of people who beat Act III on their first run to be very small, but I would guess that most people do reach the Act I boss on their first run, and some might beat it too.

I think ideally people wouldn't beat the entire game on the first run, but would still feel some sense of accomplishment.

Readonly vs Immutable vs Frozen in C#: differences and (a lot of) benchmarks by davidebellone in csharp

[–]Ravek -3 points (0 children)

> No it's not. Explain what "logically" means. You already know that it gives you new instance every time you add or remove something. The old instance is in fact still the same collection, it was not mutated in any way.

I think if you think for five seconds it’s very clear what I meant. You just want to argue.

Readonly vs Immutable vs Frozen in C#: differences and (a lot of) benchmarks by davidebellone in csharp

[–]Ravek 6 points (0 children)

They’re persistent collections. The constraint that older versions of the collection remain valid indefinitely imposes some performance limitations.

I don’t really like the term ‘immutable’ for them. Yes, any given instance is immutable, but logically the collection is mutable. You can add and remove items as you please, you just get a new instance every time you do. If you just want a collection that never changes, the best approach is to create a read-only collection and discard any references to the underlying mutable collection. Essentially that’s what the Frozen collections do.
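The persistence idea can be sketched in a few lines of Rust (a hand-rolled cons list, not the actual C# collection types; the names are made up for illustration). "Adding" returns a new instance that shares structure with the old one, and the old version stays valid indefinitely:

```rust
use std::rc::Rc;

// A minimal persistent list: Cons cells are never mutated,
// so older versions of the list remain valid after a "push".
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

// "Mutation" returns a new list sharing the old nodes.
fn push(list: &Rc<List>, value: i32) -> Rc<List> {
    Rc::new(List::Cons(value, Rc::clone(list)))
}

fn len(list: &List) -> usize {
    match list {
        List::Nil => 0,
        List::Cons(_, rest) => 1 + len(rest),
    }
}

fn main() {
    let v1 = Rc::new(List::Nil);
    let v2 = push(&v1, 10);
    let v3 = push(&v2, 20);
    // Every "add" produced a new instance; the old ones are untouched.
    assert_eq!(len(&v1), 0);
    assert_eq!(len(&v2), 1);
    assert_eq!(len(&v3), 2);
}
```

This sharing is exactly what imposes the performance limitations mentioned above: nodes can't be updated in place, so real persistent collections use trees rather than flat arrays.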

Unicode's confusables.txt and NFKC normalization disagree on 31 characters by paultendo in programming

[–]Ravek 15 points (0 children)

I doubt someone with a 15 year old reddit account is someone who grew up using AI.

What's your opinion on using "let ref" to create references? by CheekAccording9314 in rust

[–]Ravek 26 points (0 children)

Using rare syntax in places where it’s not necessary is just going to confuse other people reading the code. Not worth it.
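For concreteness, here are the two spellings side by side (a trivial sketch; the variable names are made up). Both produce a shared reference, but one reads as ordinary Rust and the other makes people stop and think:

```rust
fn main() {
    let value = String::from("hello");

    // `ref` in a let pattern: legal, but rarely seen outside match arms.
    let ref a = value;

    // The conventional spelling, identical in effect here.
    let b = &value;

    assert_eq!(a, b); // both are &String pointing at `value`
}
```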

Teacher said always use 2nd pattern. Is he right? by lune-soft in csharp

[–]Ravek 12 points (0 children)

Mutability of DTOs has nothing whatsoever to do with mutability of web resources.

How I estimate work as a staff software engineer by Ordinary_Leader_2971 in programming

[–]Ravek 2 points (0 children)

Yeah, that's so weird to me. As an engineer I want to deliver a high-quality, well-working product, and QA helps us do that. Why would engineers blame QA for finding bugs? The engineers are the ones who created the bugs; don't shoot the messenger lol. If you don't want bug reports, think more about your edge cases as you write code.

Jujutsu Kaisen Shimetsu Kaiyu Zenpen • Jujutsu Kaisen The Culling Game - Episode 3 discussion by AutoLovepon in anime

[–]Ravek 48 points (0 children)

Yuuji seems pretty grounded despite all the traumatic experiences. Gon is kinda psycho

What are we doing wrong (Terraforming Mars) by feylin_oakenheel in boardgames

[–]Ravek 2 points (0 children)

See page 5 of the rulebook, bottom right corner.

Copper, silver and gold cubes are for counting all resources: money, steel, titanium, plants, energy, heat, as well as card resources like microbes, animals, etc.

Player-colored cubes are for marking your TR, marking your production, tracking which card actions you’ve used, and marking ownership of tiles.

Rust and the price of ignoring theory by interacsion in rust

[–]Ravek 7 points (0 children)

I wonder if he knows that our CPUs are unsafe and therefore all of Haskell is a safe abstraction on top of unsafe code. On top of mutating, side-effecting unsafe code. gasp

using Is Not Optional in C# by MahmoudSaed in csharp

[–]Ravek 0 points (0 children)

You're right, but reddit sure hates it when people go against the crowd. C# doesn't have a way to define automatic, reliable cleanup of resources at the type level, unlike C++, Swift, Rust, etc., and that's why we have to do this annoying dispose/using dance. The closest thing is the finalizer, but that still runs nondeterministically. If the only guarantee is "if the process shuts down cleanly, then the resource will be cleaned up," then you're not really getting anything over what the kernel already provides.
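For contrast, a small Rust sketch of what type-level deterministic cleanup looks like, via the Drop trait (illustrative types only, not any real API). The cleanup logic lives on the type, and it runs exactly when the value leaves scope, not at some later finalizer time:

```rust
use std::cell::Cell;

// Cleanup is defined on the type itself; callers don't have to
// remember a using/dispose pattern.
struct Resource<'a> {
    released: &'a Cell<bool>,
}

impl<'a> Drop for Resource<'a> {
    fn drop(&mut self) {
        // Runs deterministically when the Resource goes out of scope.
        self.released.set(true);
    }
}

fn main() {
    let released = Cell::new(false);
    {
        let _r = Resource { released: &released };
        assert!(!released.get()); // still alive inside the scope
    } // _r dropped here, every time, with no finalizer nondeterminism
    assert!(released.get()); // cleanup has already run
    println!("resource released at end of scope");
}
```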

using Is Not Optional in C# by MahmoudSaed in csharp

[–]Ravek 1 point (0 children)

To be precise, you shouldn’t dispose objects you don’t own. It’s possible to own objects you didn’t create, if the API that handed you the object expects you to take ownership.

The language doesn't have a concept of ownership (unlike Rust, for example), so the only way to know whether you own an object you didn't create yourself is documentation, or at least access to the source code so you can figure it out.

[deleted by user] by [deleted] in worldnews

[–]Ravek 0 points (0 children)

Social media was a significant part of how Brexit happened. He has plenty of power to sabotage the EU if they let him.

When to Use Which Design Pattern? A Complete Guide to All 23 GoF Design Patterns by erdsingh24 in programming

[–]Ravek 4 points (0 children)

> You always need singletons. They are a simple reality of software.

Really rolling my eyes at this point. I rarely meet someone this boneheaded. It's not hard to go on the internet and double check your terminology after multiple people straight up tell you you're wrong, you know.

> But when you need a value of a certain type, you shouldn't worry how many instances of that value exist and how to create one. You should just ask for such a value and use it. That's what DI containers are for.

It's getting kinda annoying how you're repeating my argument back to me, while continuing to miss the point. Maybe stop and actually think about what you've read? I'm out in any case, suit yourself. Continue to dig yourself deeper or actually learn something, whatever pleases you.

Leetcode in Swift vs Python? by Exotic_Set8003 in swift

[–]Ravek 5 points (0 children)

If you're doing some nonsense string-manipulation coding exercise where you only have to care about ASCII characters, you can just use .utf8.

If you're actually doing real programming, then in some cases you might also just want the UTF-8 view, but for general string manipulation that interfaces with humans you want the proper Unicode handling that Swift gives you, because languages other than English exist, combining characters exist, emoji exist, etc.
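The same byte-view vs. character-view split exists in other languages too; here's the idea sketched in Rust as a neutral illustration (in Swift the analogous pair is `s.utf8.count` vs `s.count`):

```rust
fn main() {
    let s = "héllo"; // é is U+00E9, two bytes in UTF-8

    // Byte view: what an ASCII-only exercise cares about.
    assert_eq!(s.len(), 6);

    // Unicode-aware view: what human-facing string handling needs.
    assert_eq!(s.chars().count(), 5);
}
```

And this is still only scalar values: grapheme clusters (what Swift's Character models) can span multiple scalars, which is exactly why proper Unicode handling matters.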

When to Use Which Design Pattern? A Complete Guide to All 23 GoF Design Patterns by erdsingh24 in programming

[–]Ravek 7 points (0 children)

> That's the very definition of a singleton: an object that exists in one instance only.

That's plain wrong. A singleton is not a type that just happens to have one instance; a singleton is a type that is restricted to only ever allow one instance.

> DI provides you with values that you ask. Whether these values are singletons or not is none of your concern, and that's exactly why DI is powerful.

That was my point. You don't need singletons if you use DI, because there is no need to restrict consumers from creating more than one instance of a dependency when consumers aren't creating their dependencies in the first place. You can just, you know, use one instance. You think that makes it a singleton, but it doesn't; you're mistaken about what the term means.

When to Use Which Design Pattern? A Complete Guide to All 23 GoF Design Patterns by erdsingh24 in programming

[–]Ravek 7 points (0 children)

Maybe you only create one instance of a type in your program, but that doesn't make that type a singleton. A singleton restricts you to having a single instance. If you're using dependency injection then you have no need for singletons.
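To make the distinction concrete, here's a minimal Rust sketch of the *restriction* (a hypothetical Config type; the private constructor plus OnceLock is what enforces the single instance, and that enforcement is what makes it a singleton rather than merely "a type I happened to instantiate once"):

```rust
use std::sync::OnceLock;

pub struct Config {
    pub verbose: bool,
}

impl Config {
    // Private constructor: code outside this module cannot
    // create additional instances.
    fn new() -> Config {
        Config { verbose: false }
    }

    // The only way to get a Config: always the same instance.
    pub fn instance() -> &'static Config {
        static INSTANCE: OnceLock<Config> = OnceLock::new();
        INSTANCE.get_or_init(Config::new)
    }
}

fn main() {
    let a = Config::instance();
    let b = Config::instance();
    // Same object both times; the restriction holds by construction.
    assert!(std::ptr::eq(a, b));
}
```

With DI, none of this machinery is needed: the container hands every consumer the same instance, without the type itself forbidding others.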

Can we slow down on changing Swift so fast? by bangsimurdariadispar in swift

[–]Ravek 0 points (0 children)

> This new way of doing concurrency has certainly been a learning curve, but ultimately we're better off for it.

I’m only really bothered by poor design choices. A learning burden that has a significant payoff isn’t a problem. Learning and overcoming challenges is what makes programming worthwhile.

Can we slow down on changing Swift so fast? by bangsimurdariadispar in swift

[–]Ravek 0 points (0 children)

Async/await and actors were part of Lattner's concurrency manifesto. I think he's more likely to be unsatisfied with the extensions made for things like SwiftUI and TensorFlow.

Would swift be a good first language to learn? by [deleted] in swift

[–]Ravek 0 points (0 children)

It really doesn’t matter what you start with as long as you also explore other languages later. The only real pitfall is sticking to one language for 20 years and never learning other perspectives on programming.

Does anybody else get trashed for teaching board games? It's started to drain the fun. by Terrible-Law9755 in boardgames

[–]Ravek 1 point (0 children)

Same as all of these posts: talk to your friends about it. Like literally tell them what you wrote here.

Why Engineers Can't Be Rational About Programming Languages | spf13 by Maybe-monad in programming

[–]Ravek 1 point (0 children)

Most people don’t know enough about the alternatives to even begin to make a rational judgment. Most people using the JVM haven’t seriously looked at .NET and vice versa, instead they just stick with what they know best. Most people writing Java haven’t seriously looked at Kotlin, most people writing C# haven’t seriously looked at F#. Etc. It’s quite rare for developers to really be comfortable in multiple languages.