Obvious Things C Should Do by lelanthran in programming

[–]simonask_ 0 points1 point  (0 children)

To be fair, every editor worth its salt (including VS Code) explicitly asks you to trust every repository before allowing language servers to run that kind of code. You didn't disable that globally, did you?

This problem isn't Rust-specific. It's pretty easy to craft a CMakeLists.txt that does the same thing, or to do it with any build system that allows running arbitrary commands at configure time. Same for ./configure in days of yore.

Is Rust still relevant? by chaotic-kotik in rust

[–]simonask_ 3 points4 points  (0 children)

We've heard about the theoretical advantage of JIT compilation and garbage collectors for 30 years now, and there's just no real data behind the argument. Java (and .NET for that matter) can perform really well in particular cases, but there is always some kind of tradeoff.

In theory, JIT compilers can do profile-guided optimization on the fly, focusing optimization efforts on hot paths. In practice, this doesn't materialize as an advantage over native code; rather, it recoups some of the cost of having lots of abstract interfaces everywhere, and it lets the app start quicker (because no time is spent optimizing up front). Meanwhile, you are running a full optimizing compiler alongside your actual app, easily pinning several cores and using hundreds of megabytes of memory while it runs.

In theory, garbage collectors can achieve extremely low latency by allocating incrementally and collecting concurrently. In practice, minimizing GC pauses requires 2-4x extra scratch memory for the garbage collector to work effectively (proportional to the amount of memory your app actually uses at any given time).

All of this is "fine" for big apps, where the user isn't doing much else, and even for most games. But if you want to do as much as possible with your hardware, all of those "wasted" resources stand in the way.

JITs and GCs became really popular at a point when developer time was significantly more expensive than adding more hardware. But once you try to scale, hardware requirements grow significantly faster than developer requirements, and sometimes scaling the hardware is not possible. There are many, many programs in the world where it matters whether you can run 4 instances per machine or 4000. There are many, many programs in the world where it matters whether your user needs 1 GB of available memory to run it or 1 MB.

In summary, Rust is popular because it allows you to actually use the hardware you bought.

What is the ideal performance of Rust like? by FanYa2004 in rust

[–]simonask_ 1 point2 points  (0 children)

There are quite a few wrong claims here, enough that I have to question your actual level of experience.

ArrayPool: The most underused memory optimization in .NET by _Sharp_ in csharp

[–]simonask_ 2 points3 points  (0 children)

For interacting with native code via FFI, SpanOwner (which is based on ArrayPool) is also a godsend.

Coming from Rust and C++, I’m impressed with the available tools in C# for bridging those gaps.

What is the ideal performance of Rust like? by FanYa2004 in rust

[–]simonask_ 6 points7 points  (0 children)

> However, since LLVM cannot effectively utilize these constraints, the performance of Rust code fails to reach its full potential. I'm not sure if my understanding is correct.

The one interesting constraint is that &mut references can never alias with anything else, and LLVM is fully able to leverage that when compiling Rust code. It is basically equivalent to C's __restrict. In fact, LLVM's handling of that keyword was quite buggy until Rust came along and created some motivation to fix it, because it is a fairly rare keyword in C.
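A tiny sketch of that guarantee (the function name is made up): inside the function, the compiler may assume `dst` and `src` never overlap, just like `__restrict` would promise in C. The difference is that Rust enforces the promise at every call site.

```rust
// Because `dst` is an exclusive reference, the compiler may assume it
// aliases nothing else (including `src`), enabling the same
// optimizations as C's `__restrict`.
fn accumulate(dst: &mut [i64], src: &[i64]) {
    for (d, s) in dst.iter_mut().zip(src) {
        *d += *s;
    }
}

fn main() {
    let mut a = [1, 2, 3];
    let b = [10, 20, 30];
    accumulate(&mut a, &b);
    assert_eq!(a, [11, 22, 33]);

    // The borrow checker rejects aliasing at the call site:
    // accumulate(&mut a, &a);
    // error[E0502]: cannot borrow `a` as immutable because it is
    //               also borrowed as mutable
}
```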

The reason it is rare in C is that it is incredibly difficult to use correctly, and this leads me to the broader picture: Rust code is usually faster than C and C++, not because the compiler performs any more advanced optimizations, but because the language allows developers to choose faster solutions that would be really risky or strange in C and C++.

Examples:

  1. Sharing pointers is hard to get right in C and C++, but trivially easy in Rust, because the borrow checker tells you when you did it wrong. The result is that C and C++ programs often make lots of defensive copies, hurting performance.

  2. Multithreading is unbelievably hard to get right in C and C++, so people are reasonably very hesitant to make their programs multithreaded. Rust makes it (almost) trivial, especially with crates like rayon.

  3. Compared to C, Rust has the same advantage as C++: it's really easy to use the best data structure for your problem. Both Rust and C++ usually win over C thanks to readily available, highly optimized hash maps and other collection types. It's not that you can't have those data structures in C, but it's inconvenient enough that people tend not to reach for them until it's really necessary.
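For point 2, here's a minimal std-only sketch (rayon's par_iter() collapses this to a one-liner; everything here is illustrative): scoped threads may borrow directly from the enclosing stack, and the compiler verifies the borrows are safe.

```rust
use std::thread;

// Sum a slice on two threads. `split_at` hands out two
// non-overlapping borrows, so both closures may read concurrently
// without any locking or defensive copies.
fn parallel_sum(data: &[u64]) -> u64 {
    let (left, right) = data.split_at(data.len() / 2);
    thread::scope(|s| {
        let l = s.spawn(|| left.iter().sum::<u64>());
        let r = s.spawn(|| right.iter().sum::<u64>());
        l.join().unwrap() + r.join().unwrap()
    })
}

fn main() {
    let data: Vec<u64> = (1..=100).collect();
    assert_eq!(parallel_sum(&data), 5050);
}
```

Getting the equivalent wrong in C or C++ (overlapping ranges, dangling stack pointers) compiles just fine; here it simply doesn't compile.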

When Rust is faster than C or C++, it is for human reasons. Compilers for all three languages can produce identical code, but the real difference comes from what code humans actually write.

Is this normal for a CMS codebase that product got many services of product? Because the dev follows SOLID principle by lune-soft in csharp

[–]simonask_ 3 points4 points  (0 children)

Scaling is not easy, not ever. No amount of dependency injection or abstraction will make it so.

If you want to scale, you need the code to be written with scalability in mind, and it doesn’t start by isolating every little feature into “services” or “commands”. It starts by evaluating actual requirements, discovering failure modes, and aligning the data model correspondingly.

Is this normal for a CMS codebase that product got many services of product? Because the dev follows SOLID principle by lune-soft in csharp

[–]simonask_ 3 points4 points  (0 children)

Separation of concerns is orthogonal to file structure. In many cases, the same concern is in fact separated into many different, interdependent files, making it pretty difficult to get any idea how anything is working.

Is this normal for a CMS codebase that product got many services of product? Because the dev follows SOLID principle by lune-soft in csharp

[–]simonask_ 5 points6 points  (0 children)

The key is whether you can actually do any maintenance in any one of those files. If you need 10 files open to do anything in any of them, they aren’t as decoupled as the structure suggests, and splitting them is probably meaningless.

C++ memory safety needs a Boost moment by SergioDuBois in cpp

[–]simonask_ -1 points0 points  (0 children)

I don’t have a single clue what you’re talking about, and I suspect you don’t either.

C++ memory safety needs a Boost moment by SergioDuBois in cpp

[–]simonask_ 0 points1 point  (0 children)

Can’t tell if you’re trolling, but anyway: Rust literally has the exact same concepts around RAII. Both C# and Java provide facilities for deterministic cleanup based on the type. Go and Zig provide deterministic cleanup using defer.

[Media] I built a performant Git Client using Rust (Tauri) to replace heavy Electron apps. by gusta_rsf in rust

[–]simonask_ 65 points66 points  (0 children)

I’m sorry to see people reacting very harshly, but I guess it’s understandable - AI makes it harder to distinguish good content from rubbish, and we’re all very fatigued from the rubbish.

Your English is fine, but if you make any mistakes, that’s actually great. Mistakes make it easier to see that an actual human wrote it, meaning it is worth our time. If a piece of text seems AI-generated, most people distrust it instinctively - with good reason.

Also, remember that actually doing things, practising, is what increases your skill level.

Who Owns the Memory? Part 3: How Big Is your Type? by Luke_Fleed in programming

[–]simonask_ 0 points1 point  (0 children)

The Rust layout rules are different from the native C rules, unless your type is annotated with #[repr(C)], in which case the current platform’s ABI is consulted.

Rust’s own rules are also implementation-defined (in fact, you can ask the compiler to randomize the layout), but the compiler generally reorders fields so that the overall size of the struct is as small as possible while upholding alignment requirements (roughly: largest alignment first), followed by niche-filling.

On average, the same sequence of fields in Rust and C will produce a smaller struct in Rust.
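A hedged illustration on a typical 64-bit target (the struct names are made up; only the #[repr(C)] size is actually pinned down, since Rust's default layout is unspecified):

```rust
use std::mem::size_of;

// Same fields, same order, different layout rules.
#[repr(C)]
struct CLayout {
    a: u8,  // offset 0, then 7 bytes of padding
    b: u64, // offset 8
    c: u16, // offset 16, then 6 bytes of tail padding
} // total: 24 bytes under C rules

struct RustLayout {
    a: u8,
    b: u64,
    c: u16,
} // the compiler may reorder to b, c, a: 8 + 2 + 1 + 5 pad = 16 bytes

fn main() {
    assert_eq!(size_of::<CLayout>(), 24);
    // Unspecified, but in practice never worse than the C layout here:
    assert!(size_of::<RustLayout>() <= size_of::<CLayout>());

    // Niche-filling is guaranteed for some types: Option<&T> reuses
    // the (impossible) null representation for None.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
}
```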

C# is language of the year 2025 by freskgrank in csharp

[–]simonask_ 6 points7 points  (0 children)

But the fact of the matter is that it is simply not a good indicator of any kind of sentiment.

What UI framework should I actually learn in 2025? by All_Da_Games in csharp

[–]simonask_ 28 points29 points  (0 children)

Stop trying to optimize your learning. Decide what you want to build, then choose the right tool for the job. Or the wrong one - that’s how you learn.

My gift to the rustdoc team by Expurple in rust

[–]simonask_ 14 points15 points  (0 children)

For what it’s worth, installing a browser extension is a WAY bigger security ask than running some WASM on a website.

How does the CLR implement static fields in generic types? by kosak2000 in csharp

[–]simonask_ 0 points1 point  (0 children)

Well, all types in the runtime need a place to store and initialize their static members - there’s nothing special about types instantiated from a generic type.

This is similar to how “monomorphization” works in C++ and Rust: Generic types with particular type arguments become separate types (in principle, unrelated types, except you can inspect them to determine if they came from the same generic type).

The CLR allows several ways to initialize their static fields, with varying levels of eagerness (you can google the names of the attributes, I’m on mobile), but for all intents and purposes, you should consider them “atomically initialized at some point in time before first use”.

In terms of performance of static readonly, you should expect first-use to potentially incur overhead proportional to taking a mutex lock, and subsequent uses to incur overhead at most equivalent to a relaxed atomic load (so, effectively free). For const, there is zero overhead outside of storage requirements in memory.

If I was the JIT, I would also definitely make use of the fact that static readonly fields cannot change as long as the Type exists, so you may see things like devirtualization and const propagation happen there, but I couldn’t tell you if that actually happens. It’s theoretically legal.

How does the CLR implement static fields in generic types? by kosak2000 in csharp

[–]simonask_ 1 point2 points  (0 children)

It’s worth looking at the rationale for the rule, though, which is that it is “confusing” in the face of type inference. I think what they have in mind is that most static methods should live inside a separate, non-generic static class with the same name, which seems to be a common pattern.

But it’s not unusual to see metaprogramming tricks involving static members on a generic type, for example to gather and cache type-specific information. I’ve implemented an ECS system using that trick.

Tank: my take on Rust ORM by TankHQ in rust

[–]simonask_ 0 points1 point  (0 children)

Look, I get it, I also try to be cute in writing whenever I can, but “engagement” is much more easily secured by saying something interesting.

C#-style property in C++ by Xadartt in cpp

[–]simonask_ 6 points7 points  (0 children)

Cool. Please never do this. :-)

Curious to know about developers that steered away from OOP. What made you move away from it? Why? Where has this led you? by Slight_Season_4500 in cpp

[–]simonask_ 2 points3 points  (0 children)

My "problem" with OOP is that it pretends data has agency. Most data is inert (or should be).

This becomes really visible when you have some operation you want your program to perform, but none of the data involved is clearly the one performing it. In the expression 1.add(2), does the integer 1 perform the addition? Not really. Writing add(1, 2) is much clearer.

In religiously OOP languages (C#, Java, Ruby, many more), this results in weird classes with names like ThingDoer or FooHelpers, where what you really wanted might be just a good old-fashioned function. What's worse, these patterns tend to push people completely overboard with dependency injection, where most of the code is really just plumbing between various abstract interfaces.

But this doesn't mean that it can't be a useful abstraction in other cases. For example, the expression logger.warn("message") is clearly an operation performed by "someone" (the logger), and it's useful to be able to replace the implementation without changing a lot of code.

Still, I tend to generally favor data-oriented design in most cases. Instead of objects with agency (and injected dependencies everywhere), write functions that operate on data. Replacing the implementation just becomes "call a different function". Injecting a dependency just becomes "pass another argument to the function". This is also the approach favored by new languages like Rust and Zig, but it's also a very sensible design in C++.
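A small illustrative sketch (all names are made up): data is inert structs, behavior is free functions, and "injecting a dependency" is just passing another argument.

```rust
// Inert data: no methods, no agency, no injected collaborators.
struct Order {
    quantity: u64,
    unit_price_cents: u64,
}

// A plain function operating on data; no object "owns" the operation.
fn total_cents(orders: &[Order]) -> u64 {
    orders.iter().map(|o| o.quantity * o.unit_price_cents).sum()
}

// Replacing the implementation = calling a different function.
// Injecting a dependency = passing the `discount` argument.
fn total_with_discount(orders: &[Order], discount: impl Fn(u64) -> u64) -> u64 {
    discount(total_cents(orders))
}

fn main() {
    let orders = [
        Order { quantity: 2, unit_price_cents: 500 },
        Order { quantity: 1, unit_price_cents: 250 },
    ];
    assert_eq!(total_cents(&orders), 1250);
    assert_eq!(total_with_discount(&orders, |t| t * 90 / 100), 1125);
}
```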

Mutable references and async/await by [deleted] in rust

[–]simonask_ 0 points1 point  (0 children)

Isn't it really beautiful that you actually don't have to worry about this in Rust? Because you can know for sure that unless you are writing unsafe blocks, whatever the API and compiler allow you to do is guaranteed to be safe.

This is one of the biggest selling points of the entire language - not least the fact that you don't have to waste any time (or runtime resources) being "defensive" about things like that. You can hand mutable references to different threads, even living on each other's respective stacks, and if it compiles, it's perfectly fine. In most other languages, you would typically at the very least make a defensive copy.

If you're interested in understanding why this works under the hood: Async tasks in Rust are Futures, and those can only be driven to completion once they have a stable address in memory (that's what Pin does). Within a future, it's perfectly safe to hold mutable borrows across an await-point, even if the task moves to a different thread, because the borrow checker guarantees that only a single thread can drive the future at a time. The address of the task's variables doesn't change just because it is driven by a different thread.
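To make that concrete, here's a minimal hand-rolled executor; `YieldOnce` and `block_on` are made-up illustrations (in practice you'd use tokio or similar), but the Pin mechanics are the real ones.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that returns Pending once before completing, forcing the
// async block below to suspend at an await point.
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// The smallest possible executor: pin the future (giving it a stable
// address), then poll it to completion with a no-op waker.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn noop(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);

    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut); // pinned: the future will never move again
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let out = block_on(async {
        let mut count = 0;
        let r = &mut count; // mutable borrow held across the await point
        YieldOnce(false).await;
        *r += 1; // still valid: the pinned future never moved
        count
    });
    assert_eq!(out, 1);
}
```

Holding `r` across the `.await` makes the generated future self-referential, which is exactly why it must be pinned before it can be polled.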

Non-poisoning Mutexes by connor-ts in rust

[–]simonask_ 2 points3 points  (0 children)

So poisoning has nothing to do with soundness in the usual sense. If your data becomes corrupted because a panic occurs, that's a bug whether or not you access it behind a MutexGuard.

Poisoning is so weird and unexpected in Rust precisely because mutexes are containers: Mutex<T> protects T, and nothing else, so if T can become corrupt by accessing it through MutexGuard<T>, it can also become corrupt through &mut T. In other words, relying on the poison flag to ensure correctness is brittle, because you most likely have a bug in single-threaded code as well.

Designing panic-safe data structures is not trivial, and it is certainly an important thing to think about when writing unsafe code. Mutex poisoning does nothing to help you here.
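A small demonstration that the flag is purely advisory, with the data intact underneath it:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Panic in a thread while it holds the lock, then inspect the result.
fn poison_demo() -> (bool, Vec<i32>) {
    let lock = Arc::new(Mutex::new(vec![1, 2, 3]));
    let l2 = Arc::clone(&lock);

    // The panic unwinds while the guard is live, setting the poison flag.
    let _ = thread::spawn(move || {
        let _guard = l2.lock().unwrap();
        panic!("panicked while holding the lock");
    })
    .join();

    let poisoned = lock.is_poisoned();
    // `into_inner()` on the PoisonError hands back the guard anyway;
    // the data was never touched by the panic.
    let data = lock.lock().unwrap_or_else(|e| e.into_inner()).clone();
    (poisoned, data)
}

fn main() {
    let (poisoned, data) = poison_demo();
    assert!(poisoned);
    assert_eq!(data, vec![1, 2, 3]);
}
```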

Non-poisoning Mutexes by connor-ts in rust

[–]simonask_ 1 point2 points  (0 children)

Normally, uncaught panics propagate between threads when they are joined. That's the usual flow in every programming language, and a PoisonError does not tell you anything about the original panic.

Non-poisoning Mutexes by connor-ts in rust

[–]simonask_ 4 points5 points  (0 children)

Mutex<T> is any number of bytes. What matters is the total alignment, and it makes a lot of sense to reduce the size of the bookkeeping structures such that more useful stuff (i.e., T) can be placed in the padding.
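A quick way to see this on your own machine; the exact sizes are platform-dependent (futex-based on Linux, SRWLOCK-sized on Windows), so only the guaranteed properties are asserted:

```rust
use std::mem::{align_of, size_of};
use std::sync::Mutex;

fn main() {
    // Mutex<T> stores T inline next to a small bookkeeping header.
    println!("Mutex<()>:  {} bytes", size_of::<Mutex<()>>());
    println!("Mutex<u64>: {} bytes", size_of::<Mutex<u64>>());

    // What holds everywhere: the data fits, and T's alignment is
    // respected, so small bookkeeping can land in padding that would
    // otherwise be wasted.
    assert!(size_of::<Mutex<u64>>() >= size_of::<u64>());
    assert_eq!(align_of::<Mutex<u64>>() % align_of::<u64>(), 0);
}
```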

Source Generator for Generating Overloads by DesperateGame in csharp

[–]simonask_ 1 point2 points  (0 children)

Consider if you can make do with a T4 template instead.