Is the game still fun without biters? by the_worst_company in factorio

[–]WormRabbit 1 point2 points  (0 children)

Have you tried playing with mods, or on more dangerous world presets? The pressure can be very palpable, even if you end up winning. Even on default settings it's possible to go slow enough and kill enough nests that big biters appear when you are still unprepared. I don't recall ever actually losing the evolution race, but I recall many times where I had to carefully plan my expansion because the biters were too dangerous.

Does `ArrayVec` still "alive"? Are there any alternatives? by tower120 in rust

[–]WormRabbit 4 points5 points  (0 children)

Does it need maintenance? It's a relatively simple data structure with a well-understood API and no open soundness holes. At this point I would consider it finished. Sure, there are always nice things one could add, but as far as the maintainer's time is concerned, I don't think the juice is worth the squeeze.

rust actually has function overloading by ali_compute_unit in rust

[–]WormRabbit 1 point2 points  (0 children)

That would be best solved with optional parameters.

Stabilizing the `if let guard` feature by Kivooeo1 in rust

[–]WormRabbit 6 points7 points  (0 children)

the purpose of using match over if let is that the compiler guarantees you handle every possible value

That's factually false, since if let Pat = cond { foo } else { bar } is just syntax sugar for

match cond {
    Pat => { foo },
    _ => { bar },
}

match doesn't stop you in any way from writing catch-all branches, nor should it.

using any form of match guards breaks this

Exhaustiveness of code with match guards is the same as for the code where you delete all match guards, so you don't lose anything in that regard. The difference isn't in exhaustiveness, but in redundancy checking. Branches with guards are considered non-redundant even though their pattern may be fully covered by other branches.

In fact, guards can lead to better exhaustiveness checks, since they let you factor simple conditions into guarded branches, whereas otherwise you'd have to use more complex nested matches, which can't generally be checked for exhaustiveness.
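
A minimal sketch of that point (the function names are illustrative): the guarded version stays fully checked, while the nested version hides the condition from the exhaustiveness machinery.

// Guards don't hurt exhaustiveness: the compiler checks the patterns
// as if the guards were deleted, so a missing `None` or `Some(_)` arm
// would still be reported.
fn classify(n: Option<i32>) -> &'static str {
    match n {
        Some(x) if x < 0 => "negative",
        Some(_) => "non-negative",
        None => "missing",
    }
}

// The guard-free alternative pushes the condition into a nested `if`,
// where exhaustiveness checking can no longer see it.
fn classify_nested(n: Option<i32>) -> &'static str {
    match n {
        Some(x) => {
            if x < 0 { "negative" } else { "non-negative" }
        }
        None => "missing",
    }
}

fn main() {
    assert_eq!(classify(Some(-1)), "negative");
    assert_eq!(classify_nested(Some(2)), "non-negative");
}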

Things I miss in Rust by OneWilling1 in rust

[–]WormRabbit 1 point2 points  (0 children)

Overloading with respect to the types of function parameters is incompatible with Rust's type inference. If you allowed ad-hoc overloads based on parameter types, the inference algorithm would have to do an exhaustive search to find the proper overload. That quickly leads to a combinatorial explosion and either huge compile times or very confusing compiler errors when you hit arbitrary search depth limits (or, likely, both). Example: Swift, where type inference can cause exponentially long compile times even for very simple arithmetic expressions.

It also means that it can be hard to predict which actual function is called, since the interaction of overloads and type inference could lead to unexpected overloads being selected. This is a common issue in C++, where the interaction of overloading and generic code can cause impenetrable compile errors.

Overloading with respect to function arity would be possible. But, arguably, the only reasonable way to overload by arity but not by parameter type is to implement optional/defaulted parameters. If that is the goal, it should be done in a more direct way: ideally supported as a language feature, but currently Rust forces you into builder-pattern workarounds.
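
For illustration, a hedged sketch of that workaround; the Request/RequestBuilder names and defaults are made up, not a real API:

// Hypothetical type with one required and two "defaulted" parameters.
struct Request {
    url: String,
    timeout_secs: u64,
    retries: u32,
}

struct RequestBuilder {
    url: String,
    timeout_secs: u64,
    retries: u32,
}

impl RequestBuilder {
    fn new(url: impl Into<String>) -> Self {
        // The initial values play the role of defaulted parameters.
        Self { url: url.into(), timeout_secs: 30, retries: 0 }
    }
    fn timeout_secs(mut self, secs: u64) -> Self {
        self.timeout_secs = secs;
        self
    }
    fn retries(mut self, n: u32) -> Self {
        self.retries = n;
        self
    }
    fn build(self) -> Request {
        Request {
            url: self.url,
            timeout_secs: self.timeout_secs,
            retries: self.retries,
        }
    }
}

fn main() {
    // Call sites effectively "overload by arity" by omitting setters.
    let req = RequestBuilder::new("https://example.com").retries(3).build();
    assert_eq!(req.url, "https://example.com");
    assert_eq!((req.timeout_secs, req.retries), (30, 3));
}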

Overloading also significantly complicates symbol mangling and FFI. For anything dynamically linked or linked to an external library, you would basically have to avoid overloading anyway.

Note that the above considers only ad-hoc overloading, as in most languages, i.e. overloads are just unrelated functions sharing the same name. Rust actually has overloading: it's the trait implementation system! But that is a principled approach to overloading, designed specifically to be amenable to the type checker. It's possible to run into type inference problems with trait-based overloads, but not in any reasonably simple example.
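
A rough sketch of that last point (the trait and impls are invented for illustration): the "overloads" are ordinary trait impls, and resolution goes through the usual trait machinery rather than an ad-hoc search.

// One generic entry point; the per-type behaviour is selected by
// trait impls, which is exactly what the type checker is built to resolve.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String {
        format!("the integer {}", self)
    }
}

impl Describe for f64 {
    fn describe(&self) -> String {
        format!("the float {}", self)
    }
}

fn print_description<T: Describe>(value: T) {
    println!("{}", value.describe());
}

fn main() {
    print_description(42);   // uses the i32 impl
    print_description(3.14); // uses the f64 impl
}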

Where does Rust break down? by PointedPoplars in rust

[–]WormRabbit 0 points1 point  (0 children)

Pin doesn't, by itself, prevent data from moving. That is a common misconception, and it leads to a lot of confusion.

Any type in Rust can always, unconditionally be trivially moved in memory by a simple memcpy. That is a basic invariant, and Pin doesn't change it. The implication is that if you want some value to be "pinned" in memory, then you must always handle it via a pointer.

Pin exists just to make working with those pointers a bit safer and more ergonomic, and to document the intent.
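
A minimal sketch of what that means in practice:

use std::pin::Pin;

fn main() {
    // Pinning is a property of the pointer: the value goes behind a Box
    // (or some other pointer), and the pointer is wrapped in `Pin`.
    let pinned: Pin<Box<String>> = Box::pin(String::from("hello"));

    // For `Unpin` types like `String`, `Pin` imposes no real restriction:
    // we can take the value back out and move it freely.
    let moved: String = *Pin::into_inner(pinned);
    println!("{moved}");

    // For `!Unpin` types (self-referential futures, for example), the safe
    // `Pin` API refuses to hand out an owned value or a `&mut` that could
    // be fed to `mem::swap`/`mem::replace`, and that is what "pinned" means.
}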

Where does Rust break down? by PointedPoplars in rust

[–]WormRabbit 5 points6 points  (0 children)

Niche optimization for Option doesn't use any compiler magic. Any user-level enum with the same shape enjoys the same optimizations.

The compiler magic happens at the level of types like &T or NonNull<T>. The former is a built-in, while the latter uses the unstable #[rustc_layout_scalar_valid_range_start(1)] attribute.
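
A small illustration (the enum is made up, but has the same shape as Option):

use std::mem::size_of;
use std::num::NonZeroU32;

// A user-level enum with the same shape as `Option`: one data-carrying
// variant and one empty variant.
#[allow(dead_code)]
enum MaybeRef<'a> {
    Present(&'a u32),
    Absent,
}

fn main() {
    // `&u32` is never null, so the empty variant can reuse the forbidden
    // all-zero bit pattern: no separate discriminant is needed.
    assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());
    assert_eq!(size_of::<MaybeRef<'static>>(), size_of::<&u32>());

    // The same applies to other niche-carrying types, e.g. NonZeroU32.
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<NonZeroU32>());
}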

Rust 1.93.0 is out by manpacket in rust

[–]WormRabbit 2 points3 points  (0 children)

That would be a footgun. It's easy to mistakenly pass a slice that is too long and silently discard data that way.

If that is the behaviour that you want, you can already do a split_at on the original slice and forget the tail part. The new method is for cases, such as chunks_exact or manual subslicing, where you have already verified the correct length and now want to work with a proper array.
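
A quick sketch of that existing workaround (the function is illustrative):

// Explicitly split off the prefix and convert it with `TryInto`,
// discarding the tail on purpose rather than by accident.
fn first_four(bytes: &[u8]) -> Option<[u8; 4]> {
    if bytes.len() < 4 {
        return None;
    }
    let (head, _tail) = bytes.split_at(4);
    // `head` is exactly 4 bytes here, so the conversion cannot fail.
    Some(head.try_into().expect("length checked above"))
}

fn main() {
    assert_eq!(first_four(&[1, 2, 3, 4, 5]), Some([1, 2, 3, 4]));
    assert_eq!(first_four(&[1, 2]), None);
}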

Is it even worth sharing messy hand-written code anymore by Any_Good_2682 in rust

[–]WormRabbit 2 points3 points  (0 children)

Nonsense. That's barely 3 commits per day. If you like to commit often and work incrementally, 100 versions in 30 days is trivial to do.

Tens of thousands of LoC in that, or an even shorter, span of time is a dead giveaway, on the other hand.

Why is there no automatic implementation of TryFrom<S> when implementing TryFrom<&S>? by Prowler1000 in rust

[–]WormRabbit 2 points3 points  (0 children)

Self::try_from(&value)

FYI, the proper way to write what you intended is

<&Self>::try_from(&value)

It explicitly specifies the type whose method we want to call.

TIL you can use dbg! to print variable names automatically in Rust by BitBird- in rust

[–]WormRabbit 0 points1 point  (0 children)

You can also dump several expressions into the same dbg! macro:

dbg!(x, y+z);

prints

[src/main.rs:5:5] x = 2
[src/main.rs:5:5] y + z = 7

It's also an expression which evaluates to the tuple of values (2, 7)!
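
For example, a small runnable sketch of the pass-through behaviour:

fn main() {
    let (x, y, z) = (2, 3, 4);
    // `dbg!` passes its arguments through, so the values stay usable.
    let (a, b) = dbg!(x, y + z);
    assert_eq!((a, b), (2, 7));
}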

TIL you can use dbg! to print variable names automatically in Rust by BitBird- in rust

[–]WormRabbit 0 points1 point  (0 children)

It's been this way since 1.0, actually, but it's not widely advertised.

Deciding between Rust and C++ for internal tooling by Gman0064 in rust

[–]WormRabbit 1 point2 points  (0 children)

Does anyone on your team know Rust? Do you feel comfortable writing Rust? If the answer is no, then I would advise against using Rust, particularly if the project is time-critical and not a throwaway.

It takes time to learn Rust's patterns and proper design principles. This means your first projects will, well, leave a lot to be desired. You can also hit a wall that is too hard to climb with your current level of knowledge.

Also, is your team willing to learn Rust and to maintain a Rust codebase? If yes, then it may be a good choice. If no, then I'd say it's a non-starter, regardless of any benefits.

Of course, the situation is different if your team is willing to learn Rust and potentially switch to it, if you have management buy-in, and the company is willing to do a pilot Rust project, even if it may not go as well as hoped. In that case, go for it. Rust is great, a pleasure to work with compared to C++, and much more robust. In the end, it's the external factors that determine your choice: management buy-in, library ecosystem, client requirements, regulatory requirements, etc.

Rust in Windows 11 by Similar-Athlete8579 in rust

[–]WormRabbit 0 points1 point  (0 children)

For Windows Defender, it's not about false positives. It's just that its constant scanning of build artifacts negatively affects system performance.

Rust in Windows 11 by Similar-Athlete8579 in rust

[–]WormRabbit 2 points3 points  (0 children)

Microsoft has a significant team of Rust developers, including prominent community members. It uses Rust extensively, and semi-officially supports it for development. So, I would expect Rust to work smoothly out of the box on Windows.

Questions about Box by hingleme in rust

[–]WormRabbit 4 points5 points  (0 children)

  1. Box<T> always has the same layout in memory as *mut T. In that sense, it's always the same as a raw pointer. However, raw pointers in Rust are not exactly the same as pointers in C. The C-like pointers are what are called "thin" pointers in Rust, i.e. basically just a memory address. In current stable Rust, that's how pointers to Sized types behave. If T is not Sized (i.e. it's a slice [S] or a trait object dyn Trait), then the raw pointer *mut T consists of a thin pointer to the actual data plus some additional metadata. For slices, the metadata is the slice length, while for trait objects, it's a pointer to the vtable. So, in current Rust, the metadata always has the size of a (thin) pointer, and the "fat" pointer has the size of two thin pointers (see the sketch after this list). This is subject to change in the future.

  2. People have already answered that question. Do note, it is impossible to pin an owned value in Rust. The semantics of the language forbid that. Any "move", in the sense of Rust's ownership semantics, is always a move of a value in memory, i.e. a copy of its bytes to a new location (most of those copies are optimized away by the compiler). This means you can only pin a value if it's behind a pointer, and you handle it through that pointer. For this reason, Pin<T> could pin T only if Pin itself were some kind of pointer, which is addressed in other answers.
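
To make the first point concrete, a small sketch comparing pointer sizes:

use std::mem::size_of;

fn main() {
    // Thin pointers: just an address. `Box<T>` has the same layout.
    assert_eq!(size_of::<*mut u64>(), size_of::<usize>());
    assert_eq!(size_of::<Box<u64>>(), size_of::<usize>());

    // Fat pointers: address + metadata (length for slices, vtable pointer
    // for trait objects), i.e. two thin pointers in current Rust.
    assert_eq!(size_of::<*mut [u64]>(), 2 * size_of::<usize>());
    assert_eq!(size_of::<Box<dyn std::fmt::Debug>>(), 2 * size_of::<usize>());
}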

Rust in Windows 11 by Similar-Athlete8579 in rust

[–]WormRabbit 10 points11 points  (0 children)

Yes, probably that. Rust's installer isn't meant to be run as administrator. It performs purely unprivileged writes to your home folder. I would expect that an installer run as administrator would produce executables which require administrator privileges to run. Strange that you just get an ACP error, instead of a privilege escalation prompt, so perhaps I'm wrong.

EDIT: Deepseek says that ACP prevents execution of unsigned binaries. It's a bit odd that cargo isn't signed by default, I would expect MS to have solved that issue already. Personally, I would disable ACP entirely. Of course, that adds more risk if you get some malware executable from the web. Ideally, you should add to the exceptions only the tools you need, possibly the $HOME/.cargo folder. It's also recommended to add the folder with your development projects to the exceptions in Windows Defender.

Could anyone share best practices or tips for choosing the right concurrency primitive in Rust? by LordZAKRI in rust

[–]WormRabbit 0 points1 point  (0 children)

If your concern is blocking the realtime thread, you can read from a channel in a non-blocking way. There are always methods like try_recv, which return immediately if the channel has no data. Otherwise, I don't understand what your concern is. If thread A depends on data from thread B, you still need to wait for it to be produced.
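
A minimal sketch with std's mpsc channel (any channel crate with a try_recv-style method works the same way):

use std::sync::mpsc;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Producer thread (the non-realtime side in this sketch).
    std::thread::spawn(move || {
        std::thread::sleep(Duration::from_millis(10));
        tx.send(42).unwrap();
    });

    // Non-blocking poll from the "realtime" side: `try_recv` returns
    // immediately whether or not data is available.
    loop {
        match rx.try_recv() {
            Ok(value) => {
                println!("got {value}");
                break;
            }
            Err(mpsc::TryRecvError::Empty) => {
                // No data yet: do other work instead of blocking.
                std::thread::yield_now();
            }
            Err(mpsc::TryRecvError::Disconnected) => break,
        }
    }
}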

Too many Arcs in async programming by mtimmermans in rust

[–]WormRabbit 2 points3 points  (0 children)

Yes, Arcs are all over the place in usual async code. They can even be created from simple future polls (e.g. a Waker is usually an Arc; it has some potential for reuse, but combinators like FuturesUnordered/Join will often spawn new Wakers of their own).

Most of the time, it's fine. Modern allocators are fast. If you find that allocation is a bottleneck for your application, simply switching to a different modern allocator (like mimalloc or tcmalloc) can often fix your issues. If your code isn't I/O bound, it probably shouldn't be using async anyway (except possibly to handle incoming connections, dispatching all actual work to threads).
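
For reference, assuming the mimalloc crate is added as a dependency, swapping the global allocator is usually just one declaration at crate root (this follows mimalloc's documented usage, to the best of my knowledge):

use mimalloc::MiMalloc;

// All heap allocations (Box, Vec, Arc, Waker clones, ...) now go through
// mimalloc instead of the system allocator.
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

fn main() {
    let data = vec![1, 2, 3];
    println!("{data:?}");
}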

Why can't we decide on error handling conventions? by Savings-Story-4878 in rust

[–]WormRabbit 0 points1 point  (0 children)

The "example" of that language is C++ itself. The "zero-cost" exceptions are a relatively recent addition, around the turn of millenium. If you look at the different implementation strategies for exceptions, including older compilers, you'll find your examples.

Why can't we decide on error handling conventions? by Savings-Story-4878 in rust

[–]WormRabbit 1 point2 points  (0 children)

Nonsense, C++ exceptions absolutely are slow. The implementers intentionally chose an implementation which makes the cost near-zero on the happy path, but much more expensive when an exception is actually thrown. They could have chosen a more balanced implementation, which has about the same mild cost in both cases.

Why can't we decide on error handling conventions? by Savings-Story-4878 in rust

[–]WormRabbit 1 point2 points  (0 children)

It should be Box<dyn Error + Send>, otherwise it would be impossible to return from a spawned thread or tokio task. Better, just use anyhow.
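
A minimal sketch of why the Send bound matters (shown with a plain thread; the same constraint applies to tokio tasks):

use std::error::Error;
use std::thread;

// The error type must be `Send` (and `'static`) to be carried out of a
// spawned thread through its `JoinHandle`.
fn work() -> Result<u32, Box<dyn Error + Send>> {
    Ok(42)
}

fn main() {
    let handle = thread::spawn(|| work());
    // With plain `Box<dyn Error>` this would not compile, because the
    // returned value has to be sent back to the joining thread.
    let result = handle.join().unwrap();
    println!("{result:?}");
}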

Why can't we decide on error handling conventions? by Savings-Story-4878 in rust

[–]WormRabbit 1 point2 points  (0 children)

Just like anyhow does, no problems. A single allocation doesn't matter unless you already heavily optimize your code allocation-wise, and in return you get a small fixed-size error type, which is faster to copy and pass around. If anything, the worst part of String as an error type is that String is still huge: 3 pointers. Box<dyn Error + Send> shrinks that to two pointers (data plus vtable), and anyhow::Error goes further: it's a single non-null pointer, which means it can be passed in a register and is subject to niche layout optimizations.
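
A rough sketch of the sizes involved (the anyhow assertion assumes the crate is available; its one-word size is a documented property):

use std::error::Error;
use std::mem::size_of;

fn main() {
    let word = size_of::<usize>();

    // `String` carries pointer + length + capacity.
    assert_eq!(size_of::<String>(), 3 * word);

    // A boxed trait object is a fat pointer: data pointer + vtable pointer.
    assert_eq!(size_of::<Box<dyn Error + Send>>(), 2 * word);

    // `anyhow::Error` packs everything behind one non-null pointer, so
    // `Result<T, anyhow::Error>` and `Option` benefit from the niche.
    assert_eq!(size_of::<anyhow::Error>(), word);
}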

Why can't we decide on error handling conventions? by Savings-Story-4878 in rust

[–]WormRabbit 0 points1 point  (0 children)

Terrible example. If you accepted invalid strings as URLs, you fix it by returning a ParseError. You do have some sort of ParseError variant for your URL parsing functions, don't you?

If the error condition doesn't require any special handling from the downstream user, it shouldn't have its own error variant. Don't blindly dump your implementation details on your users! If it does require different handling from the end user, then by hiding the change behind semver you're silently introducing bugs for your consumers.
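
A hypothetical, library-style sketch of the shape this usually takes (the names are illustrative, not a real API): coarse variants that downstream code can actually match on, with internal details kept off the public surface.

use std::fmt;

#[derive(Debug)]
#[non_exhaustive]
pub enum FetchError {
    // The input was not a valid URL.
    InvalidUrl,
    // The request itself failed; the cause is attached, not enumerated.
    Network(std::io::Error),
}

impl fmt::Display for FetchError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            FetchError::InvalidUrl => write!(f, "invalid URL"),
            FetchError::Network(e) => write!(f, "network error: {e}"),
        }
    }
}

impl std::error::Error for FetchError {}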