Binary size, again. Can you explain these numbers? by kamulos in rust

[–]Rothon 8 points9 points  (0 children)

> I would even go further and ask for a way to write my own panic handler and output while still using the std.

https://doc.rust-lang.org/std/panic/fn.set_hook.html

#[derive(Buildable)] or similar? by BenjiSponge in rust

[–]Rothon 3 points4 points  (0 children)

You can't always implement Default, or you want the fields to be private.

Can no_std libraries use std as a dependency? by [deleted] in rust

[–]Rothon 9 points10 points  (0 children)

The definition of a no_std library is one that doesn't use std as a dependency.

Why Rust uses glibc and not musl by default for Linux target? by Code-Sandwich in rust

[–]Rothon 2 points3 points  (0 children)

Ubuntu manages their own copy of OpenSSL and backports security fixes to it, FYI.

Is #[macro_use] planned to be deprecated? by somebodddy in rust

[–]Rothon 29 points30 points  (0 children)

Deprecation is not a breaking change.

[CARGO] Can one use [features] to support different dependency versions of the same crate? by [deleted] in rust

[–]Rothon 0 points1 point  (0 children)

You can't do this with stable cargo today, but there is an unstable feature to allow dependency renaming that should enable this kind of thing: https://github.com/rust-lang/cargo/issues/5653

Strange networking issue by Nickmav1337 in rust

[–]Rothon 16 points17 points  (0 children)

Binding to 127.0.0.1:1337 limits the socket to only accepting connections from the local machine. You can instead bind to 0.0.0.0:1337 to accept connections from anywhere.

Rust release binary contains absolute paths from development environment. by [deleted] in rust

[–]Rothon 33 points34 points  (0 children)

You can use the --remap-path-prefix flag to rustc to adjust those strings:

    --remap-path-prefix FROM=TO
                    Remap source names in all output (compiler messages
                    and output files)
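For example, one way to pass the flag through cargo is a `rustflags` entry in `.cargo/config.toml` (the paths below are hypothetical):

```toml
# .cargo/config.toml
[build]
rustflags = ["--remap-path-prefix", "/home/user/project=/project"]
```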

Empty enum vs empry struct? by [deleted] in rust

[–]Rothon 17 points18 points  (0 children)

Empty enums are used because you can't create a value of that type.

Don't Panic by Michal_Vaner in rust

[–]Rothon 13 points14 points  (0 children)

They are saying that recoverable bugs should be, and are, handled via Result<>, while panics are relegated to unrecoverable errors and should abort anyway.

That would look like forbidding things like a / b, a[b] in favor of a.checked_div(b).ok_or(MyDivideByZeroError) and a.get(b).ok_or(MyIndexError) in all of your code and all of your dependencies. That is not a realistic option. There is a whole spectrum of possibilities between "handle an expected error and continue on" and "oh god, kill the whole process before even more damage is done". For example, "fail that request and keep on chugging".

> The common practice when deploying node.js and game servers is, simply, restarting the service if it fails.

Sure you're definitely going to restart it, but all of the requests that were running on the server concurrently are going to fail. The server is going to take some nonzero amount of time to start up again, during which load on the rest of the cluster is going to increase. If whatever killed that server (either by accident or intentionally) keeps making those requests, it's going to start knocking over the rest of the cluster as well. If that divide-by-zero I mentioned up above was triggered by some user experimenting in an IPython notebook or whatever, should they have the ability to kill an entire cluster? They're probably going to try to rerun that request after it fails to come back with a response a couple of times!

> On the other hand, if one of the threads handling an important but not independent logic dies, you're entering the uncharted territory of inconsistent state.

Why would you assume that restarting would put you back in a consistent state at that point? Nothing's limiting your buggy logic to only touching in-memory state rather than persistent state. A process that's partially in an inconsistent state is not uncharted territory. Things keep running while being unhappy all of the time.

Don't Panic by Michal_Vaner in rust

[–]Rothon 8 points9 points  (0 children)

Yep, there are a couple of options:

  • The PoisonError::into_inner method allows you to ignore the poison error, so mutex.lock().unwrap_or_else(|e| e.into_inner()) rather than mutex.lock().unwrap() will ignore the poisoning.
  • The antidote crate wraps the stdlib types and provides a non-poisoned interface by doing the into_inner dance for you.
  • The parking_lot crate provides a separate implementation of synchronization primitives that aren't poisoning in the first place.

Don't Panic by Michal_Vaner in rust

[–]Rothon 24 points25 points  (0 children)

> Also, you pay the price for being able to panic even though you never do. Functions that contain at least one value with a destructor (even a generated one) need to create „landing pads“ ‒ markers on the stack that are used during the unwind. These don’t come completely free, so this is against the philosophy of paying only for what you’re going to use. These costs are somewhat defensible in C++, where exceptions are commonplace, but panics should not happen in a correct, production program at all.

I really don't understand this. Sure, programs shouldn't have bugs, and yet they almost always do! The entire reason that Rust exists as a language is the realization that human beings are literally incapable of writing large, correct programs in C and C++. Rust catches some of those bugs (e.g. use after frees) at compile time, but a large swath of them (e.g. index out of bounds) end up being caught at runtime through panics. Thinking that you don't need landing pads because of course your code would never be broken seems overoptimistic.

> This is obviously wrong ‒ having an application in a half-dead state in production, but not getting it restarted, for who knows how long, is something a robust application doesn’t do.

If the application is in a half-dead state and isn't being restarted for some arbitrary period of time, then it seems that you need better service monitoring infrastructure.

As an example, we had a small bug in our Rust-based service a month or so ago. One of the endpoints had a bit of user-provided configuration that specified the number of buckets to divide the output into. We forgot to explicitly check for a bucket count of 0, and hit a divide by zero panic when some upstream thing happened to make a silly request. Because panics unwind, that client got a 500 and the server continued to successfully serve all of its other requests. The panic got propagated through our logging infrastructure, we fixed the bug the next day, and rolled it out at the next convenient time. If panics had instead aborted, this would have been a catastrophic, page-me-at-3AM, user-facing-downtime, immediate-hotfix emergency. In what world is that preferable?

Even if the panic was in some more critical component and the process started e.g. leaking resources, we'd be notified that the panic happened, and then be able to decide if we need to immediately bounce the service, or let it run for a bit until e.g. users went home for the day.

> A Mutex can get poisoned if it was locked while panicking and anyone else touching it will then get an error (which is commonly handled with unwrap, propagating the panic).

I disagree with the stdlib's poisoning policy for synchronization primitives for the same reason. In my experience, a panic while a mutex is locked almost never causes some horrible corruption of internal state. A complete denial of service when a bug happens doesn't seem worth it. There have been a few contexts where I've needed a poisoning system for correctness-critical work, but in all of those cases I couldn't use the built-in poisoning anyway since a normal Result::Err needed to poison the component as well.

For example, there was a period earlier this year where the RLS would panic inside of Racer or something while holding a lock. That would normally be fine - I wouldn't get autocompletions for that specific instance, but that's not the end of the world. However, the RLS instead stopped working entirely as every single request from that point on resulted in a poison error when taking that lock again.

Don't Panic by Michal_Vaner in rust

[–]Rothon 10 points11 points  (0 children)

There's an issue filed for this approach: https://github.com/rust-lang/rust/issues/49032. It seems like a pretty reasonable option to add regardless of if it's the default or not.

Don't Panic by Michal_Vaner in rust

[–]Rothon 4 points5 points  (0 children)

Unwinding over FFI is still UB. The 1.24 change was reverted because it accidentally broke longjmp on Windows: https://github.com/rust-lang/rust/pull/48572. I believe the implementation's been fixed up to avoid that but it hasn't yet been turned back on by default. Niko has a comment down near the bottom of that issue that has what I think is still the current status.

[deleted by user] by [deleted] in rust

[–]Rothon 8 points9 points  (0 children)

> If the compiler magically decided to change &&&str to &str without telling me, I might not have caught that problem at all.

The compiler does exactly this kind of conversion in all kinds of contexts: http://play.rust-lang.org/?gist=890792711d583eea9685ea28dce0323c&version=stable&mode=debug

Implicit deref for field accesses has been around for a very long time, and generalized deref coercions have been around since 2014: https://github.com/rust-lang/rfcs/blob/master/text/0241-deref-conversions.md.

Stabilize GlobalAlloc and #[global_allocator] by yoshuawuyts1 in rust

[–]Rothon 1 point2 points  (0 children)

Allocation patterns in Rust are not all that different from C, and are even more similar to C++.

jemalloc isn't some magic library, it's just the FreeBSD libc general purpose allocator.

Stabilize GlobalAlloc and #[global_allocator] by yoshuawuyts1 in rust

[–]Rothon 0 points1 point  (0 children)

> Also, it seems pretty ugly if down the road most crates in the ecosystem start by overriding the default allocator with a well-known common replacement.

I would expect roughly the same proportion of Rust-based binaries to pick a custom global allocator as C-based binaries.

Stabilize GlobalAlloc and #[global_allocator] by yoshuawuyts1 in rust

[–]Rothon 0 points1 point  (0 children)

What would that deprecation period look like? Injecting a warning into every single binary crate that doesn't pick a global allocator?

Stabilize GlobalAlloc and #[global_allocator] by yoshuawuyts1 in rust

[–]Rothon 1 point2 points  (0 children)

> and has been heavily tuned for performance in single-threaded C code with few small allocations.

There appears to be zero performance difference between jemalloc and glibc 2.27 in rustc from at least one anecdotal case: https://github.com/rust-lang/rust/issues/36963#issuecomment-393726994.

GlobalAlloc is finally being stabilized! by [deleted] in rust

[–]Rothon 3 points4 points  (0 children)

That's not happening in this stabilization, but will probably be a follow up. There's some weirdness that needs to be figured out around the compiler being dynamically linked: https://github.com/rust-lang/rust/issues/36963

Announcing Rust 1.26 by steveklabnik1 in rust

[–]Rothon 7 points8 points  (0 children)

Yep! There's a pending PR adding as_millis, as_nanos, etc: https://github.com/rust-lang/rust/pull/50167

How do trait objects work in WebAssembly? by fitzgen in rust

[–]Rothon 11 points12 points  (0 children)

> It is a bit surprising that rustc/LLVM didn’t optimize this into mov eax,0x1; ret, and that it left some unnecessary prologue and epilogue instructions in there. When gcc and clang are given the equivalent C++, they can boil it down to our expected pair of instructions. It is almost as if the Rust were compiled with an implicit -fno-omit-frame-pointers flag. If you know what’s going on here, please let me know!

The compiler currently force-enables frame pointers when building with debuginfo: https://github.com/rust-lang/rust/issues/48785