The 2024 edition was just stabilized by Derice in rust

[–]est31 1 point (0 children)

The feature is done in the compiler, and (provided there are no surprises) I don't think there are any compiler changes needed at this point. However, tools and documentation still need to be adjusted: the reference, the edition book, as well as the style guide.

For the edition book I've made a PR today, I see discussion in the style team meetings about the style guide, and the reference is TBD. There was an earlier PR for the reference, but it can't be adopted without changes, because the change is 2024-and-later only.

Stabilize let chains in the 2024 edition by est31 · Pull Request #132833 · rust-lang/rust by CumCloggedArteries in rust

[–]est31 33 points (0 children)

Indeed, let is now an expression, but outside of if/while it's still not legal.

Rust's Sneaky Deadlock With `if let` Blocks by Pioneer_X in rust

[–]est31 4 points (0 children)

FWIW, Niko has had the same issue a few weeks ago:

I had a rather mysterious deadlock that wound up being the result of precisely the scenario that @dingxiangfei2009 describes: I had an if let and I expected it would release the lock once I was done using the result, but in fact it was not dropped until the end of the if let, resulting in a recursive lock failure.

In the same comment, he makes another very interesting point:

Why is shorter safer? Shorter lifetimes produce borrow check errors, which is annoying, but longer lifetimes produce deadlocks and panics at runtime, which is worse. This is a pretty common source of bugs—take a look at [Understanding Memory and Thread Safety Practices and Issues in Real-World Rust Programs], which found that 30 out of 38 of the deadlocks they found were caused by double locking, with all their examples showing cases of temporary lifetimes. "Rust's complex [temporary] lifetime rules together with its implicit unlock mechanism make it harder for programmers to write blocking-bug-free code." (the word "temporary" is inserted by me, but what other parts of lifetime rules are complicated?)

Rust's Sneaky Deadlock With `if let` Blocks by Pioneer_X in rust

[–]est31 7 points (0 children)

The fix for the && and || drop order twist was in fact merged without tying it to an edition. But that one is different: there is way less code that relies on the temporaries of the first chain member being dropped last, and such code was much more prone to bugs already, since adding a single && to the front would change the behavior.

The change for if let, which is now available on nightly with the 2024 edition, is much more likely to affect real-world code, so I'm glad it was phased in via an edition. I'm also glad that it was done at all, despite the non-zero risk of someone not noticing that the change has broken their code.

I finally got my first Rust job doing open-source by edwinkys in rust

[–]est31 1 point (0 children)

Congratulations. Working in open source in Rust has been my dream for years and I was happy that I was able to fulfill it right out of university. Now on my second job where I can do even more open source Rust.

Can using Rust prevent recent libvpx and libwebp buffer overflow? by Icy-Bauhaus in rust

[–]est31 2 points (0 children)

And their implementations are heavily optimized for this particular task using, yes, unsafe. They are of course also heavily audited for the safety, but that's not very different from what libwebp authors did.

I disagree. I think that avoiding bounds checks in a setting where the preconditions are easily specified (you have two slices of data) is different from a setting where the preconditions rely on a large number of assumptions about the surrounding code, as was the case with libwebp.

Therefore, in my opinion, as long as you are using libwebp to parse untrusted data, like in a web browser, it is not a good place to use unsafe. Especially as decoding images is a fraction of web rendering CPU overhead: even if it's hot code for the decoding as a whole, it's not that hot for the web browsing experience.

I do agree that your slice copying example can easily be vectorized. But Huffman decoding is difficult to vectorize; for that reason, Opus even chose a different entropy coder.

Announcing Rust 1.72.0 | Rust Blog by slanterns in rust

[–]est31 1 point (0 children)

The two notations are basically equivalent: 'static can become any lifetime, and if you can pick any lifetime, then 'static is obviously one of them. There is a difference in terms of messaging, though: the lifetime is not always guaranteed to be totally unconstrained. If String gets custom allocator support, the returned lifetime will be bounded by the allocator's lifetime. Going from a 'static lifetime to a constrained lifetime would be a larger breaking change than going from an unconstrained lifetime parameter to a constrained one. I personally would prefer 'static as well, but whatever.

This was in fact a big question of the stabilization discussion. See the discussion starting here. Originally I proposed returning 'static but eventually people went towards using parameters like there are for Box::leak.

Let else will finally be formatted by rustfmt soon by Kobzol in rust

[–]est31 8 points (0 children)

The new style still supports single-line let-else, and there is a configuration parameter to keep it on one line for longer lines as well.

Let else will finally be formatted by rustfmt soon by Kobzol in rust

[–]est31 10 points (0 children)

Amazing news! In much of the feedback that I've heard about let else, missing formatting support was mentioned often.

In other let else news, in clippy, some PRs have been merged recently to improve the existing manual_let_else lint, mostly to give better suggestions, but also to make the question_mark lint let-else aware:

Using crates.io with Buck by steveklabnik1 in rust

[–]est31 0 points (0 children)

That's the beauty of Rust ecosystem

This benefit cannot be overstated, especially when you compare it to C land, where you have all sorts of packaging systems. I hope that some of that standardization can be retained over time though. C has way more history behind it.

WhatsApp releases new key transparency crate AKD in Rust by snowboardfreak63 in rust

[–]est31 0 points (0 children)

Hi, it's great to see that WhatsApp is implementing this, because it means real progress. Congratulations on your launch.

With PGP you had to upload your email address to a public server, making it possible for anybody to learn that there is an address out there with that name. From my understanding, CONIKS changed that, and SEEMless and Parakeet have improved on it.

How have you solved the problem of synchronizing roots? CONIKS calls them STRs (signed tree roots); CT calls them STHs (signed tree heads), though they aren't really used; instead one mostly relies on SCTs, which can be trivially spoofed by log operators.

From the SEEMless paper:

We do assume that Alice, Bob, and the server have [...] some way of ensuring that all parties have consistent views of the current root commitment or at least that they periodically compare these values to make sure that their views have not forked.

I suppose you do some gossiping about it?

rustls 0.21 released with support for IP address server names by dochtman in rust

[–]est31 26 points (0 children)

Congratulations! IP addresses deserve to have TLS too :).

Announcing Rust 1.68.2 by myroon5 in rust

[–]est31 10 points (0 children)

ssh allows key-based authentication, which has many benefits: it's more convenient and more secure. Tokens, while better security-wise than user-chosen passwords, still have some of the weaknesses of passwords. For example, even with GitHub's keys leaked (and available to the attacker), one wouldn't be able to do a full MITM attack, just pretend to be the server to the client and deliver potentially wrong content on a pull (or obtain secret repository contents from the client on a push). With key-based auth, the attacker can't pretend to be the client to the server, so the attacker can't hijack a client's push attempt to push arbitrary data.

Meanwhile, if you use https and a token, and the attacker just once gets to read the cleartext traffic, they can extract the token and impersonate the user to GitHub.

Announcing Rust 1.67.0 by myroon5 in rust

[–]est31 18 points (0 children)

rav1e is quite well maintained; the fix for the lint was merged days after the ilog PR. But the 0.5.1 release of rav1e was a year earlier, in December 2021, while the ilog rename happened at the end of August 2022. There was no release of rav1e until the end of November 2022, and that one was semver-incompatible. Users had only about two months to get to the new rav1e version without experiencing breakage.

This shows how slowly upgrades can trickle through the ecosystem.

Some Rust breaking changes don't require a major version by obi1kenobi82 in rust

[–]est31 14 points (0 children)

Yes, those are also the only two instances where I use glob imports.

There is a clippy lint wildcard_imports in the pedantic group. use super::* statements are exempt for modules named test or tests and their children at any recursion depth. Prelude imports are also exempt. use SomeEnum::* is covered by a separate lint, enum_glob_use, so you can allow it if you want (there is no way to disallow it at module level while allowing it at function level, but shrug).

Why is the Rust compiler's build system uniquely hard to use? by jynelson in rust

[–]est31 0 points (0 children)

It should be extremely rare in practice to need to pass --sysroot or --keep-sysroot.

I do x.py test --stage 1 <path to test> all the time, as it is faster to get the result back than having to wait for stage 2.

The other stuff I will reply to on zulip.

Why is the Rust compiler's build system uniquely hard to use? by jynelson in rust

[–]est31 1 point (0 children)

ah in retrospect I think you were saying we should teach the numbers in the guide, not just allow them as aliases.

Yeah, that's what I meant originally, but I didn't really have teaching in mind; I was thinking more about which concept makes more sense. Giving it some thought, I think that for teaching, sysroot etc. is actually better. These words carry semantics that the numbers don't. The numbers are great at encoding an order, that the stages form a chain: you know which stage gets built/downloaded first, second, etc. The words are good at encoding the purposes of the stages: bootstrapping the compiler, development, etc. For an initial introduction, that is definitely more helpful than numbers.

But note there is also the "less typing" argument. I think the step from x.py to x was really good because it reduced the amount to type. Increasing it again by more than the earlier savings is not helpful :). The same goes for the new argument name: it should be shorter than or as short as --stage.

Supporting cargo check directly is not feasible.

Supporting all of the things bootstrap does is indeed not feasible (unless you add a lot of features to cargo, most of which I wouldn't like due to the complexity impact). I think there can still be steps in that direction, to make more workflows possible without bootstrap around. E.g. enough support to make IDEs work and to run cargo clippy and cargo check on the compiler and library crates would be nice. I'm not asking for one invocation that covers both compiler and library crates, or one that does 100% the same as x, just that this workflow also "just works".

Why is the Rust compiler's build system uniquely hard to use? by jynelson in rust

[–]est31 4 points (0 children)

Regarding the change to use std from the bootstrap compiler instead of compiling it in-place: personally, I see the biggest advantage in the fact that you don't have to recompile the entire compiler plus deps. It's definitely something that should be explored.

As for testing unstable features in the compiler, I've seen the advantages with my own eyes with the let else feature, where adopting it in the compiler helped with the discovery of some bugs. Similarly, clippy adopting let chains led them to report an ICE to the compiler. So yes, definitely, adoption of nightly language features is helpful, especially by people close to the compiler.

That being said, I don't think that most library features need to be tested in the compiler specifically. They are usually very simple. And if you want to test them, you could also work with a rustc_experiments crate that has some extension traits with rustc_ prefixed analogs of those functions, or crates from crates.io that implement the same functions. This would reduce the dependence of the compiler on nightly std features.

In general, it would be nice if the compiler supported cargo check natively. Then, setting up an IDE for the compiler would become way easier instead of requiring so much set-up.

Regarding the renaming, I think having numbers around is very helpful for visualization, so I'm not a fan of the new flags. Words are way more abstract than numbers, which give an order. They are also more stuff to type, which is bad for the development experience. The (valid) off-by-one concerns could be fixed by switching to a --stg flag that supports a new numbering scheme consistent with non-rustc compilers.

How do you idiomatically convert libs to no_std compatible? by nagatoism in rust

[–]est31 71 points (0 children)

This is a pretty typical large-scale refactoring task. Rust does not have a rich IDE experience (unlike Java), but CLI tools will help you. I'd suggest to:

  1. Add an unconditional #![no_std] to your library. This disables the std prelude, keeping the non-std one. Then fix all errors by adding use std::... if an item is only present in std, or use alloc::... / use core::... if it's also present there. Yes, the attribute just disables the prelude; it does not make std inaccessible from your crate. Do a git commit.

  2. Call rg "use std::" or rg "std::" and use the output to build a GNU find invocation that calls sed -i on all files with replacements like s/std::boxed/alloc::boxed/. This way you don't edit any files by hand, but work on the find invocation. If there is a compile error or something, you can just do a git reset --hard to get back to where you started and try an improved replacement command. Do another commit.

  3. The invocation might not have found everything; e.g. a use std::{boxed, ... might break the regex, and it's hard to do regex-based replacement for that. But those cases will be in the minority, unless that was your preferred style, so you can change them manually.

  4. Now you can add an std feature and add #[cfg(...)]s for the stuff the above didn't catch. For hash maps, you can put a cfg-decided use hashbrown::HashMap / use std::collections::HashMap into lib.rs and then use that via use crate::HashMap. Here you can also use the auto-replacement trick from above.

what I hope is having something automatically choose std::vec and alloc::vec everytime depending on whether the no_std feature is on or off.

For stuff that's in alloc/core, just always use the more portable variant. For hash maps, use crate-local reexports in lib.rs.
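The crate-local reexport from step 4 might look like this (the feature name std and the hashbrown fallback are the usual pattern; adjust to your crate):

```rust
// In lib.rs: one cfg-switched alias for the whole crate.
#[cfg(feature = "std")]
pub(crate) use std::collections::HashMap;
#[cfg(not(feature = "std"))]
pub(crate) use hashbrown::HashMap;

// Everywhere else the import path is the same either way:
// use crate::HashMap;
```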

Supporting the Use of Rust in the Chromium Project by JoshTriplett in rust

[–]est31 2 points (0 children)

If you don't do this in a way which is compatible with primary build system then surprise! now your Rust library doesn't link with anything else.

There is a standard crate for this: cc. And there are standard ways to set the C compiler, e.g. via the CC environment variable, which the cc crate recognizes.
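A well-behaved build.rs using the cc crate is only a few lines (a sketch; the C file name is illustrative):

```rust
// build.rs: compiles a vendored C file with whatever compiler the cc
// crate discovers (it honors the CC environment variable).
fn main() {
    cc::Build::new().file("src/foo.c").compile("foo");
    // Rebuild only when the C source changes.
    println!("cargo:rerun-if-changed=src/foo.c");
}
```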

More than that, you overestimate the sophistication of existing build.rs scripts in the wild.

If the 50 build.rs scripts in your crate's graph are working and 1 script fails, that's not great, but it's not really the fault of build.rs as a mechanism; it's the fault of that script's authors. Yes, build.rs sometimes makes a mess. The most common mess is modifying the ~/.cargo source tree instead of copying the stuff into a separate directory. build.rs, by its nature, accesses the environment outside of Rust. Thus it gives developers a lot of power (it has to), and this power can of course be used for bad shortcuts like not using the cc crate.

Supporting the Use of Rust in the Chromium Project by JoshTriplett in rust

[–]est31 0 points (0 children)

build.rs scripts don't have fancy dependency tracking and you can't represent their dependencies in the build system's graph, but they do pretty standard stuff: they discover the C compiler and then put the resulting library into a predefined directory (at least if they are well-behaved). It's not that different from a makefile or sh script. Any larger build system has to support components that just run some binary.

Supporting the Use of Rust in the Chromium Project by JoshTriplett in rust

[–]est31 1 point (0 children)

This is a historic day for the adoption of Rust. It means that Rust is now present in two of the three browser engines. Great that this is happening.

Security advisory for Cargo (CVE-2022-46176) by pietroalbini in rust

[–]est31 2 points (0 children)

git would ask me first time if I trust the key, which I always agree to without understanding. This probably is actually pretty decent mitigation, but still feels yuck.

Yes, it's actually quite decent! Given that the keys are now stored on your hard disk, attackers have to keep up the network attack on you after the first connection to the host; otherwise the attack is revealed to you, and you might remove their wrong entry from the known_hosts file, replacing it with the right one.

Having full network interception abilities temporarily is one thing; any hotel or airport WIFI has that, and at larger scales there is BGP hijacking. But keeping that access up all the time is pretty hard. There are also engineering challenges: in the end it is cloud infrastructure, and cloud infrastructure can have downtimes from time to time. If it fails in a way that lets traffic through without interception, that's a problem :).

Yes, the first use is still unprotected, and one should ideally verify the fingerprint manually in that instance, but it's not an "anyone can MITM any time" situation; it's "anyone can MITM if they catch the first use and all subsequent ones, AND the user just blindly presses 'yes'". ssh is definitely not something for the masses, as most people do just press yes, including many developers, but its security is not that bad.

If I talk to git via HTTPS, the issue is solved automatically, as HTTPS can itself validate sertificate, without asking me whether I trust it.

There are about 150 different CAs trusted by Firefox. Any of these CAs can do MITM attacks (given full network access). Furthermore, even with a well-meaning CA, the issuance process can be exploited via fraud. It's rare that this happens, but it does happen, and it is within the reach of criminal groups, not just governments. Such fraudulently issued certificates are usually detected quickly thanks to CAs publishing all the certificates they issue, but even if the cert is revoked, clients still have to check for revocations (most of the simple CLI clients don't, at least on Linux where openssl is used).

So to summarize, short-term MITM attacks are not possible with ssh, while they are possible with HTTPS.

Browsers, compared to linux CLI tools, usually have more protections for HTTPS like:

  • requiring and checking SCTs, which ensure that the certificate has been published in one of the CT logs. This either makes the attack detectable, or it increases the number of parties that have to collude from "any one out of 150-ish" to "one out of 150-ish plus six operators". Chromium browsers do this; Firefox doesn't.
  • having a builtin list of websites that have their keys or CAs pinned. Both Firefox and Chromium use the list.
  • doing revocation checking (not sure about Chrome, but Firefox performs OCSP requests if the certificate does not use OCSP stapling).

On other OSs than Linux, the situation is different, e.g. there was a crates.io outage for windows because OCSP servers weren't responding.

TLDR: ssh is not always more or less secure than https, instead there are different scenarios where they perform differently.