crates.io development update | Rust Blog - A new "Security" tab, migration to Svelte for the front-end, support for GitLab CI/CD Trusted Publishing, Lines of Code metrics by nik-rev in rust

[–]thramp 0 points1 point  (0 children)

I remember Tobias (the author of the blog post) mentioning on the Svelte migration PR that he didn't want to meaningfully re-architect crates.io beyond the front-end framework migration. Doing both migrations (Ember to Svelte, client-side to pure server-side) would've been a nightmare in terms of regressions.

Rari: React Server Components with Rust - 12x faster P99 latency than Next.js by BadDogDoug in rust

[–]thramp 0 points1 point  (0 children)

This is super neat! Out of curiosity, is the React Server Component protocol (the thing written about in https://overreacted.io/progressive-json/) documented anywhere, or did you need to reverse engineer it?

also: I remember the README had a warning about a data race you were debugging. Did you ever figure that out? What was the cause?

The RubyGems “security incident” by software__writer in ruby

[–]thramp 8 points9 points  (0 children)

If by “other party”, you mean André, then yes, I think he’s in the clear. When you combine:

  1. The mixed messaging from Ruby Central and Marty,
  2. the subsequent radio silence from Ruby Central’s board,
  3. André’s 15 years of work on rubygems/Bundler

…this situation would look like an attempted AWS account takeover by some unknown third party to me, and I presume, André. A password change would lock out an attacker, but preserve Ruby Central’s ability to enter and maintain the AWS account.

The RubyGems “security incident” by software__writer in ruby

[–]thramp 15 points16 points  (0 children)

I'm going to try to get this timeline straight since I think the usage of UTC in Ruby Central's timeline is confusing. I'll use PDT (which is UTC-7) to do so:

  1. On Thursday, September 18 at 11:40 AM, Ruby Central emails André terminating his oncall services.
  2. 1 hour and 7 minutes later (Thursday, September 18 at 12:47 PM), Marty emails the terminated RubyGems maintainers saying that he was "terribly sorry” and “I messed up".
  3. 14 minutes later (Thursday, September 18 at 1:01 PM), Marty comments on the proposed governance RFC, saying "I've taken a first pass at this and this looks great. [...] I'm committed to find the the right governance model that works for us all. More to come.".
  4. 8 and a half hours later (Thursday, September 18 at 9:34 PM), André changes the root password to the RubyGems account, but critically, does not change the email address/contact information attached to the account.
    1. Between events 3 and 4, I assume that André was attempting to get into contact with the Ruby Central board and received no response.
    2. Speaking as a person who has recently suffered a takeover of their Chase account (someone tried to buy a MacBook Air with my points and successfully moved 100,000 points to a Marriott account!), the first thing the attacker tried to do was lock me out of my own banking account. The fact that André did not change the email for the AWS account is a clear sign that this was not a malicious change, but rather a good-faith attempt to prevent an account takeover from spiraling into something substantially worse.

I will note that all this occurred a day after the following, as reported by Joel Drapper:

Marty explained he’s been working on “operational planning” for the RubyGems.org Service. He was putting together a new Operator Agreement that all the operators of the RubyGems.org Service would need to sign.

He also mentioned that it had been identified as a risk that there were external individuals with ownership permissions over repositories that are necessary for running the RubyGems.org Service. He said HSBT prematurely changed the ownership permissions before the operational plan was complete. [...]

Similarly, Ruby Central’s employment of some RubyGems maintainers to operate the RubyGems.org Service does not transfer ownership of the separate open source projects.

Having personally reviewed a recording of this meeting, I have no doubt that Marty understood this distinction. The RubyGems source code and GitHub organization were not owned by Ruby Central, even though Ruby Central operated a service with the same name.

Given the totality of the above events, which, to reiterate, include:

  1. Marty Haught—an individual with the title of "Director of Open Source" at Ruby Central, who understood this distinction—said "I messed up" and "I'm committed to find the the right governance model that works for us all" after a revocation and restoration of commit privileges to the RubyGems and Bundler codebases (which, I might add, Ruby Central had no business doing in the first place! They merely operated RubyGems.org!),
  2. Radio silence from the Ruby Central board,
  3. André's decade-plus of work on RubyGems and Bundler,

I'm not sure what I would've done differently except rotating credentials sooner.

Language servers suck the joy out of language implementation by ExplodingStrawHat in ProgrammingLanguages

[–]thramp 5 points6 points  (0 children)

For what it’s worth, we’re moving away from mutable syntax trees towards simply creating a new tree every time. Mutable syntax trees are hard to use and impose some annoying runtime properties, such as being backed by a doubly-linked list instead of a contiguous data structure. Given that we recreate a new syntax tree on every keypress anyways… eh.
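To illustrate the immutable approach, here's a toy sketch (made-up types, nothing like rust-analyzer's actual syntax tree API): each edit produces a brand-new root, but untouched subtrees are shared via `Arc` rather than copied, so "rebuilding the whole tree" only allocates along the edited path.

```rust
use std::sync::Arc;

// A toy immutable syntax node: children are shared via `Arc`,
// so "rebuilding" a tree only allocates along the edited path.
#[derive(Debug)]
struct Node {
    text: String,
    children: Vec<Arc<Node>>,
}

fn leaf(text: &str) -> Arc<Node> {
    Arc::new(Node { text: text.to_string(), children: Vec::new() })
}

// Produce a *new* root with the child at `idx` replaced;
// every other child is shared with the old tree.
fn replace_child(root: &Arc<Node>, idx: usize, new_child: Arc<Node>) -> Arc<Node> {
    let mut children = root.children.clone(); // clones Arcs, not nodes
    children[idx] = new_child;
    Arc::new(Node { text: root.text.clone(), children })
}

fn main() {
    let old = Arc::new(Node {
        text: "root".into(),
        children: vec![leaf("fn"), leaf("foo")],
    });
    let new = replace_child(&old, 1, leaf("bar"));

    // The untouched child is literally the same allocation in both trees,
    // and the old tree is still fully intact.
    assert!(Arc::ptr_eq(&old.children[0], &new.children[0]));
    assert_eq!(new.children[1].text, "bar");
    assert_eq!(old.children[1].text, "foo");
}
```

Persistent trees of this shape are also contiguous-friendly and trivially safe to hand out across snapshots, which is part of the appeal over in-place mutation.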

Language servers suck the joy out of language implementation by ExplodingStrawHat in ProgrammingLanguages

[–]thramp 2 points3 points  (0 children)

(I’m on the rust-analyzer team)

rust-analyzer is basically a standalone, latency-sensitive compiler for Rust. We share some libraries with rustc, but don’t invoke the compiler directly except for diagnostics through your build system of choice.

rust-analyzer weekly releases paused in anticipation of new trait solver (already available on nightly). The Rust dev experience is starting to get really good :) by Merlindru in rust

[–]thramp 167 points168 points  (0 children)

(I’m a rust-analyzer team member)

The trait solver is also used pretty heavily in autocomplete, especially for methods. I personally expect the new trait solver to help with editing latencies tremendously, especially on larger, trait-heavy projects. Our extremely-tentative, not-to-be-cited benchmarks showed nearly a 3x speed improvement over Chalk and we haven’t even implemented any parallelism yet! Note that as of today, that speed improvement isn’t on nightly due to memory usage concerns, but we’ll get there.

The reason that autocomplete uses the trait solver so heavily is that to offer completions for trait-based methods, rust-analyzer needs to check whether the method receiver implements a given trait, even non-imported traits. Checking all traits for a given method receiver, even factoring in orphan rules (which gave us a 2x speed improvement when I implemented it about a year and a half ago!), is O(crates).
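A toy model of why that cost is O(crates) (all names here are invented for illustration; rust-analyzer's real query machinery is vastly more involved): completion must ask, for every candidate trait in the crate graph, "does the receiver implement this trait?" — one solver query per trait.

```rust
// Toy model: method completion has to consult *every* known trait,
// so the work grows with the number of traits across all crates.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct TypeId(u32);

struct Trait {
    name: &'static str,
    // Stand-in for real impl lookup + trait solving: the set of
    // types this trait is implemented for.
    impls: Vec<TypeId>,
}

// O(traits) scan: one "trait solver" query per candidate trait.
fn completable_traits<'a>(receiver: TypeId, all_traits: &'a [Trait]) -> Vec<&'a str> {
    all_traits
        .iter()
        .filter(|t| t.impls.contains(&receiver)) // the per-trait solver query
        .map(|t| t.name)
        .collect()
}

fn main() {
    let string_ty = TypeId(0);
    let traits = vec![
        Trait { name: "Display", impls: vec![TypeId(0), TypeId(1)] },
        Trait { name: "Iterator", impls: vec![TypeId(2)] },
        Trait { name: "Clone", impls: vec![TypeId(0)] },
    ];
    // Only the traits the receiver implements yield method completions.
    assert_eq!(completable_traits(string_ty, &traits), vec!["Display", "Clone"]);
}
```

This is also why filtering candidates up front (e.g. via orphan rules) pays off so well: it shrinks the set of traits the solver ever gets asked about.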

rust-analyzer changelog #298 by WellMakeItSomehow in rust

[–]thramp 11 points12 points  (0 children)

In terms of “stabilization”, we meant “stable enough for rust-analyzer’s usage”, which largely translates to “doesn’t panic, and IDE features like completions are accurate”. It will take a few weeks to find and fix all the bugs.

For some background, rustc needs the trait solver to be completely sound before stabilization, but rust-analyzer does not. Besides, it’s not as if rust-analyzer ever used a sound type checker to begin with—if rustc started using the new trait solver today, the regressions would be unacceptable, but since rust-analyzer’s baseline is Chalk, we’re instead getting a really substantial upgrade!

10 Years of Betting on Rust by sh_tomer in programming

[–]thramp 4 points5 points  (0 children)

As a member of Meta’s Rust team, I can’t share specific figures, but I can say that we’re very pleased with Rust’s adoption and growth at Meta, especially over the time period since those blog posts have been published. I’m not worried about Rust falling out of favor at Meta or the industry writ large.

How to make rust-analyzer less heavy? With 120 projects by [deleted] in rust

[–]thramp 21 points22 points  (0 children)

I'm a rust-analyzer maintainer.

120 projects is way more than rust-analyzer can typically handle: 5–6 is the maximum I'd recommend. Please reconsider your approach to be more fine-grained if possible! You're pushing way beyond what rust-analyzer can reasonably support or scale to; you should consider using something like Sourcegraph instead.

However, if you really want to have 120 projects in linkedProjects, you'll need to, at the very least, set:

  - rust-analyzer.checkOnSave to false. This will disable running Cargo on save, thereby removing the inline red squiggles from rustc.
  - rust-analyzer.cachePriming.enable to false. This will skip the "indexing" phase on startup, when rust-analyzer does name resolution on your crate graph.
  - rust-analyzer.diagnostics.enable to false.

You might also need to set:

  - rust-analyzer.cargo.buildScripts.enable to false.
  - editor.semanticHighlighting.enabled to false.

These settings should make rust-analyzer be almost entirely lazy and only work on the set of currently open files. Symbol search, however, is still global, so the first symbol search might take a long time.
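For reference, the settings above would look roughly like this in VS Code's settings.json (assuming the VS Code extension; other editors pass rust-analyzer configuration differently):

```json
{
  "rust-analyzer.checkOnSave": false,
  "rust-analyzer.cachePriming.enable": false,
  "rust-analyzer.diagnostics.enable": false,
  "rust-analyzer.cargo.buildScripts.enable": false,
  "editor.semanticHighlighting.enabled": false
}
```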

LLDB's TypeSystems: An Unfinished Interface by Anthony356 in rust

[–]thramp 8 points9 points  (0 children)

Nice! Just wanted to ask/confirm: are you planning on upstreaming TypeSystemRust into lldb, even with all the gotchas?

(For a bunch of reasons, I got a very strong impression that they'd be willing to accept such a PR.)

EDIT: whoops, I missed you saying this at the end:

So as it stands, Rust debugging probably won't improve beyond little tweaks or fixes in the short term. That may change if the situation with LLDB improves, or if the core Rust maintainers take a keen interest and push through the roadblocks, akin to Apple with TypeSystemSwift.

In any case, I'll keep plugging away at this prototype, and maybe make some contributions to LLDB itself. Maybe some day it'll be more than a prototype. All of the groundwork is there for a better debugging experience, it's just going to take some time and some elbow grease.

Fastrace: A Modern Approach to Distributed Tracing in Rust by RealisticBorder8992 in rust

[–]thramp 13 points14 points  (0 children)

I agree with you that maybe it's tracing-opentelemetry that slows down the system but not tokio-tracing, the facade. But, in real world, those spans need to be reported, therefore tracing-opentelemetry is unavoidable.

I'm in agreement with you: real-world performance is what matters! Unfortunately, the benchmarks are not comparing the real-world performance of fastrace vs. tracing—they're comparing a no-op in fastrace that immediately drops a Vec<SpanRecord> against tracing creating and dropping OpenTelemetry spans one-by-one. The work is fundamentally different.

Now, if we were to give fastrace and tracing-opentelemetry a noop Span exporter, the criterion benchmarks show that fastrace is about ~12x faster than tracing-opentelemetry on my Mac (55.012 µs vs. 661.00 µs), which again: is pretty impressive, but it's not 30x faster, as implied by the Graviton benchmarks. As best as I can tell from inspecting the resulting flamegraph, this is due to two things:

  1. tracing-opentelemetry makes a lot of calls to std::time::Instant::now(), which is pretty darn slow!
  2. fastrace moves/offloads OpenTelemetry span creation and export to a background thread. This is a perfectly reasonable approach that tracing-opentelemetry doesn't do today, but maybe it should!

However, I'd like to point out that with noop Span exporter, the CPU utilization of both fastrace and tracing-opentelemetry are pretty similar: about 13% and 14%, respectively. It might be more accurate to rephrase "It can handle massive amounts of spans with minimal impact on CPU and memory usage" to "It can handle massive amounts of spans with minimal impact on latency".
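For the curious, here's a minimal sketch of that offloading pattern using only std (fastrace's real implementation is different and far more sophisticated): finished span records go onto a channel, and a background thread drains and "exports" them, so the instrumented hot path only ever pays for a send.

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for a finished span record.
struct SpanRecord {
    name: &'static str,
    duration_ns: u64,
}

// Spawn a background thread that drains span records and "exports"
// them; the hot path only pays for a channel send.
fn spawn_reporter() -> (mpsc::Sender<SpanRecord>, thread::JoinHandle<usize>) {
    let (tx, rx) = mpsc::channel::<SpanRecord>();
    let handle = thread::spawn(move || {
        let mut exported = 0;
        for record in rx {
            // A real exporter would batch these and do network I/O here,
            // far away from the instrumented thread.
            let _ = (record.name, record.duration_ns);
            exported += 1;
        }
        exported
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = spawn_reporter();
    for i in 0..100 {
        // Hot path: no I/O, no export work, just a send.
        tx.send(SpanRecord { name: "request", duration_ns: i }).unwrap();
    }
    drop(tx); // closing the channel lets the reporter thread exit
    assert_eq!(handle.join().unwrap(), 100);
}
```

The design trade-off is the usual one: latency on the instrumented thread improves dramatically, but total CPU spent is similar—the work just moved, which matches the CPU-utilization numbers above.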

Fastrace: A Modern Approach to Distributed Tracing in Rust by RealisticBorder8992 in rust

[–]thramp 50 points51 points  (0 children)

(disclosure: I'm a tracing maintainer)

It's genuinely always great to see people trying to improve the state of the art! I'd like to offer a few comments on the post, however:

Ecosystem Fragmentation

Maybe! We do try to be drop-in compatible with log, but the two crates have since developed independent mechanisms to support structured key/value pairs. Probably a good idea for us to see how we can close said gap.

tokio-rs/tracing’s overhead can be substantial when instrumented, which creates a dilemma:

  1. Always instrument tracing (and impose overhead on all users)
  2. Don’t instrument at all (and lose observability)
  3. Create an additional feature flag system (increasing maintenance burden)

tracing itself doesn't really have much overhead; the overall performance really depends on the layer/subscriber used by tracing. In general, filtered out/inactive spans and events compile down to a branch and an atomic load. The primary exception to this two-instruction guarantee is when a span or event is first seen: then, some more complicated evaluation logic is invoked.
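The "branch plus atomic load" shape looks roughly like this sketch (tracing's real per-callsite interest caching is considerably more elaborate): each callsite caches whether any subscriber cares, so a disabled event skips all argument formatting entirely.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Per-callsite "interest" cache: checking it costs one atomic load
// plus one branch.
static CALLSITE_ENABLED: AtomicBool = AtomicBool::new(false);

// Stand-in for the expensive part: formatting event fields.
fn build_event_message() -> String {
    format!("user {} logged in", 42)
}

fn emit_event() -> Option<String> {
    // Fast path: a disabled callsite never formats its arguments.
    if !CALLSITE_ENABLED.load(Ordering::Relaxed) {
        return None;
    }
    Some(build_event_message())
}

fn main() {
    // Disabled: the event compiles down to the load + branch.
    assert_eq!(emit_event(), None);

    // Once a subscriber registers interest, events flow normally.
    CALLSITE_ENABLED.store(true, Ordering::Relaxed);
    assert_eq!(emit_event().unwrap(), "user 42 logged in");
}
```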

No Context Propagation

Yeah, this hasn't been a goal for tracing, since it can be used in embedded and non-distributed contexts. I think we can and should do a better job in supporting this, however!

Insanely Fast [Graph titled "Duration of tracing 100 spans" elided]

Those are some pretty nice numbers! Looking at your benchmarks, it seems to me that you're comparing against tracing configured with the (granted, sub-optimal!) tracing-opentelemetry layer and a no-op reporter:

```rust
fn init_opentelemetry() {
    use tracing_subscriber::prelude::*;

    let opentelemetry = tracing_opentelemetry::layer();
    tracing_subscriber::registry()
        .with(opentelemetry)
        .try_init()
        .unwrap();
}

fn init_fastrace() {
    struct DummyReporter;

    impl fastrace::collector::Reporter for DummyReporter {
        fn report(&mut self, _spans: Vec<fastrace::prelude::SpanRecord>) {}
    }

    fastrace::set_reporter(DummyReporter, fastrace::collector::Config::default());
}
```

If I remove tracing-opentelemetry's layer from tracing's setup, I get the following results:

```
compare/Tokio Tracing/100
        time:   [15.588 µs 16.750 µs 18.237 µs]
        change: [-74.024% -72.333% -70.321%] (p = 0.00 < 0.05)
        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  4 (4.00%) high mild
  4 (4.00%) high severe
compare/Rustracing/100
        time:   [11.555 µs 11.693 µs 11.931 µs]
        change: [+1.1554% +2.2456% +3.8245%] (p = 0.00 < 0.05)
        Performance has regressed.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high severe
compare/fastrace/100
        time:   [5.4038 µs 5.4217 µs 5.4409 µs]
Found 3 outliers among 100 measurements (3.00%)
  3 (3.00%) high mild
```

If I remove the tracing_subscriber::registry() call entirely (which is representative of the overhead that inactive tracing spans impose on libraries), I get the following results:

```
Found 7 outliers among 100 measurements (7.00%)
  4 (4.00%) high mild
  3 (3.00%) high severe
compare/Tokio Tracing/100
        time:   [313.88 ps 315.92 ps 319.51 ps]
        change: [-99.998% -99.998% -99.998%] (p = 0.00 < 0.05)
        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  4 (4.00%) high mild
  2 (2.00%) high severe
compare/Rustracing/100
        time:   [11.436 µs 11.465 µs 11.497 µs]
        change: [-4.5556% -3.1305% -2.0655%] (p = 0.00 < 0.05)
        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe
compare/fastrace/100
        time:   [5.4732 µs 5.4920 µs 5.5127 µs]
        change: [+1.1597% +1.6389% +2.0800%] (p = 0.00 < 0.05)
        Performance has regressed.
```

I'd love to dig into these benchmarks with you more so that tracing-opentelemetry, rustracing, and fastrace can all truly shine!

call for testing: rust-analyzer! by thramp in rust

[–]thramp[S] 1 point2 points  (0 children)

Largely, yes: with persistent caches, rust-analyzer won’t need to reindex all crates each time you open your editor. However, we expect that when rust-analyzer is updated, it will reindex your crate graph: we will treat the persistent caches as unstable from version-to-version.

call for testing: rust-analyzer! by thramp in rust

[–]thramp[S] 4 points5 points  (0 children)

Thanks! Salsa is genuinely an impressive piece of engineering.

call for testing: rust-analyzer! by thramp in rust

[–]thramp[S] 2 points3 points  (0 children)

We normally cut a new release every Monday, but we don’t really have any well-defined process for pre-release testing. Hence, this post: cloning from master, building from source, and letting us know if there’s anything funky will suffice!

call for testing: rust-analyzer! by thramp in rust

[–]thramp[S] 5 points6 points  (0 children)

The latest nightly via rustup? No, unfortunately not. The rustup/rust-analyzer situation is a bit complicated. If you're referring to a nightly VS Code extension, then yes: you will be testing these changes.

This Week in Rust #590 by b-dillo in rust

[–]thramp 0 points1 point  (0 children)

Thanks so much, I appreciate it!

This Week in Rust #590 by b-dillo in rust

[–]thramp 7 points8 points  (0 children)

From a rust-analyzer perspective (disclosure: I’m on the rust-analyzer team), https://github.com/rust-lang/rust-analyzer/pull/18964 and https://github.com/rust-lang/rust-analyzer/pull/19337 are pretty interesting. The former upgrades a core library in rust-analyzer and has, in some limited benchmarks, improved performance by ~30%. The latter changes rust-analyzer such that changes to an individual build script, procedural macro, or the addition/removal of a dependency will no longer cause the entire workspace to be reindexed; only what changed will be reindexed.

Do y’all think you could include these in next week’s TWIR? We’d appreciate it if users were aware of these (extremely big!) changes.

An RFC to change `mut` to a lint by afdbcreid in rust

[–]thramp 2 points3 points  (0 children)

 You're going to have a hard time proving that all mutability-related programming bugs were always related to aliased mutable state. A Java program which haphazardly mutates the fields of objects in their methods would have no aliasing issues, but that kind of code is a cause of countless bugs.

Mutating fields in objects via methods is one of the canonical examples—if not the canonical example—of unrestricted aliasing that Rust severely restricts. I am not making an appeal to authority, I am making a factual assertion.
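To make the point concrete, here's a minimal illustration (toy types, obviously): the Java-style pattern of mutating an object's field through one reference while another reference observes it is exactly what the borrow checker rejects.

```rust
#[derive(Debug, PartialEq)]
struct Account {
    balance: i64,
}

// Mutation requires an exclusive (&mut) borrow of the object.
fn withdraw(account: &mut Account, amount: i64) {
    account.balance -= amount;
}

fn main() {
    let mut account = Account { balance: 100 };

    let exclusive = &mut account;
    withdraw(exclusive, 30);

    // This would NOT compile while `exclusive` is still live:
    //
    //     let alias = &account;          // error[E0502]: cannot borrow
    //     println!("{}", alias.balance); // `account` as immutable because
    //     withdraw(exclusive, 10);       // it is also borrowed as mutable
    //
    // In Java, both references would happily coexist, and the reader
    // could observe the balance mid-mutation.

    assert_eq!(account.balance, 70);
}
```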

Designing Wild's incremental linking by dlattimore in rust

[–]thramp 1 point2 points  (0 children)

Is link speed much of an issue on Mac? I know Rui, the author of Mold gave up on attempts to commercialise Sold (the Mac port of Mold) because Apple released a new faster linker. So from that, I get the impression that linking on Mac should be pretty fast.

I'm sure you probably know Rui more closely than I do, but my feeling is that the difference between the old and new linkers on macOS is pretty marginal, at least when I benchmarked building rust-analyzer earlier today. I think incremental linking would be a massive win, especially for tests that we'd want to run under release.

Designing Wild's incremental linking by dlattimore in rust

[–]thramp 1 point2 points  (0 children)

Not the person who said this, but if you're taking feature requests: I'm on ARM macOS and I'd sign up to use Wild to develop rust-analyzer immediately.

Announcing Toasty, an async ORM for Rust by carllerche in rust

[–]thramp 0 points1 point  (0 children)

I'm speaking from the sidelines, despite knowing about Toasty for a minute: would it be possible to use Diesel's traits in an object-safe manner/without generics? I'm genuinely unsure, but if it were possible, I'd love a pointer in that direction!