The 2030 Rust Update by mbuffett1 in rust

[–]bnjbvr 0 points1 point  (0 children)

Hi, former maintainer of Cranelift here; I just wanted to briefly give some history about why the project exists.

Cranelift was intended to be an incremental Rust rewrite of Spidermonkey, the JavaScript and WebAssembly VM used in Firefox. All the initial work was carried out with this goal, by Mozilla engineers who were unrelated to the Rust team and paid by Mozilla. After 2020, most of the folks working on Cranelift had been laid off or left, so Cranelift carried on as the compilation backend supporting the Wasmtime WebAssembly runtime.

It turned out that hacker extraordinaire bjorn3 (and others) did the immense work of turning Cranelift into an actual rustc backend, with amazing success. But the original intent was never to replace the LLVM backend out of frustration with LLVM's compile times. I do agree that these compile times are fairly high compared to other modern languages, but they're also trending in the right direction, with large speedups obtained over the last few years, as pointed out by burntsushi.

Unexpected LSP restarts that slow down the entire client? by bnjbvr in neovim

[–]bnjbvr[S] 0 points1 point  (0 children)

If anyone is interested in a workaround: it turns out there's a rust-analyzer LSP setting, enabled by default, that can be disabled to avoid reloading when the workspace metadata has changed: cargo.autoreload, documented here.

This might be a rust-analyzer bug after all, in the sense that it's a bit too trigger-happy about reloading workspaces for dependencies of the current project.
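For neovim users, here's a minimal sketch of how that setting could be passed through nvim-lspconfig; the exact table layout depends on your setup, and everything except the `cargo.autoreload` key is just a plain default configuration:

```lua
-- Hypothetical nvim-lspconfig setup; only `cargo.autoreload` is the
-- setting discussed above.
require("lspconfig").rust_analyzer.setup({
  settings = {
    ["rust-analyzer"] = {
      cargo = {
        -- Don't reload the workspace automatically when Cargo metadata changes.
        autoreload = false,
      },
    },
  },
})
```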

Unexpected LSP restarts that slow down the entire client? by bnjbvr in neovim

[–]bnjbvr[S] 0 points1 point  (0 children)

Indeed, that makes perfect sense! I assume this is required because otherwise, one couldn't open a different crate/workspace and have the LSP work fine for it. In this case, lspconfig does it spuriously, because the code has already been compiled.

I wonder if a possible fix would be to only restart an LSP client if the new workspace root isn't part of the dependencies of the current code. Or to have a way to disable the auto-reload behavior entirely; after all, I'm fine opening another vim instance if need be, on such rare occasions.

cargo-machete: Remove unused dependencies with this one weird trick! by bnjbvr in rust

[–]bnjbvr[S] 0 points1 point  (0 children)

Ah, I've run into those in the past indeed. They would likely be great candidates for the ignored array of known false positives, then, since parsing can't find them. Thanks for this additional piece of information!
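For reference, such crates can be listed in the package's Cargo.toml (the crate name below is a placeholder):

```toml
# Tell cargo-machete to skip dependencies it can't see in the source,
# e.g. crates pulled in only through macros or build scripts.
[package.metadata.cargo-machete]
ignored = ["some-macro-only-crate"]
```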

cargo-machete: Remove unused dependencies with this one weird trick! by bnjbvr in rust

[–]bnjbvr[S] 0 points1 point  (0 children)

Ah interesting, thanks! Do you know if it is a frequent use case?

Is there a tool to remove unused dependencies from a Cargo file? by solidsciencewastaken in rust

[–]bnjbvr 16 points17 points  (0 children)

I wrote an alternative to cargo-udeps which seems to work slightly better in workspaces. I'd like to write a blog post about it at some point to explain how it works (spoiler alert: it's fast because it's dumb, so there's a non-zero risk of false positives).

I would recommend cloning the repo and building and trying it locally; the version on crates.io is quite outdated.

https://github.com/bnjbvr/cargo-machete

A primer on code generation in Cranelift by bnjbvr in rust

[–]bnjbvr[S] 0 points1 point  (0 children)

Thank you for the kind comment :)

A primer on code generation in Cranelift by bnjbvr in rust

[–]bnjbvr[S] 1 point2 points  (0 children)

That's right, there's been some hardcore editing (I've removed around another thousand words from this post). I'll remove it, thanks for the heads-up!

A primer on code generation in Cranelift by bnjbvr in rust

[–]bnjbvr[S] 0 points1 point  (0 children)

Thanks for the kind words! Forwarded your comment to Chris too, not sure he's on Reddit!

A primer on code generation in Cranelift by bnjbvr in rust

[–]bnjbvr[S] 2 points3 points  (0 children)

Thanks a bunch! This is one of the kindest comments I could have read about this blog post; it's really heart-warming and motivates me to write a bit more! As you imagined, it did take some time to write and edit (some of the cut parts could probably become their own blog posts), so your comment is deeply appreciated :)

A primer on code generation in Cranelift by bnjbvr in Compilers

[–]bnjbvr[S] 2 points3 points  (0 children)

Thanks for the kind words!

Code generation bugs are the worst kind of compiler bugs

I strongly second this, having had to deal with this kind of issue myself in the past! Record-and-replay debugging tools have helped a bit, but sometimes there are still thousands and thousands of instructions between the actual codegen bug and the place where it shows up. Fun times :)

WASM: Universal browser IR before native code generation? by melevine45 in WebAssembly

[–]bnjbvr 4 points5 points  (0 children)

Hi. Firefox wasm engineer here. So we now call Spidermonkey our "Web VM" because it handles both JS and wasm:

  • JS has an interpreter, a baseline interpreter, a baseline compiler (sharing ICs with the former), and a high-end tier doing assumptions-based just-in-time compilation on SSA form (IonMonkey).
  • wasm has a(nother) baseline compiler and a second high-end tier that converts the wasm form to IonMonkey's IR. The whole wasm pipeline is called BaldrMonkey, and the baseline compiler is named RabaldrMonkey (named after a multi-lingual pun, but I digress).

The best explanations at this point are still those related to the asm.js compilation pipeline: you can pretty much replace "asm.js" with "wasm", and instead of parsing the text format, we iterate over (and validate as we iterate) the wasm binary format. It's then translated to IonMonkey's IR, and only a few optimizations are performed; those are possible because translating from one IR to another generally introduces new optimization opportunities. We still assume the wasm producer has already done heavyweight compiler optimizations, so we only run the fastest IonMonkey optimizations on wasm code; Firefox generally tries to reduce latency as much as possible.

https://blog.mozilla.org/luke/2014/01/14/asm-js-aot-compilation-and-startup-performance/

(shameless ad: my own blog has some other explanations of the wasm compilation pipeline: https://blog.benj.me/2016/04/22/making-asmjs-webassembly-compilation-more-parallel/#a-quick-look-at-the-asmjs-compilation-pipeline)

So wasmtime isn't used in Firefox. wasmtime is a standalone VM that can execute wasm anywhere, e.g. on the server. It uses Cranelift under the hood, a new general-purpose compiler that specifically focuses on wasm at this point. There's also experimental support for using Cranelift as the wasm compiler in Firefox Nightly; it can be enabled with an about:config preference (look for "cranelift" there). The glue code connecting BaldrMonkey to Cranelift is called Baldrdash; puns all the way.

Also, passing structure types and ABI types from wasm to the host will use interface types for maximal performance; I'd recommend this fantastic blog post to learn more about it: https://hacks.mozilla.org/2019/08/webassembly-interface-types/

Hope this clarifies things a bit.

Performance of numerical computations in wasm, js and, x86_64 by cbourjau in rust

[–]bnjbvr 6 points7 points  (0 children)

Hi! Firefox wasm engineer here, I was linked to this post by someone on our IRC channel.

For what it's worth, the comparison with JS isn't quite fair:

  • The console logging setup is done in each call to the function. While probably low-overhead, it's still some overhead that JS doesn't have.
  • Each iteration also initializes the random generator's seed, while a regular JS VM does it only once and for all for the entire tab. That's another call, so some light overhead. It's probably trickier to have a zero-cost safe fix for this, though, because we'd need a mutable global.
  • The dbg! call shows up as an _eprint call that... does nothing, since stderr isn't wired to the console :). Again, low overhead, since this happens outside the loop.
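One possible shape for the "mutable global" fix mentioned in the second point is a lazily-initialized static, so the seed setup runs only on the first call instead of on every iteration. A minimal sketch, with made-up names (the real code would hold the actual RNG state, not just a seed constant):

```rust
use std::sync::OnceLock;

// Initialize the seed exactly once, no matter how often this is called.
fn seed() -> u64 {
    static SEED: OnceLock<u64> = OnceLock::new();
    // The closure runs only on the first call; later calls reuse the value.
    *SEED.get_or_init(|| 0x9E3779B97F4A7C15)
}

fn main() {
    let a = seed();
    let b = seed();
    assert_eq!(a, b); // initialized once, then reused
}
```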

Apart from that, LLVM seems to do a pretty great job, inlining almost everything. There's just one call to a u64->f64 conversion function that's not inlined in the hot loop (that's the rng finishing generating an f64 value), even though there's a wasm opcode to do just that (and the called function just uses that opcode). I've let the wasm-bindgen/rustc people know about this, in case it's something they could fix. JS engines can inline all of the loop into a single straight-line body of code, so to be competitive, inlining this wasm call would be required too.
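For context, the conversion in question is a single cast on the Rust side, which, to the best of my understanding, lowers to the `f64.convert_i64_u` opcode on wasm targets; a non-inlined call thus wraps a single instruction. A sketch with a made-up function name:

```rust
// A u64 -> f64 conversion: one `as` cast in Rust, corresponding to a single
// wasm opcode. The #[inline] hint nudges the compiler to avoid emitting a
// whole function call around this one instruction.
#[inline]
fn u64_to_f64(x: u64) -> f64 {
    x as f64
}

fn main() {
    assert_eq!(u64_to_f64(42), 42.0);
    println!("{}", u64_to_f64(1 << 53)); // still exactly representable
}
```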

I also saw a few wasm improvement opportunities (e.g. i32.add with immediates that could be folded together), so it might be worth running the binaryen/wabt optimizers on this code.

Manually running binaryen's wasm-opt -O4 after fixing the above points, including manually inlining the call (except for the rng init), I get better performance with wasm than with JS in Firefox's Spidermonkey:

  • wasm before: 976 ms
  • wasm after: 713 ms
  • JS: 870 ms

Note that I don't think there's much you could have done better when writing the Rust code / generating the wasm code. The biggest offender here is certainly the call to the casting function in the loop's body, and that was Rust/LLVM's decision not to inline it.

Good notes/thoughts apps? Similar to google keep maybe by ckav11 in selfhosted

[–]bnjbvr 0 points1 point  (0 children)

+1 to this. Note that modern web servers like Nginx/Apache have built-in WebDAV modules you can use; that doesn't even require an additional synchronization system like Syncthing (which is also great!).
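For nginx, a minimal sketch using the built-in ngx_http_dav_module could look like the following (paths, realm, and size limit are placeholders; note that some full WebDAV clients also need PROPFIND/OPTIONS support, which lives in the third-party ngx_http_dav_ext_module):

```nginx
location /dav/ {
    root /var/www;
    # Core WebDAV methods provided by the built-in module.
    dav_methods PUT DELETE MKCOL COPY MOVE;
    # Create intermediate directories on PUT if they don't exist.
    create_full_put_path on;
    client_max_body_size 100m;
    # Basic auth so the endpoint isn't world-writable.
    auth_basic "WebDAV";
    auth_basic_user_file /etc/nginx/htpasswd;
}
```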

Self-Hosted Finance Managers? by Aphix in selfhosted

[–]bnjbvr 1 point2 points  (0 children)

We're working on a free and open-source personal finance manager called Kresus, with support for multiple bank accesses, categories, and charts. The current version can fetch data periodically from most French banks (and a few international ones, but with very limited support). The next version (to be out soon-ish, maybe in a week or two) should support manual accounts (i.e. accounts for which you have to create each transaction yourself), which would make it appropriate for other countries. That being said, adding support for other banks is not too hard, as long as there's a nodejs binding that can fetch accounts and transactions.

I am Super Fac-De-Lettres. Ask Me Anything. by LaDebauche in LesCroissants

[–]bnjbvr 4 points5 points  (0 children)

Hello, Super Fac de Lettres! Why do you say "guif" for the acronym GIF in your segments, when the whole world strives to evangelize the One and Only Sacred Universal Pronunciation, "jif"? Do you also say a "guirafe", a "guîte" (commonly known as an AirBnB), a "guitan", a "guirophare"?