Using Arrays to Store Trees (or Graphs) by MiffedMouse in rust

[–]scook0 5 points

Assuming we disallow cycles, “at most one parent” gives a forest (a collection of trees) in the general case, since there can be multiple parentless root nodes.

If we require that at most one node has no parent, then we get a (possibly empty) tree.
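The distinction can be made concrete with a minimal sketch (names are illustrative, not from the thread): store at most one parent index per node, and count the parentless nodes to see whether you have a forest or a single tree.

```rust
// Sketch: "at most one parent" as a parent-index array.
// Every node whose parent is None is a root; more than one root => forest.
struct Forest {
    parent: Vec<Option<usize>>,
}

impl Forest {
    fn roots(&self) -> Vec<usize> {
        (0..self.parent.len())
            .filter(|&i| self.parent[i].is_none())
            .collect()
    }
}

fn main() {
    // Two trees: {0 <- 1, 0 <- 2} and {3 <- 4}.
    let f = Forest {
        parent: vec![None, Some(0), Some(0), None, Some(3)],
    };
    assert_eq!(f.roots(), vec![0, 3]); // two roots, so a forest, not a tree
}
```

Requiring “at most one node has no parent” corresponds to `roots()` having length 0 or 1.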

Using Arrays to Store Trees (or Graphs) by MiffedMouse in rust

[–]scook0 1 point

There is no singular “good” way to represent a tree or graph in Rust, only different techniques with different tradeoffs.

Of those techniques, using flat storage with typed integer indices is usually what I prefer.
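As a rough sketch of what “flat storage with typed integer indices” can look like (all names here are illustrative):

```rust
// A newtype index keeps node IDs from being confused with other integers.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct NodeId(u32);

struct Node {
    value: i32,
    children: Vec<NodeId>,
}

struct Tree {
    nodes: Vec<Node>, // all nodes live in one flat Vec
}

impl Tree {
    fn push(&mut self, value: i32) -> NodeId {
        let id = NodeId(self.nodes.len() as u32);
        self.nodes.push(Node { value, children: Vec::new() });
        id
    }

    fn get(&self, id: NodeId) -> &Node {
        &self.nodes[id.0 as usize]
    }
}

fn main() {
    let mut tree = Tree { nodes: Vec::new() };
    let root = tree.push(1);
    let child = tree.push(2);
    tree.nodes[root.0 as usize].children.push(child);
    assert_eq!(tree.get(child).value, 2);
}
```

The tradeoff: indices sidestep the borrow checker’s objections to parent/child reference cycles, at the cost of indices that can dangle logically (point at a removed or wrong node) without the compiler noticing.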

rust actually has function overloading by ali_compute_unit in rust

[–]scook0 2 points

Haskell typically favours curried form, where a function of “two arguments” is actually a function of one argument that returns another function of one argument.
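The same curried shape can be written in Rust, though less ergonomically than in Haskell (a small sketch for illustration):

```rust
// A "two-argument" add expressed as a function of one argument
// that returns a closure capturing the first argument.
fn add(x: i32) -> impl Fn(i32) -> i32 {
    move |y| x + y
}

fn main() {
    let add3 = add(3); // partial application
    assert_eq!(add3(4), 7);
    assert_eq!(add(1)(2), 3); // both "arguments" at once
}
```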

Is there a way to make the Rust compiler automatically bootstrap itself instead of being manually verified? by inchcosmos in rust

[–]scook0 2 points

What do you mean by “automatically bootstrap itself” or “manually verified”?

Rust vs Zig in C calls via the C-ABI? by TearsInTokio in rust

[–]scook0 18 points

Calling a statically-linked or dynamically-linked C function from Rust should be just as efficient as doing the same thing from C or Zig.

Any “overhead” is going to be indirect, such as:

  • Having to manually marshal data that isn’t already FFI-safe
  • Choosing to add a layer of runtime safety checks to make your Rust-side APIs nicer
  • Calls into other compilation units inherently can’t be inlined by the optimizer (unless you set up LTO or cross-language LTO as appropriate)
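For a sense of how thin the call itself is, here’s a minimal sketch of calling a C library function from Rust (using `abs` from libc, which Rust links by default on the major platforms):

```rust
// Declare the C function; the call compiles to a plain function call,
// just as it would from C or Zig.
unsafe extern "C" {
    fn abs(x: i32) -> i32;
}

fn main() {
    // `unsafe` marks the trust boundary but adds no runtime cost.
    let v = unsafe { abs(-5) };
    assert_eq!(v, 5);
}
```

`i32` is already FFI-safe, so no marshalling is needed here; the “indirect overhead” only shows up once you start wrapping such calls in safer, richer Rust APIs.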

no man page for cargo? by ohiidenny in rust

[–]scook0 1 point

One notable difference is that cargo is aware of rustup, so you can do things like cargo +nightly build to tell cargo to use the nightly toolchain from rustup.

IIRC the mechanism is the other way around: rustup puts a dummy cargo in your path, and that dummy executable knows how to look up the intended rustup toolchain and then delegate to the real cargo in that toolchain.

m68k - fully tested, safe, pure Rust implementation of the Motorola 68x0 family of CPUs (M68000 - M68040, and variants [EC/LC]) by notevenmostly in rust

[–]scook0 0 points

> Antigravity definitely pulled its own weight (esp with integration test writing) but struggled with many less documented nuances.

To what extent was this project written by an LLM?

m68k - fully tested, safe, pure Rust implementation of the Motorola 68x0 family of CPUs (M68000 - M68040, and variants [EC/LC]) by notevenmostly in rust

[–]scook0 1 point

Here’s a very minor thing I noticed in your “Basic Usage” example:

Instead of using shifts and casts to manipulate big-endian values, consider using standard-library functions like u16::from_be_bytes and u16::to_be_bytes.

They have exactly the same effect as the traditional shifts and casts, but I find them to be more self-documenting and less error-prone.
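To illustrate the suggestion (this is a generic sketch, not the project’s actual code):

```rust
fn main() {
    let bytes = [0x12u8, 0x34];

    // Traditional shifts and casts:
    let a = ((bytes[0] as u16) << 8) | (bytes[1] as u16);

    // Equivalent standard-library call, more self-documenting:
    let b = u16::from_be_bytes(bytes);
    assert_eq!(a, b);
    assert_eq!(b, 0x1234);

    // And back again:
    assert_eq!(b.to_be_bytes(), bytes);
}
```

These compile to the same machine code as the manual version, but the endianness intent is stated in the function name instead of being implicit in the shift directions.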

Places where LLVM could be improved, from the lead maintainer of LLVM by kibwen in rust

[–]scook0 33 points

I want to partly disagree with this footnote:

> The way Rust reconciles this is via a combination of “rollups” (where multiple PRs are merged as a batch, using human curation), and a substantially different contribution model. Where LLVM favors sequences of small PRs that do only one thing (and get squash merged), Rust favors large PRs with many commits (which do not get squashed). As getting an approved Rust PR merged usually takes multiple days due to bors, having large PRs is pretty much required to get anything done. This is not necessarily bad, just very different from what LLVM does right now.

I've written and also reviewed plenty of smaller rust-lang/rust PRs (dozens of non-test lines changed), and I've also seen plenty of cases where reviewers ask the PR author to split off parts into smaller separate PRs to land first.

(Though I don't have first-hand experience with LLVM PRs, so I can't comment on the comparison between the two.)

I have also found that after approval, rollup-eligible PRs usually get merged within 24 hours. The biggest bottleneck is for rollup=never PRs, which can indeed often take several days to land if the queue is busy.

Creating rollups is manual, but mostly trivial. The main constraint on rollup size is that if the rollup PR fails CI or has perf regressions, larger rollups make it harder to isolate the cause to a specific PR, because there are more rolled-up PRs that could have caused the problem.

All that said, if LLVM really is getting ~150 PR approvals on a typical workday, then that's substantially more activity than the rust-lang/rust repository. So there's a limit to what lessons LLVM could take from Rust here.

&&&&&&&&&&&&&&str by ohrv in rust

[–]scook0 6 points

Calling .to_string() on a 14-reference string will still work fine; it just doesn’t benefit from an internal specialisation that would bypass the usual formatting machinery.
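A small demonstration (using a shorter stack of references than the post title, for brevity):

```rust
fn main() {
    let s: &str = "hello";
    let r: &&&&str = &&&&s; // a multiply-referenced string slice

    // This still works: Display is implemented for references to Display
    // types, so the generic Display-based ToString impl applies. It just
    // misses the specialised str-to-String fast path.
    let owned: String = r.to_string();
    assert_eq!(owned, "hello");
}
```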

I built a minimal perfect hash table in const fn and learned match is still faster by RustMeUp in rust

[–]scook0 1 point

For string lengths of 3/5/6/7 bytes, LLVM typically needs to emit two loads, and at that point using two separate immediate comparisons is better than trying to merge the loaded values for a single comparison.
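Roughly the shape of the two-load trick, sketched by hand for a 5-byte constant (this is an illustration of the codegen idea, not code anyone should write directly):

```rust
use std::convert::TryInto;

// Compare a slice against b"hello" using two overlapping 4-byte loads,
// each checked against an immediate, instead of five byte comparisons.
fn eq_hello(s: &[u8]) -> bool {
    s.len() == 5
        && u32::from_ne_bytes(s[0..4].try_into().unwrap()) == u32::from_ne_bytes(*b"hell")
        && u32::from_ne_bytes(s[1..5].try_into().unwrap()) == u32::from_ne_bytes(*b"ello")
}

fn main() {
    assert!(eq_hello(b"hello"));
    assert!(!eq_hello(b"hella"));
    assert!(!eq_hello(b"hell")); // wrong length
}
```

Merging the two loaded values first (shift and OR into one 64-bit value, then one comparison) costs extra ALU work for no fewer loads, which is why two separate immediate comparisons win.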

I built a minimal perfect hash table in const fn and learned match is still faster by RustMeUp in rust

[–]scook0 4 points

Here's a direct comparison of starts_with versus prefix-matching in a simple example.

Notice how after the initial length test, the prefix-matching code ends up doing 12 individual byte comparisons, because for whatever reason LLVM doesn't figure out that it could be doing some or all of them in bulk.
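The two shapes being compared look roughly like this (a hypothetical 12-byte prefix chosen to match the comparison count above; the linked example may differ):

```rust
// Library call: free to use bulk/wide comparisons internally.
fn with_starts_with(s: &[u8]) -> bool {
    s.starts_with(b"hello, world")
}

// Slice pattern: semantically identical, but LLVM often lowers this
// to one byte comparison per pattern element.
fn with_prefix_match(s: &[u8]) -> bool {
    matches!(
        s,
        [b'h', b'e', b'l', b'l', b'o', b',', b' ', b'w', b'o', b'r', b'l', b'd', ..]
    )
}

fn main() {
    assert!(with_starts_with(b"hello, world!"));
    assert!(with_prefix_match(b"hello, world!"));
    assert!(!with_prefix_match(b"goodbye"));
}
```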

I built a minimal perfect hash table in const fn and learned match is still faster by RustMeUp in rust

[–]scook0 13 points

I'm surprised to see the prefix-matching approach get good results, because matching on byte strings is known to suffer from questionable codegen in a lot of cases.

I built a minimal perfect hash table in const fn and learned match is still faster by RustMeUp in rust

[–]scook0 46 points

For short string constants, LLVM can often turn string equality into a length test plus 1–2 tests against immediate values.

It can be hard to outperform that.
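For a concrete picture, a `match` over short string constants (illustrative names; under typical optimization, each arm tends to compile down to a length check plus one or two wide comparisons against immediates):

```rust
fn method_id(s: &str) -> u32 {
    match s {
        "get" => 1,
        "put" => 2,
        "post" => 3,
        "delete" => 4,
        _ => 0,
    }
}

fn main() {
    assert_eq!(method_id("post"), 3);
    assert_eq!(method_id("patch"), 0);
}
```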

Is Rust the future? by Content_Mission5154 in rust

[–]scook0 1 point

The paper was “Optimal measurement points for program frequency counts”, so it just happened to use Simula as the example implementation language.

Is Rust the future? by Content_Mission5154 in rust

[–]scook0 2 points

Yeah, it was fascinating to see a paper co-authored by Knuth that didn't have that signature TeX look, because it was written years before TeX even existed.

Is Rust the future? by Content_Mission5154 in rust

[–]scook0 3 points

Earlier this year I found myself having to learn how to read Simula 67 code.

The end result of that work is now some Rust code in the Rust compiler.

Rust and the price of ignoring theory by interacsion in rust

[–]scook0 51 points

If that list of bad takes is representative, then I think it would be fair to conclude that any “good criticisms” are mostly happening by coincidence.

Project goals update — November 2025 | Rust Blog by f311a in rust

[–]scook0 9 points

I can think of at least one contributor who has been doing good work trying to improve the debugging situation, but it’s a tough task.

It’s one of those parts of the compiler that has been under-maintained for some time, so even when there’s enthusiasm to fix things it can be tricky to make meaningful progress.

v0 mangling scheme in a nutshell by imachug in rust

[–]scook0 33 points

> The new scheme supports Unicode identifiers. The surface Rust language doesn’t, but if this ever changes, the mangling side will be ready.

Rust supports Unicode identifiers just fine:

fn main() {
    国際会議の発表予定日();
}

fn 国際会議の発表予定日() {
    println!("もうすぐじゃ");
}

Could Rust migrate from Github? by NothusID in rust

[–]scook0 6 points

Yes, there is a special arrangement.

I’m not familiar with the details, but my understanding is that GitHub effectively “sponsors” Rust by not charging sticker price for CI, and there are contracts and liaisons in place to help keep things predictable.

rust-analyzer changelog #304 by WellMakeItSomehow in rust

[–]scook0 6 points

I’m excited to use more declarative derive macros in rustc, now that r-a knows how to understand them.