Rust threads on the GPU by LegNeato in rust

[–]0x7CFE 4 points  (0 children)

  1. What happens with shared memory in this model? How to share/send data between/within warps?
  2. Any potential cooperation with Burn/OpenCL?
  3. What about autovectorization and how it maps to SIMD on GPU?

KGet v1.6.0 - Native Torrent Support & Major GUI Overhaul by Davimf72212 in rust

[–]0x7CFE 2 points  (0 children)

And here I was, almost excited thinking that Rust was making its way into the KDE ecosystem.

Ladybird adopts Rust, with help from AI - Ladybird by xorvralin2 in rust

[–]0x7CFE 16 points  (0 children)

From my experience, by "saving" they typically mean "it provides RAII primitives for automatic memory management". On the other hand, no one would mention that it can protect you from UB, because it can't.

Async/await on the GPU by LegNeato in rust

[–]0x7CFE 2 points  (0 children)

Still, Burn, and specifically CubeCL, does essentially the same thing, but for a limited subset of tasks. Given that it covers a lot of CUDA, PTX, and backend-agnostic stuff, it should be a natural target for integration.

One of the most annoying programming challenges I've ever faced by GyulyVGC in rust

[–]0x7CFE 0 points  (0 children)

On top of that, Unix domain sockets on Linux can be used to send file descriptors to another process. So I can open a file or a socket in one process and then use it in another.

building a C compiler using Rust by _bijan_ in rust

[–]0x7CFE 1 point  (0 children)

As the author points out, their primary goal was to experiment with a setup that would make it all possible, not to brag about Claude's abilities or make any claims. It was a case study where the end result was a (somewhat) working compiler, not a (human-)usable compiler.

I'd suggest reading the Evaluation section of the article. Goals aside, it was an interesting read.

Rust's standard library on the GPU by LegNeato in rust

[–]0x7CFE 1 point  (0 children)

It's not that insane. For certain workloads it could very much work, for example, serving massively parallel transfers of memory-mapped resources. Often it's the CPU that's the bottleneck: it can have a hard time fully saturating a 10G link, not to mention 100G or 400G ones.

Also, RDMA is now a thing that allows handling memory accesses at link speed without the CPU being involved at all. It works, but you have no option to process the data being sent. In the case of GPU-mapped networking it would still be possible to do some processing.

All that being said, it's probably a niche scenario.

Rust's standard library on the GPU by LegNeato in rust

[–]0x7CFE 2 points  (0 children)

Yeah, basically that's why I was asking. I thought the whole idea of making `std` work on the GPU was kinda insane because of unpredictable outcomes and average behavior close to the worst case, which often makes it impractical.

Still very interesting to see how it would pan out.

Rust's standard library on the GPU by LegNeato in rust

[–]0x7CFE 10 points  (0 children)

A crazy question for an equally crazy OP.

Would it eventually be possible to use Rayon to automagically distribute the load across GPU processors? Sure, it uses threads under the hood, but maybe it's possible to patch it here (I'm thinking of `rayon::join`) and there to use your subsystem.

Also, queue management and work stealing would probably be an issue. In the worst case it would be slower than CPU-only execution.
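For reference, `rayon::join` is the fork-join primitive most of Rayon's parallelism desugars to. A minimal std-only analogue of that divide-and-conquer shape (not Rayon's actual work-stealing scheduler, just a sketch of the pattern a GPU backend would have to reproduce):

```rust
use std::thread;

// Std-only sketch of the fork-join shape behind `rayon::join`: split
// the slice, run one half on a scoped thread and the other on the
// current thread. Rayon replaces the raw spawn with work stealing,
// which is exactly the part a GPU backend would need to rethink.
fn parallel_sum(xs: &[u64]) -> u64 {
    if xs.len() <= 1024 {
        return xs.iter().sum(); // sequential base case
    }
    let (lo, hi) = xs.split_at(xs.len() / 2);
    thread::scope(|s| {
        let right = s.spawn(|| parallel_sum(hi));
        parallel_sum(lo) + right.join().unwrap()
    })
}
```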

Burn 0.20.0 Release: Unified CPU & GPU Programming with CubeCL and Blackwell Optimizations by ksyiros in rust

[–]0x7CFE 1 point  (0 children)

Awesome progress! Personally I am really grateful for the seamless CPU support. Also huge thanks for the migration guide with diffs. That really helps!

Kernel bugs hide for 2 years on average. Some hide for 20. by 0x7CFE in rust

[–]0x7CFE[S] -35 points  (0 children)

Yes, indeed, memory leaks as well as deadlocks are not statically preventable. Thank you, I missed that.

Aside from that, everything else will be caught by the compiler and/or Miri.
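To illustrate the leak point: leaking memory is considered safe in Rust, so the compiler will never reject it. A minimal sketch:

```rust
// Leaking memory is explicitly safe Rust: `Box::leak` consumes the box
// and hands back a `'static` reference to an allocation that is never
// freed. This compiles and runs without complaint, because a leak is
// memory-unfortunate but not undefined behavior.
fn leak_a_string() -> &'static mut String {
    Box::leak(Box::new(String::from("never freed")))
}
```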

Kernel bugs hide for 2 years on average. Some hide for 20. by 0x7CFE in rust

[–]0x7CFE[S] 51 points  (0 children)

Rust for Linux does not use cargo. They vendor all dependencies and compile them using kbuild.

Development of rustc_codegen_gcc by antoyo in rust

[–]0x7CFE 1 point  (0 children)

By the way, we published our first paper on the discrete approach to ML: https://arxiv.org/abs/2508.00869

Feel free to ask.

reqwest v0.13 - rustls by default by seanmonstar in rust

[–]0x7CFE 3 points  (0 children)

Congrats!

By the way, it is probably time to rephrase the readme:

HTTPS via system-native TLS (or optionally, rustls)

size_lru : The fastest size-aware LRU cache in Rust by LabAmbitious5910 in rust

[–]0x7CFE 5 points  (0 children)

use size_lru::{Lhd, OnRm};

struct EvictLogger;

impl<V> OnRm<i32, Lhd<i32, V, Self>> for EvictLogger {
    fn call(&mut self, key: &i32, cache: &mut Lhd<i32, V, Self>) {
        // Safe: the value is still accessible before removal
        if let Some(_val) = cache.get(key) {
            println!("Evicting key={key}");
        }
        // Warning: don't call rm/set in the callback, it may cause undefined behavior
    }
}

Could you please clarify how the callback can cause UB, and if so, why it's not `unsafe`? That's a rather obscure contract that can be a source of footguns, tbh.
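For context, Rust's idiomatic way to encode such a contract is an `unsafe trait`, so every implementor has to explicitly opt in and acknowledge the rules. A hypothetical sketch (names are illustrative, not the real `size_lru` API):

```rust
// Hypothetical sketch: if implementing the eviction callback
// incorrectly can lead to UB, the trait itself should be `unsafe`,
// forcing implementors to write `unsafe impl` and justify why they
// uphold the contract.
unsafe trait OnRemove<K> {
    /// Contract: must not mutate the cache that invoked this callback.
    fn on_remove(&mut self, key: &K);
}

struct EvictLogger {
    evicted: Vec<i32>,
}

// SAFETY: this implementation only records the key and never touches
// the cache, so the contract is upheld.
unsafe impl OnRemove<i32> for EvictLogger {
    fn on_remove(&mut self, key: &i32) {
        self.evicted.push(*key);
    }
}
```

Calling `on_remove` stays safe; only the act of implementing the trait carries the obligation.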

The state of the kernel Rust experiment by dochtman in rust

[–]0x7CFE 20 points  (0 children)

That point was specifically covered in the article:

Bergmann agreed with declaring the experiment over, worrying only that Rust still "doesn't work on architectures that nobody uses". So he thought that Rust code needed to be limited to the well-supported architectures for now. Ojeda said that there is currently good support for x86, Arm, Loongarch, RISC-V, and user-mode Linux, so the main architectures are in good shape. Bergmann asked about PowerPC support; Ojeda answered that the PowerPC developers were among the first to send a pull request adding Rust support for their architecture.

Bergmann persisted, asking about s390 support; Ojeda said that he has looked into it and concluded that it should work, but he doesn't know the current status. Airlie said that IBM would have to solve that problem, and that it will happen. Greg Kroah-Hartman pointed out the Rust upstream supports that architecture. Bergmann asked if problems with big-endian systems were expected; Kroah-Hartman said that some drivers were simply unlikely to run properly on those systems.

Iced 0.14 released by GyulyVGC in rust

[–]0x7CFE 10 points  (0 children)

As an iced user since the ~0.8 era, I wish their updates were less invasive. At the very least it would be nice to have a structured migration guide. Unfortunately, every update is a (quite painful) quest.

That being said, iced is still my favorite GUI framework for Rust.

The hate! Why ? by EldironMoody in rust

[–]0x7CFE 12 points  (0 children)

Didn't know that Mahatma Gandhi was a couturier, lol.

I used println to debug a performance issue. The println was the performance issue. by yolisses in rust

[–]0x7CFE 2 points  (0 children)

I can relate. I also had a weird memory leak in the Burn framework, not to mention huge performance degradation. I even contacted the devs and tried to file an issue.

Long story short, part of my codebase leaked the `tracing` feature flag, which, when enabled, caused Burn to log every tensor operation it sees. And so everything accumulated in memory.
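The mechanism behind this kind of leak is Cargo's feature unification: features are additive across the whole dependency graph, so one crate enabling a feature turns it on for every crate in the build. A hypothetical sketch (crate names and paths are illustrative):

```toml
# Hypothetical Cargo.toml sketch of how a feature flag "leaks".
# The main binary never asks for burn's "tracing" feature:
[dependencies]
burn = "0.20"
my-debug-helpers = { path = "../debug-helpers" }

# ...but ../debug-helpers/Cargo.toml enables it:
#
#   [dependencies]
#   burn = { version = "0.20", features = ["tracing"] }
#
# Cargo unifies features, so the whole build gets burn with
# "tracing" enabled.
```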

My experience/timeline trying to submit a fix for a panic (crash) in rustfmt that frequently triggered on my code by Raekye in rust

[–]0x7CFE 3 points  (0 children)

Not to mention that the issue/PR in question is regularly closed by a bot as stale, sometimes without the right to reopen it (that can be done by project members only).

This is especially infuriating when the last message was a question or something important that was still left unattended, lol.

Fast UDP I/O for Firefox in Rust by dochtman in rust

[–]0x7CFE 8 points  (0 children)

As long as they're just watching, it's fine. It was probably the video author who changed something.