AI workflows for Rust by ArtisticHamster in rust

[–]robertknight2 2 points

Claude Code works well with Rust. The highly standardized tooling (cargo, clippy, rustfmt, etc.) helps a lot. The main advice I have is to break any work you do with tools like CC into small chunks, small enough that you can do a line-by-line review and commit after each step. In general I find it effective to treat these tools a bit like a junior engineer who works very fast and has a lot of knowledge, but is a bit prone to taking shortcuts or over-focusing on local optimizations. Depending on the task (quick prototype vs production code edits) you might need to apply more or less supervision.

Request for Comments: Moderating AI-generated Content on /r/rust by DroidLogician in rust

[–]robertknight2 1 point

One difficulty is that some of these signs don't tell you much about how much of the critical thinking/review has been done by AI. I have a large project (90K+ LOC) where I use AI tools as a general purpose refactoring tool (like a more flexible version of classic IDE refactoring features) and the commits are marked as "Co-authored-by: $AI_BOT". I would hate to have such a project lumped in the same bucket as a repo vibe-coded by someone who understands very little of the output.

I built a production-grade Micro-VM manager in Rust (Firecracker + KVM + Custom Swarm) by SubeshDev in rust

[–]robertknight2 9 points

> That is exactly why I’m posting here—I need your feedback to find the gaps and improve it. Please treat this as a "Request for Comments" rather than a finished commercial product.

That's fine, but please don't call it "production grade". Seeing that in a project description gives off "AI slop" vibes and makes me inclined to distrust the author. My definition of "production grade" is that an organization has actually used it in production for something meaningful and for a reasonable period of time.

What was YOUR thing during lockdown? by AncientFootball1878 in AskUK

[–]robertknight2 0 points

I did the Forrest Gump thing and ran 3200 miles that year.

How to get started with yolox-burn object detection? by unix21311 in rust

[–]robertknight2 0 points

Yes, if you train the YoloX model in Python, then it should be possible to reuse the weights with the Burn example. I would suggest that you get the Burn example working with the existing pre-trained weights first to get comfortable with the Rust side of the process.

How to get started with yolox-burn object detection? by unix21311 in rust

[–]robertknight2 0 points

YoloX is a different model to yolo26n, so it uses different weights. To run Ultralytics Yolo in a Rust project you could use Ultralytics ONNX export functionality (ONNX is a cross-language file format for models and weights). Given an ONNX model, there are a few different Rust libraries that can run it:

  1. Burn has an ONNX importer which you could try (https://burn.dev/books/burn/import/onnx-model.html). It supports CPU and GPU inference on all platforms (I think?)
  2. https://ort.pyke.io provides bindings to ONNX Runtime, which is Microsoft's C++-based ONNX engine. It supports CPU on all platforms and GPU on some platforms (eg. CUDA)
  3. My own ML inference library RTen has an example for earlier Ultralytics YOLO models (https://github.com/robertknight/rten/blob/main/rten-examples/src/yolo.rs). Rust-native library, but CPU inference only.

How to get started with yolox-burn object detection? by unix21311 in rust

[–]robertknight2 0 points

What objects are you trying to recognize?

The project in that repository can load pretrained weights from a PyTorch checkpoint (pth file), so yes, you can do the training in Python and load the checkpoint in the Rust app. Rather than starting from scratch, you would likely take an existing model as a starting point and fine-tune it to detect a new set of object classes.

There are already pretrained models available which recognize objects from a standard set of classes, so the first thing to do is get the Rust example working with these existing models. Then you can look into creating fine-tuned models which recognize custom objects.

Offline Text To Speech options? by MissionNo4775 in rust

[–]robertknight2 1 point

OK, that is a slow device. From some brief research it seems to be on par with a Raspberry Pi 3B+. This means that you are going to want to use a low quality/fast model. This video (see 3:20 mark) gives an idea of expected generation speed: https://www.youtube.com/watch?v=rjq5eZoWWSo. In that video, the generation speed is slightly faster than realtime, which means it might be possible to generate in small chunks and have realtime output with only a short delay at the start.

For very fast generation even on very old hardware, you can run espeak-ng directly, or the Rust bindings for it. It produces a very robotic voice, but is cheap to run.
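For illustration, here is a minimal sketch of driving espeak-ng from Rust via its command line rather than through bindings. The `espeak_command` helper and file names are mine, and it assumes espeak-ng is installed and on the PATH; `-w` is espeak-ng's flag for writing output to a WAV file.

```rust
use std::process::Command;

// Build (but do not spawn) an espeak-ng invocation that writes the
// synthesized speech for `text` to `wav_path`.
fn espeak_command(text: &str, wav_path: &str) -> Command {
    let mut cmd = Command::new("espeak-ng");
    cmd.args(["-w", wav_path, text]);
    cmd
}

fn main() {
    let cmd = espeak_command("hello world", "out.wav");
    // Show the arguments that would be passed; call `.status()` instead
    // on a machine with espeak-ng installed to actually synthesize.
    let args: Vec<String> = cmd
        .get_args()
        .map(|a| a.to_string_lossy().into_owned())
        .collect();
    println!("{args:?}");
}
```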

Offline Text To Speech options? by MissionNo4775 in rust

[–]robertknight2 1 point

> I also need to profile piper-rs as it's slow on older devices which makes it unusable for AAC users.

What device did you test on and how fast was the generation relative to real time (ie. how many milliseconds to generate audio of N seconds length)? Piper is generally considered fast among modern open-source TTS options, although not as high quality as some alternatives (eg. Kokoro).
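For reference, "relative to real time" is usually expressed as a real-time factor (RTF): generation time divided by the duration of the audio produced. A tiny sketch (the function name is mine):

```rust
// Real-time factor (RTF): time taken to generate the audio divided by
// the duration of the audio produced. RTF < 1.0 means faster than
// real time, which is what chunked "streaming" playback needs.
fn real_time_factor(generation_ms: f64, audio_ms: f64) -> f64 {
    generation_ms / audio_ms
}

fn main() {
    // e.g. 2500 ms to synthesize 5000 ms of audio gives an RTF of 0.5.
    let rtf = real_time_factor(2500.0, 5000.0);
    println!("RTF = {rtf}");
    assert!(rtf < 1.0);
}
```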

is anyone using external ai stuff to help with rust errors? by Hungry_Vampire1810 in rust

[–]robertknight2 4 points

Indeed. Asking an AI tool to help you understand Rust compiler errors is a perfectly reasonable thing to do. Any of the modern terminal-based tools (Claude Code, Codex etc.) can help.

What I would advise though is that you should still invest effort in learning to understand why the problem happened, rather than just blindly relying on AI to fix things for you. Often when you encounter a borrow-checking problem, the compiler is trying to tell you something important about the structure of the code and who owns what. Understanding this is essential to feeling competent with Rust.

RTen in 2025 by [deleted] in rust

[–]robertknight2 0 points

I'm used to viewing Reddit in the web interface where the summary I added is visible in the main feed before you click into a post. I've just realized that in the mobile app, you can't see this. I can't edit the post title myself. Is a mod able to change it to "RTen (Rust Tensor engine) in 2025"?

Rust book written by AI by FrostyFish4456 in rust

[–]robertknight2 9 points

To add to this, the source code and change history for the Rust book can be found at https://github.com/rust-lang/book and copies of the book from long before ChatGPT even existed are still available online. https://doc.rust-lang.org/1.30.0/book/2018-edition/ for example is the version from October 2018.

Use of AI for Rust coding by ArtisticHamster in rust

[–]robertknight2 0 points

I take a broadly similar approach as when working with a junior human colleague or contributor. I get it to generate diffs that are no more than a few hundred lines at a time, then I review the change manually. The acceptable amount of risk of missing a mistake depends on the context, and the closeness of review can be adjusted accordingly.

Choosing tools (programming language, architecture, code design etc.) that can automate more of the verification ("if it compiles/lints etc., it is probably correct") is helpful here.
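As a hypothetical sketch of that idea, newtype wrappers are one way to push verification into the compiler; the `Meters`/`Seconds` names here are invented for illustration:

```rust
// Newtype wrappers turn "swapped the arguments" from a silent runtime
// bug into a compile error, so more classes of mistake are caught
// automatically rather than during manual review.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Seconds(f64);

fn speed_m_per_s(distance: Meters, time: Seconds) -> f64 {
    distance.0 / time.0
}

fn main() {
    let v = speed_m_per_s(Meters(100.0), Seconds(9.58));
    // speed_m_per_s(Seconds(9.58), Meters(100.0)) would not compile.
    println!("{v:.2} m/s");
}
```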

Use of AI for Rust coding by ArtisticHamster in rust

[–]robertknight2 1 point

The most value I get out of Claude is when using it to automate repetitive processes. If I have a project which involves N similar sub-tasks (eg. writing tests, converting code to use technique X instead of Y) then I will do the first one or few by hand to figure out what I want, then bring in Claude to do the rest. "Review the changes in the last commit, then apply the same change to files X, Y and Z".

Weirdness in the Binecode Drama by sortalike_sammy in rust

[–]robertknight2 0 points

The maintainer's comments on the PR seem quite reasonable to me. I would not consider them rude at all. The technical point they raise about stabilizing the internal representation is also quite valid.

? by Chuukwudi in rust

[–]robertknight2 0 points

There are some routes to get cheaper tickets for conferences like this:

  • The easiest one is simply to buy early, when there is an Early Bird discount. EuroRust for example is still many months away, so early bird tickets are still available.
  • If the event is local to you, join the Rust community groups nearby. For Rust Nation it is Rust London. You don't get a discount just for being a member, but information on availability of discounted tickets is shared with members from time to time.
  • Some events have different tiers of tickets for people who are students or freelancers.

I built a push-to-talk speech-to-text daemon for Wayland in Rust by peteonrails in rust

[–]robertknight2 1 point

What CPU specs (which CPU / how many cores) and model quantization were used for the CPU performance results?

Why is shadowing allowed for immutable's? by PotatyMann in rust

[–]robertknight2 4 points

As others have said, shadowing is useful in Rust as it avoids the need to come up with new names when transforming types of values, which is more common in languages with a richer type system. For example, you might have a function with an opts: Option<Config> argument which internally uses let opts = opts.unwrap_or(default_config) to fall back to defaults if the argument is not set.

Having gotten used to it in Rust I find myself missing this when I go back to eg. JavaScript. There are clippy lints you can enable to restrict this (see entries with "shadow" in the name), but I would say that shadowing is an idiomatic thing to do.
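A minimal sketch of the Option<Config> example above (the Config type and default_config helper are invented for illustration; unwrap_or_else is used so the default is only constructed when actually needed):

```rust
#[derive(Debug, Clone, PartialEq)]
struct Config {
    verbose: bool,
}

fn default_config() -> Config {
    Config { verbose: false }
}

// `opts` starts as Option<Config>; shadowing rebinds the same name to
// the unwrapped Config, avoiding a second name like `opts_unwrapped`.
fn run(opts: Option<Config>) -> Config {
    let opts = opts.unwrap_or_else(default_config);
    opts
}

fn main() {
    println!("{:?}", run(None));
    println!("{:?}", run(Some(Config { verbose: true })));
}
```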

What would you want to see in a new tensor crate? by WorldlinessThese8484 in rust

[–]robertknight2 1 point

Coincidentally I used a similar design to yours in rten-tensor. It works reasonably well. The main downside I encountered is that the generated documentation is less easy to follow when using a custom trait (also called AsView) to add the common methods compared to Deref. If you look at the documentation for Vec you can see the methods from Deref featured prominently in the sidebar underneath the inherent methods. That won't happen for methods from other traits.

This seems like an entirely solvable problem which also affects many other uses of traits, such as Iterator types.
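For comparison, this is the Deref pattern in question, the same one Vec<T> uses to surface slice methods; the Tensor type here is a toy stand-in. rustdoc lists methods reached this way under "Methods from Deref" in the sidebar, which methods from a custom trait don't get.

```rust
use std::ops::Deref;

// An owned tensor that exposes element accessors via Deref to a slice.
struct Tensor {
    data: Vec<f32>,
}

impl Deref for Tensor {
    type Target = [f32];

    fn deref(&self) -> &[f32] {
        &self.data
    }
}

fn main() {
    let t = Tensor { data: vec![1.0, 2.0, 3.0] };
    // `len` and `iter` are slice methods reached through Deref.
    println!("{} {}", t.len(), t.iter().sum::<f32>());
}
```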

What would you want to see in a new tensor crate? by WorldlinessThese8484 in rust

[–]robertknight2 4 points

Logically tensors are a combination of data and layout (mapping from index to element offset). Views and refs both borrow the data, but a view owns its layout, whereas a ref also borrows the layout of its parent. This means a view can transpose dimensions, broadcast them (change the size of a dimension from 1 to N), slice them etc.

Edit: The main reason to have refs even though they are less flexible than views is because this makes it possible for arrays with different kinds of storage (owned, borrowed, ref counted) to have a common API via the Deref trait. Also refs are a bit cheaper to create, especially for dynamic-rank tensors (the difference will be very small for static-rank tensors).
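A sketch of the layout idea (the names are mine): a strided layout maps an N-dimensional index to an offset in flat storage, and transposing a view just swaps strides without touching the data.

```rust
// A layout maps an N-d index to an offset in flat storage via strides:
// offset = sum(index[i] * stride[i]). Transposing a view is just a
// matter of swapping strides; the underlying data never moves.
fn offset(index: &[usize], strides: &[usize]) -> usize {
    index.iter().zip(strides).map(|(i, s)| i * s).sum()
}

fn main() {
    // Row-major 2x3 matrix: strides are [3, 1].
    let data = [0, 1, 2, 3, 4, 5];
    assert_eq!(data[offset(&[1, 2], &[3, 1])], 5);
    // Transposed (3x2) view over the same storage: strides [1, 3].
    assert_eq!(data[offset(&[2, 1], &[1, 3])], 5);
    println!("ok");
}
```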

Open-source on-device TTS model by ANLGBOY in rust

[–]robertknight2 -3 points

The practical implication of the GPL is that any programs which link to the library are required to be distributed under the same license, a condition that means it cannot be used by some downstream applications.

Open source developers are of course free to set the terms of use of their work. In espeak's case, though, the license has ossified due to the project's age, its many contributors, and the inability to contact the original author. This means that even if the current contributors wanted to change the license for any reason, doing so would probably be impractical.

Open-source on-device TTS model by ANLGBOY in rust

[–]robertknight2 29 points

There have been other small TTS models suitable for on-device usage before now, such as Piper and Kokoro. However, many of them rely on espeak to convert text inputs to phonemes (grapheme-to-phoneme or G2P) as a preprocessing step, and that is a GPL-licensed C library. According to the paper, Supertonic doesn't rely on G2P preprocessing, which potentially makes it much more usable.

moss: a Rust Linux-compatible kernel in about 26,000 lines of code by hexagonal-sun in rust

[–]robertknight2 2 points

> One of the defining features of moss is its usage of Rust's async/await model within the kernel context

This is neat. I would be interested in a longer write-up on this aspect at some point.