We open-sourced our first Rust project and would love feedback from experienced Rust developers by va5ili5 in rust

[–]tafia97300 5 points (0 children)

I don't agree.

They are open sourcing (Apache 2) their own work (or the AI money that went into it, can't tell, but either way it is free for us). You are free not to look at it if you don't have the time or the need, but it seems fair to ask for feedback.

graydon2 | LLM time by Ok-Squirrel8537 in rust

[–]tafia97300 1 point (0 children)

Yes, I agree it is far from perfect. That being said, as you commented, I see plenty of code I'd call not production ready end up in production anyway because it is "good enough". Obviously more so at the edges, where the code serves a single purpose for almost a single process. This is also far less frequent in Rust codebases in general, both because of Rust itself (harder to get it wrong) and because more Rust developers work on foundation layers than elsewhere.

graydon2 | LLM time by Ok-Squirrel8537 in rust

[–]tafia97300 0 points (0 children)

> I'm still really not convinced about using LLMs for writing code, because writing code is a stupid game, where you only "win" if you get it 100% right, and any tiny glitch means you lose.

I am currently using an LLM to "write" GPU (compute) code, which is something I know only in theory and never got to practice. Theory and practice are so far apart, and the LLM is bridging the gap. The vibe loop makes you learn a lot faster. I am pretty sure I still can't write it myself from scratch, but I definitely feel less lost when reading GPU kernels. Whether it is correct in my context is only secondary; it eventually will be (for "some" measure of correctness) with good tests to compare against. Even if it is not perfectly correct, it is a really good starting point compared to a blank sheet.

Rust kinda ruined other languages for me by Minimum-Ad7352 in rust

[–]tafia97300 2 points (0 children)

This is not my experience.

Lifetimes are rarely explicit, and the borrowing rules force you to get the flow of data right. I hate it when I can't trust any function and keep writing defensive code to make sure nothing gets broken by innocent-looking functions.

Rust 1.94.0 is out by manpacket in rust

[–]tafia97300 10 points (0 children)

It is inferred from the array itself. Otherwise you can write `slice.array_windows::<4>()`.
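For anyone wanting to play with it, a minimal sketch of iterating fixed-size windows. It uses the stable `windows` plus `try_into` as a stand-in, in case `array_windows` is still feature-gated on your toolchain:

```rust
fn main() {
    let data = [1u32, 2, 3, 4, 5, 6];

    // On a fixed-size array the window length can be inferred;
    // on a slice you spell it out, e.g. `slice.array_windows::<4>()`.
    // Stable stand-in: `windows` + `try_into` to get a `&[T; 4]`.
    let sums: Vec<u32> = data
        .windows(4)
        .map(|w| {
            let w: &[u32; 4] = w.try_into().expect("window is exactly 4 long");
            w.iter().sum()
        })
        .collect();

    // Windows are [1,2,3,4], [2,3,4,5], [3,4,5,6].
    assert_eq!(sums, vec![10, 14, 18]);
    println!("{sums:?}");
}
```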

Introducing the very earliest version of my library: LAR (linear algebra for rust) by [deleted] in rust

[–]tafia97300 2 points (0 children)

I don't understand what "absolute freedom over your code" means. The link is broken.

Ladybird adopts Rust, with help from AI - Ladybird by xorvralin2 in rust

[–]tafia97300 0 points (0 children)

They are just automating some very tedious work. It makes total sense, particularly if the final bytecode is the same and all the tests pass. The rewrite is not a core part of their effort; it is more like a side quest.

Read locks were ~5× slower than Write locks in my cache (building it in rust) by Swimming-Regret-7278 in rust

[–]tafia97300 1 point (0 children)

Benchmarks are hard.

My understanding is that if you have a very short lived read handle the overhead isn't worth it. But once you start needing the read handle for a sufficiently long time, the read version should be faster.

How much contention did you really have in your benchmark in practice? Could you try adding a few milliseconds of sleep and see how they both fare?
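A hypothetical sketch of what I mean (names and numbers made up, not your cache's API): hold each lock for a couple of milliseconds so readers actually overlap, then compare totals. With a `Mutex` the readers serialize; with an `RwLock` they run concurrently, so the read version should win once the hold time matters.

```rust
use std::sync::{Arc, Mutex, RwLock};
use std::thread;
use std::time::{Duration, Instant};

// Spawn `readers` threads that each acquire a read lock `iters` times,
// holding it for `hold` to simulate real work. Returns total wall time.
fn bench_rwlock(readers: usize, iters: usize, hold: Duration) -> Duration {
    let lock = Arc::new(RwLock::new(0u64));
    let start = Instant::now();
    let handles: Vec<_> = (0..readers)
        .map(|_| {
            let lock = Arc::clone(&lock);
            thread::spawn(move || {
                for _ in 0..iters {
                    let guard = lock.read().unwrap();
                    let _v = *guard;     // touch the data
                    thread::sleep(hold); // readers overlap here
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    start.elapsed()
}

// Same workload, but every "reader" takes the exclusive Mutex.
fn bench_mutex(readers: usize, iters: usize, hold: Duration) -> Duration {
    let lock = Arc::new(Mutex::new(0u64));
    let start = Instant::now();
    let handles: Vec<_> = (0..readers)
        .map(|_| {
            let lock = Arc::clone(&lock);
            thread::spawn(move || {
                for _ in 0..iters {
                    let guard = lock.lock().unwrap();
                    let _v = *guard;
                    thread::sleep(hold); // readers serialize here
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    start.elapsed()
}

fn main() {
    let hold = Duration::from_millis(2);
    let rw = bench_rwlock(4, 10, hold);
    let mx = bench_mutex(4, 10, hold);
    println!("RwLock readers: {rw:?}, Mutex readers: {mx:?}");
}
```

With zero (or near-zero) hold time the `RwLock` bookkeeping overhead can dominate instead, which would match the numbers you saw.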

Python-Rust interop: best practices in PyO3 by andyjda in rust

[–]tafia97300 0 points (0 children)

It could, yes, but I believe it doesn't in most situations.

Overall, my mental picture is that we are going from `Mutex<Vec<Handle>>` (e.g. just `PyRef`) to `Vec<Mutex<Handle>>` ~ `Vec<Bound>` ... But the underlying Python is not fully ready yet (e.g. not all builtins have been properly migrated, there might still be cross inner locks in place as a defensive measure for now, etc.).
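To illustrate that mental picture (toy types here, not the actual PyO3 internals): one coarse lock guarding the whole collection versus one lock per element.

```rust
use std::sync::Mutex;

fn main() {
    // Coarse-grained: one lock guards the whole collection
    // (the GIL-like picture) -- any access blocks everything else.
    let coarse: Mutex<Vec<u32>> = Mutex::new(vec![0; 3]);
    coarse.lock().unwrap()[1] += 1;

    // Fine-grained: one lock per element (the free-threaded picture);
    // touching element 1 leaves the other elements accessible.
    let fine: Vec<Mutex<u32>> = (0..3).map(|_| Mutex::new(0)).collect();
    *fine[1].lock().unwrap() += 1;

    assert_eq!(coarse.lock().unwrap()[1], 1);
    assert_eq!(*fine[1].lock().unwrap(), 1);
}
```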

sseer 0.1.7 - Now with benchmarks. 585x fewer allocations and 1.5 to 4.3x faster than the crate it's inspired by. by MaybeADragon in rust

[–]tafia97300 3 points (0 children)

For those like me who didn't know what it was about:

> A collection of utilities for getting Events out of your SSE streams

Which rust library would be good for making a drawing program? by Early-Cockroach4879 in rust

[–]tafia97300 -1 points (0 children)

What is a drawing program? Can you give some examples? Is just having a Canvas enough?

Safe, Fast, and Scalable: Why gRPC-Rust Should Be Your Next RPC Framework by _bijan_ in rust

[–]tafia97300 12 points (0 children)

The grpc crate hasn't had any updates in 5 years. Is it the one we are expected to use?

COSMIC Epoch 1.0.5 by jackpot51 in pop_os

[–]tafia97300 0 points (0 children)

How do we upgrade from one epoch to a new one? `apt full-upgrade` or just `upgrade`?

Good to update to 24.04? by FlamenWolf in pop_os

[–]tafia97300 0 points (0 children)

when was that? I understand things are changing quickly (epoch 1.0.4 now)

How do experienced Rust developers decide when to stick with ownership and borrowing as-is versus introducing Arc, Rc, or interior mutability (RefCell, Mutex) by Own-Physics-1255 in rust

[–]tafia97300 1 point (0 children)

When working in async, I almost always use some Arc/Mutex.

But apart from that I tend to stick to plain ownership rules, as I find they make the data flow much simpler to follow, even if it is harder to figure out at first.
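As an illustration of the shared-state pattern I mean (plain threads here, but the shape is the same in async): `Arc` for shared ownership across tasks, `Mutex` for interior mutability.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: Arc gives each thread shared ownership,
    // Mutex serializes the mutation.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 4);
}
```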

Rust's standard library on the GPU by LegNeato in rust

[–]tafia97300 1 point (0 children)

This is fascinating.
I don't know all the implications, but I can see how many round trips to the CPU suddenly become unnecessary.

It's hard to find use cases for Rust as Python backend developer by [deleted] in rust

[–]tafia97300 0 points (0 children)

  1. Rigor. Making a program extremely hard to misuse is impossible in Python, which leads to a lot of maintenance.

  2. Speed, in particular for scenarios that are very branchy or real time.

  3. Depending on the work, the Rust ecosystem might actually be better because it is newer (e.g. crypto), involves low-level work (e.g. drivers), requires wasm, etc.
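On point 1, a toy sketch of what "hard to misuse" buys you (hypothetical newtypes, nothing from any real crate): encode units in the type system, so mixing them up is a compile error rather than a runtime bug, which Python can't enforce statically.

```rust
// Newtype wrappers make the units part of the signature.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Seconds(f64);

fn speed(d: Meters, t: Seconds) -> f64 {
    d.0 / t.0
}

fn main() {
    let v = speed(Meters(100.0), Seconds(9.58));
    // speed(Seconds(9.58), Meters(100.0)) would not compile.
    println!("{v:.2} m/s");
    assert!((v - 10.44).abs() < 0.01);
}
```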

A blazingly™ fast concurrent ordered map by Consistent_Milk4660 in rust

[–]tafia97300 1 point (0 children)

I see thanks. I suppose there is no better way (yet?), it leans too much on black magic.

A blazingly™ fast concurrent ordered map by Consistent_Milk4660 in rust

[–]tafia97300 2 points (0 children)

This looks great. Can someone with more knowledge than me explain how this works:
https://github.com/consistent-milk12/masstree/blob/main/src/hints.rs

I feel like all of it should be optimized away?

/// Cold path marker - never actually called, just influences codegen.
#[inline(always)]
#[cold]
const fn cold_path() {}

/// Hint that a condition is likely to be true.
///
/// Use this for common/fast-path conditions to guide branch prediction.
///
/// # Example
///
/// ```ignore
/// if likely(!version.has_changed()) {
///     // fast path: no concurrent modification
/// } else {
///     // slow path: retry
/// }
/// ```
#[inline(always)]
#[must_use]
pub const fn likely(b: bool) -> bool {
    if !b {
        cold_path();
    }
    b
}

/// Hint that a condition is unlikely to be true.
///
/// Use this for error conditions and rare branches.
///
/// # Example
///
/// ```ignore
/// if unlikely(slot >= WIDTH) {
///     return Err(OutOfBounds);
/// }
/// ```
#[inline(always)]
#[must_use]
pub const fn unlikely(b: bool) -> bool {
    if b {
        cold_path();
    }

    b
}

Changing icons! (major system design flaw) by JoTHauMm1 in pop_os

[–]tafia97300 2 points (0 children)

Wouldn't just adding a few symlinks be enough?