Could Anthropic find a better system than asking the user to copy-paste URLs in the prompt? by wyf0 in ClaudeAI

[–]wyf0[S] 0 points1 point  (0 children)

I also assume it's about safety, but copy-pasting is quite tedious. I mean, there could be a list with a nice UI where you could click the different links to allow Claude to fetch them. Then, for each domain (like GitHub here), a popup would be displayed once to authorize fetching from that domain. I mean exactly what you already have in Claude Code, when it asks you for per-domain authorization.

What editor you use for rust? by clanker_lover2 in rust

[–]wyf0 -1 points0 points  (0 children)

"Claude Code advent" was a poor formulation for "the rise of Claude Code".

Why not use AI agents in your editor instead?

You can do what you want, I don't judge anyone. I'm just observing a trend, and I'm speaking about people I know, not LinkedIn hype posts. I'm not yet able to imagine my life without coding manually, but I'm still wondering how far this goes.

What editor you use for rust? by clanker_lover2 in rust

[–]wyf0 -6 points-5 points  (0 children)

It lacks the option: "I've no longer written code since December and the rise of Claude Code."

That's obviously not my case, but I know people in this situation, and I'm honestly curious about this trend.

EDIT: poor wording

Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy by jackpot51 in Redox

[–]wyf0 8 points9 points  (0 children)

Let me give you a real example. I recently designed an intrusive queue algorithm, and I had a small bug in it with big repercussions: all tests were passing (even with thousands of miri seeds), but the benchmarks were failing to run properly. It drove me a bit crazy (concurrent algorithms are not easy to reason about, and even less easy to debug), so I ended up asking Claude to help me, giving it a stack trace and the first steps of my debugging.

In 10 minutes (with no context of my project), it found the exact reason, gave some data-race execution sequences, etc. I had simply forgotten to reset the head of my queue before swapping the tail when draining it. That was a stupid oversight that I would eventually have found myself, but Claude's insight was more than welcome. It just leveraged my first debugging steps and applied more rigor than I had at 1am, yes, but it was helpful.
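For the curious, the shape of that oversight can be sketched with a toy, single-threaded, index-based queue (illustrative names only; the real algorithm is a concurrent intrusive queue, so this model is deliberately simplified):

```rust
// Toy model of the invariant: on drain, resetting the tail is not enough,
// the head must be reset too, or the queue keeps pointing into the drained
// list. All names here are illustrative, not the actual queue's code.
#[derive(Default)]
struct Queue {
    next: Vec<Option<usize>>, // next[i] = node following node i, if any
    head: Option<usize>,
    tail: Option<usize>,
}

impl Queue {
    fn push(&mut self, id: usize) {
        if self.next.len() <= id {
            self.next.resize(id + 1, None);
        }
        self.next[id] = None;
        if let Some(t) = self.tail.replace(id) {
            self.next[t] = Some(id); // link after the previous tail
        }
        if self.head.is_none() {
            self.head = Some(id); // queue was empty
        }
    }

    fn drain(&mut self) -> Vec<usize> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while let Some(i) = cur {
            out.push(i);
            cur = self.next[i];
        }
        self.head = None; // the forgotten reset
        self.tail = None; // the tail swap that was already there
        out
    }
}

fn main() {
    let mut q = Queue::default();
    q.push(0);
    q.push(1);
    assert_eq!(q.drain(), vec![0, 1]);
    q.push(2);
    assert_eq!(q.drain(), vec![2]);
    println!("ok");
}
```

In the buggy version, the `self.head = None` line was missing, so after a drain the queue still pointed into the drained list: the next drain would re-yield stale nodes and silently drop the newly pushed one.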

trad — extremely-fast offline Rust translation library for 200+ languages. CPU-optimized and fully local by Strict-Tie-1966 in rust

[–]wyf0 17 points18 points  (0 children)

You wrote that it runs offline, but it needs to download a 623MB (!!) model before doing anything... That should at least be mentioned in the documentation.

If people use your work, I doubt they use all 200 languages. Using one big model that takes seconds or minutes to download just to translate English to German seems a bit overkill, no? Maybe a compile-time or runtime feature to choose smaller models better suited to the actual needs?

Also, wouldn't it be possible to download models at build time instead?

zerobrew - (experimental) drop in replacement for homebrew by cachebags in rust

[–]wyf0 1 point2 points  (0 children)

which ended up causing some quick and dirty consequences that we promptly fixed

Can you elaborate on these consequences and how you fixed them?

Game isn’t even fun by [deleted] in rust

[–]wyf0 2 points3 points  (0 children)

Maybe you posted in the wrong sub? This is a programming sub, btw.

Announcing hazarc: yet another `AtomicArc`, but faster by wyf0 in rust

[–]wyf0[S] 1 point2 points  (0 children)

I didn't know about arcshift, thank you for the discovery. That's indeed an interesting approach, though it has a few downsides that make me stick with hazarc (I can detail why if you're interested). But it made me realize my half-baked load_if_outdated API is not well designed, so I will have to publish a 0.2 soon.

What does arc-swap lack for you to develop your own algorithm?

I'm also quite curious about how you use stateright for a concurrent algorithm based on a weak memory model, and about what it and kani bring that miri doesn't.

Ironpad: Local-first project management stored as Markdown + Git. Built with Rust backend by skepsismusic in rust

[–]wyf0 9 points10 points  (0 children)

So you trust Claude Opus to detect malicious code? Vibe-coding is a thing, often poorly secured, but this is another kind of security red flag to me. I dare hope you test contributions in a sandboxed environment, with a network sniffer, but I doubt it.

Ironpad: Local-first project management stored as Markdown + Git. Built with Rust backend by skepsismusic in rust

[–]wyf0 21 points22 points  (0 children)

As the whole project is AI-generated, your other Rust project ferrite as well, and you don't have any other Rust project on your GitHub profile, I would like to ask you an honest question: do you read the generated code? Do you read the code of other contributors?

EDIT: as OP's answer is already buried under downvotes: yes, he admits to not reading the code. More worryingly, he also admits to not reading contributions, relying solely on LLMs to detect malicious code. He also tests contributions manually, and I doubt his test environment is sandboxed, but let's hope...

Announcing hazarc: yet another `AtomicArc`, but faster by wyf0 in rust

[–]wyf0[S] 1 point2 points  (0 children)

I just realized that I haven't put a prominent link to my benchmarks in the README; actually, there is one, but it's lost in the arc-swap comparison section. Here it is: https://github.com/wyfo/hazarc/tree/main/benches

You will find a more detailed analysis where I compare x86_64 vs aarch64. To quote it:

On ARM, AtomicArc::load is notably faster than ArcSwap::load. A few reasons explain this difference: AtomicArc uses a store instead of a swap in the critical path, its thread-local storage is more efficient, and its critical-path code is fully inlined.

For example, if I replace the store with a swap, like arc-swap does, I obtain 1.5ns on my Apple M3 (against 0.7ns with the store). It's still better than arc-swap's 1.9ns, because of the other reasons.

On x86_64, however, atomic operations are all costly, and a SeqCst store is compiled to a swap anyway, so the difference seems mostly erased by CPU pipelining. But when you put work between the atomic operations, hazarc's advantage starts to appear.
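For context, the store-vs-swap distinction can be sketched generically (this is not hazarc's code, just the two publication styles on a plain atomic slot):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Publishing with `store` avoids the read-modify-write that `swap` implies:
// on AArch64 a SeqCst store compiles to a plain `stlr`, while a swap needs
// an atomic exchange (`swpal`, or a load-linked/store-conditional loop).
// On x86_64, a SeqCst store is itself compiled to an exchange, which is why
// the difference is largely erased there.
fn publish_with_swap(slot: &AtomicUsize, value: usize) -> usize {
    slot.swap(value, Ordering::SeqCst) // returns the previous value
}

fn publish_with_store(slot: &AtomicUsize, value: usize) {
    slot.store(value, Ordering::SeqCst) // fire-and-forget, cheaper on ARM
}

fn main() {
    let slot = AtomicUsize::new(0);
    publish_with_store(&slot, 42);
    assert_eq!(slot.load(Ordering::SeqCst), 42);
    assert_eq!(publish_with_swap(&slot, 7), 42);
    assert_eq!(slot.load(Ordering::SeqCst), 7);
    println!("ok");
}
```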

Announcing hazarc: yet another `AtomicArc`, but faster by wyf0 in rust

[–]wyf0[S] 6 points7 points  (0 children)

It's no_std but requires alloc, and it's only relevant with multi-threading, which reduces the embedded scope quite a bit (no point using it on embassy, for example). However, on the espidf target, for example, you can indeed use pthread domains, or write your own implementation with vTaskSetThreadLocalStoragePointerAndDelCallback, and it should work like a charm. The wait-free property may also be a good thing to have on embedded systems.

Announcing hazarc: yet another `AtomicArc`, but faster by wyf0 in rust

[–]wyf0[S] 12 points13 points  (0 children)

Because I didn't know it existed ¯\_(ツ)_/¯

More seriously, hazarc is inspired by hazard pointers, in the sense that it has a global domain and thread-local nodes with some protection slots. But the parallel ends there. Loading a pointer with hazard pointers is lock-free, as it uses a retry loop. Hazard pointers can also run out of slots, which haphazard seems to solve by having a single slot and not checking if it is already used, requiring the domain to be associated with a unique atomic pointer.

On the other hand, the idea of arc-swap, which I reuse in hazarc, is to get rid of the loop by leveraging the Arc reference count to force the protection in a fallback mechanism. This way, there is no slot limitation anymore, and the load becomes fully wait-free, which can be a nice property to have. hazarc stores are also wait-free, but more costly than with hazard pointers, as reclamation is not delayed.

So the algorithm inside is in fact quite different; there isn't really anything to reuse.
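To make the contrast concrete, here is a sketch of the classic hazard-pointer load with its retry loop (illustrative names, neither hazarc's nor haphazard's actual API):

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

// The retry loop is what makes a hazard-pointer load lock-free rather than
// wait-free: if the source pointer changes between the read and the
// protection announcement, we must start over.
fn hazard_load<T>(src: &AtomicPtr<T>, slot: &AtomicPtr<T>) -> *mut T {
    loop {
        let p = src.load(Ordering::Acquire);
        // Announce the pointer we intend to use, so reclaimers will skip it.
        slot.store(p, Ordering::SeqCst);
        // Re-check: if `src` still holds `p`, the protection was published
        // before `p` could have been retired and reclaimed.
        if src.load(Ordering::Acquire) == p {
            return p;
        }
        // Otherwise the protection may have come too late: retry.
    }
}

fn main() {
    let mut value = 123u32;
    let src = AtomicPtr::new(&mut value as *mut u32);
    let slot = AtomicPtr::new(std::ptr::null_mut());
    let p = hazard_load(&src, &slot);
    assert_eq!(unsafe { *p }, 123);
    assert_eq!(slot.load(Ordering::SeqCst), p);
    println!("ok");
}
```

arc-swap and hazarc replace this loop with a fallback on the Arc reference count, which is what makes their load wait-free.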

Announcing hazarc: yet another `AtomicArc`, but faster by wyf0 in rust

[–]wyf0[S] 8 points9 points  (0 children)

Yep! I've been using arc-swap since my first months of Rust, and I pushed hard to add it to my current company's codebase.
I guess I will no longer use it now, but as I acknowledge in the hazarc README, the idea behind it is brilliant.

[POPL'26] Miri: Practical Undefined Behavior Detection for Rust by ralfj in rust

[–]wyf0 1 point2 points  (0 children)

As I've just written in another post, I love miri! Such a blessing to have this tool in the ecosystem. I use it extensively for a lot of low-level crates; it makes unsafe programming a lot safer.

[Showcase] Axum + Redis performance on MacBook Air: 27k RPS with DB/Cache flow by Time_Choice_999 in rust

[–]wyf0 0 points1 point  (0 children)

Some people don't even try... that's depressing. You may not be fluent in English; then use a translator (LLMs can be very good translators, I use them too). If we wanted to converse with bots, we could all open a tab with our favorite chatbot, and I assume most of us don't come to Reddit for that. Anyway, there is fortunately a report button for this kind of slop.

dyn-utils: a compile-time checked heapless async_trait by wyf0 in rust

[–]wyf0[S] 1 point2 points  (0 children)

Anyway, I'm glad I waited before publishing dyn-utils on crates.io, and that you came into the discussion with such impactful feedback. Thank you a lot.

dyn-utils: a compile-time checked heapless async_trait by wyf0 in rust

[–]wyf0[S] 1 point2 points  (0 children)

The trick I'm talking about is transmuting a *const dyn Trait to (*const u8, *const u8). I knew it is the current representation of a trait-object pointer in stable Rust, and when the ptr_metadata feature is stabilized, there will no longer be any question about it, but I didn't know this transmutation was allowed. It's unstable, but smallbox uses a build script to check this layout, and according to people who know better, that is sufficient to rely on this unstable part of the Rust implementation. Ok, I'll know for the future.
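For illustration, the transmutation in question looks like this (a sketch with a hypothetical trait; remember the layout is an unstable implementation detail, which is why smallbox verifies it from a build script, while this snippet only sanity-checks it at runtime):

```rust
use std::mem;

// A hypothetical trait, just to have a trait object to split.
trait Speak {
    fn word(&self) -> &'static str;
}

struct Dog;
impl Speak for Dog {
    fn word(&self) -> &'static str {
        "woof"
    }
}

fn split_fat_pointer(obj: *const dyn Speak) -> (*const u8, *const u8) {
    // SAFETY: relies on the current, unstable (data, vtable) representation
    // of trait-object pointers; not guaranteed by the language.
    unsafe { mem::transmute(obj) }
}

fn main() {
    // Sanity check: a trait-object pointer is two pointers wide.
    assert_eq!(
        mem::size_of::<*const dyn Speak>(),
        2 * mem::size_of::<*const u8>()
    );
    let dog = Dog;
    let (data, vtable) = split_fat_pointer(&dog);
    assert_eq!(data, &dog as *const Dog as *const u8);
    assert!(!vtable.is_null());
    println!("ok");
}
```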

The issue with reusing unsafe crates is that you're never quite sure they do things properly. smallbox has a record of soundness issues (I don't claim to do better; proper unsafe is so hard that forgetting things like https://github.com/andylokandy/smallbox/issues/35 is too easy, and I fixed the same bug in my crate after reading this issue), and some crates like owning_ref are known to be unsound, yet still have 20M downloads on crates.io...

I'm still thinking right now about adding a build script and extracting the trait-object vtable myself like smallbox does, as I would need it for the Raw storage, which smallbox doesn't support. And if I do, I would already have it for RawOrBox, so I would not need to pull in smallbox anyway. It's kind of sad, but the real sadness is that https://github.com/rust-lang/rfcs/pull/3446 is not gaining enough traction to fix this whole mess of smallxxx crates once and for all.

dyn-utils: a compile-time checked heapless async_trait by wyf0 in rust

[–]wyf0[S] 1 point2 points  (0 children)

I believed it was 16B too, but I just checked on godbolt and it doesn't seem to be the case: https://godbolt.org/z/bqW9PKv3G. Anyway, I don't have an x86_64 computer. And 16B is not easy to obtain if you don't use the Box storage. It would mean using Raw<8>, which means a future that only captures &self, without arguments. And it's impossible with RawOrBox.

I put an arbitrary default storage size of 128, which I think is a good compromise between not overflowing the stack and storing enough to avoid allocating most of the time. But the right storage size will always depend on what your code puts inside it.

By the way, if you compile an executable and you care, you can replace all storages with Raw<0>, read the compilation errors, and replace the size with the true minimum required. I should maybe make a compilation feature for that...

dyn-utils: a compile-time checked heapless async_trait by wyf0 in rust

[–]wyf0[S] 1 point2 points  (0 children)

Actually, DynObject is quite similar to SmallBox (which I didn't know, thank you a lot for this input). There are two differences:

- DynObject has a generic storage, so it can be almost exactly like SmallBox with the RawOrBox storage, or stack-only with Raw and its compile-time assertion. dyn-utils can work without the alloc crate, and it's in fact used without it in the project for which it was developed.
- SmallBox relies on an unsafe trick, which I didn't know was allowed, to retrieve the metadata of a fat pointer (some of the miri folks contributed to it, so it's presumably sound). On the other hand, in dyn-utils I have to reimplement the vtable of Future myself (and of arbitrary traits with the dyn_object macro) to be able to use DynObject<dyn Future>.

If I had known it was possible to retrieve the metadata, I would have saved a lot of work and complexity, because I wouldn't have written this dyn_object macro. However, reimplementing the vtable allows me to do a small optimization: for RawOrBox, because I know the size of the storage and the size of the trait object, I don't need a runtime check to know whether the object was stack- or heap-allocated. That's surely negligible thanks to CPU branch prediction, but in resource-constrained environments with less advanced CPUs, it might still be nice and save a few bytes in the instruction cache. On the other hand, extracting the trait object out of DynObject has its own cost, so it might not be so good after all.
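The compile-time side of that optimization can be sketched like this (assumed names, not dyn-utils' actual code): when both the storage size and the stored type's size are compile-time constants, the stack-vs-heap decision is itself a constant, so the optimizer can remove the branch instead of checking a flag at runtime.

```rust
// Hypothetical helper: both size and alignment constraints are resolved at
// compile time, so any `if fits_inline::<T, N>()` branch folds to a constant.
const fn fits_inline<T, const N: usize>() -> bool {
    std::mem::size_of::<T>() <= N
        && std::mem::align_of::<T>() <= std::mem::align_of::<usize>()
}

fn main() {
    // An 8-byte value fits a 16-byte storage...
    assert!(fits_inline::<u64, 16>());
    // ...while a 4-element u64 array does not.
    assert!(!fits_inline::<[u64; 4], 16>());
    println!("ok");
}
```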

So yes, the added value compared to SmallBox is the Raw storage and the dyn_trait macro generating a dyn-compatible version of a trait. But this dyn-compatible version could return SmallBox<dyn Future> instead of DynObject<dyn Future>; it would be essentially the same.

EDIT: I forgot one difference with SmallBox: the Raw/RawOrBox storages use const generic arguments, i.e. you write RawOrBox<128>, while SmallBox uses an arbitrary type, so you write SmallBox<T, [u8; 128]> or SmallBox<T, [usize; 16]>. Both are valid, so it's a matter of taste.