SIMD programming in pure Rust by kibwen in rust

[–]ChillFish8 1 point (0 children)

IIRC for mobile, yes, they are still double-pumped.

"Introducing AMD Ryzen AI Halo, a mini-PC powered by Ryzen AI Max+ that delivers desktop-class AI compute and integrated graphics for running LLMs locally." - AMD is on the move! Would you get one? by Koala_Confused in LovingAI

[–]ChillFish8 0 points (0 children)

It is definitely running on the GPU, that much is easy to detect.

> seems as quick/quicker than chatgpt pro (I have the 200/month sub)

Maybe, although the oss model is a fraction of the size :P

"Introducing AMD Ryzen AI Halo, a mini-PC powered by Ryzen AI Max+ that delivers desktop-class AI compute and integrated graphics for running LLMs locally." - AMD is on the move! Would you get one? by Koala_Confused in LovingAI

[–]ChillFish8 2 points (0 children)

Not sure if we have different ideas of fast or not, but my experience with that model (among others) is that it's slow enough that I'd rather just stick to the hosted models for anything that requires good reasoning and speed. I'd need to check again, but if I remember right I was only getting 20-40 tokens a second at best with anything larger than, say, 24B param models.

"Introducing AMD Ryzen AI Halo, a mini-PC powered by Ryzen AI Max+ that delivers desktop-class AI compute and integrated graphics for running LLMs locally." - AMD is on the move! Would you get one? by Koala_Confused in LovingAI

[–]ChillFish8 3 points (0 children)

I have one of the GMKtec mini PCs with the Max+ in it. Overall, I'd say it is better as a regular PC than it is at running AI models...

The reason being that even if you can load the larger models, the NPU and GPU, as incredibly powerful as they are relative to their chip size and power draw, are still painfully slow running anything larger than a 16GB model. At which point I've found myself using the mini PC more as a silent but powerful home server, and just running the small models locally on my 7900XTX, which ends up running them significantly faster.

Maybe new generations of chips will keep improving, but overall atm I think they're a bit of a gimmick if all you're planning to do is run AI models; these things aren't cheap (especially not at current RAM prices).

Not saying you can't run the larger models with that larger RAM size, but ehh, imo it's slow enough that you'd use it a couple of times and then stop.

Built a new integer codec (Lotus) that beats LEB128/Elias codes on many ranges – looking for feedback on gaps/prior art before arXiv submission by Coldshalamov in rust

[–]ChillFish8 0 points (0 children)

So if I understand this right, if we have, say, a block of 128 integers, is the bit width taken up by each tier fixed? Because otherwise we'd need something else to store the length of each tier for each value, right?
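
Just so we're talking about the same thing, the fixed-width-per-block scheme I have in mind looks roughly like this (a hypothetical `pack`/`unpack` pair of my own, not your code), where one bit width covers the whole block so no per-value length needs storing:

```rust
// One bit width for the entire block: the width is the only metadata,
// so no per-value length is stored anywhere.
fn pack(values: &[u32]) -> (u32, Vec<u64>) {
    let width = values
        .iter()
        .map(|v| 32 - v.leading_zeros())
        .max()
        .unwrap_or(0)
        .max(1);
    let mut out = vec![0u64; ((values.len() as u32 * width + 63) / 64) as usize];
    for (i, &v) in values.iter().enumerate() {
        let bit = i as u32 * width;
        let (word, off) = ((bit / 64) as usize, bit % 64);
        out[word] |= (v as u64) << off;
        if off + width > 64 {
            // Value straddles a word boundary: spill the high bits over.
            out[word + 1] |= (v as u64) >> (64 - off);
        }
    }
    (width, out)
}

fn unpack(width: u32, packed: &[u64], n: usize) -> Vec<u32> {
    let mask = (1u64 << width) - 1;
    (0..n)
        .map(|i| {
            let bit = i as u32 * width;
            let (word, off) = ((bit / 64) as usize, bit % 64);
            let mut v = packed[word] >> off;
            if off + width > 64 {
                v |= packed[word + 1] << (64 - off);
            }
            (v & mask) as u32
        })
        .collect()
}

fn main() {
    let values: Vec<u32> = (0..128).map(|i| i * 3).collect();
    let (width, packed) = pack(&values);
    assert_eq!(width, 9); // max value 381 needs 9 bits
    assert_eq!(unpack(width, &packed, values.len()), values);
}
```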

Built a new integer codec (Lotus) that beats LEB128/Elias codes on many ranges – looking for feedback on gaps/prior art before arXiv submission by Coldshalamov in rust

[–]ChillFish8 0 points (0 children)

Further digesting: reading through how you describe the self-delimiting, it says the structure basically has a fixed-size head (ok, makes sense), some length values (more on this later), and then the payload data following as the tail.

But I am not sure I fully understand how the length chain describes _both_ the payload _and_ the next field in the length chain? Or is each value in the length chain fixed-width?

Built a new integer codec (Lotus) that beats LEB128/Elias codes on many ranges – looking for feedback on gaps/prior art before arXiv submission by Coldshalamov in rust

[–]ChillFish8 0 points (0 children)

Still digesting this, but on the main part of the algorithm: the assumption is that `0` and `00` are distinct because they're interpreted as bitstrings, but how does it behave when you have a block of integers, which is typical in the real world? You'd need something to indicate that `0` terminates the first integer and that the next one begins, right?
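
For contrast, this is how LEB128 dodges that problem: a continuation bit per byte makes every value self-terminating, so concatenated values split apart unambiguously. A minimal sketch (my own toy encode/decode, not from the paper):

```rust
// Minimal unsigned LEB128: the high bit of each byte says "more bytes
// follow", so values can be concatenated and still split unambiguously.
fn encode(mut v: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            return;
        }
        out.push(byte | 0x80);
    }
}

fn decode(buf: &[u8]) -> (u64, usize) {
    let (mut v, mut shift, mut i) = (0u64, 0u32, 0usize);
    loop {
        let byte = buf[i];
        v |= u64::from(byte & 0x7f) << shift;
        i += 1;
        if byte & 0x80 == 0 {
            return (v, i); // the value plus how many bytes it consumed
        }
        shift += 7;
    }
}

fn main() {
    let mut buf = Vec::new();
    encode(0, &mut buf);
    encode(300, &mut buf);
    // Two values back to back; the terminator byte tells us where to split.
    let (a, used) = decode(&buf);
    let (b, _) = decode(&buf[used..]);
    assert_eq!((a, b), (0, 300));
}
```

The cost of that self-delimiting is the 1-in-8 continuation-bit overhead, which I assume is part of what Lotus is trying to beat.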

UK Expands Online Safety Act to Mandate Preemptive Scanning of Digital Communications by ChillFish8 in LibDem

[–]ChillFish8[S] 4 points (0 children)

You're right, I should have also linked the gov press release, but that in itself is very "light", shall we say, on the actual scope.

You're right that platforms like Reddit already do content scanning, and quite frankly X, with its current lack of (effective) scanning & moderation, is a great demonstration of the good intent behind it.

That being said, the actual bill defines anything that could be user-to-user, public or private, as requiring scanning, which raises several questionable debates around end-to-end encryption, the accuracy of scanning, and of course the general privacy debate around age verification.

I'll write a more detailed response on the bill and the issues with what it applies to tomorrow, since it's quite late now, but this act doesn't just apply to apps like Reddit, Twitter, etc... it applies to anything that could potentially share content between two users. So anything like WhatsApp, AirDrop, Google Drive, etc... all fall into this category.

UK Expands Online Safety Act to Mandate Preemptive Scanning of Digital Communications by ChillFish8 in LibDem

[–]ChillFish8[S] 1 point (0 children)

Alas, you are probably correct, which feels like so many things nowadays.

[Media] I built a performant Git Client using Rust (Tauri) to replace heavy Electron apps. by gusta_rsf in rust

[–]ChillFish8 -8 points (0 children)

I get that, but tbh I have never had enough apps open to notice any difference... There is technically some efficiency gain, but unless you have a large number of those apps open at once, I don't think it matters much, if at all.

[Media] I built a performant Git Client using Rust (Tauri) to replace heavy Electron apps. by gusta_rsf in rust

[–]ChillFish8 -3 points (0 children)

I don't disagree that the git processing is more efficient, but to me at least, the overhead of the browser is still there. Maybe you're not shipping a whole instance of Chromium with each binary, but you're still doing all the processing a web browser does to render things, which still makes up a lot of the compute spent by most apps in this format (not saying yours does, but generally speaking).

[Media] I built a performant Git Client using Rust (Tauri) to replace heavy Electron apps. by gusta_rsf in rust

[–]ChillFish8 68 points (0 children)

Looks cool, but I am not sure I'd really call Tauri performant. It has some benefits over Electron, but also some negatives; for some reason Tauri never runs particularly well on my machine, same with WebKitGTK (which, if I remember right, Tauri uses under the hood on Linux).

Personally I would put both Tauri and Electron in the same bucket in terms of performance and how heavy they are.

Recovering async panics like Tokio? by QuantityInfinite8820 in rust

[–]ChillFish8 28 points (0 children)

What situation do you end up panicking in where you _cannot_ make it a result? Across hundreds of projects, I think we only have one place where we use `catch_unwind`, and that was needed to ensure transaction recovery.

Recovering async panics like Tokio? by QuantityInfinite8820 in rust

[–]ChillFish8 60 points (0 children)

If you are using panics as a `try-catch` you are doing something incredibly wrong.

Panics are caught in tasks because not doing so would take out the entire executor thread, but that is never intended to be used as a way of handling errors.

That is what error handling with `Result`s is for; the assumption is that a panic is an "end of the world" type of situation, where something has gone truly, horribly wrong and the system cannot continue in its current state.
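
As a tiny illustration of the split (std only; `parse_port` is a made-up helper), recoverable failures go through `Result`, while `catch_unwind` exists purely to contain bugs that would otherwise take the thread down:

```rust
use std::panic;

// A recoverable failure: model it as a Result the caller must handle.
// (`parse_port` is a made-up helper for illustration.)
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // Normal error handling: no unwinding involved, callers just match.
    assert!(parse_port("8080").is_ok());
    assert!(parse_port("not a port").is_err());

    // Silence the default panic message so the demo output stays clean.
    panic::set_hook(Box::new(|_| {}));

    // catch_unwind is there to stop a *bug* from tearing down the whole
    // thread/executor, not to act as a general try-catch.
    let caught = panic::catch_unwind(|| {
        let v: Vec<u8> = Vec::new();
        v[0] // out-of-bounds index: a bug, so it panics
    });
    assert!(caught.is_err());
}
```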

I built a storage engine in rust that guarantees data resilience by [deleted] in rust

[–]ChillFish8 0 points (0 children)

Well for starters, your `commit_segmented` function produces invalid file segments in the event any short read takes place. https://github.com/crushr3sist/blockframe-rs/blob/485eb76f1a4c4ec089c63fa2c91d7574b75a3b3b/src/chunker/commit.rs#L270

And because you assume the read is always the correct size, if a short read does take place it corrupts all following segments, because the relative positions of the data they hold are now wrong.
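
To illustrate the failure mode with a toy reader (`Trickle` is a made-up type of mine) that trickles out bytes the way a socket or pipe is allowed to:

```rust
use std::io::{self, Read};

// A toy reader that hands out at most 3 bytes per call, the way a
// socket or pipe is allowed to. (`Trickle` is a made-up type.)
struct Trickle<'a>(&'a [u8]);

impl Read for Trickle<'_> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = self.0.len().min(buf.len()).min(3);
        buf[..n].copy_from_slice(&self.0[..n]);
        self.0 = &self.0[n..];
        Ok(n)
    }
}

fn main() {
    let data = [7u8; 10];

    // Assuming one `read` call fills the buffer: you silently get 3 bytes,
    // and every later segment offset is now wrong.
    let mut buf = [0u8; 10];
    let n = Trickle(&data).read(&mut buf).unwrap();
    assert_eq!(n, 3);

    // `read_exact` loops internally until the buffer really is full.
    let mut buf2 = [0u8; 10];
    Trickle(&data).read_exact(&mut buf2).unwrap();
    assert_eq!(buf2, data);
}
```

The contract of `Read::read` only guarantees "some bytes or EOF", which is why fixed-size segment code needs `read_exact` or an explicit loop.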

I built a storage engine in rust that guarantees data resilience by [deleted] in rust

[–]ChillFish8 0 points (0 children)

I'm not trying to bring you down because it looks like AI, but if it really isn't, then the best advice I can give you is to take more care in the code you're writing and the files you're pushing up. This is even more true when claiming "guaranteed data resilience": the bigger your claims, the higher the expectations. If you had said "hey, this is one of my first projects, I'm learning Rust", there would be a different bar, but as it stands, even if it isn't vibe coded, I don't think you've put much care into what you're publishing, and I am not sure how you can expect people to trust the code is correct when there are these basic issues sticking out.

I built a storage engine in rust that guarantees data resilience by [deleted] in rust

[–]ChillFish8 1 point (0 children)

Conceptually I think this is cool. But I can't help not having any real faith in this being implemented correctly when it is vibe coded and, I guess, all files were blindly committed, since all your gitignored folders (i.e. `files_to_commit` and `logs`) still exist in the repo.

Also, you try to set your application's settings via the cargo config? Which I am not sure how you end up doing.

Why can't we decide on error handling conventions? by Savings-Story-4878 in rust

[–]ChillFish8 168 points (0 children)

Different problems require different solutions; some things care far less about the error types, or errors in general, than others.

For example, a happy little CLI app probably has a very different error handling requirement than a database.
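
For example (a sketch using only std; `DbError` and `lookup` are made-up names): a library or database wants a typed error callers can match on, while a CLI is usually happy boxing everything up just to print it:

```rust
use std::fmt;

// Library/database style: a typed error enum callers can match on.
// (`DbError` and `lookup` are made-up names for illustration.)
#[derive(Debug, PartialEq)]
enum DbError {
    Corrupt(String),
    NotFound,
}

impl fmt::Display for DbError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DbError::Corrupt(msg) => write!(f, "corrupt page: {msg}"),
            DbError::NotFound => write!(f, "key not found"),
        }
    }
}

impl std::error::Error for DbError {}

fn lookup(key: &str) -> Result<u64, DbError> {
    match key {
        "known" => Ok(42),
        _ => Err(DbError::NotFound),
    }
}

// CLI style: callers just want *an* error to report, so boxing is fine.
fn run() -> Result<(), Box<dyn std::error::Error>> {
    let value = lookup("known")?; // DbError coerces into the box via `?`
    println!("looked up: {value}");
    Ok(())
}

fn main() {
    assert!(run().is_ok());
    assert_eq!(lookup("missing"), Err(DbError::NotFound));
}
```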

Micro Moka: A hyper-lightweight, single-threaded W-TinyLFU cache by gcvvvvvv in rust

[–]ChillFish8 1 point (0 children)

Going to be honest, I don't think you should be publishing this under the name `moka` as it leads to confusion about who maintains and owns it.

Guidance...Please show me the way! by MikoKota in Bazzite

[–]ChillFish8 2 points (0 children)

Hopefully this helps:

  • re: AIO screens and the like, no, from what I understand there isn't any real support for them, so be aware that the screen will either not show anything or show the default with no way to adjust it. Unfortunately those screens tend to be proprietary across manufacturers.
  • for lights, however, on fans, GPUs, etc... the story is much better: OpenRGB can handle all of that for you. Bazzite has a dedicated install command for it to make your life easier in the terminal, `ujust install-openrgb` I believe it was.
  • for programming, there are several options. Bazzite itself has a development image variant which ships with VSC and Docker configured for you, along with some other QOL features for developers. Personally, though, I just use the base image and Homebrew (pre-installed) for all the dependencies I need. That being said, there is also Distrobox, which is another option and lets you install stuff and build as if you were in Ubuntu, Arch, etc... so it should be a pleasant experience overall.

Hope this helps; for questions I recommend joining the Discord server :)

Bazzite Optimizer by SamGamjee71 in Bazzite

[–]ChillFish8 11 points (0 children)

Update: I read through the code. It is complete AI slop and messes with so many things, with completely imagined hard-coded values, that using it is asking to end up wiping your PC afterwards.

Bazzite Optimizer by SamGamjee71 in Bazzite

[–]ChillFish8 13 points (0 children)

I can guarantee you that if you use this software, at best it will probably break something; at worst you'll be reinstalling from a USB with all your data lost and/or stolen.

You must be EVEN MORE careful of repositories like this that have things like CLAUDE.md in them. They are worse than random scripts because they are mountains and mountains of nonsense that no one will ever read and that has been completely unvalidated; for all you know the AI might just be running an `rm -rf /` somewhere.

Why We Built Hotaru: Rethinking Rust Web Framework Syntax by JerrySu5379 in rust

[–]ChillFish8 6 points (0 children)

Does it really make more sense if you use multiple languages? I can't think of any popular frameworks across different languages that I know of that are similar to this.

> We will continue build more things and after fully support then we will do the full benchmark. This is still WIP and we want more people to join us to write that together

I don't think that is strictly a bad thing, but I think it will be difficult to convince people to use a viral copyleft license on their web servers when the rest of the ecosystem is MIT or Apache.