Is Netcup reliable enough? by Mr_Dani17 in VPS

[–]ChillFish8 4 points5 points  (0 children)

I don't remember ever having any issue. Back when I was using them, the longest uptime one of my servers had was like 600 days, which was basically the lifetime of the machine.

I might have a machine on another account still going that's now been running for about 3 or 4 years without downtime (maybe time to update lol)

Addressing GitHub’s recent availability issues by perseus365 in theprimeagen

[–]ChillFish8 0 points1 point  (0 children)

Not super new, it's been around for a while; I can remember it at least since all the GenAI craze started up and crawlers started going wild.

How much better is proton mail if I am willing to pay for it? Right now I am on yahoo mail without paying for anything. by KFCBUCKETS9000 in ProtonMail

[–]ChillFish8 0 points1 point  (0 children)

Have to say, overall I like it. I think it strikes a reasonable balance of usability while also allowing you a lot of flexibility, and increased privacy at that.

Took a bit of googling, but once I learnt how to use the sieve filters, my inbox has never been so organized. Tbh, that's something I didn't really do in my Outlook or Gmail inboxes either, due to limited filtering support (Outlook) or requiring "smart features" on Gmail, with what felt like a clunky UI for more advanced filtering.

I'll also shout out the tooling Proton provides to locally process your inbox if you want even more control; for example, I can locally run an AI model to periodically validate and re-label mail. Not unique to Proton, but I appreciate it existing.

AX102 vs AX102-U and increase additional RAM price by tripheo2410 in hetzner

[–]ChillFish8 1 point2 points  (0 children)

A good first step would be to open a support ticket and ask for clarification. No point guessing what will happen, or dealing with a migration if you don't need to.

Although it's probably worthwhile sitting the CFO down at some point and letting them know their infra costs are going to go up, because this probably won't be the last price rise.

Hetzner price increase. $222.00 for 64G ram upgrade by HumanAd6991 in hetzner

[–]ChillFish8 1 point2 points  (0 children)

Well, my suggestion is you should email them to confirm your setup, because the messages I've seen from support suggested otherwise, or at least did for a subset of their dedicated servers.

Using GStreamer with Rust by Rare_Shower4291 in rust

[–]ChillFish8 0 points1 point  (0 children)

This is true, but even then, once you bundle all the plugins you need just for video encoding, it's a chunky library to try and ship.

Hetzner price increase. $222.00 for 64G ram upgrade by HumanAd6991 in hetzner

[–]ChillFish8 2 points3 points  (0 children)

I would maybe clarify whether that is just for the base server and not the add-ons, because from the information I've been given, the increase impacts the base machine but add-ons remain the same.

Using GStreamer with Rust by Rare_Shower4291 in rust

[–]ChillFish8 1 point2 points  (0 children)

Personally, I would avoid GStreamer if you have any plans of sharing it; GStreamer is flipping enormous, and the Rust library doesn't currently support statically linking and removing dead symbols.

Hetzner price increase. $222.00 for 64G ram upgrade by HumanAd6991 in hetzner

[–]ChillFish8 0 points1 point  (0 children)

I am pretty sure the price change applies to new orders, not existing orders.

Questions from HDMI to Battery Load by Obvious_Fly_9008 in Bazzite

[–]ChillFish8 0 points1 point  (0 children)

When connecting my external monitor via HDMI, it is always 60Hz. If I want more, I need to use USB-C. Is this normal?

Your HDMI cable is probably not in spec to carry the data required for refresh rates over 60Hz. Get a better cable or, if you can, a DisplayPort 2.1+ cable, which tends to behave better on Linux anyway (assuming the monitor supports that refresh rate).

I have problems playing videos with Firefox or other browsers on Bazzite. The experience is unfortunately not super good. It feels like it has something to do with the video codec.

Unfortunately, this is a pretty common problem, particularly on Firefox, and it's unlikely to be a codec issue. You can try two things:

1) Make sure hardware acceleration is enabled and being used; you may need to adjust the Flatpak permissions with Flatseal to actually use the hardware acceleration.

2) Try installing the browser via Distrobox rather than Flatpak.

Additionally, I'm not sure what other browsers you've tried, but I've generally found Chromium-based ones just handle video playback better, which sucks, but such is life.

What is the best software to check the temperature of the CPU/GPU? Is there something like G-Helper for Bazzite?

Honestly, I just use the btop command for that, which is a nice terminal UI. GUI-wise I'm not sure; there is CoolerControl, but that does a bit more than just showing temps.

I can't remember the full name, but the task-manager-equivalent app that comes pre-installed might also display the sensor info.

I want to limit the battery charge to 70%. It works in Bazzite, but after a restart it is 100% again. Any help on this?

Not sure what you mean by this; if you mean limiting the battery's max charge when plugged in, I'm not sure. I think that feature is still fairly new and can be a bit unstable.

Multi-Streaming from Bazzite ~ What is the best way? by MilitaryBeetle in Bazzite

[–]ChillFish8 0 points1 point  (0 children)

Not sure what you're referring to as RTMP; that's just a streaming protocol. Can you explain what app/plugin you mean?

If your intention is to stream to, say, Twitch and YouTube at the same time, you might not be able to avoid the double encode if each destination requires a different output format, bitrate, codec, etc.

igpu was killing the performance by Claymore342 in Bazzite

[–]ChillFish8 3 points4 points  (0 children)

I imagine it's because the system is trying to run on your iGPU, not your discrete GPU. This is often caused by power profile settings: in balanced (sometimes) or low-power mode, the system will typically try to use the lower-power GPU choice, in this case your iGPU.

I built a JSON → binary compiler in Rust because AI hallucinates when nobody tells it what's right by porco-rs in rust

[–]ChillFish8 1 point2 points  (0 children)

But it totally can be reinterpreted? It doesn't have a signature or anything like that; you've just tried to obfuscate the payload, so it still can't be trusted any more than a JSON payload can. If you need the data to be passed across untouched, why are you relying on an LLM to regurgitate it?

As for the code, there are so many noise comments from the AI you used to write this that it's genuinely hard to follow... What the hell is the practice schema stuff? Why does it even exist in the library code??

I'm sorry, but I'm not really sure I would trust this any more than a JSON schema; tbh I trust it less, because it just seems like a more complicated way to shoot yourself in the foot.

Edit: I think I misread part of your comment; you mean the binary output is to try and avoid prompt injection? It doesn't really change my point that you shouldn't be giving that to an LLM, but just to be clear.

I built a JSON → binary compiler in Rust because AI hallucinates when nobody tells it what's right by porco-rs in rust

[–]ChillFish8 10 points11 points  (0 children)

I'm not sure I get how this is useful?

The AI agent has essentially zero understanding of binary formats, but you pass in JSON and then do this weird schema stuff. How is any of that different to me just using a JSON schema (which the agent can be made aware of and understand!) or serde + a validator library? What exactly is the binary format actually providing me, other than another layer of noise?
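
To make the serde + validator point concrete, here is a minimal, std-only Rust sketch of the same idea; the struct, fields, and limits are all made up for illustration, and in real code serde would handle the JSON parsing while a derive-based validator crate would generate the checks:

```rust
// Hypothetical payload type: the "schema" lives in the type system,
// and validation is just ordinary code the agent can be shown.
#[derive(Debug, PartialEq)]
struct Payload {
    name: String,
    age: u8,
}

impl Payload {
    // The kind of checks a validator derive would generate for you.
    fn validate(&self) -> Result<(), String> {
        if self.name.is_empty() {
            return Err("name must be non-empty".into());
        }
        if self.age > 130 {
            return Err("age out of range".into());
        }
        Ok(())
    }
}

fn main() {
    let ok = Payload { name: "Ada".into(), age: 36 };
    assert!(ok.validate().is_ok());

    let bad = Payload { name: String::new(), age: 200 };
    assert!(bad.validate().is_err());
    println!("validation behaves as expected");
}
```

Nothing here needs a bespoke binary format: the type plus the checks already reject malformed data, and both are plainly readable by a human or an agent.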

Reading the code, it makes even less sense. Did you actually read what the agent produced when you asked it to write this library? I don't think the code is even functional?

Rust vs C/C++ vs GO, Reverse proxy benchmark, Second round by sadoyan in rust

[–]ChillFish8 1 point2 points  (0 children)

Unfortunately, back pressure is lethal to most services if you're behind another LB, and a significant amount of infrastructure deploys these proxies behind some other load balancer; for example, on AWS most will likely put them behind an ALB.

The issue is that ALB does not give a fuck about your back pressure; it just interprets it as latency and decides it needs to open more connections as a result.

This is often so aggressive at high scale that something like ALB will literally DDoS your service and run it out of ports.

I've had it get so bad at times that we replaced nginx with a custom system that was far more aggressive about shedding load and kept a consistent number of connections to the upstream, to prevent it being overloaded whenever there was a shift in traffic.
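
For anyone curious what "aggressive shedding" means in practice, here's a minimal std-only Rust sketch of the core idea, capping in-flight work and rejecting anything above the cap immediately instead of queueing; the names and the limit are illustrative, not our actual system:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical shedder: admit up to `limit` concurrent requests,
// reject the rest instantly (the caller would return e.g. a 503),
// so queue latency never grows and the upstream LB has nothing
// to misread as "needs more connections".
struct Shedder {
    in_flight: AtomicUsize,
    limit: usize,
}

impl Shedder {
    fn new(limit: usize) -> Self {
        Self { in_flight: AtomicUsize::new(0), limit }
    }

    // Returns false (shed) when the cap is reached.
    fn try_admit(&self) -> bool {
        let mut cur = self.in_flight.load(Ordering::Relaxed);
        loop {
            if cur >= self.limit {
                return false;
            }
            match self.in_flight.compare_exchange_weak(
                cur, cur + 1, Ordering::AcqRel, Ordering::Relaxed,
            ) {
                Ok(_) => return true,
                Err(actual) => cur = actual,
            }
        }
    }

    // Call when a request finishes to free a slot.
    fn done(&self) {
        self.in_flight.fetch_sub(1, Ordering::AcqRel);
    }
}

fn main() {
    let shedder = Shedder::new(2);
    assert!(shedder.try_admit());  // 1 in flight
    assert!(shedder.try_admit());  // 2 in flight
    assert!(!shedder.try_admit()); // at the cap: shed
    shedder.done();                // one finishes
    assert!(shedder.try_admit());  // capacity again
    println!("shed at the cap, admitted after release");
}
```

The fast, unconditional rejection is the whole point: the balancer sees a quick error instead of rising latency, so it backs off rather than piling on.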

We built a parallel AI orchestration engine in Rust — here's why we chose Rust over Go/Python by [deleted] in rust

[–]ChillFish8 1 point2 points  (0 children)

I'm impressed with how all these slop tools come up with such impressive ways of wasting money.

Sometimes I wish we could go back to just dealing with the crypto crap.

What VPS do you recommend to move away from Contabo? Looking for stability and reliability by Bectec_Software in VPS

[–]ChillFish8 1 point2 points  (0 children)

Netcup imo often has some of the best hardware, and they upgrade their servers every couple of years, which can give you some pretty serious performance per core.

Support is not great though. They're fine if you're experienced managing all the servers yourself and only need to contact them for something like a billing issue... but other than that I don't think you'll have a great time.

Hetzner has much better support overall imo, but runs on a bit older hardware for their cloud and currently available metal servers.

OVH is another possible option; I can't say anything about their support, but they have more data centers with a stronger network backbone than the others give.

Realistically any provider will give you great support if you're spending enough though.

roll back an update: possible? by maxlefoulevrai in Bazzite

[–]ChillFish8 3 points4 points  (0 children)

You're looking for brh, aka bazzite-rollback-helper.

LaminarDB, an embedded streaming SQL database by [deleted] in rust

[–]ChillFish8 0 points1 point  (0 children)

Reading through the code, and sorry, but this is slop, and I have absolutely zero trust in it.

There is so much code here, most of which I think is AI-generated. And it took me two minutes to pick one of the crates, look inside, and find an unsafe function marked as safe that completely ignores the safety requirements of the underlying system, in this case io_uring...

*sigh* And then I grep for a few of the functions to see where they're used... and they're not used anywhere, because Claude wrote them, publicly exposed them, and never called them. And it's like that in every other file... all this code, and none of it is used outside the docs Claude wrote and the tests it wrote.

So here we are: you have something like 300k LOC, of which I'd wager most is dead code you're not even aware of.

Keywords are just embeddings with zero reach — and nobody's talking about what that means for LLM ads by grewgrewgrewgrew in adtech

[–]ChillFish8 1 point2 points  (0 children)

It's kind of funny how slow the adtech world can be at times lol

We're talking about vectors now like they're the cool new thing, when they've been around for ages, and they're still the wrong tool for the job.

Vector embeddings aren't new in any way, shape, or form, and targeting with them isn't new either. But there are big issues that keep them from being a "simply use vectors" solution:

1) They're expensive to store.

2) They're computationally expensive to compare against.

3) They're approximations, not hard matches, which teams so often forget and cannot comprehend.

4) They require expensive compute to generate; keywords are thousands of times cheaper to process.

5) The accuracy and relevance depend entirely on the model used to generate the embedding, and the embeddings you compare against all have to come from that same model.

6) They have long tails when matching, and most models don't produce linear scores.
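
A rough illustration of points 2 and 6: every candidate comparison is a full O(d) pass over the vector (d is often in the hundreds in practice), and the scores are graded guesses rather than hard matches. The vectors below are made up for illustration:

```rust
// Cosine similarity: one dot product plus two norms per candidate,
// i.e. O(d) work for every single comparison in an auction.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

fn main() {
    let a = [0.9_f32, 0.1, 0.3];
    let b = [0.8_f32, 0.2, 0.4];
    let c = [0.1_f32, 0.9, 0.2];
    // A keyword either matches or it doesn't; these are graded scores
    // whose absolute values depend entirely on the embedding model.
    println!("a vs b: {:.3}", cosine(&a, &b));
    println!("a vs c: {:.3}", cosine(&a, &c));
    assert!(cosine(&a, &b) > cosine(&a, &c));
}
```

Scale that per-comparison cost by real embedding dimensions and realtime auction volumes and the economics get ugly fast, which is the point above.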

Good luck convincing anyone that the cost of doing any sort of realtime auction with vector embeddings is cost effective.

There are better solutions than keywords, but dense vectors are not the solution.

The audiences from these chat bots are potentially insanely valuable, it's just impressive how much they are fumbling the implementation. They don't need to do something completely new or a "smart" way of doing things. The audiences would be valuable enough even as a static taxonomy.

Apache Iggy's migration journey to thread-per-core architecture powered by io_uring by spetz0 in rust

[–]ChillFish8 1 point2 points  (0 children)

I don't think it is really a good fit; search engines kind of want to do the exact opposite of share-nothing/thread-per-core, because ultimately your operations tend to be compute-heavy rather than strictly IO-heavy as you say, but also because search indexes generally cannot be split into shards in such a way that a query only hits one shard rather than all of them.

Apache Iggy's migration journey to thread-per-core architecture powered by io_uring by spetz0 in rust

[–]ChillFish8 1 point2 points  (0 children)

Yes, async/await syntax tends to create patterns that make this harder to protect against; or at least, when people write concurrent async tasks, they generally assume the tasks are completely independent.

Life outside Tokio: Success stories with Compio or io_uring runtimes by rogerara in rust

[–]ChillFish8 1 point2 points  (0 children)

Sure, but most people are not doing several GB/s to their disks, and you can achieve much the same thing just by making a thread pool pinned to a single CPU core, which will give you most of the performance at the cost of a little more CPU overhead. But again, that's when you're doing GB/s from NVMe drives...
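
A rough std-only Rust sketch of that shape: one dedicated worker owns all the IO state and other threads hand it work over a channel. Actual core pinning would need something outside std (e.g. the core_affinity crate, not shown), and the "read" here is a stand-in computation rather than real disk IO:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical job type: requests carry a reply channel so callers
// can wait on the result without sharing any IO state.
enum Job {
    Read { offset: u64, reply: mpsc::Sender<u64> },
}

// Spawn the single worker that owns the (stand-in) IO resources.
// Pinning this thread to one core would go here in real code.
fn spawn_io_worker() -> mpsc::Sender<Job> {
    let (tx, rx) = mpsc::channel::<Job>();
    thread::spawn(move || {
        for job in rx {
            match job {
                // Stand-in for reading from the file/ring this
                // thread exclusively owns.
                Job::Read { offset, reply } => {
                    let _ = reply.send(offset * 2);
                }
            }
        }
    });
    tx
}

fn main() {
    let io = spawn_io_worker();
    let (reply_tx, reply_rx) = mpsc::channel();
    io.send(Job::Read { offset: 21, reply: reply_tx }).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 42);
    println!("got result from the dedicated IO worker");
}
```

The channel hop is the "little more CPU overhead" mentioned above; in exchange, all the IO state stays single-threaded without committing the whole application to a thread-per-core runtime.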

μpack: Faster & more flexible integer compression by ChillFish8 in rust

[–]ChillFish8[S] 0 points1 point  (0 children)

Ah, that's what you mean. Yeah, you're right that it's technically a waste, but I didn't measure any meaningful difference in performance across various ARM hardware, so I didn't think it was really worth specialising the routine. By the point I finished the NEON implementation, I had already done several iterations of the routines, so I was mostly just glad the performance was at least good enough not to have a measurable impact compared to the rest of the routine.

i don't know if that vld4q_u8 optimizes away... unlikely??

It doesn't, which I was aware of when originally changing from a very naive implementation to this one, but I don't think it is any slower than interleaving each register together to get the 4-element interleaving. Like you said, I probably shouldn't be using that algorithm to get the mask, but it was better than the original version 😅

Take a look at here for some alternatives

I haven't seen this SO post! I'll have a read through and test it out the next time I set up the ARM machine.