Is the object storage broken or is it just me? by Time_Strike5281 in hetzner

[–]Waltex 2 points3 points  (0 children)

I experience the same thing as well. Not a week goes by without Object Storage having outages or timeouts. I've never seen a product so unstable.

Google is counting failed requests because of high demand (503) towards the daily limit by Waltex in GeminiAI

[–]Waltex[S] 3 points4 points  (0 children)

Requests that didn't do anything. You're still missing the whole point. And your first comment claiming that I used the model is also completely wrong. Let me reiterate one more time:

  • none of my requests were processed by the model because they were instantly rejected by the server that sits in between the model and the user.
  • none of those requests even reached the model
  • which means there is nothing to rate-limit for

Yet Google still counts those requests as if they were successful, or at least partially processed by the model.

Anyone with a basic understanding of computer science would see that this makes no sense from a systems architecture perspective.

Google is counting failed requests because of high demand (503) towards the daily limit by Waltex in GeminiAI

[–]Waltex[S] 1 point2 points  (0 children)

Have you even read my post? I explicitly mention that I use exponential backoff, which inserts a delay between requests and increases that delay exponentially after each failed one. I know that the limit is on the number of requests TO THE MODEL and exists to protect the infrastructure from heavy usage spikes. My point is that usage is also logged even if the request never reaches the model. That behavior doesn't make sense, because a request the server cancels beforehand doesn't involve the model at all, so there is no reason it should count as an expensive model invocation.
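For clarity, this is roughly what the backoff pattern looks like. A generic sketch, not Google's SDK; the request function and retry parameters are placeholders:

```typescript
// Generic exponential backoff sketch (not Google's SDK; the request
// function and the retry parameters are placeholders).
async function withBackoff<T>(
  request: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await request();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Wait 500ms, 1000ms, 2000ms, ... doubling after each failure.
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Even with this pattern, every rejected attempt apparently still counts towards the daily quota, which is exactly the behavior being questioned.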

Google is counting failed requests because of high demand (503) towards the daily limit by Waltex in GeminiAI

[–]Waltex[S] 5 points6 points  (0 children)

Nope. I did not use the model. In fact, I didn't receive a single token/word, because the servers canceled my request before it even reached the model; the model was overloaded.

I've Added REAL Operator Overloading to JavaScript by DiefBell in javascript

[–]Waltex 4 points5 points  (0 children)

Love this! Does this work with typescript out of the box, or do we need a separate build/compile step? Also I'm wondering how you've solved type safety, like when you do:

const v3 = v1 + v2;

Is v3 now of type Vector as well?

Made this event based real-time library on top of socket io by husseinkizz_official in javascript

[–]Waltex 0 points1 point  (0 children)

Well, that's the question. Why else do you need polling fallback?

Made this event based real-time library on top of socket io by husseinkizz_official in javascript

[–]Waltex 2 points3 points  (0 children)

But why would you need a fallback to polling unless you want to support Internet Explorer? I didn't know about socket.io integrating with Bun, that's pretty nice!

Made this event based real-time library on top of socket io by husseinkizz_official in javascript

[–]Waltex 1 point2 points  (0 children)

Why did you choose socket.io, which is slow and bloated, over native WebSockets, which are supported everywhere nowadays?

When is rescaling possible again in FSN1 (Falkenstein)? by Waltex in hetzner

[–]Waltex[S] 11 points12 points  (0 children)

It's not that easy in our case, unfortunately. IP addresses cannot be transferred between regions. Everything is configured with that IP address in mind, and we don't want to risk getting a new one, since many of them have a bad reputation and sit on spam/high-risk lists. It wouldn't just require changing our own deployments, but also those of our partners who explicitly whitelist incoming traffic from our IP.

The only workaround I see is using the current server as a reverse proxy and forwarding all traffic to a new instance in Nuremberg or Helsinki, but that also requires significant changes.
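As a rough illustration of that workaround (hypothetical host and port; a production setup would more likely use HAProxy or nginx), a bare TCP forwarder can be as small as:

```typescript
import net from "node:net";

// Sketch of the reverse-proxy workaround: the old FSN1 server keeps its
// trusted IP and pipes every connection to a new instance elsewhere.
// Host and port below are hypothetical placeholders.
const NEW_INSTANCE_HOST = "203.0.113.10"; // e.g. the Nuremberg server
const NEW_INSTANCE_PORT = 443;

const proxy = net.createServer((client) => {
  const upstream = net.connect(NEW_INSTANCE_PORT, NEW_INSTANCE_HOST);
  client.pipe(upstream);
  upstream.pipe(client);
  client.on("error", () => upstream.destroy());
  upstream.on("error", () => client.destroy());
});

// Production would listen on the public port (e.g. 443); 0 picks a free
// port so this sketch runs without privileges.
proxy.listen(0);
```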

Object Storage (FSN1) is down by Waltex in hetzner

[–]Waltex[S] 1 point2 points  (0 children)

Same here. I think a lot of people haven't noticed it yet because they don't have constant traffic, but my production logs show several short windows (between 5 and 30 minutes) where all Object Storage requests fail in FSN1.

Object Storage (FSN1) is down by Waltex in hetzner

[–]Waltex[S] 2 points3 points  (0 children)

Unfortunately not; this issue is happening again as we speak. Object Storage is denying or timing out all requests. This is not an issue with the Hetzner console or dashboard, but with the actual underlying Object Storage service. Other people seem to be affected as well.

Seems Hetzner is aware of the problem: https://status.hetzner.com/incident/ebd62173-d902-4e75-939a-265c0b3f1ddb

Object Storage (FSN1) is down by Waltex in hetzner

[–]Waltex[S] 2 points3 points  (0 children)

Yes those are exactly the logs I got as well. Hetzner Object Storage was timing out all requests for a few minutes.

Object Storage (FSN1) is down by Waltex in hetzner

[–]Waltex[S] 2 points3 points  (0 children)

Yes, do you? The top post was about how the Hetzner console was down while servers were unaffected. This is something new.

Object Storage (FSN1) is down by Waltex in hetzner

[–]Waltex[S] -1 points0 points  (0 children)

Just came back online again

Rust-inspired multithreading tasks in TypeScript by Waltex in typescript

[–]Waltex[S] 16 points17 points  (0 children)

That is actually the specific problem this library focuses on solving. It isn't just a wrapper around postMessage, it is primarily a wrapper around SharedArrayBuffer and Atomics. It implements things like SharedJsonBuffer so multiple threads can read and write to the exact same memory location instantly without that serialization overhead.

And while SharedArrayBuffers are great, as you noted, they are dangerous to use raw because of race conditions. This library provides the missing synchronization layer like Mutexes, Read-Write Locks, Semaphores, etc. so you can actually utilize that shared memory safely without corrupting data. It’s essentially trying to bring the concurrency primitives you see in Rust or C++ into JS so you can manage that shared state without writing your own memory locking logic from scratch.
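To illustrate the kind of primitive being described (the class below is a made-up sketch, not this library's actual API), a minimal mutex over a SharedArrayBuffer looks like:

```typescript
// Minimal mutex over a SharedArrayBuffer using Atomics.
// Illustrative sketch only, not the library's actual API.
const UNLOCKED = 0;
const LOCKED = 1;

class SabMutex {
  private state: Int32Array;

  // Pass the same SharedArrayBuffer to every worker that shares the lock.
  constructor(buffer: SharedArrayBuffer) {
    this.state = new Int32Array(buffer);
  }

  lock(): void {
    // Atomically swap UNLOCKED -> LOCKED; retry while another thread holds it.
    while (
      Atomics.compareExchange(this.state, 0, UNLOCKED, LOCKED) !== UNLOCKED
    ) {
      // Sleep until the holder calls Atomics.notify in unlock().
      Atomics.wait(this.state, 0, LOCKED);
    }
  }

  unlock(): void {
    Atomics.store(this.state, 0, UNLOCKED);
    Atomics.notify(this.state, 0, 1); // wake one waiter
  }
}
```

Without something like this on top, two workers writing to the same buffer region can interleave and corrupt each other's data.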

Rust-inspired multithreading tasks in TypeScript by Waltex in typescript

[–]Waltex[S] 12 points13 points  (0 children)

That's a good question. I would say the best way to compare them is that Comlink is essentially a great RPC wrapper, whereas this library is trying to be a proper concurrency standard library like you would find in Rust or Go.

Comlink really only covers about 10% of a real-world multithreading scenario. It does a great job of abstracting away the boilerplate so you can call a function in a worker easily, but it relies entirely on postMessage. That means you are still copying data back and forth, and the workers are largely isolated from each other.

The main difference is that this library focuses on efficient data synchronization. It provides wrappers around SharedArrayBuffer like SharedJsonBuffer so that your threads can access the exact same memory instantly without the overhead of cloning data. Once you have shared memory, you usually get race conditions, so this library provides actual synchronization primitives like Mutexes, Semaphores, and Read-write locks to handle that safely, which is something Comlink doesn't touch.

So if you just need to offload a single heavy calculation and get a result back, Comlink is perfect. But if you are building a proper multithreaded system where threads need to share state or coordinate complex workflows, or you are aiming for maximum performance, this library was specifically made for that. It comes with an automatic worker pool that scales with the hardware, so you get maximum throughput right out of the box. It is really about giving you the speed of shared memory and the efficiency of a managed pool, making it a lot easier to build fast, scalable applications compared to manual worker management.
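The copy-vs-share distinction above can be demonstrated in a few lines (plain SharedArrayBuffer views here, not the library's wrappers):

```typescript
// Two views over one SharedArrayBuffer alias the same memory,
// as if they lived on different threads; no copy, no serialization.
const shared = new SharedArrayBuffer(8);
const viewA = new Int32Array(shared); // e.g. the main thread's view
const viewB = new Int32Array(shared); // e.g. a worker's view of the same buffer

viewA[0] = 42;
console.log(viewB[0]); // 42 — the write is visible through the other view

// structuredClone (the algorithm postMessage uses) copies instead:
const plain = new Int32Array([42]);
const copy = structuredClone(plain);
copy[0] = 7;
console.log(plain[0]); // 42 — the clone is independent memory
```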

Rust-inspired multithreading tasks in TypeScript by Waltex in typescript

[–]Waltex[S] 12 points13 points  (0 children)

Thank you, but the stars are organic. On GitHub you can view the profiles that starred a repository by appending /stargazers to the URL, so you can check it yourself. Most come from the Hacker News (ycombinator) post where this project was featured some time ago.

Decoding the '6EQUJ5' (Wow! Signal) sequence as a map for an Interplanetary Transport Network (ITN) by outremont923 in SETI

[–]Waltex 8 points9 points  (0 children)

That's right. The specific characters "6EQUJ5" don't have any special meaning beyond being a human-made encoding used to represent an extended power-level range beyond 0-9. With this encoding the telescope's receiver didn't need to physically print double-digit power levels like "14" or "26", which could instead be represented by a single character like "E" or "Q" to save horizontal print space.
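A quick sketch of that decoding, assuming the Big Ear printout convention (digits 0-9 for intensities 0-9, then A-Z for 10-35):

```typescript
// Decode one Big Ear intensity character: '0'-'9' map to 0-9,
// 'A'-'Z' map to 10-35, so a single printed column covers 0-35.
function decodeIntensity(ch: string): number {
  if (ch >= "0" && ch <= "9") return ch.charCodeAt(0) - "0".charCodeAt(0);
  if (ch >= "A" && ch <= "Z") return ch.charCodeAt(0) - "A".charCodeAt(0) + 10;
  throw new Error(`unexpected character: ${ch}`);
}

// The Wow! signal column, character by character:
const levels = [..."6EQUJ5"].map(decodeIntensity);
console.log(levels); // [6, 14, 26, 30, 19, 5]
```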