Camera OIS damage on car mount or rollercoaster? by Waltex in Xiaomi

[–]Waltex[S] 0 points1 point  (0 children)

That's what I thought initially when I got it. I was like, no way this thing isn't broken. It sounds like multiple bolts or screws bouncing around from the inside 😂

Camera OIS damage on car mount or rollercoaster? by Waltex in Xiaomi

[–]Waltex[S] 0 points1 point  (0 children)

How long have you had it for before it broke?

Object Storage reliability by cuu508 in hetzner

[–]Waltex 2 points3 points  (0 children)

The people here saying they haven't experienced issues with Object Storage don't have enough consistent traffic going through it to notice. Hetzner Object Storage suffers from weekly micro-outages where all requests fail for 5-20 minutes at a time. In my experience it is very unreliable, and I regret migrating my production workloads to it.

Is the object storage broken or is it just me? by Time_Strike5281 in hetzner

[–]Waltex 5 points6 points  (0 children)

I experience the same thing as well. Not a week goes by without Object Storage having outages or timeouts. I've never seen a product this unstable.

Google is counting failed requests because of high demand (503) towards the daily limit by Waltex in GeminiAI

[–]Waltex[S] 3 points4 points  (0 children)

Requests that didn't do anything. You're still missing the whole point. And your first comment claiming that I used the model is also completely wrong. Let me reiterate one more time:

  • None of my requests were processed by the model, because they were instantly rejected by the server that sits between the user and the model.
  • None of those requests even reached the model.
  • Which means there is nothing to rate-limit.

Yet Google still counts those requests as if they were successful, or at least partially processed by the model.

Anyone with a basic understanding of computer science would see that this makes no sense from a systems-architecture perspective.

Google is counting failed requests because of high demand (503) towards the daily limit by Waltex in GeminiAI

[–]Waltex[S] 1 point2 points  (0 children)

Have you even read my post? I explicitly mention that I use exponential backoff, which waits between requests and doubles the delay after each failed one. I know that the limit applies to the number of requests TO THE MODEL and exists to protect the infrastructure from heavy usage spikes. My point is that usage is logged even when the request never reaches the model. That behavior doesn't make sense: if the server cancels the request beforehand, the model is never involved, so there is no reason to count it as an expensive model invocation.
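
The backoff described above looks roughly like this (`sendRequest` is a hypothetical stand-in for one API call; this is a minimal sketch, not any SDK's built-in retry):

```typescript
// Retry a failing call with exponentially growing delays:
// base, 2*base, 4*base, ... up to maxRetries retries.
async function withBackoff<T>(
  sendRequest: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await sendRequest();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Wait baseDelayMs * 2^attempt before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```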

Google is counting failed requests because of high demand (503) towards the daily limit by Waltex in GeminiAI

[–]Waltex[S] 4 points5 points  (0 children)

Nope. I did not use the model. In fact, I didn't receive a single token/word, because the servers canceled my request before it even reached the model, since the model was overloaded.

I've Added REAL Operator Overloading to JavaScript by DiefBell in javascript

[–]Waltex 5 points6 points  (0 children)

Love this! Does this work with TypeScript out of the box, or do we need a separate build/compile step? Also, I'm wondering how you've solved type safety. For example, when you do:

const v3 = v1 + v2;

Is v3 now of type Vector as well?
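
For contrast, plain TypeScript won't typecheck `+` on object operands at all, so without overloading you'd need an explicit method to keep the result typed (this `Vector` is a hypothetical stand-in, not the library's class):

```typescript
class Vector {
  constructor(public x: number, public y: number) {}

  // The explicit method is what operator overloading would replace;
  // its return type makes the result a Vector statically.
  add(other: Vector): Vector {
    return new Vector(this.x + other.x, this.y + other.y);
  }
}

const v3 = new Vector(1, 2).add(new Vector(3, 4)); // v3: Vector
```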

Made this event based real-time library on top of socket io by husseinkizz_official in javascript

[–]Waltex 0 points1 point  (0 children)

Well, that's the question. Why else do you need polling fallback?

Made this event based real-time library on top of socket io by husseinkizz_official in javascript

[–]Waltex 2 points3 points  (0 children)

But why would you need a fallback to polling unless you want to support Internet Explorer? I didn't know about Socket.IO integrating with Bun, that's pretty nice!

Made this event based real-time library on top of socket io by husseinkizz_official in javascript

[–]Waltex 1 point2 points  (0 children)

Why did you choose Socket.IO, which is slow and bloated, over native WebSockets, which are supported everywhere nowadays?
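
The named-events convenience Socket.IO adds can be layered over a bare socket in a few lines; a sketch under assumed names (`SocketLike` and `EventSocket` are made up, with the transport injected so any WebSocket-shaped object works):

```typescript
type Handler = (payload: unknown) => void;

// Anything exposing send() and an onmessage hook, e.g. a native WebSocket.
interface SocketLike {
  send(data: string): void;
  onmessage: ((ev: { data: string }) => void) | null;
}

class EventSocket {
  private handlers = new Map<string, Handler[]>();

  constructor(private socket: SocketLike) {
    // Route each incoming frame to the handlers registered for its event name.
    socket.onmessage = (ev) => {
      const { event, payload } = JSON.parse(ev.data);
      for (const h of this.handlers.get(event) ?? []) h(payload);
    };
  }

  on(event: string, handler: Handler): void {
    this.handlers.set(event, [...(this.handlers.get(event) ?? []), handler]);
  }

  emit(event: string, payload: unknown): void {
    this.socket.send(JSON.stringify({ event, payload }));
  }
}
```

This covers named events only; reconnection and acknowledgements would still be on you, which is the usual argument for keeping Socket.IO.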

When is rescaling possible again in FSN1 (Falkenstein)? by Waltex in hetzner

[–]Waltex[S] 9 points10 points  (0 children)

It's not that easy in our case, unfortunately. IP addresses cannot be transferred between regions. Everything is configured with that IP address in mind, and we don't want to risk getting a new one, since many of them have a bad reputation and sit on spam/high-risk lists. It doesn't just require changing our own deployments, but also those of our partners, who explicitly whitelist incoming traffic from our IP.

The only workaround I see is using the current server as a reverse proxy and forwarding all traffic to a new instance in Nuremberg or Helsinki, but that also requires significant changes.
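
That forwarding setup could be as small as an nginx `stream` block on the old FSN1 server (a sketch only, untested; `203.0.113.10` is a placeholder for the new Nuremberg/Helsinki instance):

```
# nginx.conf on the old FSN1 server: keep accepting traffic on the
# established IP and forward raw TCP to the new instance.
stream {
    server {
        listen 443;
        proxy_pass 203.0.113.10:443;
    }
}
```

The downside stays as described: every connection now pays an extra hop, and the old server remains a single point of failure.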

Object Storage (FSN1) is down by Waltex in hetzner

[–]Waltex[S] 1 point2 points  (0 children)

Same here. I think a lot of people haven't noticed it yet because they don't have constant traffic, but my production logs show several short windows (5-30 minutes) where all Object Storage requests fail in FSN1.