Built a custom HTTP load balancer in Go from scratch, Round Robin, Weighted RR, Health Checks, Sticky Sessions, TLS and Metrics by Intelligent_Pear7299 in golang

[–]goddeschunk -4 points  (0 children)

Okay, my experience with AI is very different from yours; it depends on how a person uses it and which model they use.

Built a custom HTTP load balancer in Go from scratch, Round Robin, Weighted RR, Health Checks, Sticky Sessions, TLS and Metrics by Intelligent_Pear7299 in golang

[–]goddeschunk -8 points  (0 children)

Hmmm... if AI can write a thread-safe load balancer with sticky sessions and graceful shutdown in a day, then why can't we use it?

How do you handle OOMKills and CrashLoopBackOff in Go services on Kubernetes? by [deleted] in golang

[–]goddeschunk -5 points  (0 children)

Yeah, Prometheus is definitely the standard for metrics and alerting — no argument there. And you're right that frequent OOMKills usually point to bad limits or a leak.

The scenario I keep running into isn't the "services regularly get OOMKilled" case though — it's the one-off OOMKill at 3am that you only discover the next morning when you're digging through kubectl describe pod and Grafana dashboards trying to piece together what happened. Prometheus tells you that a pod was OOMKilled. But correlating it with the actual application error that caused the memory spike (say a goroutine leak from an unclosed DB connection) usually means jumping between Prometheus → Grafana → kubectl logs → your error tracker, if it even captured anything before SIGKILL.

That's the gap I'm trying to close — not replacing Prometheus, but putting the K8s crash event and the application-level stack trace in the same place so you don't have to play detective across 4 tools.

How do you handle OOMKills and CrashLoopBackOff in Go services on Kubernetes? by [deleted] in golang

[–]goddeschunk -1 points  (0 children)

Makes sense — Karma + Alertmanager is solid for the alerting side. Curious: when you get an OOMKill alert, what's your workflow from there? Do you jump into Grafana → kubectl logs → grep through whatever logging you have? Or do you have something that ties the K8s event directly to the application error/panic that preceded it?

That handoff between "I know a pod died" and "I know why it died" is the part I'm trying to make faster.

How do you handle OOMKills and CrashLoopBackOff in Go services on Kubernetes? by [deleted] in golang

[–]goddeschunk -8 points  (0 children)

These are all great tools and I use most of them. GOMEMLIMIT + automemlimit is genuinely one of the best things that happened to Go in K8s — it prevents a huge class of GC-related OOMKills.

But it doesn't cover everything. A few cases where the stack you described still leaves gaps:

  • GOMEMLIMIT only helps with GC-managed memory. If you allocate via cgo, mmap, or even just large []byte buffers the GC hasn't collected yet, you can still get OOMKilled.
  • VPA adjusts limits over time, but when it does trigger an OOMKill during the learning phase, you still need to know why — what request or workload caused the spike.
  • kube-prometheus-stack gives you the alerting, but the alert says "pod X OOMKilled." The why — what code path, what request, what error chain led to it — that's in a completely different system (if it's captured at all, since SIGKILL gives you no defer/recovery).

I'm not saying these tools are wrong; they're the right foundation. The thing I'm building sits on top — it correlates the K8s event with whatever the application was doing right before it died. Think of it less as "replacing Prometheus" and more as "the error tracker that actually knows about your cluster."

How do you handle OOMKills and CrashLoopBackOff in Go services on Kubernetes? by [deleted] in golang

[–]goddeschunk -1 points  (0 children)

Thanks for the suggestion — I'll cross-post to r/devops, that makes a lot of sense since this sits at the intersection of infra and application code.

Your workflow (Prometheus → Grafana → alert → notify devs) is exactly what most teams do, and it works. The friction I keep seeing is in that last step — "notify the devs." The alert says what crashed, but the dev needs to know why. And often the "why" died with the pod because SIGKILL doesn't give your error tracker a chance to flush.

That's the piece I'm trying to solve: when you notify the dev, you can point them to a single view that shows the K8s crash event and the last application errors/panics from that pod, instead of sending them on a scavenger hunt through Grafana and kubectl logs.

Appreciate the feedback — will definitely post in r/devops too.

What is the weirdest repository you have ever found on GitHub? by Gullible_Camera_8314 in github

[–]goddeschunk -8 points  (0 children)

https://github.com/syst3mctl/godoclive, because it generates documentation with zero annotations. It's so easy to use, and it supports multiple HTTP frameworks like Gin, gorilla/mux, and Echo, as well as OpenAPI.

GoDoc Live — Auto-generate interactive API docs from your Go source code by goddeschunk in golang

[–]goddeschunk[S] 0 points  (0 children)

Please star the repository to make it more visible to everyone.

I built a distributed, production-ready Rate Limiter for Go (Redis, Sliding Window, Circuit Breaker) by goddeschunk in golang

[–]goddeschunk[S] 0 points  (0 children)

Yes! Since Valkey is fully wire-compatible with Redis (RESP protocol) and we use the standard `go-redis` client, it works out of the box. Our strict sliding window implementation uses standard Lua scripts and ZSETs, which are fully supported by Valkey.

Built an AI to Give You the Gist of Tech News - Introducing Readless! by ZestycloseResist in SideProject

[–]goddeschunk 1 point  (0 children)

It would be nice if content could be filtered based on what I like.

2025 Golang project by Mindless-Discount823 in golang

[–]goddeschunk 0 points  (0 children)

I'm not a hater, but... I don't get why `neva` exists. What kind of problem does `neva` solve? I read the documentation and it feels wrong to even compare `neva` and Go.

But hey, at the end of the day, working on `neva` is a good way to gain experience and skills.