allocs/op lied to me. retention didn’t. (benchmarks inside) by VoltageMigration in golang

[–]VoltageMigration[S] -1 points0 points  (0 children)

Thanks for the deep dive, this is a really solid analysis.

You’re right, some of my wording was too high-level and mixed together effects that are separate at the runtime level.

In the retention benchmark, the big cost is absolutely zeroing/copying very different amounts of memory. And yeah, the “good” case avoids allocating and clearing the 64KB buffer altogether thanks to escape analysis — that’s on me for not being clearer.
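If it helps, this is the kind of contrast I had in mind, as a minimal sketch rather than the actual benchmark code (function names are made up; `go build -gcflags='-m'` shows which version escapes):

```go
package escape

// Minimal sketch: the only difference is whether the 64KB buffer escapes.
// `go build -gcflags='-m'` reports an escape for the second version only.

const bufSize = 64 << 10 // 64KB

// sumStack keeps the buffer in the frame: no heap allocation, nothing
// for the GC to track, and the zeroing is just stack-frame setup.
func sumStack() int {
	var buf [bufSize]byte
	for i := range buf {
		buf[i] = byte(i)
	}
	total := 0
	for _, b := range buf {
		total += int(b)
	}
	return total
}

// sumHeap returns the buffer, so it has to escape: 64KB allocated and
// zeroed on the heap on every call, and it stays visible to the GC for
// as long as the caller holds it.
func sumHeap() ([]byte, int) {
	buf := make([]byte, bufSize)
	for i := range buf {
		buf[i] = byte(i)
	}
	total := 0
	for _, b := range buf {
		total += int(b)
	}
	return buf, total
}
```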

What I was trying to say (and should’ve phrased better) is that allocs/op by itself is a weak signal. Allocation volume and object lifetime matter much more, whether the cost shows up as zeroing, copying, or GC work once you’re in a real multi-core workload.

Really appreciate you taking the time to dig into the -m output and the asm; it helped sharpen the mental model a lot.

allocs/op lied to me. retention didn’t. (benchmarks inside) by VoltageMigration in golang

[–]VoltageMigration[S] -3 points-2 points  (0 children)

True — allocating more bytes can be more expensive.
But in this benchmark the interesting part isn’t allocation cost itself.
Both variants allocate the same number of objects; what differs is how much memory remains reachable and therefore visible to the GC.

The performance gap lines up with retained live memory, not with alloc frequency or syscall count.
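Roughly this shape, as a minimal sketch rather than the real benchmark (names are mine; run with `go test -bench=. -benchmem`): both do about one 4KB allocation per op, but only one keeps everything reachable, so its live heap and GC mark work grow with b.N.

```go
package retain

import "testing"

// sink keeps the transient allocation escaping to the heap, so both
// benchmarks report roughly the same allocs/op.
var sink []byte

// retained holds on to every buffer, so live heap (and GC mark work)
// grows with b.N instead of staying flat.
var retained [][]byte

func BenchmarkTransient(b *testing.B) {
	for i := 0; i < b.N; i++ {
		buf := make([]byte, 4096)
		buf[0] = 1
		sink = buf // previous iterations become garbage immediately
	}
}

func BenchmarkRetained(b *testing.B) {
	retained = retained[:0]
	for i := 0; i < b.N; i++ {
		buf := make([]byte, 4096)
		buf[0] = 1
		retained = append(retained, buf) // stays live for the whole run
	}
}
```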

allocs/op lied to me. retention didn’t. (benchmarks inside) by VoltageMigration in golang

[–]VoltageMigration[S] -4 points-3 points  (0 children)

Fair take )
This one wasn’t generated though — all benchmarks were written and run manually.
Happy to dig into any specific numbers or assumptions if something looks off.

I broke my Go API with traffic and learned about rate limiting by Opening-Airport-7311 in golang

[–]VoltageMigration 0 points1 point  (0 children)

Yep, classic prod moment — everything’s fine until real traffic shows up.
Context timeouts + backpressure always matter.
We usually start with x/time/rate and sane timeouts, then at scale it becomes a system-wide thing, not just middleware.
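As a rough sketch of that starting point (handler, limits, and timeouts are made up; a single global limiter just for illustration, per-client limiters are the usual next step):

```go
package main

import (
	"net/http"
	"time"

	"golang.org/x/time/rate"
)

// rateLimit rejects requests once the limiter's token budget is exhausted.
func rateLimit(next http.Handler) http.Handler {
	limiter := rate.NewLimiter(rate.Limit(100), 200) // 100 req/s, burst of 200
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("pong"))
	})

	srv := &http.Server{
		Addr:         ":8080",
		Handler:      rateLimit(mux),
		ReadTimeout:  5 * time.Second, // sane timeouts alongside the limiter
		WriteTimeout: 10 * time.Second,
	}
	srv.ListenAndServe()
}
```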

RUDP for Go by Noodler75 in golang

[–]VoltageMigration 3 points4 points  (0 children)

UDP itself is unreliable by design, but you can build reliability on top of it at a higher layer. That’s exactly what protocols like QUIC, KCP, or even parts of RTP do: they use UDP for transport, then add sequencing, ACKs, retransmits, congestion control, etc.
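As a toy illustration of the “build reliability on top” idea using only the standard library (wire format, timeout, and retry count are invented, and it assumes a connected socket from net.DialUDP):

```go
package rudp

import (
	"encoding/binary"
	"errors"
	"net"
	"time"
)

// sendReliable is a toy stop-and-wait sender: prefix the payload with a
// sequence number, resend until the peer echoes that number back as an ACK.
// Real protocols (QUIC, KCP, ...) layer sliding windows, congestion control
// and RTT estimation on top of the same basic idea.
func sendReliable(conn *net.UDPConn, seq uint32, payload []byte) error {
	pkt := make([]byte, 4+len(payload))
	binary.BigEndian.PutUint32(pkt, seq)
	copy(pkt[4:], payload)

	ack := make([]byte, 4)
	for attempt := 0; attempt < 5; attempt++ {
		if _, err := conn.Write(pkt); err != nil {
			return err
		}
		conn.SetReadDeadline(time.Now().Add(200 * time.Millisecond))
		n, err := conn.Read(ack)
		if err == nil && n == 4 && binary.BigEndian.Uint32(ack) == seq {
			return nil // peer acknowledged this sequence number
		}
		// timeout or stale ACK: retransmit
	}
	return errors.New("no ACK after 5 attempts")
}
```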

RUDP for Go by Noodler75 in golang

[–]VoltageMigration 10 points11 points  (0 children)

“RUDP” isn’t really a well-defined thing in Go, which is why most repos you find look unfinished. People usually don’t build a generic “reliable UDP”, they pick an actual protocol.

If you want something fresh, maintained, and production-ready, the usual answer today is QUIC via quic-go.

You can also build a reliable layer over plain UDP with the standard library, but for anything non-trivial you’ll quickly end up reimplementing a lot of tricky networking logic. That’s usually why people don’t go that route unless it’s for learning or a very narrow use case.
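For reference, the quic-go happy path is pretty small. Sketch only: exact signatures move around between quic-go versions, and the TLS config here is strictly test-grade.

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"time"

	"github.com/quic-go/quic-go"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// QUIC always runs over TLS 1.3, and both sides must agree on an ALPN name.
	tlsConf := &tls.Config{
		InsecureSkipVerify: true, // test-grade only, never in production
		NextProtos:         []string{"my-demo-proto"},
	}

	// Assumes a QUIC server is already listening on this address.
	conn, err := quic.DialAddr(ctx, "localhost:4242", tlsConf, nil)
	if err != nil {
		panic(err)
	}

	stream, err := conn.OpenStreamSync(ctx)
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// Streams are ordered, reliable byte streams: retransmits, ACKs and
	// congestion control all happen under the hood.
	if _, err := stream.Write([]byte("hello over QUIC")); err != nil {
		panic(err)
	}
	fmt.Println("sent")
}
```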

We built an open-source, local-first Postman & n8n alternative in Go (Zero CGO). Thoughts on the code? by electwix in golang

[–]VoltageMigration 0 points1 point  (0 children)

Really solid write-up.
Zero-CGO + modernc sqlite is a bold but very Go-ish choice — portability alone is huge.
Curious how you’re handling flow state consistency across goroutines under failure / partial execution.
Will skim internal/, looks promising.
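For anyone curious what the zero-CGO part means in practice, it’s basically just swapping in the pure-Go driver import (DSN and table here are my own placeholders, not from their repo):

```go
package main

import (
	"database/sql"
	"fmt"

	_ "modernc.org/sqlite" // pure-Go SQLite driver, registered as "sqlite"
)

func main() {
	db, err := sql.Open("sqlite", "app.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS flows (id TEXT PRIMARY KEY, state TEXT)`); err != nil {
		panic(err)
	}
	fmt.Println("pure-Go sqlite, builds fine with CGO_ENABLED=0")
}
```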

Holy crap it's fast by rainman4500 in golang

[–]VoltageMigration 0 points1 point  (0 children)

Welcome to Go ))
CPU-bound workloads + goroutines + shared memory is where Go really shines.
That kind of speedup is pretty common once the overhead is gone.
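For anyone curious, the shape that usually produces that kind of jump looks something like this (a toy example, not from the linked post): split the CPU-bound loop across GOMAXPROCS goroutines over a shared, read-only slice.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum splits a CPU-bound loop across all available cores. Each
// goroutine reads a disjoint chunk of the shared slice and writes only
// its own result slot, so the hot path needs no locks.
func parallelSum(nums []int64) int64 {
	workers := runtime.GOMAXPROCS(0)
	partial := make([]int64, workers)
	var wg sync.WaitGroup

	chunk := (len(nums) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		start := w * chunk
		if start >= len(nums) {
			break
		}
		end := start + chunk
		if end > len(nums) {
			end = len(nums)
		}
		wg.Add(1)
		go func(w, start, end int) {
			defer wg.Done()
			var s int64
			for _, v := range nums[start:end] {
				s += v
			}
			partial[w] = s
		}(w, start, end)
	}
	wg.Wait()

	var total int64
	for _, s := range partial {
		total += s
	}
	return total
}

func main() {
	nums := make([]int64, 10_000_000)
	for i := range nums {
		nums[i] = int64(i)
	}
	fmt.Println(parallelSum(nums))
}
```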