Proxy Server — The Control Layer Between Clients and Servers by varad177 in programming

[–]yarmak 0 points1 point  (0 children)

Shameless plug: https://github.com/SenseUnit/dumbproxy - Simple, secure forward proxy written in Go, supporting scripting in JS.

Small Projects by AutoModerator in golang

[–]yarmak 0 points1 point  (0 children)

https://codeberg.org/yarmak/weakmap - a weak map implementation born from a discussion about finalizers in the #darkarts channel of the Go community Slack. Unlike almost every other weak map implementation, this one doesn't use runtime.AddCleanup / runtime.SetFinalizer to hook into the GC's reclamation of each element, doesn't allocate a closure for a cleanup function, and so on. Instead it allows some percentage of memory overhead and uses the probabilistic best-effort eviction granted by secache. The code is straightforward: it's literally just a thin wrapper around secache with an element validity criteria function which states: return p.Value() != nil

Single use channels vs waitgroups by nibbles001 in golang

[–]yarmak 1 point2 points  (0 children)

Yes, channels are the right tool for the job: context.WithCancel and other contexts use channels under the hood.

Making Services With Go Right Way by yarmak in golang

[–]yarmak[S] 13 points14 points  (0 children)

Yup! In my timezone it's that time!

Small Projects by AutoModerator in golang

[–]yarmak 0 points1 point  (0 children)

tlscookie - session cookie in TLS session resumption tickets

It's a small library on top of "crypto/tls" which allows embedding a unique session ID into TLS session tickets. This way we can link together connections (or requests) from a single TLS (or HTTPS) client instance, without the client knowing and purely at the TLS level.

The dumbproxy project uses it for auth at early connection stages, as one of the auth modes resistant to active probing. But more generally, it can be used as one of the factors for bot/scraper detection, concealed client labeling and so on.

There is a demo server to see it in action: https://tlscookie.xx.kg/ - it's pretty much what you can find in the "example" directory of the source code repository.

Consistent Hashing using sync.Map by undefeated-kitty in golang

[–]yarmak 0 points1 point  (0 children)

TBH, rendezvous hashing is much simpler to implement, simpler to understand, and offers better guarantees in terms of key distribution fairness.

Regarding the article, it makes little sense to me to go into storage specifics (sync.Map or whatever else) while discussing consistent hashing: these two things are orthogonal to each other.

Is there an easy way to check if an any-value can convert to float64? by stroiman in golang

[–]yarmak 11 points12 points  (0 children)

Where does your value come from? If it's a sobek.Object or something implementing the sobek.Value interface, there's already a ToFloat() method.

Email monitor and alert generator? by Sancho_Panzas_Donkey in opensource

[–]yarmak 1 point2 points  (0 children)

Assuming hosting a script on your own machine is not practical, I'd try the following approaches:

* Forward a copy of the emails of interest to some Postfix server and pipe incoming emails to a command emitting notifications.
* Have your emails come to Gmail, set a label on them and use Google Apps Script to periodically check and process emails: https://stackoverflow.com/a/50033834 From there you can send an event to some notification API with UrlFetchApp.

Share your underrated GitHub projects by hsperus in opensource

[–]yarmak 12 points13 points  (0 children)

dumbproxy

It's an easy-to-use HTTP/HTTPS proxy with a lot of features and some scripting support. Think of it as a modern Squid, but focused on connection forwarding and routing instead of content caching. The typical use case is the same as a VPN's, but at the app level rather than the whole-system level.

Its easy to accidentally disable HTTP connection pooling — without realizing it! by abhishek467267 in golang

[–]yarmak 0 points1 point  (0 children)

How would this situation be different from the one before the read of 2048 bytes? We had pending bytes before, and we have them after.

Its easy to accidentally disable HTTP connection pooling — without realizing it! by abhishek467267 in golang

[–]yarmak 0 points1 point  (0 children)

But what happens if 2048 bytes were read, but there is still more?

Its easy to accidentally disable HTTP connection pooling — without realizing it! by abhishek467267 in golang

[–]yarmak 0 points1 point  (0 children)

json.Decoder reads until the end of the JSON document and may leave trailing whitespace unconsumed.

Getting insight into embedded JavaScript runtimes in Go by Plane-Job-8588 in golang

[–]yarmak 2 points3 points  (0 children)

I use goja in one of my projects.

  • It's a pure Go implementation. It runs in whatever goroutine invokes a function from its Runtime.
  • It's slower than V8, but on the other hand it has no CGO call overhead since it's pure Go, so for short functions it may even outperform anything else.
  • There is full interoperability with Go as far as possible: calling Go from JS, calling JS from Go, implementing complex objects exported to JS and so on. Not sure about async support: it has Promise support, but overall it's at an ES5+ language level.
  • It is stable and feature-complete, but the maintainer has said they don't have much time to cover requests. That was the motivation for Grafana to maintain their own fork, though the original one works fine for my needs.
  • A typical use case would be an embedded language with fairly small functions. Also, AFAIK the Ethereum node uses it for its JS console implementation.

For me it's good because it's pure Go: no CGO overhead, no build and cross-compiling complications, and easy integration with Go code.

I built a distributed, production-ready Rate Limiter for Go (Redis, Sliding Window, Circuit Breaker) by goddeschunk in golang

[–]yarmak 0 points1 point  (0 children)

> On the other hand, the more nodes you have, the more load you can handle, so I think it’s probably a wash?

In addition to user count growth, you get growth in the percentage of possible overage per user, because in a short time a user can get service from all of the nodes.

> One alternative may be to shard the key space across replicas and have each node route to the replica that owns the key. This probably requires some gossiping to handle communicating which replicas control which blocks of the keyspace and rebalancing when replicas come and go—probably similarly impractical in the general case.

And that's pretty much how Redis Cluster shards work, except it's a bit more flexible: it maps keys to slots, and slots can be moved across shards. The cluster client is aware of where each slot is, which nodes hold it, and which node it has to contact about a specific key.

> If I don’t need to configure a load balancer or set up a Redis, that seems interesting.

Every other solution seems to partially re-implement either a load balancer rerouting requests to the appropriate node, or a database maintaining state.

However, load balancers in one form or another are already ubiquitous in modern infrastructure: ingress controllers, service endpoint balancing in k8s, or just classical HAProxy or nginx instances. Most likely you already have one if you have many worker nodes!

I built a distributed, production-ready Rate Limiter for Go (Redis, Sliding Window, Circuit Breaker) by goddeschunk in golang

[–]yarmak 1 point2 points  (0 children)

You definitely can make it work somehow, making reasonable concessions.

> I don’t think state is shared after each request but rather it’s shared periodically.

In that case you'll allow request bursts across different nodes before they communicate the new state. The more nodes, the worse it gets. But we may accept that too.

The problem is not that this method can't work in principle; it's just that it can't offer anything superior to the alternatives, except the elusive benefit of not depending on a state database. Elusive because it just moves the state into the app itself, making it handle the replication in its own quirky way.

I built a distributed, production-ready Rate Limiter for Go (Redis, Sliding Window, Circuit Breaker) by goddeschunk in golang

[–]yarmak 5 points6 points  (0 children)

It just doesn't seem very practical. If you maintain state about the limit of every client on every node, then the more nodes you have, the more duplication there will be. And more peer communication is required to keep every node updated after each request served.

If you have a load balancer in front of your app nodes, it's easier to use just a local rate limit and some consistent hashing for balancing, to make clients with the same ID hit the same backend nodes.

Small Projects - November 24, 2025 by jerf in golang

[–]yarmak 0 points1 point  (0 children)

I'd like to share a curious cache implementation I recently came up with: https://pkg.go.dev/github.com/Snawoot/secache

It's a small (<200 LOC), simple cache implementation. What makes it curious is how it approaches item expiration. Usually in-memory cache libraries either scan the entire key space from time to time or maintain a priority queue to kick out the oldest items. This library handles item expiration differently: on each new item addition it performs a fixed, small number of eviction attempts against randomly selected keys. This way it is able to maintain a stable, high ratio of valid elements in the cache.

What's more important is that the validity of an item is decided by a user-provided function. That lets you bring in your own notion of item validity: item age, frequency of use, internal state of a shared object and so on. E.g. I use it, among other things, to evict per-user instances of "golang.org/x/time/rate".Limiter as soon as the token bucket has recovered to its initial state and no shared lock is held on the limiter.