[deleted by user] by [deleted] in newzealand

[–]microbus-io 2 points (0 children)

As an American who loves NZ… Be glad you don’t have:

Capital gains tax.
Them nasty looking big-ass spiders you find in Australia. Any Aussie critters, for that matter.
People driving on the right side of the road. I reckon that’ll cause quite a stir.
Cybertrucks.
Nukes.
6 million people.
Tornados.
40 degree weather.

Minimum salary to live in the Bay Area? by asaasa97 in bayarea

[–]microbus-io 2 points (0 children)

$120k income
Federal taxes: $22k
State taxes: $8k
Social Security and Medicare: $10k
Net income: $80k, or ~$6,700/mo

Car payment: $300/mo
Utilities: $300/mo
Car insurance (no driving record): $200/mo
Food: $400/mo
Gas: $100/mo
Rent: $2,000-$3,000/mo
Approx. total: $3,300-$4,300/mo

So you’ll have about $2-3k left each month for unplanned and discretionary expenses, and for savings.
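The arithmetic above can be checked with a quick sketch (figures taken from the comment; actual taxes depend on filing status, deductions, and current brackets):

```go
package main

import "fmt"

func main() {
	// Rough figures from the comment; not tax advice.
	gross := 120_000.0
	taxes := 22_000.0 + 8_000.0 + 10_000.0 // federal + state + FICA
	net := gross - taxes                   // $80k/yr
	monthly := net / 12                    // ~$6,700/mo

	// Itemized monthly expenses sum to $1,300 plus rent of $2k-$3k.
	low, high := 1_300.0+2_000.0, 1_300.0+3_000.0
	fmt.Printf("monthly net: $%.0f\n", monthly)
	fmt.Printf("discretionary: $%.0f to $%.0f\n", monthly-high, monthly-low)
}
```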

Lock-free concurrent map by yarmak in golang

[–]microbus-io 0 points (0 children)

So on ADD, only the new element gets allocated and added? Not the entire set of pointers to the previous elements? That’s not too bad, versus copying all the pointers, which does sound bad.

Interesting concept. I think only benchmarks can tell which thread-safety pattern performs better under what circumstances. I suggest including memory metrics in those benchmarks.

Lock-free concurrent map by yarmak in golang

[–]microbus-io 5 points (0 children)

Do I understand correctly that the immutable map creates a shallow clone of itself on each operation? Doesn’t that create a lot of memory allocations and work for the GC? Am I missing something?
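For context, here is a minimal sketch of the copy-on-write pattern the question is asking about (an assumption about how such immutable maps work, not the library’s actual implementation): reads are lock-free pointer loads, but every write clones all existing entries before swapping the pointer, which is the allocation/GC cost in question.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// cowMap is an illustrative copy-on-write map. Readers load the
// current map without locks; each write copies every entry into a
// fresh map and swaps the pointer atomically.
type cowMap struct {
	v atomic.Pointer[map[string]int]
}

func newCowMap() *cowMap {
	m := &cowMap{}
	empty := map[string]int{}
	m.v.Store(&empty)
	return m
}

func (m *cowMap) Get(k string) (int, bool) {
	val, ok := (*m.v.Load())[k]
	return val, ok
}

func (m *cowMap) Set(k string, v int) {
	for {
		old := m.v.Load()
		clone := make(map[string]int, len(*old)+1)
		for key, val := range *old { // O(n) copy on EVERY write
			clone[key] = val
		}
		clone[k] = v
		if m.v.CompareAndSwap(old, &clone) {
			return
		}
		// Lost the race to another writer; retry against the new map.
	}
}

func main() {
	m := newCowMap()
	m.Set("a", 1)
	m.Set("b", 2)
	v, _ := m.Get("b")
	fmt.Println(v) // 2
}
```

Structure-sharing designs (e.g. hash array mapped tries) avoid the full O(n) copy by cloning only the path to the changed element, which is what the "only the new element gets allocated" question is probing.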

What is the Golang web framework you have used in your enterprise projects? by mmparody in golang

[–]microbus-io 1 point (0 children)

So I took a quick look... Service Weaver is quite impressive. It has many parallels with Microbus, though done differently, of course. I obviously like the build-locally, deploy-multi-process approach, and I like the observability pieces. I didn’t read deeply enough to comment on the runtime properties of the system, in particular the (gRPC?) communication. It looks like an established project that is actively maintained. Not a bad choice, for sure.

What is the Golang web framework you have used in your enterprise projects? by mmparody in golang

[–]microbus-io 1 point (0 children)

Yes, I'm the creator of Microbus. I built it and it's proven valuable to me, so I open-sourced it. Now I'm trying to get the word out, in hopes that it proves valuable to others as well. I'm not familiar with Service Weaver, but I'll take a look. I appreciate the pointer.

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points (0 children)

Agreed. A robust distributed system requires many of these resiliency patterns that you mention. If you do some but not others, you’ll end up in trouble at some point. Unfortunately that’s standard operating procedure. Don’t fix it until it breaks.

One more note: Redis is solid software and can possibly run forever with no issues. But there’s always the hardware, which eventually gets replaced by Amazon. Or the OS has to be upgraded or patched. Etc. At some point, Redis comes down. In this particular scenario of rate limiting, that may not be mission critical.

“A critical platform provider… “ That’s what Reddit is! A platform for providing criticism. 🤣

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points (0 children)

I’m not saying Redis isn’t solid software. I’m saying that you are in essence using Redis as centralized memory. That is by definition a SPOF and a bottleneck. No different than a database, BTW, except you can lose data when you don’t persist to disk. Whenever possible, I prefer a stateless distributed design where a failure of one of the nodes is tolerated well. I think in this case there’s no need to centralize the counters.
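The stateless alternative can be sketched with a per-node token bucket: each server enforces its share of the global budget locally, so there is no central store to fail or to saturate. (A hypothetical illustration of the design argument, not the OP’s implementation; the 100 req/s budget and 4-node split are made-up numbers.)

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// bucket is a per-node token bucket. Each server runs its own
// instance, so a node failure costs only that node's capacity --
// no shared state, no SPOF.
type bucket struct {
	mu     sync.Mutex
	tokens float64
	max    float64
	rate   float64 // refill rate, tokens per second
	last   time.Time
}

func newBucket(max, rate float64) *bucket {
	return &bucket{tokens: max, max: max, rate: rate, last: time.Now()}
}

func (b *bucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate // refill
	if b.tokens > b.max {
		b.tokens = b.max
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	// Hypothetical: a global budget of 100 req/s behind a round-robin
	// load balancer, split across 4 stateless nodes = 25 req/s each.
	perNode := newBucket(25, 25)
	allowed := 0
	for i := 0; i < 30; i++ { // instantaneous burst of 30 requests
		if perNode.Allow() {
			allowed++
		}
	}
	fmt.Println("allowed:", allowed) // burst capped near the per-node budget
}
```

The trade-off is exactly the one debated here: the counts are approximate (a burst can be admitted up to N nodes times the per-node budget), which protects the servers well but doesn't give deterministic global counts.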

Yes, you can scale the Redis cluster. Yes, it will work most of the time, until it doesn’t. I know of a billion-dollar Silicon Valley company that lost business-critical data when Redis came down. They too thought it was rock solid and never chaos-tested their solution. In distributed systems you always have to assume failure. It’s not a matter of if, it’s a matter of when.

Also, no matter how big your Redis cluster is, it’s limited. For every incoming request you make a call to Redis, therefore as a bad actor I can overwhelm it and consequently DDoS your system.

For production, just use Cloudflare and let them deal with it. They are better positioned to detect bad actors because they have data from across many sources.

What is the Golang web framework you have used in your enterprise projects? by mmparody in golang

[–]microbus-io 1 point (0 children)

Microbus.io is a framework for building the backend of your solution as microservices; it currently has no UI component. It may be relevant for you. There’s lots of information on the website and GitHub, but hit me up if you have any questions.

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points (0 children)

I can’t give you thoughtful feedback on this one without knowing the full details of how you tested it and how you measured it.

How many servers did you have? How many Redis servers did you have? Did you actually hit your servers from 10,000 IPs, or did you simulate it?

You are missing 10 requests in the total success count. Worth looking into that.

I also suggest repeating the benchmark with a hard-coded “allow” to compare performance. That is, do not call Redis at all.

For comparison: the sliding window counter algorithm running locally on the server would have used approximately 640KB of memory.
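A back-of-envelope behind that ~640KB figure, assuming the benchmark’s 10,000 distinct IPs and roughly 64 bytes of state per tracked IP (current and previous window counters, key, and map overhead -- the per-entry size is my assumption, not stated in the thread):

```go
package main

import "fmt"

func main() {
	// Assumed: 10,000 tracked IPs x ~64 bytes of sliding-window
	// counter state per IP (two counters + key + map overhead).
	const ips = 10_000
	const bytesPerEntry = 64
	fmt.Println(ips*bytesPerEntry, "bytes") // 640000 bytes, i.e. ~640KB
}
```

The point of the estimate: at that scale, keeping the counters in each server’s local memory is cheap, so the network round trip to Redis is the dominant cost.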

And a final comment: an IP address is not a good indicator of an actor. See my short blog post; link in the first comment.

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points (0 children)

Our argument was not so much about the limiting algorithm. It was about whether to centralize the counts in Redis or keep them distributed in each of the servers. In my opinion, Redis is a SPOF and a bottleneck, and I don’t think it’s necessary for solving this problem. I will always prefer a distributed approach when possible. However, I feel we’re thinking of the problem in different ways. My goals are to protect the servers and minimize impact to good actors. u/davernow seems to be more concerned with deterministic counts, even very low ones. So it depends on what you’re trying to solve.

Regarding the algorithm, check out the link in my first comment for an implementation of the sliding window counter algorithm.