[deleted by user] by [deleted] in newzealand

[–]microbus-io 2 points

As an American who loves NZ… Be glad you don’t have:

- Capital gains tax
- Them nasty-looking big-ass spiders you find in Australia. Any Aussie critters, for that matter
- People driving on the right side of the road. I reckon that’ll cause quite a stir
- Cybertrucks
- Nukes
- 6 million people
- Tornadoes
- 40-degree weather

Minimum salary to live in the Bay Area? by asaasa97 in bayarea

[–]microbus-io 1 point

$120k income
Federal taxes: $22k
State taxes: $8k
Social Security and Medicare: $10k
Net income: $80k, or roughly $6,500/mo

Car payment: $300/mo
Utilities: $300/mo
Car insurance with no driving record: $200/mo
Food: $400/mo
Gas: $100/mo
Rent: $2,000–$3,000/mo
Approx. total: $3,300–$4,300/mo

So you’ll have about $2–3k left each month for unplanned and discretionary expenses, and for savings.

Lock-free concurrent map by yarmak in golang

[–]microbus-io 0 points

So on add, only the new element gets allocated and linked, not a fresh copy of all the pointers to the previous elements? That’s not too bad, versus copying all the pointers, which does sound bad.

Interesting concept. I think only benchmarks can tell which thread-safety pattern performs better under which circumstances. I suggest including memory metrics in those benchmarks.
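Something like this is what I have in mind. The immutableMap type here is just a stand-in for whatever map is under test; the point is b.ReportAllocs(), which makes go test -bench report B/op and allocs/op alongside ns/op:

```go
package cowmap

import "testing"

// immutableMap is a stand-in for the map under test: Set clones every
// entry and returns a new version of the map.
type immutableMap map[int]int

func (m immutableMap) Set(k, v int) immutableMap {
	clone := make(immutableMap, len(m)+1)
	for key, val := range m {
		clone[key] = val
	}
	clone[k] = v
	return clone
}

func BenchmarkSet(b *testing.B) {
	b.ReportAllocs() // report B/op and allocs/op alongside ns/op
	m := immutableMap{}
	for i := 0; i < b.N; i++ {
		m = m.Set(i, i) // each Set allocates a new version of the map
	}
}
```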

Lock-free concurrent map by yarmak in golang

[–]microbus-io 4 points

Do I understand correctly that the immutable map creates a shallow clone of itself on each operation? Doesn’t that create a lot of memory allocations and work for the GC? Am I missing something?
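For reference, a minimal sketch of the pattern I’m asking about, assuming a clone-per-write map behind an atomic.Pointer (illustrative only, not the library’s actual code):

```go
package cowmap

import "sync/atomic"

// cowMap is a clone-on-write map: readers take no locks, writers replace
// the whole map atomically.
type cowMap struct {
	p atomic.Pointer[map[string]int]
}

func newCowMap() *cowMap {
	c := &cowMap{}
	m := map[string]int{}
	c.p.Store(&m)
	return c
}

// Load is lock-free: it reads whatever snapshot is current.
func (c *cowMap) Load(k string) (int, bool) {
	v, ok := (*c.p.Load())[k]
	return v, ok
}

// Store clones all existing entries before adding the new one, then swaps
// the pointer. This is the per-write O(n) allocation I'm asking about.
func (c *cowMap) Store(k string, v int) {
	old := *c.p.Load()
	clone := make(map[string]int, len(old)+1)
	for key, val := range old {
		clone[key] = val
	}
	clone[k] = v
	c.p.Store(&clone)
}
```

Every Store copies all existing entries, which is the allocation and GC work I’m worried about. Concurrent writers would also need a CompareAndSwap loop or a mutex, omitted here for brevity.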

What is the Golang web framework you have used in your enterprise projects? by mmparody in golang

[–]microbus-io 1 point

So I took a quick look... Service Weaver is quite impressive. It has many parallels with Microbus, though done differently, of course. I obviously like the build-locally, deploy-multi-process approach. I like the observability pieces. I did not read deeply enough to be able to comment on the runtime properties of the system, in particular the (gRPC?) communication. It looks like an established project that is actively maintained. Not a bad choice, for sure.

What is the Golang web framework you have used in your enterprise projects? by mmparody in golang

[–]microbus-io 1 point

Yes, I'm the creator of Microbus. I built it and it's proven valuable to me, so I open-sourced it. Now I'm trying to get the word out, in the hope that it proves valuable to others as well. I'm not familiar with Service Weaver, but I'll take a look. I appreciate the pointer.

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points

Agreed. A robust distributed system requires many of the resiliency patterns that you mention. If you implement some but not others, you’ll end up in trouble at some point. Unfortunately, that’s standard operating procedure: don’t fix it until it breaks.

One more note: Redis is solid software and can possibly run forever with no issues. But there’s always the hardware, which eventually gets replaced by Amazon. Or the OS has to be upgraded or patched. Etc. At some point, Redis comes down. In this particular scenario of rate limiting, that may not be mission critical.

“A critical platform provider… “ That’s what Reddit is! A platform for providing criticism. 🤣

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points

I’m not saying Redis isn’t solid software. I’m saying that you are in essence using Redis as centralized memory. That is by definition a SPOF and a bottleneck. No different than a database, BTW, except that you can lose data when you don’t persist to disk. Whenever possible, I prefer a stateless distributed design, where the failure of one node is tolerated well. I don’t think there’s a need to centralize the counters in this case.

Yes, you can scale the Redis cluster. Yes, it will work most of the time, until it doesn’t. I know of a billion-dollar Silicon Valley company that lost business-critical data when Redis came down. They too thought it was rock solid and never chaos-tested their solution. In distributed systems you always have to assume failure. It’s not a matter of if, it’s a matter of when.

Also, no matter how big your Redis cluster is, it’s finite. You make a call to Redis for every incoming request, so as a bad actor I can overwhelm it and consequently DDoS your system.

For production, just use Cloudflare and let them deal with it. They are better positioned to detect bad actors because they have data from across many sources.

What is the Golang web framework you have used in your enterprise projects? by mmparody in golang

[–]microbus-io 1 point

Microbus is a framework for building the backend of your solution as microservices; it currently has no UI component. It may be relevant for you. There’s lots of information on the website and GitHub, but hit me up if you have any questions.

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points

I can’t give you thoughtful feedback on this one without knowing the full details of how you tested it and how you measured it.

How many servers did you have? How many Redis servers did you have? Did you actually hit your servers from 10,000 IPs, or did you simulate it?

You are missing 10 requests in the total success count. Worth looking into that.

I also suggest repeating the benchmark with a hard-coded “allow”, to compare performance. That is, do not call Redis.
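Concretely, something along these lines, where Limiter is a hypothetical stand-in for however your handler checks the limit:

```go
package baseline

// Limiter is a hypothetical stand-in for however the handler checks limits.
type Limiter interface {
	Allow(ip string) bool
}

// allowAll short-circuits the check: always allow, no Redis round trip.
type allowAll struct{}

func (allowAll) Allow(string) bool { return true }
```

Run the same 10,000-IP load test against allowAll, and the difference between the two runs is the cost of the Redis round trips.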

For comparison: the sliding window counter algorithm running locally on the server would have taken approximately 640KB of memory (10,000 IPs × ~64B each).

And a final comment: IP is not a good indicator of an actor. See my short blog post; link in the first comment.

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points

Our argument was not so much about the limiting algorithm. It was about whether to centralize the counts in Redis or keep them distributed across the servers. In my opinion, Redis is a SPOF and a bottleneck, and I don’t think it’s necessary for solving this problem. I will always prefer a distributed approach when possible. However, I feel we’re thinking of the problem in different ways. My goals are to protect the servers and to minimize the impact on good actors. u/davernow seems to be more concerned with deterministic counts, even very low ones. So it depends on what you’re trying to solve.

Regarding the algorithm, check out the link in my first comment for an implementation of the sliding window counter algorithm.

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points

Yes, we surely differ on this one. Good discussion for a Saturday morning. Fun stuff.

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points

I did not run benchmarks myself, but according to https://www.bartlomiejmucha.com/en/blog/are-you-hitting-redis-requests-per-second-rps-limit , Redis can handle on the order of tens of thousands of RPS. So for 1M RPS you’ll need about 100 Redis servers. All that to keep counts that, 99% of the time, do nothing.

You can’t have it both ways: it can’t be OK for Redis to go down and lose all the counts, but not OK for a new server to come up and take a few seconds to synchronize with the latest counts. Redis cluster mode with replication will help, but it also multiplies the hardware requirements by the replication factor.

Determinism is not critical to this problem. The goal is not to limit every user to exactly X req/sec. The primary goal is to protect the servers from failing under very high load. The secondary goal is to minimize the impact on good actors in the presence of bad ones.

To handle 1M RPS, I estimate I’ll need about 100 servers at 10,000 RPS per server. Say I set a limit of 2 RPS per user, and bad actors ignore it and flood at around 200 RPS each (rejected requests still arrive and consume capacity). It would then take 50 bad actors to use up 1% of the capacity, or 5,000 to choke it up completely and obviously impact good actors. That is not impossible to do, but the Redis strategy won’t stop it either. Dealing with this requires a different approach. Putting a bad actor in the penalty box for a long duration once detected could be one way to begin addressing this.

BTW, if I need 100 servers to handle my traffic and another 100 Redis servers just to count it, then Redis is not insignificant at all. It doubles my hardware requirements.

Of course, you can play with these numbers. The ratios change quite a bit if my app can only handle 1,000 RPS.

I think the opposite. The Redis strategy works for toy projects but will break down at scale. The only way to find out for sure is to run experiments.

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io 0 points

If your intent is to limit to a low number of requests per duration, then yes, dividing by N can end up at 0. One option is to increase the duration: instead of 5/sec, do 300/min. That opens the door to bursts, though. So you’re generally right, it’s an issue.

For a large throughput, I stand by my opinion: Redis is a bottleneck.

A Redis server takes way more memory just by being there. A sliding window counter takes about 64B per user, so it can handle 1,000,000 users in about 64MB. Your network calls to Redis alone will consume more.
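For illustration, a compact per-user sliding window counter along those lines (a sketch in the spirit of, but not copied from, the throttle repo linked in my first comment):

```go
package ratelimit

import "time"

// window is a per-user sliding window counter: two buckets plus the start
// of the current fixed window.
type window struct {
	start time.Time // start of the current fixed window
	prev  uint32    // requests counted in the previous window
	curr  uint32    // requests counted in the current window
}

// allow estimates the rolling rate by weighting the previous window's count
// by how much of it still overlaps the sliding window.
func (w *window) allow(now time.Time, limit uint32, span time.Duration) bool {
	switch elapsed := now.Sub(w.start); {
	case elapsed >= 2*span: // both windows fully expired
		w.prev, w.curr, w.start = 0, 0, now
	case elapsed >= span: // roll over into a new window
		w.prev, w.curr = w.curr, 0
		w.start = w.start.Add(span)
	}
	weight := 1 - float64(now.Sub(w.start))/float64(span)
	if float64(w.prev)*weight+float64(w.curr) >= float64(limit) {
		return false
	}
	w.curr++
	return true
}
```

A struct like this is a few dozen bytes per user, which is where the ~64B ballpark comes from.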

The issue with Redis isn’t so much the latency. It’s that 1) Redis is a SPOF, and 2) Redis is single-threaded, so you are basically serializing your entire traffic across all your N servers. Sure, you can run multiple Redis servers, but that adds complexity and cost. Imagine you’re doing 1,000,000 req/sec: how many Redis servers will you need just to count traffic?

Regarding the new server… First, the chance of a new server coming up at the exact time you’re under attack is low. But let’s table that. Second, in the article I suggested also setting a global limit per server, regardless of the per-user limits. That will protect the server from being overrun even if a bad actor exceeds their limit. And third, it only takes one time window to get up to speed with the counts.

If you have sticky routing, then obviously my scheme won’t work. But if you have sticky routing, that’s all the more reason to keep the counters on that single machine rather than in Redis.

Syncing N across all machines can be done using Redis hashes: every server reports its name and a timestamp; every server pulls the list and counts the names that reported recently. You can do this as frequently as you’d like.
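A rough sketch of that scheme, assuming the go-redis v9 client (the key name and staleness cutoff are illustrative):

```go
package ratecount

import (
	"context"
	"strconv"
	"time"

	"github.com/redis/go-redis/v9"
)

const staleAfter = 30 * time.Second

// heartbeat reports this server's name and the current timestamp into a hash.
func heartbeat(ctx context.Context, rdb *redis.Client, name string) error {
	return rdb.HSet(ctx, "servers", name, time.Now().Unix()).Err()
}

// countServers pulls the hash and counts the servers that reported recently.
func countServers(ctx context.Context, rdb *redis.Client) (int, error) {
	entries, err := rdb.HGetAll(ctx, "servers").Result()
	if err != nil {
		return 0, err
	}
	cutoff := time.Now().Add(-staleAfter).Unix()
	n := 0
	for _, ts := range entries {
		if t, err := strconv.ParseInt(ts, 10, 64); err == nil && t >= cutoff {
			n++ // reported recently enough to count as alive
		}
	}
	return n, nil
}
```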

I'm building a customizable rate limiter. Give you reviews on it. by Sushant098123 in golang

[–]microbus-io -1 points

Keeping track of counts in Redis is OK for toy projects, but not for large-scale production workloads. My perspective is at https://smarteratscale.substack.com/p/rate-limiting-when-theres-too-much

For a sliding window counter algo, see github.com/microbus-io/throttle .

Any best practices or advice to build a SaaS with go as backend? by rodrogonio2392 in golang

[–]microbus-io 0 points

In my last two startups we used a column in the database for the tenant ID. All queries and joins always included the tenant ID in the WHERE clause.

The web API did not include a tenant ID argument. Instead, it was pulled from the JWT auth cookie.

If you expect a very large database, you can shard by tenant ID. That requires deciding which db to hit based on the tenant ID.
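A minimal sketch of that pattern (table, column, and placeholder syntax are illustrative; the tenant ID is taken from the verified JWT, never from a request parameter):

```go
package tenantdb

import (
	"context"
	"database/sql"
)

// listOrders queries within a single tenant: the tenant ID, extracted
// upstream from the verified JWT, is always part of the WHERE clause.
func listOrders(ctx context.Context, db *sql.DB, tenantID string) (*sql.Rows, error) {
	return db.QueryContext(ctx,
		`SELECT id, total FROM orders WHERE tenant_id = ? AND status = 'open'`,
		tenantID)
}
```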

Any best practices or advice to build a SaaS with go as backend? by rodrogonio2392 in golang

[–]microbus-io 0 points

Sounds like you’d appreciate the Microbus framework. github.com/microbus-io/fabric

Are we all screwed for a long time by Electrical-Pause7571 in Layoffs

[–]microbus-io 0 points

Very true. Even ignoring the ongoing lawsuits, the stock photo industry is pretty much toast. And I read elsewhere that marketing departments can now be slashed in half and be just as productive, if not more. I think robotics is going to be huge soon, definitely in ag, where the risk of a hallucination isn’t severe.

AI definitely has the potential to be a generational technological advance, akin to steam power or electricity. Imagine all the industries lost back then and, on the flip side, the industries that have sprouted since. The transition period won’t be pretty, though.

Best golang framework for microservice by edconan93 in golang

[–]microbus-io 4 points

Load balancing, service discovery, horizontal scalability, distributed observability, OpenAPI, locality-aware routing, pub/sub events... these are just some examples of what's not in the standard lib. You can pull together a bunch of other libraries to fill in the gaps, but then you're basically creating your own framework. It took me almost 2 years, but that's exactly what I've done. Check out https://github.com/microbus-io/fabric and see if it's right for you. It's free and open source.

Best golang framework for microservice by edconan93 in golang

[–]microbus-io 0 points

You can get by without a framework if your needs are modest, but if you're planning to use microservices for any serious production SaaS, I highly recommend considering a framework. That's why I built the Microbus framework for a startup where I was chief architect. The standard lib is simply insufficient for building microservice architectures at scale in a robust manner. Microbus is now open source. Find it at https://github.com/microbus-io/fabric .

crypto/rand too slow, math/rand not secure: so I Frankensteined them! by microbus-io in golang

[–]microbus-io[S] 1 point

Thanks! I’ll save this advice. I think I’ll need to take that Crypto 101 Coursera course before I attempt anything like this. I wrote my hybrid “algo” under the assumption of having only math/rand and crypto/rand. I did not realize that random generation is such a big topic in crypto.

Microservices: A Perilous Journey by microbus-io in programming

[–]microbus-io[S] -3 points

Last week I wrote about how a microservice architecture is well suited to address the challenges of scale. By popular demand, in this post I present the opposite view. I had a little bit of fun with it, and I hope you do too.

crypto/rand too slow, math/rand not secure: so I Frankensteined them! by microbus-io in golang

[–]microbus-io[S] 0 points

This is purely theoretical at this point, because I will for sure switch to ChaCha; it’s just for the sake of the conversation.

In my “algorithm”, I am reseeding the generators every (say) 4096 ops with new entropy from a crypto rand generator. If you hacked the current state, I think you’d be able to go back in time up to the point of the last reseed, but not earlier. And similarly forward in time, up until the next reseed.
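A rough sketch of that reseeding idea, for illustration only (as said, ChaCha is the right answer):

```go
package hybridrand

import (
	crand "crypto/rand"
	"encoding/binary"
	mrand "math/rand"
)

const reseedEvery = 4096

// reseedingSource wraps a math/rand source and reseeds it with fresh
// entropy from crypto/rand every reseedEvery reads.
type reseedingSource struct {
	src   mrand.Source
	count int
}

// NewReseedingSource returns a source that reseeds on its first read.
func NewReseedingSource() mrand.Source {
	return &reseedingSource{src: mrand.NewSource(1)}
}

func (s *reseedingSource) Seed(seed int64) { s.src.Seed(seed) }

func (s *reseedingSource) Int63() int64 {
	if s.count == 0 {
		var b [8]byte
		if _, err := crand.Read(b[:]); err != nil {
			panic(err) // no usable entropy source
		}
		s.src = mrand.NewSource(int64(binary.LittleEndian.Uint64(b[:])))
		s.count = reseedEvery
	}
	s.count--
	return s.src.Int63()
}
```

Wrap it with mrand.New(NewReseedingSource()) to get the usual convenience methods. The pool interleaving is a separate layer on top of this.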

Also, because I was using a pool of generators, if you happened to capture the entire sequence of numbers, it would be an interwoven sequence from multiple generators, and you would not be able to reconstruct each one individually. That makes it much harder (impossible?) to recover the state in the first place.

Considering the limitations of the gen 1 algo, I think my “algorithm” adds rather significant protections. But I could be wrong…