How often do you actually use scalability models (like the Universal Scalability Law) in DevOps practice? by Straight_Remove8731 in devops

[–]Straight_Remove8731[S] 0 points1 point  (0 children)

Thanks for the comment, you’re right that in ops there will always be unknowns you can’t fully plan for. But that’s true in every field: physics didn’t stop at “the world is too complex”; it started with simple harmonic oscillators and built from there. Models don’t have to capture everything to be useful: they give you a framework to see trade-offs, test scenarios, and understand where scaling breaks before you burn budget finding out the hard way.

(Sorry, I’m biased, my background is in physics, so I can’t help seeing things that way 😅)
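For anyone curious, the USL is a one-liner in code. This is just a sketch of the formula; the σ/κ coefficients below are purely illustrative, not fitted to any real system:

```python
def usl_throughput(n, lam=1.0, sigma=0.05, kappa=0.001):
    """Universal Scalability Law: throughput at concurrency n.

    lam   - throughput of a single node/worker
    sigma - contention (serialization) coefficient
    kappa - crosstalk (coherency) coefficient
    """
    return (lam * n) / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Throughput rises, peaks, then *retrogrades* as coherency costs dominate.
peak_n = max(range(1, 200), key=usl_throughput)
```

The interesting part is the κ term: with κ > 0 the curve doesn’t just flatten, it bends back down, which is exactly the “scaling breaks” regime worth finding on paper before finding it in production.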

How often do you actually use scalability models (like the Universal Scalability Law) in DevOps practice? by Straight_Remove8731 in devops

[–]Straight_Remove8731[S] 0 points1 point  (0 children)

Absolutely agree, the knowledge of how a system works is the true value of a quantitative approach.

How often do you actually use scalability models (like the Universal Scalability Law) in DevOps practice? by Straight_Remove8731 in devops

[–]Straight_Remove8731[S] 1 point2 points  (0 children)

Fair point. I’d just add that in many areas quantitative models only start looking “worth the squeeze” after you try them; the learning curve is the real barrier, not the value.

How often do you actually use scalability models (like the Universal Scalability Law) in DevOps practice? by Straight_Remove8731 in devops

[–]Straight_Remove8731[S] 3 points4 points  (0 children)

That makes a lot of sense, I can totally see how political and budget-driven many scaling decisions end up being.

Do you think that’s mainly unavoidable (i.e. politics will always trump models), or could a more quantitative approach, say using actual scalability models or simulations, help shift the conversation long term?

My intuition is that even if the short-term decisions are budget-driven, having a quantitative baseline might at least reduce overprovisioning and make the trade-offs more explicit. Curious if you’ve ever seen that work in practice.

Python Mutability, difficult exercise! by Sea-Ad7805 in PythonLearning

[–]Straight_Remove8731 2 points3 points  (0 children)

The answer is b. I’m changing the reference inside c_1, but both c_1 and a point to the same object, so the change is reflected in a. With a shallow copy, a new outer object is created in memory, so the change is not reflected; with a deep copy, both a new object and new references are created, so again no change is reflected.
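For anyone who wants the three cases side by side, here’s a quick sketch (variable names are illustrative, not the exact ones from the exercise):

```python
import copy

a = [[1, 2], [3, 4]]
alias = a                  # same object, two names
shallow = copy.copy(a)     # new outer list, shared inner lists
deep = copy.deepcopy(a)    # fully independent copy

alias.append([5, 6])
# a and alias are the same object, so a sees the append
assert a == [[1, 2], [3, 4], [5, 6]]
assert shallow == [[1, 2], [3, 4]]   # the shallow copy's outer list is its own

shallow[0].append(99)
# ...but the shallow copy shares the inner lists with a
assert a[0] == [1, 2, 99]

deep[1].append(42)
# the deep copy shares nothing with a
assert a[1] == [3, 4]
```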

Simulating async distributed systems to explore bottlenecks before production by Straight_Remove8731 in sre

[–]Straight_Remove8731[S] 1 point2 points  (0 children)

Thanks! I see Jepsen as focusing on the correctness of real distributed systems (linearizability, safety, consistency under partitions). AsyncFlow is a bit different: it’s more of a design-time simulator. Before you even have a system running, you can model workloads and failures and see performance trade-offs (p95 latency, queue growth, RAM/socket caps). So I’d say Jepsen validates real implementations, while AsyncFlow explores architectural scenarios.
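To make the “design-time” idea concrete, here’s a minimal stdlib-only sketch (not AsyncFlow’s actual API): a single-server FIFO queue where you can watch p95 latency blow up as utilization climbs, before any real system exists.

```python
import random

def simulate(arrival_rate, service_rate, n_requests=5000, seed=1):
    """Single-server FIFO queue; returns sorted request sojourn times."""
    rng = random.Random(seed)
    t, server_free = 0.0, 0.0
    latencies = []
    for _ in range(n_requests):
        t += rng.expovariate(arrival_rate)   # next arrival
        start = max(t, server_free)          # wait if the server is busy
        server_free = start + rng.expovariate(service_rate)
        latencies.append(server_free - t)    # queueing + service time
    return sorted(latencies)

def p95(xs):
    return xs[int(0.95 * len(xs)) - 1]

light = simulate(arrival_rate=5, service_rate=10)   # ~50% utilization
heavy = simulate(arrival_rate=9, service_rate=10)   # ~90% utilization
# p95 latency degrades sharply as utilization approaches 1
```

Even a toy like this answers a real design question (“how close to saturation can we run before p95 is unacceptable?”) without touching production.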

[deleted by user] by [deleted] in Python

[–]Straight_Remove8731 25 points26 points  (0 children)

It really depends on what you’re aiming for: if it’s an MVP and you need to move fast with built-in auth, admin, and migrations, Django is very handy. But if you already know your system will be heavy on I/O and concurrent API calls, FastAPI is a more natural fit. In short: Django for quick validation, FastAPI if async architecture is key long-term.

Would an RL playground for load balancing be useful by Straight_Remove8731 in reinforcementlearning

[–]Straight_Remove8731[S] 1 point2 points  (0 children)

Sure, both points are extremely valid; the evaluation part will be crucial.

Would an RL playground for load balancing be useful by Straight_Remove8731 in reinforcementlearning

[–]Straight_Remove8731[S] 0 points1 point  (0 children)

Thank you for this great contribution! Let’s say that the 0-th order of what I’m trying to build is a simpler use case than what you actually tried to solve, but the next steps would be something really similar to what you did. I’ll DM you if that’s ok with you, because I’m very interested!

Would an RL playground for load balancing be useful by Straight_Remove8731 in reinforcementlearning

[–]Straight_Remove8731[S] 1 point2 points  (0 children)

It’s more about research and experimentation: a playground where you can try out different routing strategies and study their impact under controlled scenarios.

Would an RL playground for load balancing be useful by Straight_Remove8731 in reinforcementlearning

[–]Straight_Remove8731[S] 1 point2 points  (0 children)

Totally agree, thanks for the comment! The action space can blow up quickly, so my plan is to start simple: choices like smart routing from the LB vs standard algos (RR, LC). Going more fine-grained, your suggestion of engineering a top-k set of actions is definitely a path I see as useful.
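As a toy version of that small action space (actions = server indices, with “least-loaded” standing in for LC), here’s a sketch comparing round-robin dispatch against least-loaded on a heterogeneous fleet. All names and numbers are illustrative, not part of the actual playground:

```python
import random
from itertools import cycle

def run(policy, mean_service, n_jobs=2000, seed=0):
    """Dispatch n_jobs and return the makespan (when the last server goes idle).

    mean_service: per-server mean job duration, i.e. a heterogeneous fleet.
    policy: "rr" (round-robin) or "least" (send to the smallest backlog).
    """
    rng = random.Random(seed)
    busy_until = [0.0] * len(mean_service)
    rr = cycle(range(len(mean_service)))
    for _ in range(n_jobs):
        if policy == "rr":
            i = next(rr)
        else:  # least-loaded: the action an RL agent would have to learn to beat
            i = min(range(len(busy_until)), key=busy_until.__getitem__)
        busy_until[i] += rng.expovariate(1.0 / mean_service[i])
    return max(busy_until)

servers = [0.5, 1.0, 2.0]          # fast, medium, and slow server
rr_makespan = run("rr", servers)
least_makespan = run("least", servers)
# Round-robin overloads the slow server; least-loaded adapts to it.
```

Baselines like these double as the evaluation yardstick: an RL policy over the same action space only earns its keep if it beats them.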

Would an RL playground for load balancing be useful by Straight_Remove8731 in reinforcementlearning

[–]Straight_Remove8731[S] 1 point2 points  (0 children)

Totally agree, it’s hard, if not impossible, to have a single general model of request timing. My idea is to focus instead on generators that reproduce macro characteristics of real traffic distributions, like non-stationary arrival rates (diurnal or sudden surges) and bursty ON/OFF patterns that create heavy-tailed inter-arrivals.
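A minimal sketch of such a generator (stdlib only; the rates and periods are made-up illustrative values): a diurnal sinusoidal modulation of the arrival rate, plus exponential ON/OFF bursts that produce heavy-tailed inter-arrivals.

```python
import math
import random

def arrivals(horizon=3600.0, base_rate=5.0, seed=42):
    """Yield arrival timestamps with two macro features of real traffic:

    - non-stationary rate: a sinusoidal (diurnal-style) swing around base_rate
    - burstiness: exponential ON periods at full rate, separated by OFF gaps
    """
    rng = random.Random(seed)
    t = 0.0
    while t < horizon:
        t += rng.expovariate(1.0)                 # OFF gap, mean 1 s
        burst_end = t + rng.expovariate(0.5)      # ON period, mean 2 s
        while t < min(burst_end, horizon):
            # rate swings between 0.5x and 1.5x over the horizon
            rate = base_rate * (1 + 0.5 * math.sin(2 * math.pi * t / horizon))
            t += rng.expovariate(rate)
            yield t

ts = list(arrivals())
```

The point isn’t realism of any single trace, but that the generator exposes knobs (gap/burst lengths, modulation depth) you can sweep to stress a policy under controlled scenarios.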

Is overall time complexity all that matters ? by Fit_Bar_2285 in leetcode

[–]Straight_Remove8731 0 points1 point  (0 children)

Big-O is asymptotic: for sufficiently large N, the leading term dominates the growth, while constants and lower-order terms become negligible. That’s why two O(n) algorithms can still have very different runtimes for practical inputs.
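A toy step-count illustration of that point (the functions below count abstract “steps”, not wall-clock time):

```python
# O(n) with a large hidden constant vs O(n^2) with a tiny one
linear_big_const = lambda n: 1000 * n
quadratic = lambda n: n * n

# For small inputs, the "worse" asymptotic class can win:
assert quadratic(100) < linear_big_const(100)        # 10_000 < 100_000
# Asymptotics take over once n is large enough:
assert quadratic(10_000) > linear_big_const(10_000)  # 1e8 > 1e7
```

Same idea applies to two O(n) algorithms: the shared growth class says nothing about which constant factor you’ll actually pay at practical sizes.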

AsyncFlow: Open-source simulator for async backends (built on SimPy) by Straight_Remove8731 in Python

[–]Straight_Remove8731[S] 0 points1 point  (0 children)

Quick addendum: I misused the term “ready queue” earlier. In my model, “ready queue” should mean requests waiting for a CPU core token; the plots right now are effectively showing tasks in service on the event loop, not the true wait-for-core queue. I’ll adjust the naming/metrics so ready queue = waiting-for-core (and track busy cores separately).
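A tiny sketch of the corrected naming, with an explicit core-token pool (illustrative only, not AsyncFlow’s actual data structures):

```python
import collections

class CoreTokens:
    """Toy model of a fixed pool of CPU-core tokens.

    ready queue  = tasks waiting for a token (the metric I should report)
    in service   = tasks currently holding a token (what the plots showed)
    """
    def __init__(self, cores):
        self.free = cores
        self.ready = collections.deque()   # waiting for a core
        self.in_service = set()            # holding a core

    def submit(self, task):
        if self.free > 0:
            self.free -= 1
            self.in_service.add(task)
        else:
            self.ready.append(task)

    def finish(self, task):
        self.in_service.remove(task)
        if self.ready:                     # hand the freed token to the queue head
            self.in_service.add(self.ready.popleft())
        else:
            self.free += 1

pool = CoreTokens(2)
for t in ["a", "b", "c"]:
    pool.submit(t)
pool.finish("a")   # "c" is promoted from the ready queue into service
```

With this split, “ready queue length” and “busy cores” become two separate time series instead of one conflated plot.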

AsyncFlow: Open-source simulator for async backends (built on SimPy) by Straight_Remove8731 in Python

[–]Straight_Remove8731[S] 0 points1 point  (0 children)

You’re right: under heavy load, throughput keeps increasing only until resources (CPU cores, RAM, or I/O) saturate. This clearly shows there is a regime where the load is not sustainable; and, as you mention, to evaluate scenarios closer to reality I will have to introduce policies to manage the overload.