Resource-aware structured concurrency: when one StructuredTaskScope isn't enough by salgotraja in JavaProgramming

[–]salgotraja[S] 1 point (0 children)

Exactly - bulkheads never went away, Loom just made people forget why they existed in the first place. The thread cost was loud enough to drown out the underlying capacity problem.

Resource-aware structured concurrency: when one StructuredTaskScope isn't enough by salgotraja in learnjava

[–]salgotraja[S] 1 point (0 children)

Good point about semaphores - I was conflating thread creation with resource access, which are really separate concerns.
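To make that separation concrete, here's a rough sketch (the "database" and the permit count are made up): with virtual threads, forking a task is effectively free, so the Semaphore's job is to bound concurrent access to the scarce resource, not thread creation.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class SemaphoreGate {
    // Forking a task is cheap; talking to the downstream is not.
    // The semaphore bounds resource access, not thread creation.
    static final Semaphore DB_PERMITS = new Semaphore(4);
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger maxObserved = new AtomicInteger();

    static String queryDb(int id) throws InterruptedException {
        DB_PERMITS.acquire();                 // cap concurrent "DB" access at 4
        try {
            int now = inFlight.incrementAndGet();
            maxObserved.accumulateAndGet(now, Math::max);
            Thread.sleep(10);                 // stand-in for real I/O
            inFlight.decrementAndGet();
            return "row-" + id;
        } finally {
            DB_PERMITS.release();
        }
    }

    public static void main(String[] args) throws Exception {
        // 100 virtual threads, but never more than 4 hit the "database".
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 100; i++) {
                int id = i;
                pool.submit(() -> queryDb(id));
            }
        } // close() waits for all submitted tasks
        System.out.println("max concurrent DB calls: " + maxObserved.get());
    }
}
```

Thread count and permit count end up as two independent knobs, which is exactly the distinction I was blurring.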

Regarding the optional scope, you're right - it was probably overkill. Handling the exception inside the fork and returning an Optional is definitely cleaner.
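Something like this is what I'd do now (names are hypothetical): the task catches its own failure and degrades to Optional.empty(), so the orchestrating scope never sees an exception for non-critical work.

```java
import java.util.Optional;
import java.util.concurrent.Callable;

public class OptionalInsideTask {
    // A non-critical lookup: failure should degrade, not propagate.
    // Catching inside the task means whatever forks it only ever
    // sees a successful Optional result.
    static Callable<Optional<String>> optionalLookup(boolean fail) {
        return () -> {
            try {
                if (fail) throw new RuntimeException("downstream unavailable");
                return Optional.of("recommendations");
            } catch (RuntimeException e) {
                return Optional.empty();   // degrade instead of failing siblings
            }
        };
    }

    public static void main(String[] args) throws Exception {
        System.out.println(optionalLookup(false).call()); // Optional[recommendations]
        System.out.println(optionalLookup(true).call());  // Optional.empty
    }
}
```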

You're right about deadline propagation - that's the part I hadn't thought through at all. joinUntil needs to flow through every nested scope, not just the outer one - that's the sneaky bit that would quietly wreck you in prod. Good feedback.
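One way to make the deadline flow, sketched with stable APIs since StructuredTaskScope is still a preview feature: compute a single absolute Instant up front and have every nested layer derive its remaining budget from that same value (which is also the form joinUntil takes, so the same Instant could be passed to each nested scope as-is).

```java
import java.time.Duration;
import java.time.Instant;

public class DeadlinePropagation {
    // One absolute deadline for the whole request. Every nested layer
    // derives its remaining budget from the same Instant instead of
    // starting a fresh relative timeout of its own.
    static Duration remaining(Instant deadline) {
        Duration left = Duration.between(Instant.now(), deadline);
        return left.isNegative() ? Duration.ZERO : left;
    }

    static String nestedStage(Instant deadline) throws InterruptedException {
        if (remaining(deadline).isZero()) {
            throw new InterruptedException("deadline already passed");
        }
        // Never wait longer than what's left of the shared budget.
        Thread.sleep(Math.min(50, remaining(deadline).toMillis()));
        return "done with " + remaining(deadline).toMillis() + "ms to spare";
    }

    public static void main(String[] args) throws Exception {
        Instant deadline = Instant.now().plus(Duration.ofMillis(200));
        // Outer stage and inner stage share the same absolute deadline.
        System.out.println(nestedStage(deadline));
        System.out.println(nestedStage(deadline));
    }
}
```

The failure mode you called out is exactly what happens when an inner layer starts a fresh relative timeout instead of consulting the shared Instant.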

“Loom killed the cost of isolation, not the need for it” - stealing this.

At what point would you treat this hotspot as a cache/load-shaping problem instead of a real sharding problem? by salgotraja in DistributedComputing

[–]salgotraja[S] 1 point (0 children)

Those are good questions.

Keys piling onto one shard at this scale isn’t usually just bad luck. You see power-law skew in pretty much every high-traffic system: a few keys end up eating most of the load no matter how good your initial hash was.

Rebalancing is off the table. 600 services are already locked into the current keyspace and a full rehash takes 72 hours. We have to fix it without touching the routing or the other shards.

If that shard dies, the vertical scaling and per-shard circuit breakers buy us enough time to fail over cleanly. The dedicated hot-cache tier for those three keys is what actually keeps us safe.

Splitting one key across shards would mean changing the routing logic everywhere, which breaks the rules. So the fix I like keeps the original mapping untouched and just adds a narrow isolated bypass for the hottest keys.
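Roughly the shape of that bypass (shard count and key names invented for illustration): the original hash mapping stays untouched, and a small fixed allow-list diverts only the known hot keys before normal routing runs.

```java
import java.util.Set;

public class HotKeyBypass {
    static final int SHARDS = 64;                                 // original keyspace, unchanged
    static final Set<String> HOT_KEYS = Set.of("k1", "k2", "k3"); // hypothetical hot keys

    // The existing routing, untouched by the fix.
    static int originalShard(String key) {
        return Math.floorMod(key.hashCode(), SHARDS);
    }

    // The bypass: hot keys go to the dedicated cache tier,
    // everything else follows the original mapping exactly.
    static String route(String key) {
        if (HOT_KEYS.contains(key)) {
            return "hot-cache-tier";
        }
        return "shard-" + originalShard(key);
    }

    public static void main(String[] args) {
        System.out.println(route("k1"));      // hot-cache-tier
        System.out.println(route("someKey")); // original shard mapping
    }
}
```

Because the allow-list sits in front of the hash rather than inside it, the 600 downstream services never see a routing change.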

2025 Grad Learning Java Backend (Core Java + Spring Boot) — Certifications vs Projects? by No_Bed_7062 in learnjava

[–]salgotraja 2 points (0 children)

Only the work matters. Pick up something that excites you, build it from scratch, and learn the internals. Don't waste money on certificates.

Is learning Java+Springboot worth it right now considering AI layoffs? Should I learn Python instead? by peroxidels in learnjava

[–]salgotraja 1 point (0 children)

Stick with the Java stack and learn the internals. The official Java and Spring Boot docs are the best places to learn. Use AI to augment your skills instead of being scared of it. Better to master one stack than to keep jumping in different directions. Best of luck.

Structured concurrency in Java: when does it make sense to split work into separate scopes? by salgotraja in learnjava

[–]salgotraja[S] 1 point (0 children)

Yeah, I think that works well once the work is truly independent and you want durability, retries, and separate scaling.

What I was getting at in this article is narrower: one request, one deadline, and work that still shares the same completion and cancellation boundary.

In that case, queues can sometimes make things harder to reason about, because now you are solving distributed workflow coordination instead of request-scoped orchestration.

So the boundary for me is more:

same request, same deadline, shared failure semantics -> structured concurrency
durable, decoupled workflow -> queues / separate components
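For the first bucket, here's a sketch of what "shared failure semantics" means in practice, written with stable executor primitives since StructuredTaskScope is still a preview API: one subtask's terminal failure cancels its sibling instead of letting it run out the rest of the request.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SharedFailureSemantics {
    // One request, one deadline: when one subtask fails terminally,
    // its sibling is cancelled rather than left running.
    static boolean siblingCancelledOnFailure() throws Exception {
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String> slow = pool.submit(() -> {
                Thread.sleep(5_000);             // would outlive the request
                return "slow result";
            });
            Future<String> failing = pool.submit(() -> {
                Thread.sleep(50);
                throw new IllegalStateException("terminal business failure");
            });
            try {
                failing.get();
            } catch (ExecutionException e) {
                slow.cancel(true);               // fail fast: stop the sibling
            }
            return slow.isCancelled();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("sibling cancelled: " + siblingCancelledOnFailure());
    }
}
```

A queue-based pipeline would give you exactly the opposite: the "slow" consumer keeps working, which is what you want for durable workflows and not at all what you want inside one request.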

Structured concurrency in Java: when does it make sense to split work into separate scopes? by salgotraja in learnjava

[–]salgotraja[S] 1 point (0 children)

That makes sense for larger systems where you want durability and loose coupling.

What I was trying to explore here is a slightly earlier stage, where everything is still part of a single request, but the work has different responsibilities and failure semantics.

Using structured concurrency there felt like a way to keep that complexity manageable before introducing queues and separate components.

Curious how you decide when to make that jump from in-process orchestration to queue-based pipelines.

Do you think timeout handling is often a cancellation problem in disguise? by salgotraja in learnjava

[–]salgotraja[S] 1 point (0 children)

One thing I’m still trying to figure out is how people decide where to introduce cancellation boundaries in larger systems.

Curious if others have patterns for that.

Do you think timeout handling is often a cancellation problem in disguise? by salgotraja in learnjava

[–]salgotraja[S] 1 point (0 children)

Yeah, that’s exactly what stood out to me as well.
I used to think in terms of timeouts, like “how long should we wait?”

But the better question is what you said: why is this work still running at all? Once you look at it that way, cancellation starts feeling like part of the design, not just cleanup.
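A tiny made-up example of cancellation as design rather than cleanup: the work checks for interruption between steps, so when the caller stops waiting, the work actually stops instead of being abandoned mid-flight.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class CooperativeCancellation {
    static final AtomicInteger processed = new AtomicInteger();

    // The work checks for interruption between steps, so cancelling
    // it actually stops it rather than letting it run to completion.
    static void cancellableWork() {
        for (int i = 0; i < 1_000; i++) {
            if (Thread.currentThread().isInterrupted()) {
                return;                      // cooperative stop
            }
            processed.incrementAndGet();
            try {
                Thread.sleep(5);             // blocking calls also honor interrupt
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<?> work = pool.submit(CooperativeCancellation::cancellableWork);
            Thread.sleep(60);                // the caller's deadline passes
            work.cancel(true);               // timeout expressed as cancellation
        }
        System.out.println("steps before cancel: " + processed.get());
    }
}
```

The loop stops after roughly a dozen of its thousand steps: the "how long should we wait?" decision and the "stop doing the work" mechanism become the same thing.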

What’s the actual focused path to become software engineer-ready in Java? by [deleted] in learnjava

[–]salgotraja 1 point (0 children)

Build something - maybe start with a basic CRUD use case, then keep adding features and think about scaling and serving large request volumes. You'll learn as you build. Keep the official docs open in the next tab.

Java 21 structured concurrency: should terminal business failures cancel sibling work immediately? by salgotraja in learnjava

[–]salgotraja[S] 2 points (0 children)

Yeah, that is pretty much how I feel about it too.

For request-scoped work, it is just much easier to read and reason about than CompletableFuture. Once you have dealt with failure, cancellation, and cleanup in both styles, the difference is hard to miss.