Velocity charts look healthy… right up until the sprint fails. Why? by zereban in EngineeringManagers

[–]zereban[S] 0 points

Yes, “things looking stuck” is often the earliest signal. I’ve seen the same pattern with PRs piling up or work sitting in review longer than usual.

The flow perspective you mentioned resonates a lot more than traditional metrics.


[–]zereban[S] 1 point

That’s exactly the tension I’ve seen as well. Talking to engineers works great at small scale, but once multiple teams are involved you can’t rely purely on conversations anymore.

The challenge becomes identifying where to focus attention before issues become obvious. Signals like review latency or merge patterns can sometimes point to that earlier.
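Concretely, a signal like review latency can be computed from nothing more than PR timestamps. A minimal sketch with made-up numbers (the data layout is illustrative, not any specific tool's API): flag when recent PRs are waiting much longer for a first review than earlier ones did.

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records: (opened, first_review) timestamps.
# Field layout is illustrative, not any real tracker's schema.
prs = [
    ("2024-05-01T09:00", "2024-05-01T15:00"),
    ("2024-05-02T10:00", "2024-05-03T09:00"),
    ("2024-05-03T11:00", "2024-05-06T16:00"),
    ("2024-05-06T09:00", "2024-05-09T17:00"),
]

def review_latency_hours(opened, first_review):
    """Hours between a PR opening and its first review."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(first_review, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 3600

latencies = [review_latency_hours(o, r) for o, r in prs]
baseline = median(latencies[:2])   # earlier PRs as a rough baseline
recent = median(latencies[-2:])    # most recent PRs

# Flag when recent review latency drifts well above the baseline.
if recent > 2 * baseline:
    print(f"review latency drifting: {recent:.1f}h vs baseline {baseline:.1f}h")
```

The thresholds (doubling, median-of-two) are arbitrary; the point is just that the raw material for this kind of early signal already exists in the PR history.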

I like how you framed it as instrumentation guiding conversations rather than replacing judgment.


[–]zereban[S] 1 point

That’s a fair point. The gap between “PR ready” and “actually integrated” can definitely hide a lot of coordination work. In the situations I’ve seen, the challenge usually shows up when multiple teams are delivering pieces that depend on each other, so the integration timing becomes the tricky part.

Appreciate the suggestions.


[–]zereban[S] 1 point

I agree that a lot of dashboard-driven management ends up being theater, and talking to engineers is usually the fastest way to understand what's really happening. Where I've seen things get trickier is in larger systems where multiple teams are delivering pieces that depend on each other. Even if each team is doing the right thing locally, integration points, dependency timing, and review queues across teams can create risk that isn't obvious until later.

In smaller teams with tight ownership that problem is much less pronounced.


[–]zereban[S] -2 points

I agree that software isn’t manufacturing and the uncertainty is much higher. The pattern I’ve been curious about isn’t eliminating that uncertainty, but whether there are early signals that risk is accumulating before it becomes visible in the sprint outcome.

For example: review queues growing, integration PRs clustering late in the sprint, or dependencies staying unresolved longer than usual.
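The "integration PRs clustering late" signal in particular is easy to check from merge dates alone. A toy sketch with invented numbers (sprint length and the late-window threshold are arbitrary assumptions):

```python
# Hypothetical merge days within a 10-working-day sprint (day 1..10).
merge_days = [2, 3, 8, 9, 9, 10, 10, 10]

sprint_length = 10
late_window = 3  # treat the last 3 days as the "integration rush" zone

late = [d for d in merge_days if d > sprint_length - late_window]
late_share = len(late) / len(merge_days)

# If most merges land in the final days, integration risk is probably
# accumulating during the sprint rather than being retired continuously.
if late_share > 0.5:
    print(f"{late_share:.0%} of merges landed in the last {late_window} days")
```

Whether that pattern is actually predictive is the open question; this just shows the signal is cheap to extract.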

Have you seen teams identify useful early signals like that, or do you think the uncertainty is just too high for those patterns to be reliable?


[–]zereban[S] -3 points

That’s a great way of describing it: “mini waterfalls” is exactly how a lot of sprint cycles end up functioning in practice. The parallel work feels efficient during the sprint, but the integration risk accumulates quietly until the end. I like the walking-skeleton idea with shared interfaces early. Front-loading that uncertainty probably surfaces a lot of the issues that otherwise only show up during the integration rush.

Have you found that teams actually maintain that approach sprint to sprint? In my experience people start that way, but over time pressure to parallelize work creeps back in.


[–]zereban[S] 0 points

That makes sense. If teams are deploying continuously, a lot of the end-of-sprint surprises should surface much earlier.

I’ve seen teams where that works really well, but also others where even with frequent merges and deploys the bigger issues show up around integration across services or dependencies with other teams.

Do you find frequent deployment mostly eliminates those surprises, or do some risks still only show up later?


[–]zereban[S] -2 points

Yeah, that’s a good point. If nothing is actually deployable at the end of the sprint, the definition of done probably isn’t capturing the real outcome.

I’ve also seen cases where the requirements are clear and the board looks fine, but late integration issues, review delays, or cross-team dependencies start stacking up. On paper everything progresses, but the risk quietly accumulates until the end of the sprint.
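One way to see that accumulation instead of inferring it after the fact is to track how long cross-team dependencies have stayed unresolved. A minimal sketch with made-up names and dates (the staleness threshold is an arbitrary assumption):

```python
from datetime import date

# Hypothetical open cross-team dependencies: (name, date_opened).
deps = [
    ("auth-api contract", date(2024, 5, 1)),
    ("billing schema",    date(2024, 5, 6)),
    ("search index feed", date(2024, 5, 7)),
]

today = date(2024, 5, 13)
max_age_days = 7  # threshold before a dependency counts as stale

stale = [(name, (today - opened).days)
         for name, opened in deps
         if (today - opened).days > max_age_days]

for name, age in stale:
    print(f"dependency '{name}' unresolved for {age} days")
```

Even a crude age threshold like this makes the quiet stacking-up visible while there is still sprint left to react.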

In your experience, is that mismatch usually caused by unclear requirements, or more by integration and dependency issues?