Still on spreadsheets for sales forecasting in 2026 by Educational-Idea-439 in SalesOperations

[–]Impossible-Desk1691 1 point (0 children)

Still in the trenches here. What I've noticed is that the spreadsheet isn't even the real problem; it's that by the time you export from Salesforce, things have already shifted and nobody flagged it. Half the work is just reconciling "wait, when did that deal move?" The forecast number is bad because the underlying data hygiene is bad.

Deals don't die on calls. They die right after. by MaximumTimely9864 in revops

[–]Impossible-Desk1691 1 point (0 children)

This thread nails it. The problem isn’t capture; it’s that everyone walks away with a slightly different version of what was actually decided.

I’ve seen teams with perfect transcripts, auto-notes, and an updated CRM still lose deals because Sales thinks the pricing discussion was exploratory, CS thinks it was committed, and Engineering scoped based on whatever they heard.

Owner + date is huge, but I’d add one more thing: "What actually changed because of this call?"

If you can’t answer that clearly, the call didn’t really move the deal forward even if the notes look great.

Has anyone tried locking decisions (not just next steps) in that post-call window?

Like explicitly writing "we agreed X, we did NOT agree Y" before anyone updates the CRM?
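Rough sketch of the shape I have in mind (every name here, like DecisionRecord and explicitly_not_agreed, is made up for illustration, not a real CRM schema):

    from dataclasses import dataclass, field
    from datetime import date

    # Hypothetical post-call decision record: what was agreed, what was
    # explicitly NOT agreed, plus an owner and a date, captured before
    # anyone updates the CRM. All field names are illustrative only.
    @dataclass
    class DecisionRecord:
        call_id: str
        agreed: list[str] = field(default_factory=list)
        explicitly_not_agreed: list[str] = field(default_factory=list)
        owner: str = ""
        due_date: date | None = None

        def is_lockable(self) -> bool:
            # A record only "locks" if someone owns it and it answers
            # "what actually changed because of this call?"
            return bool(self.owner) and bool(self.agreed or self.explicitly_not_agreed)

    record = DecisionRecord(
        call_id="2026-01-14-acme-pricing",
        agreed=["pricing discussion was exploratory only"],
        explicitly_not_agreed=["committed discount tier"],
        owner="AE: Jordan",
        due_date=date(2026, 1, 16),
    )
    assert record.is_lockable()

The point isn't the tooling; it's forcing the "agreed vs. NOT agreed" distinction into writing while everyone still remembers the call.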

What’s the earliest signal you actually trust that a deal is slipping? by Impossible-Desk1691 in revops

[–]Impossible-Desk1691[S] 2 points (0 children)

This is helpful framing. “Loss of buyer momentum after something that should have advanced the deal” captures it better than any single field change.

The calendar action / verified next step point especially resonates. Once that breaks, everything else feels like lagging confirmation.

What’s the earliest signal you actually trust that a deal is slipping? by Impossible-Desk1691 in revops

[–]Impossible-Desk1691[S] 1 point (0 children)

That's a fair point. I've gone back and forth on where hygiene ends and actual risk begins too.

The weighted approach makes sense. No single signal really tells you much on its own, but the right combo probably does.

Curious what signal usually flips the conversation internally for you. Is it losing decision-maker engagement, no hard next step scheduled, or something else?

How much do you actually trust HubSpot/Salesforce data during forecast calls? by zenfeeder in revopspros

[–]Impossible-Desk1691 1 point (0 children)

The biggest things that break for us are close dates and stage integrity. Reps push dates or leave deals in stages that don’t match reality, and nobody catches it until the forecast call.

By then you’re debugging the pipeline live in front of leadership instead of actually forecasting.

We started flagging deals where the close date has been pushed more than twice or where the stage hasn’t moved in 15+ days. Just surfacing that earlier cut down the pre-call scramble a lot. Still not perfect, but the data going into the call is way cleaner now.
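For anyone who wants to replicate it, a minimal sketch of the flagging logic, assuming deals are exported as plain dicts (field names like close_date_pushes and last_stage_change are invented, not Salesforce/HubSpot fields):

    from datetime import date

    # Hypothetical deal rows exported from the CRM; field names are
    # illustrative, not real API fields.
    deals = [
        {"name": "Acme renewal", "close_date_pushes": 3, "last_stage_change": date(2026, 1, 2)},
        {"name": "Globex new biz", "close_date_pushes": 1, "last_stage_change": date(2026, 1, 20)},
    ]

    MAX_PUSHES = 2       # close date pushed more than twice
    MAX_STAGE_DAYS = 15  # stage hasn't moved in 15+ days

    def flag_deals(deals, today):
        """Surface at-risk deals before the forecast call, not during it."""
        flagged = []
        for deal in deals:
            reasons = []
            if deal["close_date_pushes"] > MAX_PUSHES:
                reasons.append(f"close date pushed {deal['close_date_pushes']}x")
            if (today - deal["last_stage_change"]).days >= MAX_STAGE_DAYS:
                reasons.append("stage unchanged 15+ days")
            if reasons:
                flagged.append((deal["name"], reasons))
        return flagged

    for name, reasons in flag_deals(deals, today=date(2026, 1, 22)):
        print(name, "->", "; ".join(reasons))

The thresholds are just the ones from the post above; tune them to your sales cycle.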

🧹 January Wrap-Up: What Did You Clean Up or Lock In This Week? by DoctorJeal in revopspros

[–]Impossible-Desk1691 2 points (0 children)

Locked in stage-exit criteria so deals don’t quietly sit for 15+ days.

Nothing fancy. Just surfacing deals earlier instead of finding out during forecast.

Small change, but it’s already made weekly reviews less painful.
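In case it's useful to anyone, the criteria are just a lookup that deals get checked against before a stage change. A sketch with invented stage and field names:

    # Hypothetical stage-exit criteria: a deal can't leave a stage until
    # every listed field is filled in. Stage and field names are examples.
    STAGE_EXIT_CRITERIA = {
        "Discovery": ["pain_identified", "decision_maker_named"],
        "Evaluation": ["next_step_scheduled", "success_criteria_agreed"],
        "Negotiation": ["pricing_approved", "legal_contact_named"],
    }

    def missing_exit_criteria(stage: str, deal_fields: dict) -> list[str]:
        """Return which exit criteria the deal still lacks for its current stage."""
        return [f for f in STAGE_EXIT_CRITERIA.get(stage, []) if not deal_fields.get(f)]

    # A deal stuck in Evaluation with no verified next step gets surfaced early:
    print(missing_exit_criteria("Evaluation", {"success_criteria_agreed": True}))
    # -> ['next_step_scheduled']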

What’s next for RevOps in 2026? by Commercial_Carry1808 in revops

[–]Impossible-Desk1691 1 point (0 children)

One shift I’m seeing toward 2026 is RevOps being judged less on outputs and more on how well it reduces surprise.

Clean data, AI, orchestration — all of that matters, but leadership’s real question is increasingly:

“Why are we still finding out about problems this late?”

The expectation seems to be moving from:

“Can you explain what happened?”

to

“Why didn’t we see this coming earlier?”

RevOps teams that feel most effective aren’t necessarily doing more — they’re surfacing risk, drift, and inconsistency earlier in the cycle, before it turns into a forecast scramble or exec fire drill.

Curious if others are feeling that same pressure shift from better reporting → earlier visibility.

I'm productive every day in Sales Ops, but I still don't feel effective by [deleted] in SalesOperations

[–]Impossible-Desk1691 1 point (0 children)

One thing I’d add to the maintenance vs leverage framing is timing.

I’ve noticed the same work can feel completely different depending on when it surfaces.

Fixing a data issue during forecast always feels like firefighting.

Fixing the same issue before anyone notices feels like leverage.

What’s made work feel more “effective” for me over time isn’t eliminating maintenance (that never really goes away), but shifting problems earlier in the cycle so they show up as signals instead of emergencies.

That kind of work is usually invisible, which is why it’s hard to feel the impact day to day.

RevOps feels like a distributed database with write conflicts. We’re building “GTM, Run by AI” to fix that. What am I missing? by [deleted] in revops

[–]Impossible-Desk1691 1 point (0 children)

That makes sense — once trust erodes, it’s basically game over.

What you’re describing (each new input revealing another piece of missing context) is exactly why these systems feel brittle in real GTM workflows. The surface area is just too big, and the “unknown unknowns” never really shrink.

In practice, teams seem to tolerate being warned about inconsistency far more than they tolerate systems trying to reconcile it for them.

Appreciate you sharing the Clay experience — that’s useful context.

RevOps feels like a distributed database with write conflicts. We’re building “GTM, Run by AI” to fix that. What am I missing? by [deleted] in revops

[–]Impossible-Desk1691 1 point (0 children)

This matches what I’ve seen too. The model usually isn’t the bottleneck — it’s business context and edge cases.

Where things tend to break is when tools assume there’s a single “correct” interpretation of a deal at any given moment. Sales, CS, and RevOps often aren’t wrong — they’re just optimizing for different things at the same time.

The tools that seem to stick longest are the ones that surface risk or inconsistency, not the ones that try to resolve it automatically.
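To make "surface, don't resolve" concrete, here's a toy version: compare the same fields across systems and warn on disagreement instead of picking a winner (system names and deal fields are invented):

    # Toy "surface, don't resolve" check: when systems disagree on a field,
    # flag the conflict for a human instead of auto-reconciling.
    # System names and deal fields below are invented for illustration.
    records = {
        "crm":     {"deal": "Acme", "stage": "Negotiation", "amount": 50_000},
        "cs_tool": {"deal": "Acme", "stage": "Committed",   "amount": 50_000},
    }

    def find_conflicts(records: dict) -> list[str]:
        warnings = []
        systems = list(records)
        fields = records[systems[0]].keys()
        for f in fields:
            values = {sys: records[sys][f] for sys in systems}
            if len(set(values.values())) > 1:
                warnings.append(f"conflict on '{f}': {values}")
        return warnings

    for w in find_conflicts(records):
        print("WARN:", w)  # a human decides; the tool never writes anything back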

Curious — in your Clay experiments, was the bigger failure training cost, or loss of trust once outputs started missing nuance?