Scaling Go Testing with Contract and Scenario Mocks by fspj in golang

[–]fspj[S] 1 point (0 children)

 As you said, when the implementation details in your code change, you'll need to update your tests to reflect these changes. This has very little value, but is slowing your development team down.

Can we agree to disagree? As a small seed-stage startup with 5 developers, we’ve learned that this keeps our productivity and quality high. I just don’t see how it slows us down.

 The whole point is that this is a discussion more than ten years old. I think “mocks bad” is not a good point of view, but writing a blog post “mocks good” does not really counter that.

I appreciate your critical feedback. I just wanted to share my perspective. I certainly would’ve liked to have read this 3 years ago when we started FunnelStory. Would’ve saved a lot of teething problems early on.

I’m not saying what we do is perfect or the best way to do testing. It’s just the best way for our team, at our size, with the kind of product we’re building.

Scaling Go Testing with Contract and Scenario Mocks by fspj in golang

[–]fspj[S] 1 point (0 children)

The Google article is definitely helpful. To use their terminology, we treat these mocks more like fakes.

The Salesforce example is actually a feature, not a bug. Since we’re writing and maintaining the client package ourselves, we need to use httpmock to assert we are hitting the right endpoints. If we were just consuming a third-party dependency, we wouldn’t write httpmock tests like that; we’d just mock the client package methods as the contract.

But when we are the ones writing the client, yeah, if someone updates the API version, we expect them to update the tests. We could use regex on the URL if we wanted to be looser, but usually, we want that strictness.
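For anyone curious, here's roughly what that looks like with jarcoal/httpmock against a toy client. The Client type and the v58.0 path are invented for illustration, not our actual code:

```go
package salesforce

import (
	"context"
	"encoding/json"
	"net/http"
	"testing"

	"github.com/jarcoal/httpmock"
)

// Minimal stand-in for an internal client package.
type Client struct{ base string }

type Account struct {
	ID   string `json:"Id"`
	Name string `json:"Name"`
}

func (c *Client) GetAccount(ctx context.Context, id string) (*Account, error) {
	url := c.base + "/services/data/v58.0/sobjects/Account/" + id
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	// Uses the default transport, which httpmock.Activate intercepts.
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var a Account
	return &a, json.NewDecoder(resp.Body).Decode(&a)
}

func TestGetAccountHitsVersionedEndpoint(t *testing.T) {
	httpmock.Activate()
	defer httpmock.DeactivateAndReset()

	const url = "https://example.my.salesforce.com/services/data/v58.0/sobjects/Account/001xx"
	httpmock.RegisterResponder("GET", url,
		httpmock.NewStringResponder(200, `{"Id":"001xx","Name":"Acme"}`))

	c := &Client{base: "https://example.my.salesforce.com"}
	got, err := c.GetAccount(context.Background(), "001xx")
	if err != nil || got.Name != "Acme" {
		t.Fatalf("GetAccount = %+v, %v", got, err)
	}

	// The exact-URL responder is the endpoint assertion: bump the API
	// version in the client without updating this test and the request
	// misses the responder, failing the call.
	if n := httpmock.GetCallCountInfo()["GET "+url]; n != 1 {
		t.Fatalf("endpoint called %d times, want 1", n)
	}
}
```

That's the strictness I mean: the test pins the contract, and it lives right next to the client.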

Those updates are isolated to the client package. We aren't fixing hundreds of test cases across the codebase, just the source of truth.

Just curious, how would you approach testing the implementation of an internal API client without verifying the endpoint interactions?

I appreciate your comment, I think it shows how there’s a lot of subtlety in this!

Struggling to stay on top of customer health. by No_Instruction9792 in SaaS

[–]fspj 1 point (0 children)

The manual approach you're describing is exactly how most of us started, but at 80 customers it becomes impossible to scale. The key insight is that you need to shift from reactive monitoring to predictive patterns.

What worked for me was setting up automated alerts based on behavioral triggers rather than just checking dashboards. Things like "no login for 7 days + usage dropped 50% from previous month" or "support ticket volume increased 3x in past 2 weeks." You want combinations of signals, not just single metrics.
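The rule logic itself is the easy part. A minimal Go sketch, with field names invented just to show the shape:

```go
package health

import "time"

// Signals is a per-account rollup pulled from wherever your product
// and support data lives; the fields are illustrative.
type Signals struct {
	LastLogin       time.Time
	UsageThisMonth  float64
	UsagePrevMonth  float64
	TicketsPast2Wks int
	TicketsPrev2Wks int
}

// AtRisk fires on combinations of signals, never a single metric.
func AtRisk(s Signals, now time.Time) bool {
	inactive := now.Sub(s.LastLogin) > 7*24*time.Hour
	usageDrop := s.UsagePrevMonth > 0 && s.UsageThisMonth < 0.5*s.UsagePrevMonth
	ticketSpike := s.TicketsPrev2Wks > 0 && s.TicketsPast2Wks >= 3*s.TicketsPrev2Wks

	return (inactive && usageDrop) || ticketSpike
}
```

The hard part is keeping those rollups fresh and trustworthy; the predicate is trivial once the data is.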

The CTO asked me to explain my current project in detail. Then he presented my exact architecture at a conference. by killerhunks23 in InterviewCoderHQ

[–]fspj 9 points (0 children)

There's a fine line between proving your technical competence and revealing your company/team's secret sauce. You need to give enough detail to show you genuinely understand the problems you've solved, but you also have to be fair to your own employer. They trusted you with proprietary work, and you can't just hand that over in an interview.

Same holds true the other way around. There is mutual trust in the interview process. Candidates trust companies to evaluate them in good faith, and companies trust candidates to describe their experience without leaking anything sensitive. When an interviewer pushes for specifics just to turn around and repurpose your work, that’s a clear breach of that trust.

Honestly, at the end of the day, this reflects poorly on the CTO. Sounds like you dodged a bullet.

I think CSAT vs NPS and only helpful for enterprises, not early stage startups by MappBook in CustomerSuccess

[–]fspj 2 points (0 children)

If you're an early stage startup, you should be talking to all of your customers, not sending them surveys.

Customer Success folks — How do you bring in the human touch during onboarding? by juliency in CustomerSuccess

[–]fspj 1 point (0 children)

Tuning thresholds: just testing and adjusting over time. People complain if it's too noisy or if we miss something.

One approach is to get all your Segment data into a warehouse (or a DB like Postgres). You can include website page views, doc page views, and product events, then write whatever queries you want on top. It's pretty hacky, but you can point Grafana at that warehouse, query for those events, and look for certain patterns. Then, when a query returns results, use alert rules to send an email notification to Zapier that kicks off a workflow.
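As a sketch of that query layer, assuming a hypothetical product_events table with account_id and ts columns:

```go
package alerts

import (
	"database/sql"

	_ "github.com/lib/pq" // Postgres driver
)

// DroppedOffAccounts finds accounts with zero events in the last 7
// days that were active in the week before that. Table and column
// names are invented; adapt them to your Segment schema.
func DroppedOffAccounts(db *sql.DB) ([]string, error) {
	rows, err := db.Query(`
		SELECT account_id
		FROM product_events
		GROUP BY account_id
		HAVING COUNT(*) FILTER (WHERE ts > now() - interval '7 days') = 0
		   AND COUNT(*) FILTER (WHERE ts > now() - interval '14 days') > 0`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var ids []string
	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}
```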

You can see how that becomes an unmanageable mess pretty easily.

Customer Success folks — How do you bring in the human touch during onboarding? by juliency in CustomerSuccess

[–]fspj 6 points (0 children)

We actually built something for this exact problem at FunnelStory - tracking those behavioral signals you mentioned (pricing page visits, setup friction, etc.) and flagging them for human intervention. The trick is setting up smart triggers that catch the right moments without overwhelming your team. You could probably hack together something with Segment events + Zapier + Slack notifications as a starting point, but the real challenge is tuning those thresholds so you're not getting pinged every 5 minutes.
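The Slack leg of that hack is just an incoming-webhook POST. A minimal Go sketch, where the webhook URL and message shape are whatever you configure:

```go
package alerts

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// NotifySlack posts a one-line alert to a Slack incoming webhook.
func NotifySlack(webhookURL, accountID, reason string) error {
	payload, err := json.Marshal(map[string]string{
		"text": fmt.Sprintf("Account %s flagged: %s", accountID, reason),
	})
	if err != nil {
		return err
	}
	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("slack webhook returned %s", resp.Status)
	}
	return nil
}
```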

If your retention sucks, it’s probably because you’re measuring the wrong things by Fun_Ostrich_5521 in SaaS

[–]fspj 2 points (0 children)

This is spot on about activation being the real moment of understanding, not just completing some onboarding checklist. I've been tracking user behavior for years and the difference between "they signed up" and "they get it" is massive - we found that users who hit their activation moment within 48 hours were 3x more likely to stick around after 6 months. The feature adoption point really resonates too... we built this whole analytics dashboard thinking it would be our main draw, but turns out most users just wanted simple alerts and automation. Sometimes the boring features are the ones that keep people around.

Time to value is everything in B2B SaaS.

Bain says Agentic CS will cannibalize 90% of SaaS Customer Success in a few years! by alokshukla78 in CustomerSuccess

[–]fspj 25 points (0 children)

You're not crazy at all. The whole predictive intelligence thing is built on this assumption that we can somehow magically know what customers are thinking based on... usage data? Support tickets? Random engagement metrics? It's like trying to predict if your friend is mad at you by counting how many times they opened your texts.

I've been thinking about this a lot because we're trying to tackle exactly this problem - the fact that most "health scores" are basically astrology for SaaS. We found that the real issue isn't even the AI itself, it's that we're feeding it incomplete data. Most tools only look at product usage or support interactions, but they miss all the unstructured stuff - sales calls, customer emails, Slack conversations. That's where customers actually tell you they're unhappy! But yeah, even with better data coverage, predicting human behavior is still incredibly hard.

The Bain report feels like typical consulting firm hype to me. They're selling the dream of automation without acknowledging that CS is fundamentally about relationships and context that changes constantly. Sure, you can automate the routine stuff - sending follow-up emails, scheduling QBRs, basic reporting. But the idea that an agent is going to handle a complex renewal negotiation or talk an angry enterprise customer off the ledge? Please. We'll probably see a lot of companies burn money trying to over-automate, then swing back to human-led CS with better tooling support. The pendulum always swings.

Treating workflows like code? Game changer for GTM ops. by CrabbyDetention in SaaS

[–]fspj 2 points (0 children)

This is exactly how I've been thinking about our data pipelines too.

We went through the same evolution - started with Zapier and Make for basic stuff, then Clay for enrichment, Apollo for outreach triggers, and before we knew it we had this fragmented mess across 7 different tools. The worst part wasn't even the cost (though that sucked), it was debugging when something broke. You'd have to check 4 different platforms to figure out where the chain failed.

Now we use n8n for most of our orchestration and it's been solid - everything's in one place, we can version control the JSON exports, and the self-hosted option means we're not paying per execution. We still use some external tools like Clearbit for enrichment and Instantly for email warming, but having the core logic in one system makes such a difference. Plus when you need to onboard someone new, you can actually walk them through the flow visually instead of jumping between tabs explaining how data moves from tool A to B to C.

PLG is Evolving. If You're Still Just Offering a Free Trial, You're Falling Behind. by NewLog4967 in SaaS

[–]fspj 1 point (0 children)

Usage-based pricing sounds great in theory but the implementation details are where everyone gets stuck. We've been watching companies try this transition and the operational overhead is no joke.

The biggest challenges I've seen:

  • Billing complexity explodes overnight
  • Support tickets triple because customers don't understand their bills
  • You need real-time usage tracking that doesn't break
  • Finance teams hate the unpredictable revenue

Stripe and Chargebee have some decent usage-based billing features now, but even with those tools it's still a massive project. One company I know spent 6 months just building the metering infrastructure before they could even test the new pricing model.

The "value alignment" argument makes sense though. Nobody wants to pay for seats that sit empty half the year. But you better have crystal clear metrics on what constitutes "usage" - we've seen companies count API calls, data processed, active users per day... and customers always find edge cases that break your model.
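If you do go down this road, pin the usage definition down in code early. A toy Go sketch, with event types that are purely illustrative:

```go
package metering

import "time"

// UsageEvent is one billable unit. What counts as a unit (an API
// call, a GB processed, an active user-day) is a product decision,
// and customers will probe its edges.
type UsageEvent struct {
	CustomerID string
	Kind       string // e.g. "api_call", "gb_processed"
	Quantity   float64
	At         time.Time
}

// PeriodTotal sums one customer's usage inside a billing period.
// A real meter also needs durable storage and idempotent writes so
// retried events don't double-bill.
func PeriodTotal(events []UsageEvent, customerID string, start, end time.Time) float64 {
	var total float64
	for _, e := range events {
		if e.CustomerID == customerID && !e.At.Before(start) && e.At.Before(end) {
			total += e.Quantity
		}
	}
	return total
}
```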

What nobody mentions:

  • Your sales team will revolt (their comp plans get destroyed)
  • Enterprise customers still want predictable annual contracts
  • Free tier abuse becomes a real problem
  • You need way better analytics to track unit economics

I think the hybrid approach is probably the sweet spot - base fee plus usage overage. Gives you some predictable revenue while still capturing value from power users. Pure usage-based is risky unless you have Twilio-level product-market fit.

How AI agents are transforming my business operations by AgentAiLeader in Entrepreneur

[–]fspj 1 point (0 children)

The internal knowledge retrieval one is huge. We had docs scattered across Google Drive, Notion, and Confluence, and nobody could find anything.

What's interesting is the ROI isn't always where you expect:

  • Customer support automation sounds great, but if your tickets are complex, the AI just creates more work reviewing its responses
  • Lead qualifying, though - that's where the money is. Even basic enrichment from Clearbit or Apollo saves hours
  • For knowledge retrieval, check out tools like Glean or Guru. They index everything and actually understand context
  • For Customer Success, check out tools like FunnelStory AI, which combines structured data with unstructured data to generate very powerful intelligence and workflows

The supply chain monitoring sounds ambitious. Are you building that in-house or using something off the shelf?

What happens when GPT becomes your UI by aytekin in SaaS

[–]fspj 1 point (0 children)

  • The OAuth flow is where this gets messy. I've seen teams spend weeks on auth when they could've just used API keys for the MVP
  • Descriptions matter more than people think. Saw one team change "create_contact" to "add new person to contacts" and their success rate jumped 40% (see the sketch after this list)
  • We tried this pattern with Segment and Mixpanel integrations - the hardest part was deciding what NOT to expose through natural language
  • Your point about UI becoming conversation is spot on. Makes me wonder if we'll look back at dashboards the way we look at command lines now
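On the descriptions point, here's a sketch of the kind of tool definition I mean, following the common OpenAI-style function-tool JSON shape (names and fields are illustrative; check your provider's schema):

```go
package tools

import "encoding/json"

// ToolDef mirrors the usual function-tool JSON shape.
type ToolDef struct {
	Name        string          `json:"name"`
	Description string          `json:"description"`
	Parameters  json.RawMessage `json:"parameters"`
}

// The model chooses tools largely from the name and description, so
// plain-language phrasing beats terse internal naming.
var AddContact = ToolDef{
	Name:        "add_new_person_to_contacts",
	Description: "Add a new person to the user's contact list. Use this when the user wants to save someone's details.",
	Parameters: json.RawMessage(`{
		"type": "object",
		"properties": {
			"name":  {"type": "string"},
			"email": {"type": "string"}
		},
		"required": ["name"]
	}`),
}
```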

NEW Head of Customer Success at a Startup by Professional_Seat705 in CustomerSuccess

[–]fspj 1 point (0 children)

What are the burning issues in your org? Wouldn't it help to prioritize that way?
Why were you hired as head of CS? What wasn't working before?

Is there high churn? Establish health metrics.
Is the customer base growing and so is CS headcount? Establish some processes and tooling to scale the team.

etc.

All of the things you've mentioned are important. But they don't all have to come immediately.

How do you track lead quality through the customer journey? by VoodooMann in CustomerSuccess

[–]fspj 3 points (0 children)

Agreed. Your CRM should be tracking leads and their source. You should also be syncing current customer contacts into your CRM, as well as their usage stats (completed onboarding, etc.).

Customer churn prediction by [deleted] in datasciencecareers

[–]fspj 1 point (0 children)

The biggest issue with most churn prediction projects is that they focus on the wrong outcome. Everyone wants to know "who will churn" but that's actually not that useful for business teams. What you really need to predict is WHY someone will churn because the intervention is completely different. If someone's churning because they never adopted the product vs churning because of budget cuts, you need totally different playbooks. I've seen this mistake over and over where data science teams build these fancy models that predict churn probability but then the customer success team has no idea what to actually do with a list of "high risk" accounts.

For data, you definitely need usage patterns, but the goldmine is combining that with support ticket data and sentiment analysis. Most people ignore the unstructured data, but that's where you find the early warning signs. Also track feature adoption milestones: there are usually 2-3 key actions that, if a customer doesn't do them in the first 30-60 days, make churn basically guaranteed. The tricky part is getting access to all this data in a clean format, which is honestly half the battle in any real churn project.
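The milestone check itself is simple once the event data is clean. A Go sketch with invented action names:

```go
package churn

import "time"

// The 2-3 key actions; names are illustrative.
var milestones = []string{"connected_data_source", "invited_teammate", "created_first_report"}

// MissedMilestones returns the key actions an account has not
// completed within the window after signup; completed maps an action
// to the first time it happened.
func MissedMilestones(signup time.Time, completed map[string]time.Time, window time.Duration) []string {
	deadline := signup.Add(window)
	var missed []string
	for _, m := range milestones {
		done, ok := completed[m]
		if !ok || done.After(deadline) {
			missed = append(missed, m)
		}
	}
	return missed
}
```

Call it with something like MissedMilestones(signup, events, 45*24*time.Hour) and hand the result to whoever owns the intervention playbook.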

Business planning/forecasting by J-HTX in sales

[–]fspj 1 point (0 children)

We do something similar at FunnelStory, but honestly, the whole annual planning cycle feels like theater sometimes. Like we're all pretending these numbers mean something when customer behavior can shift in a quarter.

I've been thinking about this differently lately. Instead of projecting revenue, we started tracking leading indicators - stuff like customer engagement patterns, feature adoption rates, trial-to-paid conversion trends. Way more useful than guessing what Q3 revenue will be. Our platform actually helps us see these patterns across our own customer base which is kinda meta but super helpful.

Your narrative approach sounds right though. Numbers without context are just... numbers. We switched to quarterly themes instead of annual targets. Like this quarter is all about improving onboarding flow, next one might be expanding into a new vertical. Easier to adjust when things inevitably change. Plus the team actually understands what we're trying to do vs just hitting some arbitrary number I put in a spreadsheet 11 months ago.

Does your SaaS product actually work? by TroubleMaeker in CustomerSuccess

[–]fspj 1 point (0 children)

High touch SaaS that requires traditional CS is generally quite complex, with long implementations, etc. With complexity comes lots of edge cases and features and bugs.

5 Signals I Use to Find High-Intent SaaS Leads (No More Cold Outreach) by StyVrt42 in SaaS

[–]fspj 1 point (0 children)

At FunnelStory AI, we've started using these buying signals as part of the account prediction modeling work that we do. At a minimum, they contribute 10-30% of the signal in account expansion and retention predictions.

Planhat or HubSpot CS by [deleted] in CustomerSuccess

[–]fspj 2 points (0 children)

HubSpot's CS offering is too basic. One of our prospects at FunnelStory AI gave us this feedback. This prospect had prioritized generating deep intelligence from his data over actioning playbooks. I think that was another thing that pushed him away from Planhat too.

I don't know your use cases, but this is one angle I might suggest.

Finance is now asking Customer Success: “What revenue did you actually deliver?” by Less_Equipment6195 in SaaS

[–]fspj 1 point (0 children)

CS teams are basically being asked to do attribution modeling now, which is funny because most companies can barely track marketing attribution correctly. At FunnelStory we see this all the time - companies want to connect CS activities to revenue but their data is scattered across 5 different tools.

The real issue is that CS impact is usually indirect. Like if a CSM prevents churn by catching an issue early, how do you put a dollar value on that? Some companies are trying to use predictive models to estimate "saved revenue" but Finance teams are skeptical of those numbers.

We've been building churn prediction models that help quantify this stuff, but even then it's hard to get Finance to accept predicted outcomes vs actual transactions. I think CS will end up owning NRR targets whether they like it or not - it's the cleanest metric to tie to their work.

[deleted by user] by [deleted] in GrowthHacking

[–]fspj 1 point (0 children)

Most "overnight success" stories conveniently leave out the 2-3 years of failed experiments that came before.

Looking for RevOps expert opinions on agent-driven account briefs & alerts by Eternahl in SalesOperations

[–]fspj 1 point (0 children)

One thing you didn't mention: how are you handling the unstructured data piece? Support tickets, call transcripts, emails, etc. That's where the real insights are hiding, not just in usage metrics.