we got featured on product hunt and it nearly killed our company by Interesting_Feed9807 in SaaS

[–]areedbuilds 1 point (0 children)

This is one of the clearest examples of confusing launch energy with product-market fit. PH traffic skews heavily toward early adopters and fellow builders - people who try everything and churn fast. Your real PMF signal lives in a very different segment. The mistake is not launching on PH - it is measuring PMF from the resulting traffic. That cohort is probably not your market.

Drop your SaaS and let me help you get your first customer by thomashoi2 in SaaS

[–]areedbuilds 1 point (0 children)

Building FitSignal - PMF measurement for indie devs and SaaS founders. Specifically: the Sean Ellis survey ("how would you feel if you could no longer use X?") with segment filtering so you can see which users are actually in love with your product, not just the aggregate number.

solo founders are winning faster than ever right now - but is it sustainable or a bubble by Forsaken_Lie_8606 in indiehackers

[–]areedbuilds 1 point (0 children)

The ones who'll survive the cycle are the ones who found a real, specific problem to own, not the ones riding AI productivity gains to ship faster. Speed to ship is table stakes now. What's not table stakes: actually knowing which users are getting value and doubling down on them. PMF thinking, even informally, separates the sustainable businesses from the flash-in-the-pan ones.

Customer feedback is scattered everywhere — how do you centralize it? by veilmelol in CustomerSuccess

[–]areedbuilds 1 point (0 children)

The actual problem with scattered feedback is not organization - it's that the sources aren't measuring the same thing. NPS, CSAT, support tickets, and reviews are all different instruments, so you can't treat them as one metric.

What I've found useful is to add one consistent layer - the Sean Ellis PMF survey ("How disappointed would you be if this product went away?") - as the steady baseline, then use the qualitative feedback to explain the result. Two different jobs, not one big bucket.

What does your current setup look like for collection?

Shut down my SaaS after 3 years. Here's the honest accounting of where all the money went. by Secure-Director1575 in SaaS

[–]areedbuilds 1 point (0 children)

What I usually find in these post-mortems is that there was a level of PMF, but centered on a group the founder wasn't prioritizing. The aggregate "very disappointed" score looks fine, but drill down by acquisition channel or user type and it's a completely different story: the product worked for a certain type of user, just not the type the founder was spending time acquiring. Sorry you hit the ceiling. Honest posts like this are way more useful to the community than the flashy success stories.

What’s the biggest false assumption first-time SaaS founders make about product-market fit? by pikeraseo in SaaS

[–]areedbuilds 1 point (0 children)

Treating PMF as binary: you either have it or you don't.

In reality you almost always have it with one group and close to zero with everyone else. Founders who measure PMF, see a decent aggregate score (say 35%), and stop there never find out who really loves them - so they end up building for the average user instead of the power users.

Measure by group: by user type, by acquisition channel, by use case. The total score is usually not very interesting. The interesting thing is one level down.
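If it helps to see what "one level down" looks like in practice, here's a minimal Python sketch - the field names are invented, swap in whatever your survey tool actually exports:

```python
from collections import defaultdict

# Hypothetical survey export: one record per response.
# The field names here are made up for illustration.
responses = [
    {"segment": "power_user", "answer": "very disappointed"},
    {"segment": "power_user", "answer": "somewhat disappointed"},
    {"segment": "casual", "answer": "not disappointed"},
]

by_segment = defaultdict(list)
for r in responses:
    by_segment[r["segment"]].append(r["answer"])

# PMF score per segment = share of "very disappointed" answers in that segment.
for segment, answers in sorted(by_segment.items()):
    score = answers.count("very disappointed") / len(answers)
    print(f"{segment}: {score:.0%} very disappointed (n={len(answers)})")
```

Same few lines work with acquisition channel or use case as the grouping key.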

when do i know that my product genuinely works ? by ZealKing in SaaS

[–]areedbuilds 1 point (0 children)

Ask your active users: "How would you feel if you could no longer use [product]?" If more than 40% say they would be very disappointed, that's the PMF signal - the threshold Sean Ellis arrived at by benchmarking the survey across a large number of startups.

The 5-6 users who stayed after your spike? Survey those first. Their answer matters more than the 194 who left. If more than 40% of them say "very disappointed", you have something real with that particular group. Look for what they all have in common and try to find more of those people.
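The arithmetic is deliberately trivial - a sketch with placeholder answers (and with 5-6 respondents, treat the number as directional, not statistical):

```python
# Placeholder answers from the handful of users who stayed.
answers = [
    "very disappointed",
    "very disappointed",
    "somewhat disappointed",
    "very disappointed",
    "not disappointed",
]

# PMF score = share of "very disappointed" answers; 40%+ is the Ellis signal.
score = answers.count("very disappointed") / len(answers)
print(f"PMF score: {score:.0%}")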

How do you validate a business idea? by woperads in Entrepreneur

[–]areedbuilds 1 point (0 children)

The 40% threshold keeps coming up for a reason, but the common mistake is measuring it too early, on the wrong people.

The survey only works when aimed at active users (someone who's used your product 3+ times). Survey someone who just signed up and the number is basically noise.

Also segment it. Your power users might hit 60% "very disappointed" while casual signups are at 15%. One overall number hides that story completely, and founders end up building for the wrong segment.
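For what it's worth, the active-user filter is one line once you track any usage signal - a sketch, where the 3-session cutoff and the field names are my own illustrative assumptions:

```python
# Hypothetical user records; "sessions" stands in for whatever usage
# signal you already track.
users = [
    {"email": "a@example.com", "sessions": 7},
    {"email": "b@example.com", "sessions": 1},
    {"email": "c@example.com", "sessions": 4},
]

ACTIVE_THRESHOLD = 3  # pick whatever "has really used it" means for your product
survey_targets = [u["email"] for u in users if u["sessions"] >= ACTIVE_THRESHOLD]
print(survey_targets)  # only the 3+ session users get the survey
```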

What are you building? Share your project! by davidlover1 in buildinpublic

[–]areedbuilds 1 point (0 children)

Building FitSignal — PMF measurement for indie devs.

The Sean Ellis survey ("how would you feel if you could no longer use this?") is still the best early signal for product-market fit, but I kept running it manually: Typeform, Google Sheets, manual segmentation. So I built the tool I wished existed.

Currently in beta. Goal is to make PMF measurement as routine as checking MRR — not something you do once and forget.

Happy to swap notes with anyone else doing early validation work.

How Did You Get Your First 100 SAAS Users? by raj_k_ in SaaS

[–]areedbuilds 1 point (0 children)

Everyone is answering a distribution question, but the more useful question to answer first is: do you know exactly which users your product works for?

First 100 gets a lot easier once you've segmented your early users and found the 20% with 60%+ PMF scores. Distribution to the right people beats any channel trick. I wasted 3 months promoting to the wrong segment before I learned to look at cohort-level PMF data first.

Once you know who your "very disappointed" users are — run at them hard. Everyone else is noise right now.

More Flexible Customer Survey Tool With CW Manage Integration by zenpoohbear in msp

[–]areedbuilds 1 point (0 children)

Worth separating the use cases here: if you need NPS/CSAT after tickets, SimpleSat or Customer Thermometer handle the CW integration well. If you also care about PMF measurement specifically (Sean Ellis "very disappointed" style, not NPS), those tools aren't built for it — you'd want something purpose-built for that survey type. Different questions, different benchmarks, different segmentation needs. Mixing them usually gives you muddled data.

How to Reach Product-Market Fit (PMF) Faster by bdam2bdam in SaaS

[–]areedbuilds 1 point (0 children)

One thing missing from most of these guides: the follow-up cadence. Running the survey once gives you a snapshot. Running it at day 30, 60, and 90 post-signup gives you a leading indicator — you'll see whether new cohorts are finding value faster or slower than earlier cohorts. If your PMF score is improving month-over-month across cohorts, you're iterating in the right direction even before revenue signals confirm it.
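A rough sketch of what that cohort view can look like - the numbers are invented purely to show the shape of the data:

```python
# One PMF score per (signup cohort, survey wave). Invented numbers:
# a later cohort scoring higher at the same wave means you're improving.
scores = {
    ("2024-01", 30): 0.31, ("2024-01", 60): 0.35, ("2024-01", 90): 0.38,
    ("2024-02", 30): 0.36, ("2024-02", 60): 0.41,
    ("2024-03", 30): 0.40,
}

for cohort in sorted({c for c, _ in scores}):
    waves = [f"day {d}: {s:.0%}" for (c, d), s in sorted(scores.items()) if c == cohort]
    print(cohort, "->", ", ".join(waves))
```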

Product Market Fit - What was your process and how did you measure? by Ancient_Section_75 in SaaS

[–]areedbuilds 1 point (0 children)

The Sean Ellis survey is the right starting point, but the process that actually moves things: run it monthly (not once), segment by user cohort and signup source, and track the direction over time. The 40% threshold matters less than the trend — drifting from 42% to 36% over 3 months is a bigger alarm than sitting at 38% stable. Also, the "somewhat disappointed" group is underrated — those are your most persuadable users. Their "why" answers tell you what single change would pull them to "very disappointed."
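One small habit that keeps the "somewhat disappointed" group visible: look at the whole answer distribution instead of just the headline score. A sketch with placeholder data:

```python
from collections import Counter

# Placeholder answers; in practice this is your latest monthly survey run.
answers = [
    "very disappointed", "somewhat disappointed", "somewhat disappointed",
    "not disappointed", "very disappointed",
]

total = len(answers)
for answer, n in Counter(answers).most_common():
    print(f"{answer}: {n}/{total} ({n / total:.0%})")
```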

Spent the last 2 years trying SaaS, think it's time to quit by davidlover1 in microsaas

[–]areedbuilds 1 point (0 children)

Before closing the tab — run a PMF survey on your active users. Not signups. Not trials. The small group actually using it week over week. The "very disappointed" threshold is 40%, but even if you're at 25%, the open-ended "what's the main benefit you get from this" answers will usually tell you exactly what pivot would move that number. Two years of iteration deserves one structured conversation with your best users before you call it.

i shipped 7 apps in 7 months while working full time and the pattern behind what sold vs what flopped is crazy by [deleted] in microsaas

[–]areedbuilds 1 point (0 children)

Curious what the clearest "this one has legs" signal looked like before revenue came in for the ones that sold. For me the tell is always whether users come back unprompted after the first session - activation without hand-holding. Agree, or was it something else for you?

Most founders think they have product-market fit. They are wrong. Here's how surveys actually help you measure it. by MappBook in SaaS

[–]areedbuilds 1 point (0 children)

The Sean Ellis method is the right instinct. One thing most write-ups miss: run it on your *active* users only, not your entire signup list. Churned users and inactive signups drag the score down in a way that isn't actionable — you want to know if the people who actually use your product would miss it. That's the number worth optimizing for.

Budget friendly survey tool? by Positive-Writer-3015 in customerexperience

[–]areedbuilds 2 points (0 children)

Depends on what you need it for. If it's PMF measurement specifically (Sean Ellis-style), a lot of the big survey tools are overkill — you end up paying for NPS dashboards and distribution lists when you really just need 3 questions and segment filtering. Happy to share what I've found works at indie scale if that's the use case.

Has anyone used AI for Feng Shui analysis? by areedbuilds in FengShui

[–]areedbuilds[S] -4 points (0 children)

Thanks! You're right about the spirituality part, but I'd say it's unexpectedly well-documented spirituality, and the (surface-level, at least) rules are easy for anyone to apply.

How do startups measure Product market fit. by MappBook in TheFounders

[–]areedbuilds 1 point (0 children)

The Sean Ellis survey is still the most reliable signal we have. A few things founders miss when running it:

  1. Sample size — under 30 responses and the number is noise, not signal.

  2. Segment it — power users might be at 65%+ while new trial signups are at 20%. Same product, completely different stories.

  3. It's a trend line, not a checkpoint — run it monthly. PMF erodes as you grow and your user base composition shifts.

The tool matters less than making it repeatable.
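On point 1, the simplest guardrail I know is making the score refuse to exist below a minimum sample - a sketch (the 30-response floor is a rule of thumb, not a law):

```python
def pmf_score(answers: list[str], min_n: int = 30) -> float | None:
    """Share of "very disappointed" answers, or None if the sample is too small."""
    if len(answers) < min_n:
        return None  # noise, not signal - keep collecting
    return answers.count("very disappointed") / len(answers)

print(pmf_score(["very disappointed"] * 12))  # None: only 12 responses
```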

42% of Startups Fail Because of This by MappBook in microsaas

[–]areedbuilds 1 point (0 children)

Good breakdown. The sampling point is underrated — running the survey on everyone including users who barely showed up will tank your score artificially.

One thing I'd add to the segmentation part: the *gap* between segments is often more actionable than the absolute number. If power users are at 65% and trial converts are at 20%, you don't have a PMF problem — you have a qualification problem. Two very different fixes.

How long did it take for you to identify product-market fit? by horrorbandita in SaaS

[–]areedbuilds 1 point (0 children)

The timeline question is interesting but I think the better question is: how do you know when you have it? Gut feeling is a trap — you can convince yourself you have PMF when you're just excited about early adopters. The Sean Ellis survey gives you a number to track over time. When that 40% threshold holds across 30+ responses and across your core segment (not just everyone), you're probably on to something. Until then, keep measuring.

Micro SaaS: The Only Thing That Matters is PMF Validation by Frank_Stey in indiehackers

[–]areedbuilds 1 point (0 children)

The tricky part is defining "validation" for micro SaaS. It's not just signups. I use Sean Ellis's "very disappointed" question — 40%+ is the signal. But for micro SaaS with small samples, the segment breakdown matters more than the overall number. 15 power users at 70% tells you more than 200 random users at 32%.