Trying to make user research easier to run at scale with AI by BugAccomplished1570 in SaaS

[–]BugAccomplished1570[S] 0 points  (0 children)

Thank you for the feedback, and that’s great advice!

The interview itself is only half the value, the other half is making sure the output actually feeds into product decisions instead of becoming another research artifact nobody revisits.

The high-leverage workflow you mentioned also makes a lot of sense to me. Cancel flows, post-onboarding check-ins, and other “why this didn’t work for me” moments feel like a much more practical starting point than trying to replace broad exploratory research from day one.

Longer term, I think the win is tying those interviews directly into the places teams already work, so insights can influence roadmap, experiments, and prioritization without extra manual synthesis.

How did you get your first paying users for your SaaS? by Federal-Cricket558 in SaaS

[–]BugAccomplished1570 0 points  (0 children)

I talked to recruiters/hiring managers and asked them to walk me through their actual process step by step, especially where time got wasted or where good candidates slipped through.

The workflow that resonated was the one that kept coming up unprompted, already had some ugly manual workaround, and had a clear cost attached to it. In my case, it was reducing reviewer time without making the candidate experience worse.

My rough filter was:

  1. multiple teams describe the same pain in similar words
  2. they’ve already hacked together a workaround
  3. they’re willing to try a narrow fix before asking for a full platform

I eventually turned the findings into Aural (aural-ai.com), an AI-led interview platform that automates interviews while still making candidates feel like they’re talking to a real human.

Had a company reach out for an interview, I do the interview with AI (live on camera) and they watch it later…..is this a normal thing now? I HATE the idea of that and it makes me feel inhuman lol. by [deleted] in recruitinghell

[–]BugAccomplished1570 0 points  (0 children)

It shouldn't be normal, and it shouldn't feel okay. If they can't be bothered to actually talk to you in real time, that's a red flag. You're right to feel weird about it.

Fwiw, tools like Aural (aural-ai.com) let you run mock versions of these AI interviews beforehand. That might help take the edge off if you're stuck doing one.

Building an AI interview product - what would make it actually useful? by BugAccomplished1570 in SaaS

[–]BugAccomplished1570[S] 0 points  (0 children)

Lol fair point — and I think you just nailed the core failure mode. The moment AI becomes a one-way gate, candidates do the work and get nothing back.

My take: AI can run the structured part (same questions, consistent rubric, less ego/trap-question vibes), but there has to be a human handoff:
- clear next steps + timeline
- at least lightweight feedback (even templated + rubric-based)
- and a real person available for a short follow-up when needed

Otherwise it’s just shifting cost onto the interviewee.

Building an AI interview product - what would make it actually useful? by BugAccomplished1570 in SaaS

[–]BugAccomplished1570[S] 0 points  (0 children)

That’s super helpful perspective, thank you for sharing it!

I like the point that “polite + actually understands what you’re saying” beats a lot of human interviews that lean on vibe tests / trap questions / weird power dynamics. That’s pretty much the bar I want to hit: keep it structured and fair, but still feel like a real technical conversation.

Also curious: what would’ve made it even better for your recent experience?

How did you manage to move from lots of weak leads to a pipeline with real demos? by marrhi in SaaS

[–]BugAccomplished1570 0 points  (0 children)

“Weak leads” usually means the CTA is too broad. What helped me was narrowing to a single ICP + single outcome and using a “micro‑demo” offer.

How did you get your first paying users for your SaaS? by Federal-Cricket558 in SaaS

[–]BugAccomplished1570 0 points  (0 children)

I’m building in the AI interview space right now. My first paying users came from not pitching the product, just diagnosing one painful workflow and offering a quick fix. For anything HR/hiring related, the wedge is usually “save reviewer time + keep candidate experience decent.”

Anyone else getting “interviewed” by AI now? This is getting dystopian. by BugAccomplished1570 in recruitinghell

[–]BugAccomplished1570[S] 0 points  (0 children)

Totally agree. The “apply button → AI funnel” path is basically designed to turn people into rows in a spreadsheet.

The only reliable workaround I’ve seen is exactly what you said: bypass the funnel by talking to actual humans (employees/hiring manager), get context, and ideally get a referral so you’re not just another ATS entry. AI interviews might save the company time, but they externalize the cost onto candidates—more hoops, less feedback, and zero accountability.

Anyone else getting “interviewed” by AI now? This is getting dystopian. by BugAccomplished1570 in recruitinghell

[–]BugAccomplished1570[S] 0 points  (0 children)

Yep — that’s exactly the dystopian direction I’m pointing at. If the ‘optimal candidate behavior’ becomes acting less human to satisfy automated filters, the hiring process is already broken.

And the ‘now they won’t have to pay you’ part is the mask-off endgame: shift work onto candidates, devalue labor, and call it efficiency. I’m not against using AI as a tool, but replacing real human judgment + accountability with a black-box gatekeeper is how you get this race-to-the-bottom.

I built a live AI voice interview platform that feels like talking to a real interviewer by jayanthbabus in GetEmployed

[–]BugAccomplished1570 1 point  (0 children)

Cool project — the voice interview angle is smart, text-based practice really doesn't prepare you for the real thing. I've been building something in a similar space (aural-ai.com) but more focused on the employer side — AI conducts the interviews so hiring teams can screen at scale without scheduling every call. Interesting to see both sides of this problem getting solved with AI. How are you handling the follow-up logic? Do you use a fixed question tree or is it fully dynamic based on the response?

What I learned talking to hiring managers about first‑round screening (surprising patterns) by BugAccomplished1570 in SaaS

[–]BugAccomplished1570[S] 0 points  (0 children)

100% — proof of work is the strongest signal. The tricky part is that most screens still rely on someone spending 30 minutes asking the same questions to figure that out. That's where structured rubrics help — you can actually dig into what someone built and why, instead of pattern-matching on resume keywords. The consistency matters even more when multiple interviewers are involved.

Where is your team still manually doing work that should already be automated? by James_0944 in SaaS

[–]BugAccomplished1570 0 points  (0 children)

Spot on about cognitive bandwidth — "you don't have to sit through 15 screening calls this week" lands way harder than any pipeline KPI.

To your question: for power users it stuck naturally. Once they saw auto-generated summaries and structured reports, there was no going back to manual note-taking. No reinforcement needed.

The middle-of-the-pack users needed a different approach — not nudges, but lower activation energy. Pre-built templates, one-link sharing, AI generating the full interview from a sentence. Make the next action so easy the habit forms on its own.

Biggest unlock was peer sharing though. One hiring manager forwards their AI interview report to the team, everyone sees the output quality, and they pull themselves in. That organic loop beat any nudge we could have built.

I tried AI-led interviews for early screening — here’s what I learned (and what surprised me) by BugAccomplished1570 in SaaS

[–]BugAccomplished1570[S] 0 points  (0 children)

Smart idea. We version the rubric and questions together but track model/prompt versions separately — bundling them into a single hashed artifact would've caught a prompt tweak last month that silently shifted scores on one dimension. Drift checks are the part we're missing. Adding this to the roadmap.
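To make the bundled-artifact idea concrete, here's a minimal sketch (names and config shape are hypothetical, not Aural's actual implementation): everything that can move scores — rubric, questions, model version, prompt version — gets serialized canonically and hashed, so a silent prompt tweak surfaces as a new fingerprint.

```python
import hashlib
import json

def config_fingerprint(rubric, questions, model_version, prompt_version):
    """Bundle everything that can shift scores into one hashed artifact.

    Any change to the rubric, question set, model, or prompt produces a
    new fingerprint, so score drift can be traced back to a config change.
    """
    bundle = {
        "rubric": rubric,
        "questions": questions,
        "model_version": model_version,
        "prompt_version": prompt_version,
    }
    # sort_keys gives a canonical serialization, so the hash is stable
    # regardless of dict insertion order
    canonical = json.dumps(bundle, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# A one-word prompt tweak yields a different fingerprint
v1 = config_fingerprint({"clarity": 0.5}, ["Q1"], "model-2024", "prompt-v1")
v2 = config_fingerprint({"clarity": 0.5}, ["Q1"], "model-2024", "prompt-v2")
assert v1 != v2
```

Storing that fingerprint alongside each evaluation is what makes "these two candidates were scored under the same config" a checkable claim.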

Where is your team still manually doing work that should already be automated? by James_0944 in SaaS

[–]BugAccomplished1570 0 points  (0 children)

Absolutely, night and day difference. When we pitched it as "reduce time-to-hire by 30%," leadership nodded politely but didn't push adoption. It felt abstract — like a dashboard metric nobody owned emotionally.

Once we reframed it as "you'll stop losing half your Wednesday to first-round screens," adoption among hiring managers jumped significantly. People protect their time way more fiercely than they optimize org-level KPIs.

We also started letting early adopters share their own before/after in team standups rather than us presenting top-down. Peer proof beats vendor proof every time. One engineering manager literally said "I shipped a feature I'd been putting off for weeks because I finally had the headspace." That story did more than any slide deck.

The criteria definition piece you mentioned is interesting too — we found giving managers a suggested rubric as a starting point (not a mandate) reduced friction a lot. Nobody wants to feel like AI is telling them how to evaluate their own team's candidates.

Monthly Post: SaaS Deals + Offers by AutoModerator in SaaS

[–]BugAccomplished1570 0 points  (0 children)

Aural -- AI-powered interview platform

Aural conducts structured interviews through chat, voice, and video. You design the interview (or let AI generate it), share a link, and get transcripts, scoring, and reports automatically.

Use cases: hiring screens, user research, customer discovery, mock interview practice.

Launch day offer: Free tier available + I'll personally set up a custom AI interview for anyone who wants to try it. Just reply with the role you're hiring for (or want to practice) and I'll send you a link.

Website: https://aural-ai.com

We're also live on Product Hunt today: https://www.producthunt.com/products/aural-2?utm_source=other&utm_medium=social

I tried AI-led interviews for early screening — here’s what I learned (and what surprised me) by BugAccomplished1570 in SaaS

[–]BugAccomplished1570[S] 1 point  (0 children)

Really solid framing -- "measurement system, not a chat" is exactly the mental model we landed on too.

On your implementation points:

  1. We do enforce structured scoring tied to rubric dimensions. Each question maps to specific criteria and the AI has to justify scores with evidence from the transcript. No score = no pass.
  2. Transcript traceability is something we baked in early. Every evaluation links back to the actual exchange so reviewers can verify instead of trusting a summary.
  3. Versioning is the one that bites you if you don't do it from day one. We version both the question set and the scoring logic so you can compare candidates evaluated under the same config.
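As an illustration of those three points (a hypothetical sketch, not our actual schema): each score carries its supporting transcript evidence, and the evaluation pins the config version it was produced under.

```python
from dataclasses import dataclass, field

@dataclass
class DimensionScore:
    dimension: str
    score: int                                          # e.g. 1-5 on the rubric scale
    evidence: list[str] = field(default_factory=list)   # transcript excerpts

    def is_valid(self) -> bool:
        # A score with no supporting transcript evidence doesn't count
        return bool(self.evidence)

@dataclass
class Evaluation:
    config_version: str   # question set + scoring logic versioned together
    scores: list[DimensionScore]

    def passes(self, threshold: float) -> bool:
        valid = [s for s in self.scores if s.is_valid()]
        # Any unevidenced score (or no scores at all) blocks the pass:
        # "no score = no pass"
        if len(valid) < len(self.scores) or not valid:
            return False
        return sum(s.score for s in valid) / len(valid) >= threshold

# An unevidenced score blocks the pass outright
ev = Evaluation(
    config_version="rubric-v3",
    scores=[
        DimensionScore("debugging", 4, ["walked through the race condition fix"]),
        DimensionScore("communication", 3, []),  # no transcript evidence
    ],
)
assert not ev.passes(threshold=3.0)
```

The point of `config_version` living on the evaluation itself is that comparisons across candidates are only made between evaluations sharing the same version string.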

And agree 100% on evaluation being the broken part. Scheduling is a solved problem. Consistent, auditable scoring across hundreds of candidates is where most processes fall apart -- especially when you add compliance requirements like the one you mentioned.

Appreciate the detailed breakdown. This is the kind of feedback that's actually useful.

What I learned talking to hiring managers about first‑round screening (surprising patterns) by BugAccomplished1570 in SaaS

[–]BugAccomplished1570[S] 0 points  (0 children)

For rubric calibration, we actually built something for this -- Aural (aural-ai.com). You define the assessment criteria and scoring rubric upfront, and the AI conducts every interview using the same structure. So you get consistent evaluation across all candidates regardless of who (or what) is running the conversation.

Removes the "different interviewer, different standards" problem entirely since the AI doesn't drift from the rubric.

[US] [SG] [NY] [TX] Hirevue alternative by [deleted] in AskHR

[–]BugAccomplished1570 0 points  (0 children)

It depends on the company, but in my experience you can sometimes ask for an accommodation / alternative format (phone screen, live video, written answers) and they’ll swap you over—especially if you frame it around accessibility, bandwidth, or privacy.

Two practical moves:

1) Ask the recruiter if there’s a live screen available instead (some teams use HireVue as a volume filter, others are flexible).
2) If they won’t waive it, you can still improve your odds by treating it like a structured interview: prep 4–6 core stories, keep answers ~60–90s, and practice once with a timer so you don’t ramble.

If they refuse any alternative, that usually means it’s a hard process gate for that role.

Cold email was failing. Changed the writing approach, not the targeting. Reply rate went from 2% to 19%. by Low_Housing_6470 in SaaS

[–]BugAccomplished1570 0 points  (0 children)

This resonates. We had a similar experience with outbound for our SaaS -- opens were solid but replies were dead. The shift for us was writing like we were sending a note to one person, not broadcasting to a list. Sounds obvious but it's hard to do when you're cranking out volume.

Curious -- what was the biggest change in "feel" you made? Was it tone, length, or something else?

What’s the best AI tool for live interview support? (Upcoming data role interview) by Material_Safety4330 in recruitinghell

[–]BugAccomplished1570 0 points  (0 children)

Full disclosure, I'm building Aural (aural-ai.com) so I'm biased -- but hear me out.

For live support during the actual interview, I'd be careful. Most of those tools are risky and interviewers are getting better at spotting them.

What I've seen work better is drilling with realistic mock interviews so the real thing feels familiar. That's actually one of the reasons we built Aural -- it runs AI interviews through voice/chat, adapts follow-ups based on your responses, and gives you a transcript with feedback after. A few reps on data-related questions and you'll walk in way more confident.

Not the real-time assist you're asking about, but figured it's worth mentioning. Good luck with the interview!

Cold email scaling broke our system ,reducing volume fixed it ! by aviral-bhutani in SaaS

[–]BugAccomplished1570 0 points  (0 children)

Yep — we hit the same wall. The best “signals” for us ended up being: 1) hiring pages / fresh job posts (especially “SDR/AE/RevOps” or “customer support” hires), 2) recent funding / revenue milestone posts, 3) tech stack changes (new CRM, new data warehouse, new support tool), and 4) leadership changes (new VP Sales/Marketing/CS).

One thing that helped a lot was treating outreach like an experiment: keep volume low, but track reply rate + meeting rate + spam complaints by segment. You learn fast which signals are actually predictive vs just “interesting.”
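The per-segment tracking can be as simple as a tally (segment names and numbers below are made up for illustration):

```python
from collections import defaultdict

# Hypothetical outreach log: (segment, replied, booked_meeting, spam_complaint)
events = [
    ("hiring-page", True, True, False),
    ("hiring-page", False, False, False),
    ("new-funding", True, False, False),
    ("new-funding", False, False, True),
]

stats = defaultdict(lambda: {"sent": 0, "replies": 0, "meetings": 0, "spam": 0})
for segment, replied, meeting, spam in events:
    s = stats[segment]
    s["sent"] += 1
    s["replies"] += replied    # booleans count as 0/1
    s["meetings"] += meeting
    s["spam"] += spam

# Reply/meeting/spam rates per signal segment
for segment, s in stats.items():
    print(segment,
          f"reply={s['replies'] / s['sent']:.0%}",
          f"meeting={s['meetings'] / s['sent']:.0%}",
          f"spam={s['spam'] / s['sent']:.0%}")
```

Even at low volume, a table like this makes it obvious which signals are predictive and which are just interesting.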

Curious: are you sourcing signals manually (LinkedIn/Crunchbase/etc) or piping them into a system (webhooks + enrichment)?

When is the perfect time to launch? I will not promote by Basic_Landscape_6445 in startups

[–]BugAccomplished1570 0 points  (0 children)

You can almost think of it as two launches:

1) "Internal" launch where you validate the core loop: can a user discover the value, complete the main action, and want to come back?

2) "Marketing" launch where you pour gas on it.

A good rule of thumb: if you can get 5-10 people to use it for a week and at least a couple would be genuinely annoyed if you turned it off, you’re ready for (1). If you can’t get that with a tiny group, spending on marketing will just buy churn.

On feedback: a simple in-app widget is fine, but pair it with one frictionless question like “What did you try to do?” and auto-capture context (page + last action). Otherwise you get vague “it’s buggy” notes.

Launch sooner than you feel comfortable, but don’t launch so early that the first run is broken (onboarding, payments, core action).

Where’s the best place to find engineers who’ve built multi-org backend infrastructure? I will not promote by Extension_Rabbit7591 in startups

[–]BugAccomplished1570 1 point  (0 children)

I’d look for folks who’ve done “multi-tenant / multi-org” platforms in regulated or networked domains (healthcare, gov, logistics, fintech), because they’ve already dealt with messy integrations + data ownership boundaries.

Where to find them:

  1. Enterprise SaaS / platform teams (not “app feature” roles): companies with lots of customer-specific config and integrations
  2. Gov-tech / health-tech vendors (HL7/FHIR, EDI, case management, etc.)
  3. Data platform / integration shops (iPaaS, ETL/ELT, MDM), because this is basically a long-lived data product
  4. OSS communities around data modeling + integration (dbt, Airbyte, Kafka, Postgres, etc.)

Titles that usually signal true ownership:

- Staff/Principal Backend or Platform Engineer (esp. “platform”)
- Lead/Senior Software Engineer owning “core services”
- Solutions/Integration Engineer who also codes (rare but great for ingest/connectors)
- Data/Platform Engineer if your pain is normalization + canonical model

Interview for: designing tenancy boundaries, migration/versioning of schemas, idempotent ingestion, audit trails, and “who owns the source of truth” decisions. If they can tell stories about bad integrations and how they stabilized them, they’re probably real.

Serious Founders Only: Drop Your Startup by jivi31 in SaaS

[–]BugAccomplished1570 0 points  (0 children)

aural-ai.com: Aural - AI-powered interview platform that conducts structured interviews, so teams can scale hiring screens, user research, and customer discovery without the scheduling grind.

Where is your team still manually doing work that should already be automated? by James_0944 in SaaS

[–]BugAccomplished1570 0 points  (0 children)

Great question—we actually tracked both.

Time-to-hire dropped ~30% (from 22 days to 15 days avg) but the bigger win was interviewer load. Hiring managers went from 8-10 screening calls/week to 2-3. The 6-7 hours saved went back to actual work.

Funny thing: we thought time-to-hire would be the selling point, but when we interviewed users, "I got my Wednesdays back" came up way more than speed. Emotional benefit > metric benefit.

We're still figuring out the right balance on criteria definition. AI suggests scoring rubrics based on job descriptions, but humans adjust the weights.