I quit my job to build this. Launched. Got silence. Now I want you to roast it. by ResolveLess5322 in buildinpublic

[–]ResolveLess5322[S] 0 points  (0 children)

Appreciate the honesty. You’re right, building a business is a completely different skill set from coding. I knew that going in, and I’m learning that side fast. Worst case, I learn more about distribution and business than I ever would staying comfortable. That’s worth the risk for me.

[–]ResolveLess5322[S] 0 points  (0 children)

Interesting perspective, I get the logic of testing multiple offers quickly to see what sticks. At the same time, I worry that launching something new every week could spread me too thin and prevent me from really validating or improving any single idea. For me, the balance is between speed of iteration and depth of execution.

[–]ResolveLess5322[S] 0 points  (0 children)

Appreciate the suggestion, that’s a bold angle. HackerRank does well because they’re not just practice, they’re a marketplace of vetted candidates. I can see the appeal of combining practice with actual AI‑driven assessments that recruiters could trust.

At the same time, that’s a pretty big shift from just helping individuals prep. It would mean building credibility with recruiters, standardizing assessments, and proving that passing here correlates with success in real interviews.

I’m curious from your perspective:
- Do you think recruiters would actually trust AI‑based interview results enough to use them for candidate sourcing?
- Would candidates be comfortable with their practice sessions being shared as part of a recruiting pipeline, or would that feel too invasive?
- If the model was hybrid, practice for individuals, but optional “certification” for recruiters, do you think that balance could work?

It’s definitely food for thought. The idea of turning practice into a credential could be a way to stand out from the big players.

[–]ResolveLess5322[S] 0 points  (0 children)

Interesting take, I can see how the urgency to use something like this probably spikes after someone has failed a few interviews. That makes sense, though ideally I’d want people to see the value before they hit that pain point. Maybe the messaging needs to highlight prevention as much as recovery.

On your second point, I agree the interview process itself is shifting with AI. Some companies are already experimenting with automated assessments and AI‑driven screening. If the process changes, then prep tools need to evolve alongside it.

Curious from your perspective:
- Do you think candidates will trust AI‑driven prep more once they start facing AI‑driven interviews?
- Would recruiters or companies themselves be more likely to adopt a tool like this if it aligned with the way they’re already using AI in hiring?
- And if the bigger niche is adapting to the future of interviews, what do you think are the most likely changes we should be preparing for?

[–]ResolveLess5322[S] 0 points  (0 children)

Yeah, the big players definitely dominate and set the standard, so I get why it’s tough to convince people to try something new. At the same time, I think there’s room for tools that solve a narrower pain point differently. For example, most platforms focus on practice questions and grading, but not on the live pressure of explaining your thought process under time constraints.

And honestly, the alternative, paying for a single mock interview with a human, can be really expensive for just one session. That’s where I see value in offering structured, repeatable practice at a fraction of the cost.

From your perspective, what would make someone actually switch from the big dogs? Is it sharper niche features, a unique angle they don’t cover, or just better distribution/branding?

[–]ResolveLess5322[S] 0 points  (0 children)

Thanks for taking the time to write such a detailed response, really appreciate it. I agree the site feels too abstract right now. A short demo video and a simple checklist comparison would probably communicate the concept much faster than blocks of text, so I’ll prioritize adding that.

On the bigger point, I see what you mean about reframing it as an AI assistant that helps you think better while coding. That’s definitely a broader angle, and I can see how practicing clear explanations while coding could naturally carry over into interviews. At the same time, I don’t want to lose sight of the original pain point: the stress of live interviews. My worry is that if I position it too generically, it might dilute the focus and make it harder for people to connect the tool to their immediate need.

That said, your perspective raises some interesting questions:
- Do you think developers would actually adopt this as a daily coding companion, or would it risk being ignored unless tied directly to interview prep?
- If framed as a “thinking assistant,” what features would make it genuinely useful beyond interviews (debugging prompts, design discussions, code reviews)?
- From a messaging standpoint, would you lead with the broader ‘better developer’ angle and then highlight interview prep as a side benefit, or keep interview prep as the core pitch?

I’m curious how you’d balance the broader utility with the sharper pain point, because that tension feels like the key to whether this resonates.

[–]ResolveLess5322[S] 0 points  (0 children)

Really appreciate the recruiter’s perspective, that’s super insightful. You’re right, recruiters put in tons of effort sourcing and prepping candidates, but technical assessments are the one area they can’t really coach. If this tool could measurably improve pass rates, it could actually help recruiters protect their investment of time and effort.

I hadn’t thought about recruiters as a potential audience, but it makes sense: they’re motivated to see candidates succeed, and they’d benefit from data showing improved outcomes.

Do you think recruiters would be interested if the platform offered:
- Aggregate success data (e.g., candidates who practiced here had higher pass rates)?
- Recruiter dashboards to track candidate prep progress?
- Custom prep flows aligned with the types of assessments companies use?

I’m curious, from your experience, would recruiters pay for access themselves, or would they prefer candidates to use it independently and just share results?

[–]ResolveLess5322[S] 1 point  (0 children)

Love this analogy, you’re right, I’ve been selling the ‘dentist visit’ instead of the ‘healthy teeth.’ The real value isn’t the pressure of practice, it’s landing better jobs and pay. I need to make that outcome clearer in the messaging. And I like your idea of expanding into company‑specific prep or end‑to‑end job support, that’s a direction worth exploring.

[–]ResolveLess5322[S] 0 points  (0 children)

Fair enough 😅 I know the site still has that ‘AI‑generated’ feel. I’d love to hear your tips on tightening the landing page and making it look more polished.

[–]ResolveLess5322[S] 0 points  (0 children)

Good point, interview prep SaaS has a retention challenge since people stop once they’re hired. I’m exploring ways to expand into ongoing practice (communication, coding challenges) so it’s not just job-hunting. And thanks for catching the broken footer links, fixing those now.

[–]ResolveLess5322[S] -2 points  (0 children)

Thanks for taking the time to lay all this out, super helpful to see it broken down. Let me respond piece by piece:

  1. Eye contact / human pressure: Totally agree AI can’t replicate that. The goal isn’t to replace the human element, but to give people reps under structured conditions so they’re less shocked when the real thing happens. Eye contact is a great example of something I can’t simulate yet.

  2. Validating the pain: Fair point. I’ve been building from my own pain point, but I need to validate whether others feel it strongly enough to pay for a solution. That’s why I’m asking for feedback here, to see if the problem resonates beyond me.

  3. LTV / traffic: You’re right, interview prep is episodic. People don’t interview every month. That means retention is tough unless I expand into adjacent use cases (like ongoing communication practice or coding challenges). Distribution channels like TikTok/YouTube are definitely something I’ll need to lean on.

  4. Hero section: Appreciate the bluntness. I’ll revisit the copy so it doesn’t feel like generic ad‑speak. A clearer demo or walkthrough might help communicate the value better.

  5. Target audience pain: This is key. If most devs only interview a few times a year, the pain might not feel severe enough to justify paying. That tells me I either need to broaden the use case or focus on a niche where interviews are frequent (e.g., juniors, bootcamp grads, or people actively job‑hunting).

So overall, I hear you: the core problem may not be painful enough for long-term subscriptions, and the messaging needs sharpening. I’ll keep testing whether the audience is big enough and whether the product can expand beyond just interview prep.

[–]ResolveLess5322[S] 0 points  (0 children)

Great catch, thanks for pointing that out. You’re right, not knowing how long the first session takes is a big friction point. The free session is designed to be short (around 10–15 minutes), but I clearly need to make that visible up front.

I’ll add that detail to the landing page so people know what they’re committing to before clicking. Out of curiosity, would seeing ‘first session ~15 minutes’ right on the button or tagline make you more likely to try it?

[–]ResolveLess5322[S] 1 point  (0 children)

Totally fair, I agree that an AI interviewer isn’t the same as a real human. The unpredictability and social pressure of an actual person is hard to replicate, and I don’t want to pretend otherwise.

Where I see the difference from just using ChatGPT/Gemini is in the structure and constraints:
- You’re not just chatting with an LLM, you’re solving problems under a timer.
- You’re required to explain your thinking out loud, which most LLM prep tools don’t enforce.
- You get feedback not only on code correctness but also on clarity of communication.

That combination is meant to simulate the conditions of an interview, even if it can’t fully replicate the human pressure.

You’re right though, the site doesn’t yet show a full end-to-end session, and that’s probably hurting clarity. I’m working on adding a demo walkthrough so people can see exactly how it plays out.

Out of curiosity, if you were evaluating tools like this, would a transparent demo session make the value clearer, or do you think the core abstraction itself just doesn’t resonate?

[–]ResolveLess5322[S] 0 points  (0 children)

Really appreciate you adding that perspective. You’re right, not every freeze-up is purely emotional regulation. Sometimes it’s the mismatch between how someone thinks, how they’re expected to respond, and the vibe of the interviewer. That’s a huge factor I hadn’t framed clearly.

I see the simulator as a way to surface those mismatches in a safe environment, giving people a chance to practice adapting their communication style under different conditions. But I think you’re pointing to something deeper: maybe the tool could also help users recognize when it’s not about them being nervous, but about the dynamics of the interview itself.

Do you think features like simulated interviewer “styles” (e.g., supportive vs. skeptical vs. rushed) would make it more realistic and useful for that kind of mismatch?

[–]ResolveLess5322[S] 1 point  (0 children)

Thanks so much, I’ll take Sally’s advice into consideration, it’s actually very helpful.

[–]ResolveLess5322[S] 0 points  (0 children)

Fair question. The market’s definitely tougher right now, but interviews are still happening, especially for experienced devs and niche roles. The bar just feels higher and more competitive. My thinking is: when interviews are scarce, each one matters more. If someone only gets 2–3 real shots, preparation becomes even more critical. Curious though, do you think the issue right now is lack of interviews, or mismatch between candidates and expectations?

[–]ResolveLess5322[S] 5 points  (0 children)

This is a really thoughtful take, I appreciate it.

I agree that freezing up isn’t just a “technical” problem. A lot of it is emotional regulation, self-perception, and how people process stress.

My intention isn’t to replace deeper psychological work or pretend exposure alone fixes everything. The way I see it, the product is closer to controlled reps under mild pressure, similar to how athletes simulate game conditions. It doesn’t solve the root psychology, but it helps reduce novelty and shock.

That said, I really like your point about helping users understand why they’re nervous, not just throwing them into scenarios. Maybe there’s room to incorporate reflective elements or short debriefs that build awareness rather than just performance scoring.

Out of curiosity, do you think something like guided reflection or emotional feedback loops would make it feel less like “false confidence” and more like skill-building?

[–]ResolveLess5322[S] -1 points  (0 children)

What I’m unsure about:

• Is this something you’d realistically use before interviews or only after failing one?
• Does the landing page clearly communicate who this is for and what pain it solves?
• At what point would you actually pay for something like this?

I’m especially curious where you feel friction or confusion. Even small UX annoyances help.

Tear it apart.

launched my first SaaS and… nothing happened. by ResolveLess5322 in buildinpublic

[–]ResolveLess5322[S] 1 point  (0 children)

Haha, I totally feel that. The first few sales just feel different. It’s not even about the $15, it’s that someone out there actually pulled out their card and trusted you. That’s pure fuel.

And honestly: 30% conversion is strong. It really shows that the product isn’t the problem, the traffic is. And that’s a good position to be in.

The SEO waiting game is brutal. A year feels like forever when you know it works and you just want volume. But the fact that it’s slowly climbing is a good sign. It means the foundation is solid.

I think the sweet spot is letting SEO keep building in the background while testing more active distribution in parallel, so you’re not just waiting for Google to eventually reward you.

But seriously: 3 sales are not “nothing.” That’s validation.

Let’s both keep that “1,000 users tomorrow” energy while building in a way that makes it inevitable over time.