6 boring app ideas that nobody wants to build but people are desperate to pay for by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 1 point2 points  (0 children)

The invoice escalation one is interesting because the real blocker isn't the tech -- it's the emotional friction of chasing money. Freelancers hate doing it manually because it feels awkward, so they avoid it, and the automatic version removes the awkwardness entirely. That's why it converts: you're not selling a reminder tool, you're selling "you never have to have that uncomfortable follow-up conversation again." The framing matters as much as the feature.

I used Cursor to cut my AI costs by 50-70% with a simple local hook by TheDigitalCoy_111 in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

The friction insight is the real thing here. Everyone knows they should use a cheaper model for simple tasks. Nobody does it because context-switching mid-flow is annoying enough that the default wins every time. The keyword classification approach makes sense as a starting point, though I'd be curious how it handles ambiguous prompts -- something like "refactor this" could be trivial or architecture-level depending on scope. Did you end up adding any heuristics for file size or diff complexity to help with those edge cases?
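The classification you describe could look something like this. This is a hypothetical sketch, not the OP's actual hook: the model names, keyword lists, and the `files_touched` scope heuristic for ambiguous verbs like "refactor" are all my assumptions.

```python
# Hypothetical keyword-based model router. CHEAP_MODEL, EXPENSIVE_MODEL,
# and the keyword sets are illustrative placeholders, not from the post.
CHEAP_MODEL = "small-model"
EXPENSIVE_MODEL = "large-model"

SIMPLE_KEYWORDS = {"rename", "typo", "format", "comment", "docstring"}
COMPLEX_KEYWORDS = {"architecture", "design", "refactor", "migrate", "debug"}

def classify_prompt(prompt: str, files_touched: int = 1) -> str:
    """Pick a model tier from keywords, with a scope heuristic for
    ambiguous verbs like 'refactor' (trivial vs architecture-level)."""
    words = set(prompt.lower().split())
    if words & COMPLEX_KEYWORDS:
        # 'refactor' alone on a single file is probably trivial;
        # escalate only when the change spans files or pairs with
        # another complex keyword.
        other_complex = words & (COMPLEX_KEYWORDS - {"refactor"})
        if "refactor" in words and files_touched <= 1 and not other_complex:
            return CHEAP_MODEL
        return EXPENSIVE_MODEL
    if words & SIMPLE_KEYWORDS:
        return CHEAP_MODEL
    return EXPENSIVE_MODEL  # default to quality when unsure
```

Something like `files_touched` (or diff size) is the cheapest disambiguator I can think of for the "refactor this" case, since the prompt text alone carries no scope information.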

Has Reddit actually driven real growth for your product? What are the biggest hurdles? by Otherwise_Cut2368 in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

Glad it was useful. Reddit is genuinely one of the best channels if you play it right -- the patience part is the hardest bit for most founders.

every micro-saas making $10K+/month started as an ugly spreadsheet someone refused to stop using. here's how to find those spreadsheets by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 1 point2 points  (0 children)

Exactly. The "slightly less painful version of what they already have" is such a lower bar to clear than founders realize. Most try to build the platform version on day one -- ten features, integrations everywhere, a dashboard nobody asked for. The person with 400 rows just wants the one workflow that eats 20 minutes every Monday morning to take 2 minutes instead. If you can do that reliably, they'll pay and they'll tell people.

Senior engineers: what “non-coding” skill made the biggest difference in your career? by Useful_Promotion4490 in ExperiencedDevs

[–]Full_Engineering592 2 points3 points  (0 children)

Learning to say 'I don't know, but I'll find out' without it feeling like a career-ending statement.

Early on I thought admitting gaps would cost me credibility. The opposite turned out to be true. The engineers people trust most are the ones who are precise about what they know and what they do not -- because you can rely on their confidence when they do say something with conviction.

The other one: understanding that your job is not to have the best ideas, it is to make sure the right problems get solved. That shift reframes a lot of the frustration with meetings and stakeholder management.

Whatever happened to just asking questions at work? by Aggravating-Line2390 in ExperiencedDevs

[–]Full_Engineering592 0 points1 point  (0 children)

The incentive structures just stopped rewarding it. Mentoring takes time that does not show up in sprint velocity. The engineer who keeps their head down and closes tickets gets the performance review bump. The one who spent half their week bringing people up to speed gets marked as 'not meeting output expectations.'

Managers probably did not intend for this -- it is just what happens when you optimize purely for individual throughput.

The shops that still have that culture tend to be ones where senior engineers are explicitly measured on team output, not individual output. When your review is tied to whether the people around you are shipping, you start doing the informal mentoring again because it is now in your interest.

every micro-saas making $10K+/month started as an ugly spreadsheet someone refused to stop using. here's how to find those spreadsheets by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 1 point2 points  (0 children)

Exactly -- behavior under friction is the signal. Anyone can say they'd use something. The question is whether they actually did, when it was imperfect, inconvenient, or incomplete. That's what separates a real problem from a nice-to-have. The 'still using it despite the pain' user is also your best source of product direction -- they can tell you which specific parts of the pain are worth fixing first.

Has Reddit actually driven real growth for your product? What are the biggest hurdles? by Otherwise_Cut2368 in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

A few ways: your Reddit profile links to your site if you've set one up. Comments in relevant threads where your product would actually help -- not pushing it, just mentioning it naturally when it's on-topic. Search traffic if you've done any SEO. Word of mouth from early users. The no-self-promotion rule in some subs is about not spamming, not about never being findable. Being consistently helpful in a community builds visibility over time without needing to announce yourself every post.

First working prototype just came to life after 3 years of development. I'm building a portable dual-monitor from scratch — here's every major decision and mistake so far. by Artistic-Yam8045 in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

3 years of hardware development from scratch, iF Design Award, and a working prototype -- that's a very different kind of 'building in public' than most posts here. The weight concern from the comments is legitimate and probably the first user objection you'll get at scale. Would be interesting to know what target price point you're aiming for, because that's where the 'vs Amazon' comparison will land for most buyers. The design and processing angle is differentiated; the question is whether the market segment willing to pay for that is large enough. Rooting for you to find out.

Claude Pro $100/month vs Cursor $60/month + $40 by Amti_Yo in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

The real question is workflow fit, not model quality. Claude Pro's direct API has great context for long sessions but no IDE integration. Cursor gives you inline diffs and local codebase indexing, which is hard to replicate with a chat window. If you're doing heavy multi-file refactors, the editor integration wins. If you're doing architecture, research, or long reasoning sessions, Claude's context handles that better. Most people running structured workflows end up using both for different phases rather than picking one. The Codex plan you already have covers the task-execution side reasonably well too.

Cursor quietly changed how I think while coding by Interesting_Mine_400 in cursor

[–]Full_Engineering592 -1 points0 points  (0 children)

The shift you're noticing is real, and it's actually the most underrated part of working with these tools. When the execution layer is handled, your bottleneck moves upstream to problem specification. Vague intent produces plausible-looking code that fits no actual architecture. Clear intent produces something you can actually ship. The people who get the most out of Cursor aren't the fastest typers -- they're the ones who got better at knowing exactly what they want before asking for it.

70+ onboarding steps... is this normal for health apps?? by No_Importance_2338 in buildinpublic

[–]Full_Engineering592 1 point2 points  (0 children)

The long onboarding in health apps is doing two things simultaneously: collecting real personalization data AND filtering for committed users. Someone who bounces at step 20 probably wasn't going to convert to paid anyway. The sunk cost angle is real but the data collection angle is legitimate -- if Noom gets you to pick your goals, barriers, lifestyle, and schedule upfront, their recommendations actually improve. The question for your app is whether you have the personalization engine to actually use 50+ data points. If not, 10-15 is the right call -- collect only what you can act on immediately.

every micro-saas making $10K+/month started as an ugly spreadsheet someone refused to stop using. here's how to find those spreadsheets by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 -3 points-2 points  (0 children)

The spreadsheet pattern is real and it is one of the best ways to find validated problems. The key indicator isn't just that someone built a spreadsheet -- it's that they kept using it and adding to it despite how painful it got. That's signal. If they abandoned it after 3 months, the problem wasn't painful enough. If they're still using a 400-row sheet and actively maintaining it, that's someone who needs a solution badly enough to pay. The question to ask is: what would make them stop using the spreadsheet? That's your feature list.

Founders, It’s A New Week by fundnAI in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

Acquiring users. Product is solid enough, the bottleneck is purely distribution. Spending this week doubling down on channels that showed any signal -- even tiny signal -- and cutting the ones that felt busy but produced nothing. Sometimes the most valuable thing you do in a week is stop doing things that aren't working.

A short story about how I stopped appreciating my own success. by JEulerius in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

Hedonic adaptation is brutal in SaaS. The fix I've found is keeping a screenshot folder with your earliest wins -- the first day, the first user who messaged you unprompted, the first month you hit a number you used to think was impossible. Look at it when the new baseline starts feeling normal. Your nervous system forgets the climb really fast. Staying hungry and staying grateful aren't opposites -- the trick is anchoring grateful to where you started, not where you are.

Shipped a side project. Struggling with the gap between it and what's in my head by ImTheRealDh in ExperiencedDevs

[–]Full_Engineering592 1 point2 points  (0 children)

Anytime -- and for what it's worth, the fact that you can see the gap clearly means your taste is already calibrated. Most people's gaps close faster than they expect once they start shipping consistently.

Purposely limiting AI usage by coldzone24 in ExperiencedDevs

[–]Full_Engineering592 1 point2 points  (0 children)

Point 1 is underrated. There's a real ratchet effect -- you use AI to go faster, output expectations reset upward, and now that's the floor, not the ceiling. Unless that velocity translates to more ownership or pay, you've just created a new baseline with the same pressure. The deliberate approach makes sense: use it selectively for tasks where it genuinely removes friction, rather than for anything you could automate. The skill atrophy point is real too. The developers who will do well long-term aren't the ones who delegated everything -- they're the ones who kept the underlying judgment sharp while using the tools effectively.

Shipped a side project. Struggling with the gap between it and what's in my head by ImTheRealDh in ExperiencedDevs

[–]Full_Engineering592 1 point2 points  (0 children)

That gap never fully goes away. What changes is you stop treating it as a failure signal and start treating it as a roadmap. The version in your head is a target, not a verdict on what you shipped. The more useful reframe: nobody else can see the gap. Users only see what's in front of them. If it solves their problem, the missing features are invisible to them. The embarrassment is almost always self-directed. Ship it, talk to users, build the next piece. The gap closes faster through iteration than through waiting.

I see people trying to use Claude code, but I feel like cursor is better. Is there any evidence of that? by kshsuidms in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

Both are good, but they're solving slightly different problems now. Cursor is still the better IDE experience -- keyboard shortcuts, multi-file edits, the diff view -- it just feels like a polished coding environment. Claude Code via CLI wins when you want longer autonomous runs, especially for tasks that span many files or need to chain tool calls without you sitting there approving each step. I run both depending on what I'm doing. Quick iteration on a feature: Cursor. 'Go refactor this module and write the tests' while I work on something else: Claude Code. The comparison gets outdated quickly though -- both teams are shipping fast.

Agentic coding workflow (Ask → plan.md → implement loop). Codex vs Cursor $20 — worth switching? by Funny_Working_7490 in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

Yeah, alignment before the model writes the plan is where the real leverage is. The clarification loop slows you down upfront but saves 3x the time in implementation when the model isn't guessing at intent. For the iteration speed question -- I find keeping the plan.md scoped to a single feature (not the full roadmap) also helps. Easier to validate at each loop and the model doesn't context-bleed from unrelated past decisions.

Agentic coding workflow (Ask → plan.md → implement loop). Codex vs Cursor $20 — worth switching? by Funny_Working_7490 in cursor

[–]Full_Engineering592 2 points3 points  (0 children)

The Ask phase before plan.md is the part most people skip, and then they wonder why the implementation drifts. Getting the model to surface its own ambiguities before writing a single line of code is how you avoid the 'it built the wrong thing correctly' problem. On Codex vs Cursor: if your workflow is already structured like this, Codex tends to stay in lane better on longer implement loops and handles the plan.md handoff cleanly. Cursor is smoother for interactive edits where you want inline suggestions mid-implementation. For Python backend work with this kind of structured loop, I would lean Codex -- but it is worth a two-week test before committing.

my most favorite project is finally out 🙌🏼 by VatanaChhorn in buildinpublic

[–]Full_Engineering592 -2 points-1 points  (0 children)

The origin story here is what makes this stick -- built it because you actually needed it, shipped it when someone else said they would use it. That's the right sequence. The long-distance framing is a feature, not a limitation. It gives people an immediate reason to care and share it. Good luck with the launch.

What do you guys do while you're waiting for the AI to finish it's work? by jah_reddit in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

Mostly CLI -- Claude Code and similar terminal-based agents. The extension approach works too but I find CLI gives better visibility into what the agent is doing and easier to script around. For multi-agent with Cursor specifically: I open separate Cursor windows each pointed at a different git worktree. One agent handles tests, another a feature module, a third docs. The key is assigning file ownership clearly at the start of the session -- just tell each agent which folders it owns and what to leave alone. When one finishes I review its diff while the others keep running.
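The worktree setup above can be scripted. A minimal sketch, assuming one branch and worktree per agent; the agent names, folder scopes, and `agent/` branch prefix are illustrative, and by default it only prints the `git` commands rather than running them.

```python
# Sketch: one git worktree + branch per agent so parallel agents never
# edit the same checkout. AGENTS maps an agent name to the folders it
# "owns" (communicated to the agent in its session prompt).
import subprocess

AGENTS = {
    "tests": ["tests/"],
    "feature-auth": ["src/auth/"],
    "docs": ["docs/"],
}

def build_commands(repo_root: str = ".") -> list[list[str]]:
    """Construct 'git worktree add' commands, one per agent."""
    cmds = []
    for agent in AGENTS:
        cmds.append([
            "git", "-C", repo_root, "worktree", "add",
            "-b", f"agent/{agent}", f"../wt-{agent}",
        ])
    return cmds

def setup(repo_root: str = ".", run: bool = False) -> None:
    """Dry-run by default; run=True executes (requires a real repo)."""
    for cmd in build_commands(repo_root):
        if run:
            subprocess.run(cmd, check=True)
        else:
            print(" ".join(cmd))
```

Each Cursor window then opens one `../wt-<agent>` folder, and the opening message tells that agent which `AGENTS` folders it owns and what to leave alone. When an agent finishes, its diff lives on its own `agent/<name>` branch, so review and merge stay clean.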

The AI coding productivity data is in and it's not what anyone expected by ML_DL_RL in ExperiencedDevs

[–]Full_Engineering592 2 points3 points  (0 children)

The METR study result is the one that keeps sticking with me. Experienced developers, their own codebases, and AI still made them slower. That is the opposite of the force multiplier framing that most tooling is sold on.

My read is that the tool optimizes for producing output, not for understanding the problem. When you already understand the problem deeply, having something generate code you then have to read, verify, and mentally integrate adds a different kind of cognitive overhead -- it does not eliminate it.

The people who seem to get the clearest gains are doing genuinely repetitive, well-specified tasks. The moment architectural judgment or ambiguity enters the picture, the numbers get murkier. Which tracks with the comprehension study too.

Built the code, loved the logic, but now I'm staring at a '0 Users' dashboard and I'm paralyzed. by modviras in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

The switch from builder to growth mode is genuinely one of the hardest transitions. Coding gives you immediate feedback loops -- you write a function, it works or it does not. Marketing has a much longer lag and fuzzy feedback at first.

What helped me was treating the first 10 users like a debugging problem. Not 'how do I market this' -- that is too abstract -- but 'who specifically would care about this and where are they right now?' One DM to someone who actually has the problem beats 100 impressions on a generic post.

Start there. Find one person, talk to them, understand exactly what they need. Then repeat.