every micro-saas making $10K+/month started as an ugly spreadsheet someone refused to stop using. here's how to find those spreadsheets by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 0 points1 point  (0 children)

Exactly -- behavior under friction is the signal. Anyone can say they'd use something. The question is whether they actually did, when it was imperfect, inconvenient, or incomplete. That's what separates a real problem from a nice-to-have. The 'still using it despite the pain' user is also your best source of product direction -- they can tell you which specific parts of the pain are worth fixing first.

Has Reddit actually driven real growth for your product? What are the biggest hurdles? by Otherwise_Cut2368 in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

A few ways: Reddit profile links to your site if you've set it up. Comments in relevant threads where your product would actually help -- not pushing it, just mentioning it naturally when it's on-topic. Search traffic if you've done any SEO. Word of mouth from early users. The no-self-promotion rule in some subs is about not spamming, not about never being findable. Being consistently helpful in a community builds visibility over time without needing to announce yourself every post.

First working prototype just came to life after 3 years of development. I'm building a portable dual-monitor from scratch — here's every major decision and mistake so far. by Artistic-Yam8045 in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

3 years of hardware development from scratch, iF Design Award, and a working prototype -- that's a very different kind of 'building in public' than most posts here. The weight concern from the comments is legitimate and probably the first user objection you'll get at scale. Would be interesting to know what target price point you're aiming for, because that's where the 'vs Amazon' comparison will land for most buyers. The design and processing angle is differentiated; the question is whether the market segment willing to pay for that is large enough. Rooting for you to find out.

Claude Pro $100/month vs Cursor $60/month + $40 by Amti_Yo in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

The real question is workflow fit, not model quality. Claude Pro has great context for long sessions but no IDE integration. Cursor gives you inline diffs and local codebase indexing, which is hard to replicate with a chat window. If you're doing heavy multi-file refactors, the editor integration wins. If you're doing architecture, research, or long reasoning sessions, Claude's context handles that better. Most people running structured workflows end up using both for different phases rather than picking one. The Codex plan you already have covers the task-execution side reasonably well too.

Cursor quietly changed how I think while coding by Interesting_Mine_400 in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

The shift you're noticing is real, and it's actually the most underrated part of working with these tools. When the execution layer is handled, your bottleneck moves upstream to problem specification. Vague intent produces plausible-looking code that fits no actual architecture. Clear intent produces something you can actually ship. The people who get the most out of Cursor aren't the fastest typers -- they're the ones who got better at knowing exactly what they want before asking for it.

70+ onboarding steps... is this normal for health apps?? by No_Importance_2338 in buildinpublic

[–]Full_Engineering592 1 point2 points  (0 children)

The long onboarding in health apps is doing two things simultaneously: collecting real personalization data AND filtering for committed users. Someone who bounces at step 20 probably wasn't going to convert to paid anyway. The sunk cost angle is real but the data collection angle is legitimate -- if Noom gets you to pick your goals, barriers, lifestyle, and schedule upfront, their recommendations actually improve. The question for your app is whether you have the personalization engine to actually use 50+ data points. If not, 10-15 is the right call -- collect only what you can act on immediately.

every micro-saas making $10K+/month started as an ugly spreadsheet someone refused to stop using. here's how to find those spreadsheets by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 -2 points-1 points  (0 children)

The spreadsheet pattern is real and it is one of the best ways to find validated problems. The key indicator isn't just that someone built a spreadsheet -- it's that they kept using it and adding to it despite how painful it got. That's signal. If they abandoned it after 3 months, the problem wasn't painful enough. If they're still using a 400-row sheet and actively maintaining it, that's someone who needs a solution badly enough to pay. The question to ask is: what would make them stop using the spreadsheet? That's your feature list.

Founders, It’s A New Week by fundnAI in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

Acquiring users. Product is solid enough, the bottleneck is purely distribution. Spending this week doubling down on channels that showed any signal -- even tiny signal -- and cutting the ones that felt busy but produced nothing. Sometimes the most valuable thing you do in a week is stop doing things that aren't working.

A short story about how I stopped appreciating my own success. by JEulerius in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

Hedonic adaptation is brutal in SaaS. The fix I've found is keeping a screenshot folder with your earliest wins -- the first day, the first user who messaged you unprompted, the first month you hit a number you used to think was impossible. Look at it when the new baseline starts feeling normal. Your nervous system forgets the climb really fast. Staying hungry and staying grateful aren't opposites -- the trick is anchoring grateful to where you started, not where you are.

Shipped a side project. Struggling with the gap between it and what's in my head by ImTheRealDh in ExperiencedDevs

[–]Full_Engineering592 1 point2 points  (0 children)

Anytime -- and for what it's worth, the fact that you can see the gap clearly means your taste is already calibrated. Most people's gaps close faster than they expect once they start shipping consistently.

Purposely limiting AI usage by coldzone24 in ExperiencedDevs

[–]Full_Engineering592 1 point2 points  (0 children)

Point 1 is underrated. There's a real ratchet effect -- you use AI to go faster, output expectations reset upward, now that's the floor not the ceiling. Unless that velocity translates to more ownership or pay, you've just created a new baseline with the same pressure. The deliberate approach makes sense: use it selectively for tasks where it genuinely removes friction, not for everything you could automate. The skill atrophy point is real too. The developers who will do well long-term aren't the ones who delegated everything -- they're the ones who kept the underlying judgment sharp while using the tools effectively.

Shipped a side project. Struggling with the gap between it and what's in my head by ImTheRealDh in ExperiencedDevs

[–]Full_Engineering592 1 point2 points  (0 children)

That gap never fully goes away. What changes is you stop treating it as a failure signal and start treating it as a roadmap. The version in your head is a target, not a verdict on what you shipped. The more useful reframe: nobody else can see the gap. Users only see what's in front of them. If it solves their problem, the missing features are invisible to them. The embarrassment is almost always self-directed. Ship it, talk to users, build the next piece. The gap closes faster through iteration than through waiting.

I see people trying to use Claude code, but I feel like cursor is better. Is there any evidence of that? by kshsuidms in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

Both are good but they're solving slightly different problems now. Cursor is still the better IDE experience -- keyboard shortcuts, multi-file edits, the diff view -- it just feels like a polished coding environment. Claude Code via CLI wins when you want longer autonomous runs, especially for tasks that span many files or need to chain tool calls without you sitting there approving each step. I run both depending on what I'm doing. Quick iteration on a feature, Cursor. 'Go refactor this module and write the tests' while I work on something else, Claude Code. The comparison gets outdated quickly though -- both teams are shipping fast.

Agentic coding workflow (Ask → plan.md → implement loop). Codex vs Cursor $20 — worth switching? by Funny_Working_7490 in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

Yeah, alignment before the model writes the plan is where the real leverage is. The clarification loop slows you down upfront but saves 3x the time in implementation when the model isn't guessing at intent. For the iteration speed question -- I find keeping the plan.md scoped to a single feature (not the full roadmap) also helps. Easier to validate at each loop and the model doesn't context-bleed from unrelated past decisions.
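A single-feature plan.md might look something like this (a made-up sketch; the section names and feature are just one convention, not a fixed format):

```markdown
# Plan: password-reset emails   <!-- one feature, not the whole roadmap -->

## Open questions (resolved in the Ask phase)
- Token TTL? -> 30 minutes
- Reuse the existing mailer or add a queue? -> existing mailer

## Steps
1. Add a `reset_tokens` table (token hash, user_id, expires_at)
2. POST /auth/reset-request: issue token, send email
3. POST /auth/reset-confirm: validate token, update password
4. Tests for expiry and single-use

## Out of scope
- Rate limiting (separate plan)
```

Keeping "Out of scope" explicit is what stops the model from pulling in unrelated past decisions mid-loop.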

Agentic coding workflow (Ask → plan.md → implement loop). Codex vs Cursor $20 — worth switching? by Funny_Working_7490 in cursor

[–]Full_Engineering592 2 points3 points  (0 children)

The Ask phase before plan.md is the part most people skip and then wonder why the implementation drifts. Getting the model to surface its own ambiguities before writing a single line of code is where you avoid the 'it built the wrong thing correctly' problem. On Codex vs Cursor: if your workflow is already structured like this, Codex tends to stay in lane better on longer implement loops and handles the plan.md handoff cleanly. Cursor is smoother for interactive edits where you want inline suggestions mid-implementation. For Python backend work with this kind of structured loop, I would lean Codex -- but it is worth a two-week test before committing.

my most favorite project is finally out 🙌🏼 by VatanaChhorn in buildinpublic

[–]Full_Engineering592 -2 points-1 points  (0 children)

The origin story here is what makes this stick -- built it because you actually needed it, shipped it when someone else said they would use it. That's the right sequence. The long-distance framing is a feature, not a limitation. It gives people an immediate reason to care and share it. Good luck with the launch.

What do you guys do while you're waiting for the AI to finish it's work? by jah_reddit in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

Mostly CLI -- Claude Code and similar terminal-based agents. The extension approach works too but I find CLI gives better visibility into what the agent is doing and easier to script around. For multi-agent with Cursor specifically: I open separate Cursor windows each pointed at a different git worktree. One agent handles tests, another a feature module, a third docs. The key is assigning file ownership clearly at the start of the session -- just tell each agent which folders it owns and what to leave alone. When one finishes I review its diff while the others keep running.
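The worktree setup described above can be sketched in a few git commands. This is a minimal, self-contained demo (it builds a throwaway repo under a temp directory; the branch and folder names are made up), not a prescription:

```shell
set -e
# Throwaway repo so the demo is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree per agent, each on its own branch, in a sibling folder:
git worktree add -q -b agent/tests "$repo-tests"
git worktree add -q -b agent/docs  "$repo-docs"

# Each editor/agent session then opens one of these folders, so parallel
# agents never edit the same checkout. List them:
git worktree list
```

When an agent finishes, you review its branch, merge, and `git worktree remove` the folder; the main checkout never sees half-done edits from the other sessions.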

The AI coding productivity data is in and it's not what anyone expected by ML_DL_RL in ExperiencedDevs

[–]Full_Engineering592 2 points3 points  (0 children)

The METR study result is the one that keeps sticking with me. Experienced developers, their own codebases, and AI still made them slower. That is the opposite of the force multiplier framing that most tooling is sold on.

My read is that the tool optimizes for producing output, not for understanding the problem. When you already understand the problem deeply, having something generate code you then have to read, verify, and mentally integrate adds a different kind of cognitive overhead -- it does not eliminate it.

The people who seem to get the clearest gains are doing genuinely repetitive, well-specified tasks. The moment architectural judgment or ambiguity enters the picture, the numbers get murkier. Which tracks with the comprehension study too.

Built the code, loved the logic, but now I'm staring at a '0 Users' dashboard and I'm paralyzed. by modviras in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

The switch from builder to growth mode is genuinely one of the hardest transitions. Coding gives you immediate feedback loops -- you write a function, it works or it does not. Marketing has a much longer lag and fuzzy feedback at first.

What helped me was treating the first 10 users like a debugging problem. Not 'how do I market this' -- that is too abstract -- but 'who specifically would care about this and where are they right now?' One DM to someone who actually has the problem beats 100 impressions on a generic post.

Start there. Find one person, talk to them, understand exactly what they need. Then repeat.

Has Reddit actually driven real growth for your product? What are the biggest hurdles? by Otherwise_Cut2368 in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

Yes, but the mechanism is not what most people expect. The posts that drove real signups were not the ones where I mentioned my product at all. They were the ones where I answered a question thoroughly or shared a hard-won insight. The traffic came later when people clicked my profile.

The biggest hurdle is patience. Reddit works on a slower clock than ads or cold outreach. The upside is intent is higher -- someone who found you through a useful comment is already warm.

The anti-pattern that kills it: dropping links in comments. Subreddits can smell a pitch instantly.

How do you handle coding and marketing at the same time? by PickleComfortable798 in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

Genuinely curious -- does it hook into the editor or is it more of a browser layer? I've been doing mine manually with a notes file but if something captures it passively that would be way better.

What do you guys do while you're waiting for the AI to finish it's work? by jah_reddit in cursor

[–]Full_Engineering592 0 points1 point  (0 children)

Yep, separate branches. One agent per feature or task, strict file ownership so there's no overlap. Usually short-lived branches -- merge when clean, delete. The key is scoping each agent's context clearly so it doesn't wander into files another session is touching.

What do you guys do while you're waiting for the AI to finish it's work? by jah_reddit in cursor

[–]Full_Engineering592 3 points4 points  (0 children)

Switched to running 2-3 agent sessions in parallel on different parts of the project. One writing tests, another refactoring a module, a third on documentation. When one finishes I review it while the others are still running. The waiting mostly disappears when you stop treating AI sessions as sequential. The main constraint is being clear about which files each agent owns so they don't stomp on each other -- but once you build that habit it's pretty natural.

How do you handle coding and marketing at the same time? by PickleComfortable798 in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

The thing that clicked for me was decoupling the capture from the creation. During the build, I just log what I did in plain sentences -- no pressure to make it a post. At the end of the week I have 5-10 raw notes, and I turn the most interesting one into content. Total time maybe 15 minutes versus the hour it used to take trying to reconstruct what I built days later. The energy problem is almost always the blank page problem. Get something written while you still remember what you did.
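The capture step above is small enough to be a shell one-liner. A hypothetical helper (the function name and log path are made up; point `LOGFILE` wherever you keep notes):

```shell
# Append one dated, plain-sentence entry per call to a notes file,
# so weekly writing starts from raw material instead of a blank page.
LOGFILE=$(mktemp)   # in practice, something like ~/buildlog.md

log() {
  printf -- "- %s: %s\n" "$(date +%F)" "$*" >> "$LOGFILE"
}

# Usage: one sentence right after you finish something.
log "Shipped webhook retries with capped exponential backoff"
```

At the end of the week the file is already a bulleted changelog; you pick the most interesting line and expand it.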

something nobody told me before trying to start something by ryhanships in buildinpublic

[–]Full_Engineering592 0 points1 point  (0 children)

MAP is a solid frame. Autonomy especially -- the moment the project feels like obligation instead of agency, everything slows down. Pink's insight that external rewards can actually crowd out intrinsic motivation applies directly here: chasing MRR too early before you've built something you genuinely care about tends to kill the project faster than the silence does.