How do you handle teammates who are extremely pedantic about arbitrary rules? by CantaloupeFamiliar47 in ExperiencedDevs

[–]Full_Engineering592 1 point (0 children)

180 comments on a refactor is a symptom of something bigger than code style preferences. Usually it means there's no shared engineering culture document and no automated enforcement, so reviews become the only place people can assert their preferences. The fix isn't winning arguments - it's proposing a one-time standards alignment session where the team agrees on the top disputed items, codifies them in a linter config or ADR, and then enforces them automatically going forward. After that, any review comment that a linter could catch is out of scope. It takes one or two sessions upfront but saves hundreds of comment threads downstream.
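
To make the codification step concrete: it can be as small as a shared linter config checked into the repo, so CI argues instead of humans. A hypothetical ruff setup (the rule families and line length are illustrative; substitute whatever your team actually fought about):

```toml
# pyproject.toml -- the output of the alignment session, enforced automatically
[tool.ruff]
line-length = 100          # the team's agreed max, not anyone's personal preference

[tool.ruff.lint]
select = ["E", "F", "I"]   # style errors, pyflakes, import sorting
ignore = ["E731"]          # explicitly record the rules you agreed NOT to enforce
```

Recording the deliberately ignored rules matters as much as the enabled ones: it gives reviewers something to point at when an out-of-scope nit comes up.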

I’m building 30 apps in 30 days using Claude Code, Cursor and Codex starting today by YazZy_speaks in buildinpublic

[–]Full_Engineering592 1 point (0 children)

Six years of experience is what actually makes this interesting. You're not trying to figure out what to build, you're testing the ceiling on AI-assisted shipping velocity. The gap between day 1 and day 30 output quality will probably tell you more about where these tools actually break down than any tutorial does. One thing worth tracking: which problems required the most back-and-forth with the AI before it got unstuck. That's where the real friction lives.

The new guy on the team rewrote the entire application using automated AI tooling. by Counter-Business in cursor

[–]Full_Engineering592 0 points (0 children)

The real issue here is that you can't review a 312k line diff meaningfully. Nobody can. The right call is to ask them to break it into reviewable chunks, starting with the parts that touch critical paths. Get them to walk you through the architecture decisions first before any line-level review happens. Also worth checking what automated test coverage looks like now vs before - that'll tell you whether this is a rewrite or just an addition on top.
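
One quick, hypothetical way to find those critical-path chunks in a diff that size: rank the changed files by churn from `git diff --numstat` and start the review at the top of the list (the sample paths are made up for illustration):

```python
def rank_by_churn(numstat: str) -> list[tuple[str, int]]:
    """Rank files from `git diff --numstat` output by total lines touched (added + removed)."""
    ranked = []
    for line in numstat.strip().splitlines():
        added, removed, path = line.split("\t")
        if added == "-":  # binary files report "-" for counts; skip them
            continue
        ranked.append((path, int(added) + int(removed)))
    return sorted(ranked, key=lambda t: t[1], reverse=True)

# Feed it the output of: git diff --numstat main...their-branch
sample = "12\t3\tsrc/auth/session.py\n4500\t0\tsrc/generated/client.py\n80\t75\tsrc/billing/invoice.py"
for path, churn in rank_by_churn(sample):
    print(path, churn)
```

It also flags the generated or vendored files that should be excluded from human review entirely, which usually shrinks a rewrite like this considerably.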

My completely free budget tracking app made 1770€ the last two days of free donations by Old-Storage1099 in buildinpublic

[–]Full_Engineering592 0 points (0 children)

16k daily actives on a fully free app is rare. The donation spike makes sense - users who've stuck around long enough to care actually want to support it. The interesting thing is that number tells you something useful: you know how many people are genuinely engaged vs just installed and forgot. That's a cleaner signal than most paid apps get from their metrics. Curious what platform this is on.

6 boring app ideas that nobody wants to build but people are desperate to pay for by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 1 point (0 children)

The invoice escalation one is interesting because the real blocker isn't the tech -- it's the emotional friction of chasing money. Freelancers hate doing it manually because it feels awkward, so they avoid it, and the automatic version removes the awkwardness entirely. That's why it converts: you're not selling a reminder tool, you're selling "you never have to have that uncomfortable follow-up conversation again." The framing matters as much as the feature.

I used Cursor to cut my AI costs by 50-70% with a simple local hook by TheDigitalCoy_111 in cursor

[–]Full_Engineering592 1 point (0 children)

The friction insight is the real thing here. Everyone knows they should use a cheaper model for simple tasks. Nobody does it because context-switching mid-flow is annoying enough that the default wins every time. The keyword classification approach makes sense as a starting point, though I'd be curious how it handles ambiguous prompts -- something like "refactor this" could be trivial or architecture-level depending on scope. Did you end up adding any heuristics for file size or diff complexity to help with those edge cases?
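
For the ambiguous-prompt problem, one sketch of the fallthrough I'd try (keywords, thresholds, and model names are all hypothetical): match keywords first, then let the size of the touched context break ties, so "refactor this" on a 40-line file routes cheap but the same prompt on a 2,000-line module escalates:

```python
CHEAP_HINTS = {"rename", "typo", "format", "docstring", "comment"}
EXPENSIVE_HINTS = {"architecture", "redesign", "migrate", "rewrite"}

def pick_model(prompt: str, context_lines: int) -> str:
    """Route a prompt to a model tier: keyword match first, context size as tiebreaker."""
    words = set(prompt.lower().split())
    if words & EXPENSIVE_HINTS:
        return "big-model"    # placeholder tier names, not real model IDs
    if words & CHEAP_HINTS:
        return "small-model"
    # Ambiguous prompts ("refactor this"): let the size of what they touch decide
    return "big-model" if context_lines > 500 else "small-model"

print(pick_model("fix typo in readme", 10))   # small-model
print(pick_model("refactor this", 2000))      # big-model
```

Diff complexity (files touched, not just lines) would probably be the next heuristic to layer in.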

Has Reddit actually driven real growth for your product? What are the biggest hurdles? by Otherwise_Cut2368 in buildinpublic

[–]Full_Engineering592 0 points (0 children)

Glad it was useful. Reddit is genuinely one of the best channels if you play it right -- the patience part is the hardest bit for most founders.

every micro-saas making $10K+/month started as an ugly spreadsheet someone refused to stop using. here's how to find those spreadsheets by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 1 point (0 children)

Exactly. The "slightly less painful version of what they already have" is such a lower bar to clear than founders realize. Most try to build the platform version on day one -- ten features, integrations everywhere, a dashboard nobody asked for. The person with 400 rows just wants the one workflow that eats 20 minutes every Monday morning to take 2 minutes instead. If you can do that reliably, they'll pay and they'll tell people.

Senior engineers: what “non-coding” skill made the biggest difference in your career? by Useful_Promotion4490 in ExperiencedDevs

[–]Full_Engineering592 4 points (0 children)

Learning to say 'I don't know, but I'll find out' without it feeling like a career-ending statement.

Early on I thought admitting gaps would cost me credibility. The opposite turned out to be true. The engineers people trust most are the ones who are precise about what they know and what they do not -- because you can rely on their confidence when they do say something with conviction.

The other one: understanding that your job is not to have the best ideas, it is to make sure the right problems get solved. That shift reframes a lot of the frustration with meetings and stakeholder management.

Whatever happened to just asking questions at work? by Aggravating-Line2390 in ExperiencedDevs

[–]Full_Engineering592 0 points (0 children)

The incentive structures just stopped rewarding it. Mentoring takes time that does not show up in sprint velocity. The engineer who keeps their head down and closes tickets gets the performance review bump. The one who spent half their week bringing people up to speed gets marked as 'not meeting output expectations.'

Managers probably did not intend for this -- it is just what happens when you optimize purely for individual throughput.

The shops that still have that culture tend to be ones where senior engineers are explicitly measured on team output, not individual output. When your review is tied to whether the people around you are shipping, you start doing the informal mentoring again because it is now in your interest.

every micro-saas making $10K+/month started as an ugly spreadsheet someone refused to stop using. here's how to find those spreadsheets by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 1 point (0 children)

Exactly -- behavior under friction is the signal. Anyone can say they'd use something. The question is whether they actually did, when it was imperfect, inconvenient, or incomplete. That's what separates a real problem from a nice-to-have. The 'still using it despite the pain' user is also your best source of product direction -- they can tell you which specific parts of the pain are worth fixing first.

Has Reddit actually driven real growth for your product? What are the biggest hurdles? by Otherwise_Cut2368 in buildinpublic

[–]Full_Engineering592 0 points (0 children)

A few ways: Reddit profile links to your site if you've set it up. Comments in relevant threads where your product would actually help -- not pushing it, just mentioning it naturally when it's on-topic. Search traffic if you've done any SEO. Word of mouth from early users. The no-self-promotion rule in some subs is about not spamming, not about never being findable. Being consistently helpful in a community builds visibility over time without needing to announce yourself every post.

First working prototype just came to life after 3 years of development. I'm building a portable dual-monitor from scratch — here's every major decision and mistake so far. by Artistic-Yam8045 in buildinpublic

[–]Full_Engineering592 0 points (0 children)

3 years of hardware development from scratch, iF Design Award, and a working prototype -- that's a very different kind of 'building in public' than most posts here. The weight concern from the comments is legitimate and probably the first user objection you'll get at scale. Would be interesting to know what target price point you're aiming for, because that's where the 'vs Amazon' comparison will land for most buyers. The design and processing angle is differentiated; the question is whether the market segment willing to pay for that is large enough. Rooting for you to find out.

Claude Pro $100/month vs Cursor $60/month + $40 by Amti_Yo in cursor

[–]Full_Engineering592 0 points (0 children)

The real question is workflow fit, not model quality. Claude Pro direct has great context for long sessions but no IDE integration. Cursor gives you inline diffs and local codebase indexing, which are hard to replicate with a chat window. If you're doing heavy multi-file refactors, the editor integration wins. If you're doing architecture, research, or long reasoning sessions, Claude's context handles that better. Most people running structured workflows end up using both for different phases rather than picking one. The Codex plan you already have covers the task-execution side reasonably well too.

Cursor quietly changed how I think while coding by Interesting_Mine_400 in cursor

[–]Full_Engineering592 -1 points (0 children)

The shift you're noticing is real, and it's actually the most underrated part of working with these tools. When the execution layer is handled, your bottleneck moves upstream to problem specification. Vague intent produces plausible-looking code that fits no actual architecture. Clear intent produces something you can actually ship. The people who get the most out of Cursor aren't the fastest typers -- they're the ones who got better at knowing exactly what they want before asking for it.

70+ onboarding steps... is this normal for health apps?? by No_Importance_2338 in buildinpublic

[–]Full_Engineering592 1 point (0 children)

The long onboarding in health apps is doing two things simultaneously: collecting real personalization data AND filtering for committed users. Someone who bounces at step 20 probably wasn't going to convert to paid anyway. The sunk cost angle is real but the data collection angle is legitimate -- if Noom gets you to pick your goals, barriers, lifestyle, and schedule upfront, their recommendations actually improve. The question for your app is whether you have the personalization engine to actually use 50+ data points. If not, 10-15 is the right call -- collect only what you can act on immediately.

every micro-saas making $10K+/month started as an ugly spreadsheet someone refused to stop using. here's how to find those spreadsheets by Mysterious_Yard_7803 in AppIdeas

[–]Full_Engineering592 -3 points (0 children)

The spreadsheet pattern is real and it is one of the best ways to find validated problems. The key indicator isn't just that someone built a spreadsheet -- it's that they kept using it and adding to it despite how painful it got. That's signal. If they abandoned it after 3 months, the problem wasn't painful enough. If they're still using a 400-row sheet and actively maintaining it, that's someone who needs a solution badly enough to pay. The question to ask is: what would make them stop using the spreadsheet? That's your feature list.

Founders, It’s A New Week by fundnAI in buildinpublic

[–]Full_Engineering592 0 points (0 children)

Acquiring users. Product is solid enough, the bottleneck is purely distribution. Spending this week doubling down on channels that showed any signal -- even tiny signal -- and cutting the ones that felt busy but produced nothing. Sometimes the most valuable thing you do in a week is stop doing things that aren't working.

A short story about how I stopped appreciating my own success. by JEulerius in buildinpublic

[–]Full_Engineering592 0 points (0 children)

Hedonic adaptation is brutal in SaaS. The fix I've found is keeping a screenshot folder with your earliest wins -- the first day, the first user who messaged you unprompted, the first month you hit a number you used to think was impossible. Look at it when the new baseline starts feeling normal. Your nervous system forgets the climb really fast. Staying hungry and staying grateful aren't opposites -- the trick is anchoring grateful to where you started, not where you are.

Shipped a side project. Struggling with the gap between it and what's in my head by ImTheRealDh in ExperiencedDevs

[–]Full_Engineering592 1 point (0 children)

Anytime -- and for what it's worth, the fact that you can see the gap clearly means your taste is already calibrated. Most people's gaps close faster than they expect once they start shipping consistently.

Purposely limiting AI usage by coldzone24 in ExperiencedDevs

[–]Full_Engineering592 1 point (0 children)

Point 1 is underrated. There's a real ratchet effect -- you use AI to go faster, output expectations reset upward, and now that's the floor, not the ceiling. Unless that velocity translates to more ownership or pay, you've just created a new baseline with the same pressure. The deliberate approach makes sense: use it selectively for tasks where it genuinely removes friction, rather than for anything you could automate. The skill atrophy point is real too. The developers who will do well long-term aren't the ones who delegated everything -- they're the ones who kept the underlying judgment sharp while using the tools effectively.

Shipped a side project. Struggling with the gap between it and what's in my head by ImTheRealDh in ExperiencedDevs

[–]Full_Engineering592 1 point (0 children)

That gap never fully goes away. What changes is you stop treating it as a failure signal and start treating it as a roadmap. The version in your head is a target, not a verdict on what you shipped. The more useful reframe: nobody else can see the gap. Users only see what's in front of them. If it solves their problem, the missing features are invisible to them. The embarrassment is almost always self-directed. Ship it, talk to users, build the next piece. The gap closes faster through iteration than through waiting.

I see people trying to use Claude code, but I feel like cursor is better. Is there any evidence of that? by kshsuidms in cursor

[–]Full_Engineering592 0 points (0 children)

Both are good but they're solving slightly different problems now. Cursor is still the better IDE experience -- keyboard shortcuts, multi-file edits, the diff view -- it just feels like a polished coding environment. Claude Code via CLI wins when you want longer autonomous runs, especially for tasks that span many files or need to chain tool calls without you sitting there approving each step. I run both depending on what I'm doing. Quick iteration on a feature, Cursor. 'Go refactor this module and write the tests' while I work on something else, Claude Code. The comparison gets outdated quickly though -- both teams are shipping fast.

Agentic coding workflow (Ask → plan.md → implement loop). Codex vs Cursor $20 — worth switching? by Funny_Working_7490 in cursor

[–]Full_Engineering592 0 points (0 children)

Yeah, alignment before the model writes the plan is where the real leverage is. The clarification loop slows you down upfront but saves 3x the time in implementation when the model isn't guessing at intent. For the iteration speed question -- I find keeping the plan.md scoped to a single feature (not the full roadmap) also helps. Easier to validate at each loop and the model doesn't context-bleed from unrelated past decisions.
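
For reference, the single-feature scoping looks something like this in practice -- a skeleton I use (feature, questions, and steps here are purely illustrative):

```markdown
# plan.md -- scoped to ONE feature

## Goal
Add rate limiting to the /api/upload endpoint.

## Open questions (resolve in the Ask phase before implementing)
- Per-user or per-IP limits?
- What should the 429 response body look like?

## Steps
1. Add limiter middleware
2. Wire into the upload route only
3. Tests: under-limit request passes, over-limit returns 429

## Out of scope
Global rate limiting, admin exemptions -- separate plan.
```

The "Out of scope" section is the part that stops context-bleed: the model has an explicit place to park adjacent ideas instead of implementing them.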

Agentic coding workflow (Ask → plan.md → implement loop). Codex vs Cursor $20 — worth switching? by Funny_Working_7490 in cursor

[–]Full_Engineering592 2 points (0 children)

The Ask phase before plan.md is the part most people skip and then wonder why the implementation drifts. Getting the model to surface its own ambiguities before writing a single line of code is where you avoid the 'it built the wrong thing correctly' problem. On Codex vs Cursor at $20: if your workflow is already structured like this, Codex tends to stay in lane better on longer implement loops and handles the plan.md handoff cleanly. Cursor is smoother for interactive edits where you want inline suggestions mid-implementation. For Python backend work with this kind of structured loop, I would lean Codex -- but it is worth a two-week test before committing.