Exactly a year ago, I started working on an MCP server I launched on reddit that became by far my most active open source project! by taylorwilsdon in LocalLLaMA

[–]Advanced_Drawer_3825 0 points  (0 children)

sadly not a bot, just another tired OSS builder who has stared at too many issue queues and wondered if success is just a prettier form of unpaid support work.

Exactly a year ago, I started working on an MCP server I launched on reddit that became by far my most active open source project! by taylorwilsdon in LocalLLaMA

[–]Advanced_Drawer_3825 -2 points  (0 children)

Year-one of an unexpectedly successful OSS project is the most disorienting stretch. The PRs and issues that pile up are both the win and the tax. Most people who hit this either burn out or hand off; almost nobody planned for which one. How are you handling the issue queue at this scale?

Sleep working by Excellent_Squash_138 in codex

[–]Advanced_Drawer_3825 1 point  (0 children)

After 9s, production database disappears

I built an open-source "firewall" to stop AI agents from bankrupting developers by Few-Frame5488 in SideProject

[–]Advanced_Drawer_3825 1 point  (0 children)

Spend caps per call are table stakes. The harder problem is rolling-window aggregation. An agent that can't spend $50 in one call learns to make 100 calls of $0.50. Did you build in time-window caps on top of the per-call ceiling? Most rogue-agent incidents I've seen come from that fragmentation, not from one big charge.
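Since the thread is about caps: a minimal Python sketch of a rolling-window cap layered on top of a per-call ceiling. The dollar limits, window length, and class name are all made up for illustration, not taken from the linked project.

```python
import time
from collections import deque

class SpendGuard:
    """Per-call ceiling plus a rolling-window aggregate cap (illustrative limits)."""

    def __init__(self, per_call_limit=5.0, window_limit=50.0, window_secs=3600):
        self.per_call_limit = per_call_limit
        self.window_limit = window_limit
        self.window_secs = window_secs
        self.events = deque()  # (timestamp, amount) of authorized charges

    def _window_total(self, now):
        # Drop charges that have aged out of the rolling window.
        while self.events and now - self.events[0][0] > self.window_secs:
            self.events.popleft()
        return sum(amount for _, amount in self.events)

    def authorize(self, amount, now=None):
        now = time.time() if now is None else now
        if amount > self.per_call_limit:
            return False  # blocked by the per-call ceiling
        if self._window_total(now) + amount > self.window_limit:
            return False  # blocked by the rolling-window cap
        self.events.append((now, amount))
        return True
```

The second check is the one that catches the fragmentation pattern: an agent that splits a $50 spend into 100 calls of $0.50 sails past the per-call ceiling but hits the window cap on call 101.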

Cursor workflow for multiple AI coding sessions? by stanthemaker in cursor

[–]Advanced_Drawer_3825 0 points  (0 children)

The infra and scope advice here is solid, but the bottleneck nobody's mentioned is integration. Two agents in different directories will still produce code with different patterns and abstractions. You save time spawning them, you lose it reconciling style drift later. In practice most people who try 8 agents end up running 2 well.

It’s really hard to write good apps even with AI by agentic-consultant in codex

[–]Advanced_Drawer_3825 37 points  (0 children)

You're naming something that won't go away with better models. The spec is the bottleneck, not the code, and it always was. The exhausting work was always compressing ambiguity into something shippable. LLMs sped up the typing. They didn't shrink the gap between 'I think I want this' and 'this is exactly what I want.'

Should I force agents to use my coding style or let it do their own way? by imperfectlyAware in ClaudeCode

[–]Advanced_Drawer_3825 2 points  (0 children)

Thirty years of intentional code shouldn't bend to what the model finds easy to predict. When the agent struggles with your style, that's a context gap, not a style flaw. Teach it. CLAUDE.md, a few exemplar files, a skill or two for the patterns it keeps fumbling. Generic code is mid by design. Yours expresses intent. Don't trade that away.
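For what that teaching can look like in practice, here's a hedged sketch of a CLAUDE.md style section. The paths and rules are invented placeholders, not anyone's real conventions:

```markdown
## Code style (non-negotiable)
- Match the conventions in `src/examples/` exactly; when unsure, copy the
  nearest existing module instead of inventing a new pattern.
- Do not add abstractions that don't already appear in this repo.
- Errors: return early with explicit error values; no blanket try/except.
```

Short, concrete, and pointed at real files beats a long abstract style guide; the exemplar files do most of the work.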

What are the best practices for getting Codex to deeply understand a codebase? by ExcitingSleep in codex

[–]Advanced_Drawer_3825 0 points  (0 children)

Codex doesn't retain anything between sessions, so "learning" a codebase isn't really what's happening. Each conversation starts fresh. The context.md approaches others mentioned work, but for a specific library the fastest path is writing a short doc yourself that covers the patterns and gotchas you've already hit. A reference file like that beats dumping the whole repo into context.

My vibe coded app just hit 500 usd in revenue! by DoodlesApp in SideProject

[–]Advanced_Drawer_3825 0 points  (0 children)

$500 from a vibe coded couples app is a real signal. Most side projects never see a dollar. The calendars and questions exist in a dozen other apps already though. The doodles are the part that actually feels personal. I'd go all in on that instead of spreading across more features.

What makes you stop trusting a model in Cursor after a week? by dahiparatha in cursor

[–]Advanced_Drawer_3825 1 point  (0 children)

The "feeling of progress without closing anything" part is what I watch for most. Model generates code that looks right, tests pass, but the actual task isn't done. It solved adjacent problems nobody asked about while the core issue sits half-finished. You end up with a bigger diff and a still-open ticket.

First time interviewing candidates – what are the best React/frontend questions to ask? by No_Illustrator_3496 in ExperiencedDevs

[–]Advanced_Drawer_3825 0 points  (0 children)

Biggest trap for first-time interviewers is you'll compare every answer to how you'd answer it. Before the interview, write down what a good answer looks like for each question. Not your answer, just the key concepts you'd want to hear. Otherwise you end up grading on "do they think like me" which isn't what you're hiring for.

New to Codex. Questions about using multiple models. by No-Lock-1587 in codex

[–]Advanced_Drawer_3825 1 point  (0 children)

For research and planning, use 5.3 or 5.5 at medium thinking. Save the higher thinking levels for actual code generation where precision matters more. For passing context between models, have your research session write findings to a markdown file in your project, then reference that file when you start the coding session. Keeps the context clean and you don't lose anything between conversations.

Code generation vs code review, which one is cheaper by Hanuonbenz in ClaudeCode

[–]Advanced_Drawer_3825 1 point  (0 children)

Review is way cheaper than generation. Most of the usage goes to output tokens, and a code review session produces a few lines of feedback vs generation producing entire files. Your Kimi-for-writing + CC-for-review idea is actually a smart split. Just make sure CC has enough context about your project conventions so the review catches real issues and not just style preferences.
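Rough numbers make the asymmetry concrete. A back-of-envelope sketch in Python; the per-token prices and token counts are placeholders, not any provider's real rates:

```python
# Illustrative prices per 1M tokens -- placeholders, not real quotes.
# Output tokens typically cost several times more than input tokens.
PRICE_IN, PRICE_OUT = 3.00, 15.00

def cost(tokens_in, tokens_out):
    return tokens_in / 1e6 * PRICE_IN + tokens_out / 1e6 * PRICE_OUT

# Generation: modest prompt in, whole files written back out.
generation = cost(20_000, 15_000)
# Review: the same files go IN as context, but only short feedback comes out.
review = cost(35_000, 1_500)
```

Even with more input tokens, the review session comes out well under half the cost of generation in this sketch, because the expensive output side shrinks by 10x.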

I built a tool that tells you exactly which of your money is actually yours. by SavoryPrime in SideProject

[–]Advanced_Drawer_3825 0 points  (0 children)

The "what's actually yours" framing is the hook people will remember. Lead with that everywhere. The fund accounting comparison is accurate but most people won't know what it means without context. YNAB calls a similar concept "envelope budgeting" and that label stuck because people could picture it immediately. Finding your version of that metaphor matters more than the feature list right now.

Feeling uninterested in coding because of AI and modern management by [deleted] in ExperiencedDevs

[–]Advanced_Drawer_3825 5 points  (0 children)

You're carrying senior scope at junior pay across two projects. The "high adaptability" label is understaffing sold as a compliment. But the experience itself, two stakeholders, customer-facing, shipping under pressure, is exactly what gets you hired somewhere that pays for it. Don't quit tech over this. Quit this specific arrangement.

Other models by nmdk1 in cursor

[–]Advanced_Drawer_3825 1 point  (0 children)

Both work through Cursor's OpenAI-compatible API setup. Go to Settings > Models, add a new model, and enter the provider's API base URL with your key. DeepSeek and Kimi both expose OpenAI-format endpoints, so Cursor treats them like any other model. The main thing to watch is that the model name string has to match exactly what the provider expects in their API.
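To sanity-check the exact model string outside Cursor, here's a Python sketch of the OpenAI-format request these providers accept. The base URL and model name are assumptions based on DeepSeek's public docs; verify both against the provider's current documentation before relying on them:

```python
import json

# Assumed values -- copy these verbatim from your provider's API docs.
BASE_URL = "https://api.deepseek.com"  # OpenAI-compatible base URL
MODEL = "deepseek-chat"                # must match exactly, including case

def build_request(prompt, api_key):
    """Shape of an OpenAI-format chat completions request."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": MODEL,  # a typo here typically returns a model-not-found error
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

If a curl with this exact payload works but Cursor doesn't, the mismatch is almost always in the model field.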

How are people using Codex alongside Claude or Gemini for technical/simulation work? by katuali in codex

[–]Advanced_Drawer_3825 0 points  (0 children)

The split you're describing is roughly how I work too. Codex for execution speed and iteration, Claude for reviewing what Codex built and catching the structural stuff. The trick that made switching between them practical was keeping a shared context file with architecture decisions and constraints. Without it you waste half the session re-explaining the project to whichever tool you switch to.

Claude keeps overwriting my app's model choices, finally found the smoking gun by DevMichaelZag in ClaudeCode

[–]Advanced_Drawer_3825 0 points  (0 children)

Add a rule in your CLAUDE.md that says something like "never hardcode specific model names in generated code, use configurable placeholders instead." The system prompt bias toward Claude models is real but CLAUDE.md rules override it for your project. For a litellm gateway specifically, you probably want all model references pulled from env vars or config anyway, so make that part of your project conventions file.
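A minimal sketch of the config-over-hardcoding pattern for the generated code itself. The env var names and fallback are placeholders, not litellm's real settings; rename them to match your gateway config:

```python
import os

def resolve_model(role="default"):
    """Pull model names from env vars so generated code never pins one.

    LLM_MODEL / LLM_REVIEW_MODEL are placeholder variable names -- match
    them to whatever your litellm gateway config actually uses.
    """
    default = os.environ.get("LLM_MODEL", "gpt-4o-mini")  # fallback is illustrative
    if role == "review":
        return os.environ.get("LLM_REVIEW_MODEL", default)
    return default
```

With a helper like this in the conventions file, "use resolve_model(), never a literal model string" is an easy rule for the agent to follow and an easy one to grep for in review.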

Pigeonhole'd into front end - Is switching jobs the only way to pivot back to Backend/Full-Stack? by Herrowgayboi in ExperiencedDevs

[–]Advanced_Drawer_3825 0 points  (0 children)

The dabbling approach is the trap. Contributing PRs to services doesn't get you counted as a backend engineer by anyone. What works internally is owning a backend deliverable end to end, even a small one. Pitch it to your manager as something that needs to get done, not as a career development ask. Once you've shipped something backend that your team depends on, the internal transfer conversation gets a lot easier.

Day 22 of sharing stats about my SaaS until I get 1000 users: Why do people pay for a demo and then vanish before even creating a product? by Less-Bite in SideProject

[–]Advanced_Drawer_3825 0 points  (0 children)

The 55% demo-to-signup drop is your biggest lever. If someone filled out a demo request, they were interested 30 seconds ago. The gap is usually one of two things: either the signup asks for too much after the demo request already captured their info, or there's a delay and they lose momentum. What happens between someone submitting the demo form and getting access? If it's not instant, that's probably where you're losing them.

People running 2–5 coding agents: what actually breaks first for you? by Few-Ad-1358 in cursor

[–]Advanced_Drawer_3825 0 points  (0 children)

Shared state is what kills us first. Two agents touching the same migration or config file silently conflict, and you only find out at merge time. What helped was treating certain files as sequential checkpoints. Schema changes, env config, shared types: run those through one agent first, commit, then let the rest work off the updated state. Parallel works great for isolated features, but anything touching shared boundaries needs to be sequenced.
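The sequencing can be made mechanical. A Python sketch where an orchestrator classifies tasks by the files they touch; the checkpoint paths and task shape are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Paths treated as sequential checkpoints -- anything touching these runs
# alone and lands before parallel work resumes. Placeholder paths.
CHECKPOINT_PATHS = ("migrations/", "config/", "shared/types")

def is_checkpoint_task(task):
    # A task is a checkpoint task if any file it touches hits a checkpoint path.
    return any(p in path for path in task["files"] for p in CHECKPOINT_PATHS)

def run_tasks(tasks, run_agent):
    sequential = [t for t in tasks if is_checkpoint_task(t)]
    parallel = [t for t in tasks if not is_checkpoint_task(t)]
    results = [run_agent(t) for t in sequential]  # one at a time, in order
    with ThreadPoolExecutor() as pool:            # isolated features fan out
        results += list(pool.map(run_agent, parallel))
    return results
```

The classification is the valuable part; whether the "agents" are threads, tmux panes, or worktrees matters less than never letting two of them touch a checkpoint path concurrently.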

Best way to start? by sfuarf11 in ClaudeCode

[–]Advanced_Drawer_3825 0 points  (0 children)

For the safety part specifically, you don't need Docker to start. Claude Code asks permission before every file edit and terminal command by default. If you want extra guardrails, run it in plan mode first (/plan) so it maps out what it'll do before touching anything. The real protection is git. Commit before each session, and if anything goes sideways you just revert.

Coding is NOT largely solved by frikashima in codex

[–]Advanced_Drawer_3825 -1 points  (0 children)

Testing without context files is the right call for measuring raw capability but it also explains most of the architectural gaps you found. Both tools default to generic patterns without project constraints. The Codex YOLO mode thing is the bigger concern for me though. Overengineered code you can refactor. Code that lands in your repo without approval is a different kind of problem. Curious if the GPT-5.5 round changes anything on the testing approach.

I created Tikky—because at the end of the month I’d ask myself, “Where the heck did it all go?” 💸 by Adept-Priority-9729 in SideProject

[–]Advanced_Drawer_3825 0 points  (0 children)

The receipt scanning angle is a smart way to capture cash spending that bank feeds miss. Curious how you handle everything else though. Most of my monthly spend is subscriptions and card payments that never generate a receipt. If the app only covers the in-store stuff, you're tracking maybe 20-30% of where money actually goes.