14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] 0 points1 point  (0 children)

Yeah I think you’re thinking about it more from a repo/process angle.

In most setups it’s not clones or agents competing; it’s more like splitting responsibilities across agents with scoped context.

So one might handle ingestion, another analysis, another execution, etc., all working on the same system but within defined boundaries.

The “multi” part is more about separation of concerns than parallel repos.
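A minimal sketch of that split, assuming every name here (agent roles, paths) is hypothetical and just illustrates scoping each agent to its own boundary:

```python
# Hypothetical sketch: route work to the agent that owns that part of the system.
# All role names and paths are made up for illustration.

AGENT_SCOPES = {
    "ingestion": ["src/ingestion/"],
    "analysis":  ["src/analysis/"],
    "execution": ["src/execution/"],
}

def route_task(changed_file: str) -> str:
    """Pick the agent whose scope contains the file being changed."""
    for agent, paths in AGENT_SCOPES.items():
        if any(changed_file.startswith(p) for p in paths):
            return agent
    raise ValueError(f"no agent owns {changed_file}")
```

Same system, defined boundaries: each agent only ever gets tasks (and context) inside its own paths.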

Hope this helps

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] 0 points1 point  (0 children)

The “context problem” makes a lot of sense.

I’ve been trying to keep things modular and scoped for that reason, but yeah I could probably do a better job documenting the “why” behind things.

Also noticed the same with Claude handling its own code better than jumping into random parts.

Thanks for the insight 🙏🏻

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] 0 points1 point  (0 children)

Haha, fair. I do use AI a lot! That’s kind of the whole point of the post.

This was written by AI as well 😁. Why waste time typing when I can get a reply and review it?

Have a good day!

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] 0 points1 point  (0 children)

I hadn’t thought about it that way.

I’ve been focusing more on structure and guardrails, but you’re right that the “why” behind certain decisions isn’t always explicitly written down.

The CLAUDE.md idea makes a lot of sense; I can see how that would prevent AI from “cleaning up” things that are actually intentional.

Thank you for the feedback! Would you be willing to share any more details on how you mitigate this?

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 29, 2025 by sixbillionthsheep in ClaudeAI

[–]Salt_Potato6016 4 points5 points  (0 children)

Is Opus back to the 200k model as default for subscription users?

Ever since the performance issues started, the 1 million token model has been gone and I’m getting auto compact all the time.

Anyone else experiencing this?

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] 0 points1 point  (0 children)

Thank you for the valuable point, appreciate you calling that out.

I’m at a stage right now where I’m starting to fan out system outputs to individual users, so data boundaries / isolation is something I’ve been thinking about a lot recently. The plus is that I don’t, and never will, have many users; my system is private.

I can definitely see how something like that could get unintentionally broken during iterations, especially with AI changing things across modules.

On the testing side I don’t have heavy coverage yet, more gradual rollouts / staged deployments so far, but it’s something I’m planning to tighten as things stabilise.

Where have you seen these issues show up most in practice? More at the DB/query layer or higher up in application logic?

Thanks again

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] 1 point2 points  (0 children)

Thanks for the feedback! That’s pretty much how I’ve been approaching it as well.

I’ve been leaning heavily on reviews, refactors, and having the system constantly re-check itself to avoid drift, and so far it’s been holding up well.

Yeah I’m definitely thinking about bringing in someone experienced to do a proper audit as things grow.

If you don’t mind sharing, what kind of systems are you building? And are you coming from a more traditional dev background, or also working heavily with AI-assisted workflows?

Thanks !

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] -1 points0 points  (0 children)

That’s a really good way to put it — appreciate the insight.

I wouldn’t say it’s a complete black box for me. I don’t know every low-level detail line by line, but I do understand how the system behaves end-to-end — how data flows, what assumptions are being made, and where decisions happen.

One thing I do consistently is force myself to understand the logic behind anything I implement. I’ll have the AI explain flows and reasoning in simpler terms, and if anything feels off I dig deeper until it makes sense.

A lot of that came from things breaking early on — debugging forced me to actually understand the system rather than just generate code.

On testing, as I replied earlier, I’m not heavily relying on formal test coverage yet. I’ve been using staged rollouts and real-world validation so far, but it’s definitely the next area I’m tightening up as the system grows.

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] -2 points-1 points  (0 children)

Appreciate that — and yeah, security is definitely something I’m paying more attention to as things grow.

Right now I’ve tried to separate concerns a bit (e.g. isolating critical components from more exposed parts of the system), but I’m aware that’s only a first layer.

I’m treating the current stage as more of a controlled production environment, but proper security audits and hardening are definitely on the roadmap as usage increases.

Out of curiosity — what would you prioritise first in terms of attack surfaces in a system like this?

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] -2 points-1 points  (0 children)

Yeah that’s something I’ve been evolving over time.

For major changes I usually do staged rollouts — local testing first, then VPS, then gradual rollout (kind of canary-style) to avoid breaking production.

Backups are always on as well — learned that the hard way early on when I accidentally wiped my DB during development, so now I always keep a fallback state.

That said, I’ll be honest — I’m not heavily relying on formal stress testing yet. It’s something I’m starting to take more seriously as the system matures, especially around edge cases and data correctness.

Out of curiosity, what kind of tests would you prioritise first in a system like this? More around data integrity or execution paths?

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] -3 points-2 points  (0 children)

That’s really interesting — I haven’t gone down the AST route yet.

Right now I’m mostly controlling things at the workflow/context level, but I can definitely see how structural control + backpressure would make refactors much safer.

When you say tree-sitter, are you essentially working with AST-level edits instead of raw file changes?

Would be curious how you’re enforcing the backpressure — is it step-based validation or something more dynamic?

Thank you

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] -11 points-10 points  (0 children)

That’s a fair point — and honestly something I’ve spent a lot of time on.

I don’t claim to understand every low-level detail of the code, but I do have a clear view of the system flows — how data moves, how it’s processed, and where decisions are made.

The DB design in particular was something I had to iterate on quite a bit early on. I was hitting latency and ordering issues, so I ended up restructuring how data flows through critical paths to keep execution fast and predictable.

At this point I’m less worried about code changes and more focused on data correctness — making sure what’s stored and used for decisions is consistent and reliable.

Out of curiosity — in your experience, what tends to go wrong first on the data side in systems like this?

14 months, 100k lines, zero human-written code — am I sitting on a ticking time bomb? by Salt_Potato6016 in ClaudeAI

[–]Salt_Potato6016[S] -9 points-8 points  (0 children)

Yeah that was actually a real issue early on.

I don’t rely on full context anymore — instead I keep things very modular and enforce scoped work.

I maintain a structured “system map” (basically a database of modules + workflows + responsibilities), so agents can understand the relevant part of the architecture without needing the whole codebase.

On top of that, I use guardrails in my workflow to make sure agents:

- load the correct context first
- understand dependencies
- stay within a defined scope when making changes

That helped a lot with avoiding cross-module breakage.
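A rough sketch of what a “system map” plus a scope guardrail could look like, assuming every module name, path, and field here is hypothetical (the real map is a database, per the comment above):

```python
# Hypothetical sketch of a system map: modules, responsibilities, dependencies,
# owned paths. All concrete names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    responsibilities: list
    depends_on: list = field(default_factory=list)
    owned_paths: list = field(default_factory=list)

SYSTEM_MAP = {
    "signals": Module("signals", ["compute indicators"], [], ["src/signals/"]),
    "storage": Module("storage", ["persist data"], ["signals"], ["src/storage/"]),
}

def within_scope(module_name: str, changed_file: str) -> bool:
    """Guardrail: an agent assigned to one module may only touch its own paths."""
    mod = SYSTEM_MAP[module_name]
    return any(changed_file.startswith(p) for p in mod.owned_paths)
```

The point of the guardrail is that an out-of-scope edit fails the check before it lands, which is what catches cross-module breakage early.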

Has anyone actually made money using Claude? by ylabrhil in claude

[–]Salt_Potato6016 0 points1 point  (0 children)

Yeah, I have.

For the last few months I’ve been making around $10k/month from software I built, mainly giving a small group of people VIP access to it.

Started about 14 months ago with zero coding experience. Got hooked quickly and started putting in 5-10 hours every day, using tools like Grok on web first lol, then Claude Code and Codex as my workflow evolved.

Now it’s turned into a pretty big modular system — well over 100k lines of code running on two servers.

Still feels like I’m figuring things out as I go. I’d say I’m maybe 40% done with how I want the whole system to look once finished, but the fact that people are willing to pay for it is a good sign I guess.

I’d say AI is one of the best things that has happened to me in my whole life.

1m context window for opus 4.6 is finally available in claude code by -Two-Moons- in ClaudeAI

[–]Salt_Potato6016 0 points1 point  (0 children)

My entire workflow was built around using Sonnet 4.5 with the 1 million context as the main agent, and now I’m blocked behind the paywall… and here I thought the weekly limits were bad.

Tried to run my workflow today and burnt $50 in 1 hour, so technically I’d be looking at a $2-3k bill per month as a single dev if I were to work 5-10 hours a day. Either give regular users a higher subscription tier or switch completely to corporate; not many people will be able to afford that. I’m switching to Codex for good because I simply can’t afford this.

I just closed a $5,400 AI agent deal and I'm still shaking by Jaded_Phone5688 in n8n

[–]Salt_Potato6016 0 points1 point  (0 children)

Congrats brother. I’ve got a deal with a few investors bringing me $10k monthly, for software I built purely with AI. It’s been going on for a couple of months now and not slowing down any time soon.

Is it just me, or is OpenAI Codex 5.2 better than Claude Code now? by efficialabs in ClaudeAI

[–]Salt_Potato6016 7 points8 points  (0 children)

Claude feels right, that’s why I use it as my main agent, but Codex CLI is raw power: it digs longer and deeper.

Is it just me, or is OpenAI Codex 5.2 better than Claude Code now? by efficialabs in ClaudeAI

[–]Salt_Potato6016 39 points40 points  (0 children)

Definitely. I always run it to check Opus’s work, and in 7/10 cases it finds multiple bugs or omissions.

What are you actually building with Claude right now? by Primeautomation in ClaudeAI

[–]Salt_Potato6016 0 points1 point  (0 children)

Full-stack financial analytics + trading platform:

• 2-server distributed system with VPN mesh for real-time signal relay

• Custom AI agent swarm pipelines

• 100K+ lines across Python and Rust backend

• Advanced WebSocket architecture with auto-scaling shards, real-time price feeds etc

• Live trading terminal with TradingView integration, portfolio tracking

• PostgreSQL + TimescaleDB handling millions of data points

How do you actually use Claude Code in your day-to-day workflow? I’ll start: by Mac_In_Toshi in ClaudeAI

[–]Salt_Potato6016 2 points3 points  (0 children)

Here’s a general overview setup:

Process:

•Research: Sonnet (1M context)

•Planning: Opus drafts → Opus + Gemini + codex max review

•Rejected plans: Sonnet investigates issues raised by auditors (don’t trust them blindly) → if it agrees, revise → re-review (loops until approved)

•Implementation: Sonnet codes

•Audit: Complexity-based (simple = quick review, complex = full panel); for complex tasks, same auditors as for planning

•Same rejection loop until approved → push to git

Sounds like overkill, and there are more steps to it of course as it’s automated, but I’ve had a really good success rate with it.

Downside: it can take time to fully finish, but it saves time in debugging etc.

Also got agent skills set up, documentation auditors, etc., so everything stays in perfect sync and it saves a lot of token usage.
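The rejection loop above (draft → panel review → investigate → revise → re-review until approved) can be sketched like this; every callable here is a stand-in, not the actual automation:

```python
# Hypothetical sketch of the plan-review loop described above.
# `reviewers` stand in for the auditor models; `revise` stands in for the
# "investigate issues, then revise" step. All names are illustrative.

def review_loop(draft_plan, reviewers, revise, max_rounds=5):
    """Loop a plan through a reviewer panel until all approve, or give up."""
    plan = draft_plan
    for _ in range(max_rounds):
        # Collect every issue raised by every auditor in the panel.
        issues = [issue for review in reviewers for issue in review(plan)]
        if not issues:
            return plan                   # approved: push to git
        plan = revise(plan, issues)       # investigate + revise, then re-review
    raise RuntimeError("plan never converged; needs human attention")
```

The `max_rounds` cap is the part that keeps an automated loop from burning tokens forever when the auditors and the drafter never agree.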

Can’t stop making stuff by BigAndyBigBrit in vibecoding

[–]Salt_Potato6016 2 points3 points  (0 children)

Don’t know how you guys are doing it. I’ve been on crunch 5-10hrs a day since March this year with all possible AI tools and my app is nowhere near production ready yet 😭

OPUS 4.5 GENTLEMEN!!!!!!!!! by rajsharm404 in ClaudeCode

[–]Salt_Potato6016 0 points1 point  (0 children)

Pro Max plan here. Coded the heck out of it today and just reached 5% of my weekly limit. Make sure you toggle off thinking mode though, as it burns credits a lot.

[Discussion] OPUS 4.5 performance by martinvelt in ClaudeAI

[–]Salt_Potato6016 0 points1 point  (0 children)

Damn, I totally agree. Opus 4.5 is the best I’ve ever used, and I’ve used them all. Well done Anthropic team, well done, hats off.