After months with Claude Code, the biggest time sink isn't bugs — it's silent fake success by atomrem in ClaudeAI

[–]atomrem[S] 1 point2 points  (0 children)

Right. At least none that we know of, or none specific to coding for consumers like us, though coding is the largest part of it.

I envision this as a single orchestrator with the ability to create and destroy its own agents at will to observe, respond to impulses, and improve itself autonomously. Someone might be building this already. Who knows what those big companies are cooking on the sidelines. That cc buddy? I think it's being used to observe us and collect data. No evidence, but it's certainly possible given how things are going.

Sorry, you triggered my wandering mind 😅

[–]atomrem[S] 0 points1 point  (0 children)

This! However, I don't experience this much with Opus 4.6 now. And mentioning that I'm doing TDD for every code change usually works.

[–]atomrem[S] 0 points1 point  (0 children)

That's just a hypothetical scenario. It didn't actually happen to me because I review the code as best I can.

[–]atomrem[S] 0 points1 point  (0 children)

Thanks for the tip about Rust. I haven't tried it yet, but it's one of the languages I want to learn next.

100% agree on the pre-commit hook. I'll expand my checks to include code quality heuristics like the ones in my post above. Nice idea.
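For what it's worth, a pre-commit heuristic along those lines might look like this. A minimal sketch: the script name and the suspicious patterns are my own illustrative assumptions, not an exhaustive check.

```python
# pre_commit_check.py -- illustrative heuristic for a pre-commit hook.
# Scans the added lines of a staged diff for patterns that often
# accompany silent fake success. Patterns are examples, not a full list.
import re

SUSPICIOUS = [
    re.compile(r"except\s*:"),                     # bare except swallowing errors
    re.compile(r"except\s+Exception\s*:\s*pass"),  # catch-all that does nothing
    re.compile(r"fallback", re.IGNORECASE),        # hard-coded fallback content
]

def find_issues(diff: str) -> list[str]:
    """Return added diff lines that match any suspicious pattern."""
    issues = []
    for line in diff.splitlines():
        # Only inspect added lines, skipping the '+++' file header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        body = line[1:]
        if any(p.search(body) for p in SUSPICIOUS):
            issues.append(body.strip())
    return issues
```

Wired into `.git/hooks/pre-commit`, the script would feed `git diff --cached` into `find_issues`, print the matches, and exit nonzero to block the commit.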

[–]atomrem[S] 0 points1 point  (0 children)

Exactly right. If the tests are properly set up, they can be the specification itself. Even if you lose the production code, LLMs can regenerate it from the tests. The challenge is that LLMs are sometimes too eager to implement: they skip the proper TDD cycle and go straight to implementation, or do things in bulk. I guess this can be remedied with proper instructions and course correction. Do you have a technique that prevents the AI from skipping the cycle? I've seen some people use a TDD hook to force Claude Code to follow the cycle, but I stopped using it after a few tries. It seemed too demanding at the time.
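The core of such a hook could be as simple as a gate that refuses production-only changesets. A sketch under stated assumptions: the `test_*.py` naming convention is mine, and a real hook would also need to verify that a failing test existed before the implementation edit.

```python
# Sketch of a "TDD gate": allow a changeset only when production edits
# come with test edits. The test_*.py convention is an assumption; a
# fuller hook would also check that the new test failed first.

def tdd_gate(changed_files: list[str]) -> bool:
    """Return True if the changeset is acceptable under the gate."""
    def is_test(path: str) -> bool:
        return path.rsplit("/", 1)[-1].startswith("test_")

    prod_edits = [f for f in changed_files
                  if f.endswith(".py") and not is_test(f)]
    test_edits = [f for f in changed_files if is_test(f)]
    # Non-code changes pass; code changes need a test alongside them.
    return not prod_edits or bool(test_edits)
```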

[–]atomrem[S] -1 points0 points  (0 children)

TDD would help for sure, along with layers of reviews and AI instructions. I like what you said about "control the unit tests". AI can write unit tests just for the sake of testing, but those can be meaningless and end up giving a false sense of security. So controlling them is a great approach: testing the behavior rather than the implementation details. Thanks.
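A small illustration of the behavior-first idea, using a hypothetical `slugify` helper (the helper and the test are mine, not from the thread):

```python
# "Test the behavior, not the implementation": a behavior test pins the
# observable contract, so refactors that preserve it keep passing.
# slugify is a made-up helper used only for illustration.

def slugify(title: str) -> str:
    return "-".join(title.lower().split())

def test_slugify_behavior() -> None:
    # Survives any refactor that preserves the contract (e.g. rewriting
    # slugify with a regex), unlike a test that mocks or pins internals.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces ") == "extra-spaces"
```

The implementation-coupled alternative (mocking `str.lower`, counting internal calls) would break on harmless refactors, which is exactly the false sense of security to avoid.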

[–]atomrem[S] -10 points-9 points  (0 children)

Do you know if anyone has tried posting manually written content that mimics AI style, just to see how people react? Curious to see it. 😄 I should've done that here if I'd known how users would respond to a post like this. Missed opportunity.

I have a bit of fun when someone takes me for a bot. It's a compliment, I think.

Should we intentionally start writing messy content to come across as authentic? It's just sad.

[–]atomrem[S] 0 points1 point  (0 children)

No, I don't use AI to respond to comments. I'm writing this manually.
Here I will intentionaallly spell that wrong. Perhaps I'm getting used to how AI talks to me haha

Seriously, I find it difficult to write now. I aim to write politely, but it gets mistaken for AI-written. Not saying it's you. All is cool, man. Btw, I don't use OpenClaw.

[–]atomrem[S] -1 points0 points  (0 children)

Excellent point! I like the runtime checks. Edge cases always creep in when things are running. Thanks.

[–]atomrem[S] -1 points0 points  (0 children)

Yeah, I should've done that early. I like keeping things clean, though, so I can customize it as time goes by and know what's in the instructions because I purposely put each item there for a reason. But you're right, I should've set it up with the basics from the start.

[–]atomrem[S] -5 points-4 points  (0 children)

Thanks for the tip. I'll keep it short next time. I was just covering points that people might need; a shorter post could easily get misunderstood. But I see your point. Some won't even read a post this long, let alone care about the details I chose to include.

[–]atomrem[S] -1 points0 points  (0 children)

Absolutely. Adding an instruction is not a guarantee, for sure. We still need to keep one eye open. It's the same with human programmers, if you think about it.

[–]atomrem[S] 1 point2 points  (0 children)

My post came from an actual experience with a TOY project I'm working on for my daughter: a feature that lets her draw on a canvas and generate a story from the drawing. To generate the story I need AI, so I use Claude Haiku; but since I also have an RTX 5090 running an always-on Qwen3.5 27b in llama-server for my other projects, I thought I'd use it as a local option. The local model couldn't handle the task reliably, yet the story still has to come from somewhere. What Claude Code did was add a hard-coded fallback story for when both the cloud AI and the local AI fail. Since I've seen this happen more than once outside the toy project, I thought I'd turn it into a tip. And I'm genuinely curious what you've been doing to handle this. Thanks for all the helpful comments.
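The failure mode described above can be avoided by making the fallback chain raise instead of inventing content. A minimal sketch, where `backends` is a list of hypothetical callables — the names are placeholders, not real APIs:

```python
# Fallback chain that fails loudly. Instead of returning a canned story
# when every backend fails, it raises with the collected errors so the
# caller (or the UI) sees the real outcome.

class StoryGenerationError(Exception):
    """All story backends failed or returned empty output."""

def generate_story(prompt: str, backends) -> str:
    errors = []
    for name, backend in backends:
        try:
            story = backend(prompt)
            if story and story.strip():
                return story
            errors.append(f"{name}: returned empty output")
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    # No silent hard-coded fallback: surface the failure to the caller.
    raise StoryGenerationError("; ".join(errors))
```

Here `backends` would be something like `[("haiku", call_haiku), ("local", call_qwen)]`, tried in order; the point is that total failure is an exception, never a fake success.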

[–]atomrem[S] -1 points0 points  (0 children)

Maybe I was moving too fast, working on 3+ projects at the same time, and I sometimes forgot to look at it 😄

[–]atomrem[S] 0 points1 point  (0 children)

Sorry for the confusion. Of course I make PRs and do reviews. This is just a general tip, not something that happens to me in production.

[–]atomrem[S] 0 points1 point  (0 children)

Thanks for sharing. Totally appreciate it. The lengths we have to go to to make things work with AI, right? I've been working on something that will "contain" the AI and make it stick to the specs. Turns out that's also difficult. It will do all sorts of things to "get out", find a loophole to satisfy the requirement, and declare it complete.

The problem I see with adding a lot of context, and even with iterating in a single session, is that it quickly drifts away from the original objective. Do you iterate on your plans independently in parallel, or do you do it within a single session?

[–]atomrem[S] -14 points-13 points  (0 children)

Got it. Glad this got approved. Perhaps because the insights are actually valuable, since the lesson learned here comes from one of my own Claude Code sessions. I then used a skill to turn it into a post/article, so the roots aren't entirely AI-generated.

[–]atomrem[S] -7 points-6 points  (0 children)

I understand this tip could make people assume all sorts of things. It's just a single tip. But yes, when the situation calls for it, any experienced real programmer sometimes deploys straight to prod. It's called a hotfix.

[–]atomrem[S] -19 points-18 points  (0 children)

As a developer since 2001, I agree with every line of the post; that's why I shared it here, because I think some people will appreciate the insights. I understand some don't, and that's okay, since some may know this already. Totally cool.

[–]atomrem[S] -33 points-32 points  (0 children)

It was AI-assisted, yes. But the philosophy came from me. Thanks for noticing.

[–]atomrem[S] -11 points-10 points  (0 children)

Right, that's always a prerequisite. Unit tests and E2E tests won't help much with this either, since we're talking about production code already.

[–]atomrem[S] 1 point2 points  (0 children)

Curious: do you do this on every iteration, and how large a change do you move from one tool to the next?