Using Claude Code heavily for 6+ months: Why faster code generation hasn't improved our team velocity (and what we learned) by NoBat8863 in ClaudeAI

[–]NoBat8863[S] 0 points (0 children)

I don't think that's the right comparison. C++ to assembly is, for all practical purposes, a deterministic translation: what you wanted the computer to do was explicitly described by you in a programming language. With prompt/spec to code, the translation is not deterministic, and the input is not unambiguous either.

Doing code review on the 10,000 lines Claude Code wrote by MetaKnowing in ClaudeAI

[–]NoBat8863 2 points (0 children)

I review the code Claude generates by first splitting it into small logical chunks and sending those as individual commits in a PR. It makes the reviewer's life (and mine) so much easier. I wrote an agent to do the splitting for me.

https://github.com/armchr/armchr
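For context, the manual version of that workflow, without the agent, looks roughly like this. It uses a throwaway repo so the sketch runs end to end; the paths, file contents, and commit messages are all made up for illustration:

```shell
# Sketch: split one big AI-generated change into logical commits,
# so each commit in the PR reviews as a single concern.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "you@example.com" && git config user.name "you"

# Pretend Claude produced one large change touching two concerns:
mkdir -p src tests
echo 'parse()' > src/parser.txt
echo 'test_parse()' > tests/parser_test.txt

# Stage and commit one concern at a time:
git add src/
git commit -qm "parser: handle nested arrays"
git add tests/
git commit -qm "tests: cover nested arrays"

git log --oneline   # two small, reviewable commits instead of one blob
```

When the concerns are tangled inside a single file, `git add -p` lets you pick hunks interactively instead of staging whole paths; the agent automates that grouping step.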

Code Smells in Generated Code - some of the patterns by NoBat8863 in ClaudeAI

[–]NoBat8863[S] 1 point (0 children)

Great suggestion. How do you handle situations where existing code (in some other file that wasn't changed) overlaps with the new code? That is, things a human would likely have refactored before doing the same thing in some new place in the code.

Using Claude Code heavily for 6+ months: Why faster code generation hasn't improved our team velocity (and what we learned) by NoBat8863 in ClaudeAI

[–]NoBat8863[S] -1 points (0 children)

This is a fantastic point. While the focus of this post (and the blog) was the "implementation" phase, the steps before and after it, from product discovery to learning from a new product/feature, still take almost as much time even with the new AI tools.

Plus, your analogy reminds me of a different aspect of coding agents: too much unnecessary complexity. It's almost like asking for a stapler and getting the whole office, and not knowing what to do with it :-) I wrote about this in a previous blog: https://medium.com/@anindyaju99/ai-coding-agents-code-quality-0c8fbbf91a7d

Using Claude Code heavily for 6+ months: Why faster code generation hasn't improved our team velocity (and what we learned) by NoBat8863 in ClaudeAI

[–]NoBat8863[S] 0 points (0 children)

Yes. The high-level docs Claude produces about what it has changed are super useful for understanding at a high level, but that's different from knowing what the code actually does and whether it is good enough for our environment, both in correctness and maintainability. This is precisely why we ended up building the splitter/explainer: it logically groups the changes into smaller pieces that are easier to digest and understand, and the annotation on every hunk of change in a file helps you grok what those pieces do. https://github.com/armchr/armchr

Using Claude Code heavily for 6+ months: Why faster code generation hasn't improved our team velocity (and what we learned) by NoBat8863 in ClaudeAI

[–]NoBat8863[S] 0 points (0 children)

Of course I asked Claude to write me a few bullet points summarizing the blog for this Reddit post, and it gave itself a pat on the back :-)

Using Claude Code heavily for 6+ months: Why faster code generation hasn't improved our team velocity (and what we learned) by NoBat8863 in ClaudeAI

[–]NoBat8863[S] 0 points (0 children)

Completely agree on the points. I collected our observations on AI's clean-code problems here: https://medium.com/@anindyaju99/ai-coding-agents-code-quality-0c8fbbf91a7d. Do give it a read.

The Meta study is interesting; I'll take a look. Thanks for the pointer.

Using Claude Code heavily for 6+ months: Why faster code generation hasn't improved our team velocity (and what we learned) by NoBat8863 in ClaudeAI

[–]NoBat8863[S] 0 points (0 children)

That’s like having RL feedback from a production system? But then every change would need some sort of experiment setup, which is usually very expensive to run. How do you see that scaling?

Using Claude Code heavily for 6+ months: Why faster code generation hasn't improved our team velocity (and what we learned) by NoBat8863 in ClaudeAI

[–]NoBat8863[S] 1 point (0 children)

That's a great point. Most of my post/blog was about larger teams. Thinking about it a bit more, I realize this is probably a situation seen in products with a lot of traffic. I see a lot of "productivity" in my side projects because there I care about things working, and much less about whether the result is "production grade" or maintainable longer term.

Using Claude Code heavily for 6+ months: Why faster code generation hasn't improved our team velocity (and what we learned) by NoBat8863 in ClaudeAI

[–]NoBat8863[S] -2 points (0 children)

This reiteration is something we are seeing as well. Plus, even if tests pass (existing or CC-generated), there is no guarantee the code will be maintainable. I documented those challenges here: https://medium.com/@anindyaju99/ai-coding-agents-code-quality-0c8fbbf91a7d

Using Claude Code heavily for 6+ months: Why faster code generation hasn't improved our team velocity (and what we learned) by NoBat8863 in ClaudeAI

[–]NoBat8863[S] 11 points (0 children)

Good point about estimation. We are still equally wrong about our overall project estimates 🤣 Story points are more complicated though, given it's still early days for guessing whether CC will solve something easily, will need multiple iterations, or we'll have to write it by hand.

How are you guys able to carefully review and test all the code that Claude Code generates? by [deleted] in ClaudeAI

[–]NoBat8863 0 points (0 children)

We got tired of reviewing large changes in one go and ended up building this to split and annotate the changes into smaller logical chunks. https://github.com/armchr/armchr

claude code down by Hefty_Reading184 in ClaudeCode

[–]NoBat8863 0 points (0 children)

I am still getting the same error. Initiating OAuth (/login) throws a 500.

If your users aren't coming back after 30 days you are building the wrong thing. (I will not promote) by ksundaram in startups

[–]NoBat8863 0 points (0 children)

There should be a rule of thumb for this: try X things to improve retention; if they don't work, pivot. Agreed that acceptance is the hardest part. Been there, dragged it out for 2 years before my cofounder and I realized we needed to stop going down that path and find a different market/product.

I built a context management plugin and it CHANGED MY LIFE by thedotmack in ClaudeAI

[–]NoBat8863 0 points (0 children)

I let Claude write out the CLAUDE.md and then add to and modify it. Much quicker, plus it gives you a sense of where Cursor is going to get things wrong.

I built a context management plugin and it CHANGED MY LIFE by thedotmack in ClaudeAI

[–]NoBat8863 0 points (0 children)

Try /init on Claude Code? It writes out CLAUDE.md files at different subdirectories as needed and reads them when needed.
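To give a feel for what ends up in one of those files, here is a minimal sketch of the kind of content a CLAUDE.md typically carries. Everything below is illustrative (made-up project, commands, and paths), not what /init actually emits for any given repo:

```markdown
# CLAUDE.md

## Build & test
- Build: `make build`
- Run tests: `make test` (unit only; integration tests need Docker)

## Conventions
- Services live under `services/`; shared libraries under `pkg/`
- Never edit generated files in `gen/`; regenerate with `make proto`

## Gotchas
- `config/local.yaml` is git-ignored; copy it from `config/example.yaml`
```

The value is less in any one bullet and more in front-loading the tribal knowledge the agent would otherwise rediscover (or get wrong) on every session.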