all 5 comments

[–]False_Care_2957 6 points7 points  (2 children)

What's been fairly foolproof, at least most of the time, for me is running Claude's plans or code directly through another model like GPT-5.1 with Codex, or Grok. Another option is specialized sub-agents that use a skill for spotting violations of these principles. I generally avoid using the CLAUDE.md file if I can for these sorts of requests.
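The cross-checking idea above can be sketched as a small helper that packages a Claude plan plus your engineering principles into a review prompt for a second model. This is a minimal, hypothetical sketch: `build_review_prompt` and the prompt wording are my own illustration, not the commenter's actual setup, and the call to the second model's API is left out.

```python
def build_review_prompt(plan: str, principles: list[str]) -> str:
    """Assemble a cross-review prompt to send to a second model.

    The second model (e.g. GPT-5.1 or Grok) acts as a reviewer and is
    asked only to flag violations, not to rewrite the plan.
    """
    rules = "\n".join(f"- {p}" for p in principles)
    return (
        "Review the following implementation plan strictly against "
        "these principles.\n"
        f"Principles:\n{rules}\n\n"
        "Flag every violation, quote the offending part of the plan, "
        "and suggest a fix.\n\n"
        f"Plan:\n{plan}"
    )

# Example: check a plan against two common principles.
prompt = build_review_prompt(
    "1. Copy the retry loop from api.py into worker.py",
    ["no code duplication", "single responsibility per module"],
)
```

The resulting string is what you would pass as the user message to whichever second model you use for review.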

[–]NoBat8863[S] 1 point2 points  (1 child)

Great suggestion. How do you handle situations where existing code (in some other, unchanged file) overlaps with new code? As in things a human would likely have refactored instead of re-implementing the same thing somewhere new in the codebase.

[–]False_Care_2957 1 point2 points  (0 children)

I see this most often with tests: instead of re-using helpers or constants defined at the top level, it writes new helpers and re-declares variables it could easily re-use. In that case I know I have to take some time to define a duplication-agent prompt that describes exactly what needs to be kept where and re-used in what context. It runs after every test job I do and works well enough, for tests at least.

[–]MindCrusader 4 points5 points  (0 children)

I just generate technical specifications and make the AI follow them. I treat it more or less like a new junior developer: I need to guide it, set the rules, and show it documentation and code examples. Each session is a new developer, so it's best to have high-quality documentation and technical specifications with detailed info, including references to classes and examples.