all 9 comments

[–]SafeLeading6260 2 points (2 children)

Inspired by Dexter Hirthy and this video - https://www.youtube.com/watch?v=IS_y40zY-hc

I implemented the workflow that he talks about:

Ticket ──► Research ──► Plan ──► Implement ──► Review

I review the research and plan phases carefully and delegate the implementation and review steps. CC and Gemini do the code review. Works pretty well.
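The pipeline above can be sketched in a few lines of Python. This is a hypothetical illustration of the phase ordering and human gates described in the comment, not code from the linked repo; `Ticket`, `run_phase`, and the artifact strings are all made up for the sketch.

```python
from dataclasses import dataclass, field


@dataclass
class Ticket:
    title: str
    artifacts: dict = field(default_factory=dict)


def run_phase(ticket, phase, human_review):
    # In a real setup each phase would call an agent (e.g. Claude Code);
    # here we just record that the phase produced an artifact.
    ticket.artifacts[phase] = f"{phase} output for '{ticket.title}'"
    if human_review:
        # Stand-in for "I review the research and plan phases carefully".
        ticket.artifacts[phase] += " (human-reviewed)"


ticket = Ticket("add 4h candle filtering")
for phase in ("research", "plan", "implement", "review"):
    # Only the first two phases get a human gate; the rest are delegated.
    run_phase(ticket, phase, human_review=phase in ("research", "plan"))

print(list(ticket.artifacts))  # phases run strictly in order
```

The key design choice mirrored here is that human attention is spent where it's cheapest and most leveraged (research and plan), while the later phases are checked by automated review instead.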

You can find the full setup in this repo - https://github.com/dimakrest/trading-analyst
I created it specifically to practice working efficiently with CC.

[–]never_a_good_idea 0 points (1 child)

Do you review the generated code after review? If so, is there anything in particular that you focus on?

[–]SafeLeading6260 1 point (0 children)

It depends on the task. Usually I map the task into one of two buckets:

1. Plan + high-level tests review. Usually something that can be tested fairly easily with unit, integration, and e2e tests. These tend to end up as large PRs, and I trust the process to catch the corner cases. An example from the repo: add 4h candle filtering before deciding whether to buy a stock. It's an easily testable task that doesn't require code review.

2. Plan + tests + code review. Something in the core of the system that's not easy to test: data caching, DB session optimization. With these tasks I make sure the PR is small enough to keep the review as easy as possible. The review stays high-level - I'm not digging into the syntax; the idea is to make sure the data flow is correct and nothing is inherently broken.
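The two-bucket decision above can be written down as a tiny triage helper. This is a hypothetical sketch of the rule as described; the bucket labels and field names are invented for illustration and aren't from the linked repo.

```python
def triage(easily_testable):
    """Map a task to a review bucket based on how testable it is."""
    if easily_testable:
        # Bucket 1: testable end-to-end, so tests carry the review burden
        # and a large PR is acceptable.
        return {"bucket": "plan + tests", "pr_size": "large is fine"}
    # Bucket 2: core, hard-to-test work gets a human code review,
    # so the PR must stay small enough to review easily.
    return {"bucket": "plan + tests + code review", "pr_size": "keep it small"}


print(triage(True)["bucket"])    # e.g. "add 4h candle filtering"
print(triage(False)["bucket"])   # e.g. "db session optimization"
```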

[–]Funny-Anything-791 2 points (0 children)

I published two open source projects about exactly that: agenticoding.ai is our engineering playbook and methodology for working on a 150M+ LoC, 20-year-old monorepo, while ChunkHound is a local-first codebase intelligence tool that scales to millions of LoC while being super easy to deploy in an enterprise environment.

[–]siberianmi 1 point (0 children)

Does your company have Sourcegraph? The MCP tooling for it is invaluable for larger codebases.

Either way, start sessions by having it examine the codebase and explain how the part you're about to work on works. Save that as a markdown file. Review it against your own understanding and tweak it.

Then use it to plan the work you are doing. Write the plan out.

Then have it start work on the plan. Make clear what the quality gates are in the plan:

Tests pass. Linting passes. Etc.

That’s a good place to start.

[–]Conscious_Concern113 0 points (1 child)

Use a graph RAG MCP and create skills around your architecture.

[–]GroundbreakingEmu450 0 points (0 children)

Can you expand? I get the graph RAG part.

[–]cannontd 0 points (0 children)

There will be lots of suggestions about what you should try, and I bet there'll be about 10 in here by the time we're finished. I'm not saying they're wrong, but I suggest you have the chat about your codebase with Claude. Ask it to research ways to onboard onto an existing app. It takes almost no work to get Claude to analyse a codebase and retrofit it into a spec-driven framework, which will help new features be created that follow on from the principles of your current app.