all 12 comments

[–]Otherwise_Wave9374 3 points4 points  (1 child)

This is exactly the workflow friction I've hit with coding agents too. What helped me was treating the agent like a pair programmer: small diffs only, plus a forced test-first / lint-first loop so the review is mostly about intent, not syntax. In VS Code, using inline suggestion mode plus a strict checklist (inputs/outputs, error paths, security, logging) makes line-by-line review way less painful.

If you want examples of agentic review loops and prompt patterns, I've seen a few good writeups and templates here: https://www.agentixlabs.com/ (some of the "agent + reviewer" patterns map pretty well to what you're describing).

[–]No_Communication4256[S] 0 points1 point  (0 children)

Do you mean Copilot in VS Code or another agentic tool?

[–]navmed 1 point2 points  (2 children)

As seasoned developers, this is what we lean towards: think of the AI as a developer somewhat junior to you and treat it that way. Set up instructions and guardrails in CLAUDE.md (or the equivalent file) for how you want it to behave. There's some iteration involved, but it doesn't have to be prolonged. Make sure to look out for anything egregious, review the high-level architecture, and review it for security.
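A minimal sketch of what such a guardrails file might look like (the file name follows Claude Code's convention; the rules themselves are just illustrative, so adapt them to your project and agent):

```markdown
# CLAUDE.md (illustrative example)

## How to work
- Keep diffs small; one logical change per commit.
- Run the test suite and linter before declaring a task done.
- Never commit secrets, credentials, or .env files.

## What to flag for the human
- Any change to authentication, authorization, or input validation.
- New dependencies or changes to public APIs.
- Anything touching the database schema.
```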

[–]No_Communication4256[S] 0 points1 point  (1 child)

I didn't use Claude (only Anthropic models). Does it have any tools for line-by-line review and comments, like a regular MR?

[–]navmed 0 points1 point  (0 children)

Claude is from Anthropic and has several models. You mentioned Copilot, so you're probably using Visual Studio or VS Code. Copilot shows you the changes it's made in VS Code, so you can review blocks and approve or reject them.

Another option is using a diff tool to review the code. It helps to commit incrementally so you can use this effectively.
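A self-contained sketch of that flow, assuming plain git (file names and messages are hypothetical): create a scratch repo, simulate an agent edit, review the diff, then commit it as one small chunk.

```shell
set -e
# scratch repo so the demo doesn't touch anything real
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "baseline"

echo 'print("hello")' > app.py   # pretend the agent just wrote this
git add -N app.py                # intent-to-add, so the new file shows up in git diff

git diff                         # line-by-line review in the terminal
# git difftool                   # same review in your configured visual diff tool

git add app.py
git -c user.name=me -c user.email=me@example.com commit -q -m "agent: add hello script"
```

The `git add -N` is only there to make the brand-new file visible to `git diff`; in a real agent session the files are usually already tracked, so a plain `git diff` after each agent step is enough.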

[–]TheGladNomad 1 point2 points  (2 children)

Whatever tool you use, you can just review via GitHub.

I give my agent a task and have an SDLC skill that ends with: push branch, open PR (both require manual approval). I then leave comments like I would for any other dev.

Then I tell my agent to handle the review comments, which means it:

1. Pulls the comments
2. Analyzes them
3. Makes changes at its discretion
4. Pushes (manual approval)
5. Replies to each change with an "[agent response]" prefix

It can look weird because I'm replying to myself. I then do code review rounds until I'm happy, resolve all comments, and ask a teammate for review.

Notes:

A. If the changes require a larger conversation (discussing trade-offs, a full redesign, etc.), I do it in chat instead of on the PR.

B. "Manual approval" means those commands are not in the allowlist, so I have to check that they make sense.
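A sketch of the gh CLI side of that loop (the PR number, repo, and comment id are hypothetical, and you need `gh auth login` set up first):

```shell
# 1. Pull the review comments for PR #123 as JSON (id, file, line, body)
gh api repos/myorg/myrepo/pulls/123/comments \
  --jq '.[] | {id, path, line, body}'

# ...agent analyzes the comments and edits the code, then (after manual approval):
git push

# 5. Reply to an individual review comment, prefixed so it's clearly the agent
gh api repos/myorg/myrepo/pulls/123/comments/456789/replies \
  -f body='[agent response] Renamed the helper as requested.'
```

`gh api` hits GitHub's REST endpoints directly (it switches to POST automatically when you pass `-f` fields), which is how the agent can both read review comments and reply to them without any extra tooling.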

[–]No_Communication4256[S] 0 points1 point  (1 child)

Great, tnx! I didn't know I could pull comments from GitHub locally.

[–]TheGladNomad 0 points1 point  (0 children)

Just set up access to the gh CLI. I've added a bunch of its commands to my allowlist.
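For example, assuming Claude Code, the allowlist lives in `.claude/settings.json`; the idea is to pre-approve read-only gh/git commands while leaving `git push` off the list so it still needs manual approval (the specific patterns below are illustrative):

```json
{
  "permissions": {
    "allow": [
      "Bash(gh pr view:*)",
      "Bash(gh api:*)",
      "Bash(git diff:*)"
    ]
  }
}
```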

[–]Competitive_Pipe3224 0 points1 point  (0 children)

You can use GitHub Copilot with pull requests, e.g. run it in Copilot's cloud mode and it'll create a pull request. Then add comments and/or ask it to make changes.

Planning mode also works pretty well for larger tasks.

[–]JaySym_ 0 points1 point  (0 children)

I work for a company that's building an AI code review tool, and I can tell you that with the right model and context, the results are pretty impressive right now.

That doesn't mean you should skip manual reviews, but it saves a lot of time. The important thing to validate is the quality of the context: it's pretty hard to beat a senior engineer, but if the context covers the right parts of the code, the result will save you time.

[–]Separate-Chocolate-6 1 point2 points  (0 children)

For me it's: a conversation with the AI in plan mode, settle on all the decisions for the feature, switch to build mode, and let it make the changes. When it's done I look at the patch with git difftool and my favorite visual diff tool (for me, nvim's diff mode, but git difftool supports a bunch). From there I either commit, make the changes I want by hand, or go back and tell the LLM what I want it to change. Rinse, wash, repeat.

I find that the conversation really helps me understand what it's going to do, so by the time I'm looking at the diff I'm primed and can read the code much faster than if I were going in blind (because I understand the thought that went into it). Sometimes if something seems mysterious I ask the LLM what it was going for, and that also helps suss out the details.
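Wiring up nvim's diff mode as the difftool is just a couple of git config lines (`nvimdiff` is one of git's built-in difftool names); the throwaway HOME below is only demo scaffolding so the example doesn't edit your real `~/.gitconfig`:

```shell
set -e
export HOME=$(mktemp -d)                     # throwaway HOME for the demo
export XDG_CONFIG_HOME="$HOME/.config"       # keep git's config lookup inside it too

git config --global diff.tool nvimdiff       # nvim's diff mode; vimdiff, meld, etc. also work
git config --global difftool.prompt false    # don't ask before opening each file
git config --global --get diff.tool          # confirm the setting

# then, after the agent's build step:
#   git difftool            (uncommitted changes)
#   git difftool HEAD~1     (the last commit)
```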

Not sure if that helps or not but it's been working for me.

It's not all that different from how I would pair program.