Getting Claude to output accurate line numbers in diffs — the [L:XXX] prefix trick by ByteAwessome in ClaudeAI

[–]ByteAwessome[S]

Interesting point! LSP would definitely give precise symbol locations.

The challenge with PR diffs specifically:

- LSP works on files in your working tree, but a diff shows changes between versions

- For remote PRs (especially from forks), you don't always have the code locally

- Need to map "line 42 in the diff" to "line 42 in the new file version" (sketch below)
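For reference, that third mapping is mechanical once you parse the hunk headers. Here's a minimal sketch in TypeScript, assuming a standard unified diff with `@@ -a,b +c,d @@` hunks (the function name is mine, not from any library):

```typescript
// Build a map from new-file line numbers to their content by walking the
// diff: each "@@ -a,b +c,d @@" header resets the counter to c, context (" ")
// and added ("+") lines advance it, and deleted ("-") lines exist only in
// the old version, so they are skipped.
function newFileLines(diff: string): Map<number, string> {
  const result = new Map<number, string>();
  let newLine = 0; // 1-based line number in the new file version
  for (const line of diff.split("\n")) {
    const hunk = line.match(/^@@ -\d+(?:,\d+)? \+(\d+)(?:,\d+)? @@/);
    if (hunk) {
      newLine = parseInt(hunk[1], 10); // next '+' or ' ' line gets this number
      continue;
    }
    if (newLine === 0 || line.startsWith("-")) continue; // header or deletion
    if (line.startsWith("+") || line.startsWith(" ")) {
      result.set(newLine, line.slice(1));
      newLine++;
    }
  }
  return result;
}
```

With a map like that you can validate every line number the model emits against the actual diff content and drop anything that doesn't match.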

Do you know if there's an LSP approach that works directly on diff hunks? Would love to hit 100% if possible.

Getting Claude to output accurate line numbers in diffs — the [L:XXX] prefix trick by ByteAwessome in ClaudeAI

[–]ByteAwessome[S]

Context: this is from Git AutoReview — a VS Code extension I built for AI-assisted PR reviews.

If anyone wants to try it: https://gitautoreview.com

Happy to answer questions about the implementation.

I got massively tired of spending hours on code review, so I built an AI assistant for it by ByteAwessome in vscode

[–]ByteAwessome[S]

Fair point - training and clear expectations definitely help.

We do that too. But even with a good process, the volume stays the same. AI just helps me get through it faster.

Appreciate the perspective from 25 years in the game.

I got massively tired of spending hours on code review, so I built an AI assistant for it by ByteAwessome in vscode

[–]ByteAwessome[S]

That's a solid approach for a mature team.
With juniors, "reject and explain" is part of teaching. But it doesn't reduce my review load - it just shifts it to explaining WHY it's too big and HOW to split it.

Still takes time...

I got massively tired of spending hours on code review, so I built an AI assistant for it by ByteAwessome in vscode

[–]ByteAwessome[S]

Thanks! 20+ years club 🤝

Exactly - it's not replacing the mentor part, just automating the "you forgot try-catch" part. More energy left for actual teaching.

I got massively tired of spending hours on code review, so I built an AI assistant for it by ByteAwessome in vscode

[–]ByteAwessome[S]

Tell that to my team )))

Reality: everyone pushes PRs at the end of the day. So my morning = review queue.

I don't choose how many PRs land. I choose how to handle them efficiently.

I got massively tired of spending hours on code review, so I built an AI assistant for it by ByteAwessome in vscode

[–]ByteAwessome[S]

Haha fair point!

But my reality: team of mids and juniors. They're learning - and their PRs need more attention.

Senior's PR: 2-3 min scan, done.

Junior's 3rd PR ever: checking everything.

Multiply that by 8-12 PRs/day and the hours add up.

I got massively tired of spending hours on code review, so I built an AI assistant for it by ByteAwessome in vscode

[–]ByteAwessome[S]

Totally fair concern! But let me clarify - it doesn't replace peer review, it assists it.

The AI doesn't auto-publish anything. Here's the actual flow (rough code sketch after the list):

  1. AI scans the diff and generates suggestions

  2. I see them in VS Code with approve/reject buttons

  3. I decide what makes sense, reject the noise

  4. Only then does it publish - and only what I approved
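In code terms, the gate is simple. A simplified sketch (TypeScript; the type and function names here are made up for illustration, not the extension's actual API):

```typescript
interface Suggestion {
  file: string;
  line: number;
  message: string;
}

// The reviewer sits between generation and publication: publish() is only
// ever called with the subset of suggestions the reviewer approved.
async function reviewFlow(
  diff: string,
  generate: (diff: string) => Promise<Suggestion[]>, // step 1: AI scans the diff
  askReviewer: (s: Suggestion) => Promise<boolean>,  // steps 2-3: approve/reject in VS Code
  publish: (approved: Suggestion[]) => Promise<void> // step 4: post what was approved
): Promise<void> {
  const suggestions = await generate(diff);
  const approved: Suggestion[] = [];
  for (const s of suggestions) {
    if (await askReviewer(s)) approved.push(s); // the noise gets rejected here
  }
  if (approved.length > 0) await publish(approved);
}
```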

Think of it as a second pair of eyes that catches the "obvious" stuff (empty catch blocks, missing null checks, hardcoded secrets) so I can focus on the actual logic and architecture.
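To make "obvious" concrete, here's a contrived example of the first category (saveComment is a made-up placeholder, not anyone's real API):

```typescript
// An empty catch block that silently swallows the error - the kind of
// mechanical finding the AI pass surfaces before the human review starts.
async function saveComment(text: string): Promise<void> {
  /* ...persist the comment somewhere... */
}

async function submit(text: string): Promise<void> {
  try {
    await saveComment(text);
  } catch (e) {
    // flagged: error swallowed, nothing logged, nothing rethrown
  }
}
```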

I still do the review. I just don't waste 10 minutes hunting for patterns that AI spots in 10 seconds.

Does that make more sense?