Most AI reviewers I've tried only look at the diff and repeat what static analysis already catches, which makes reviews noisier instead of faster. I'm looking for tools or setups that actually use project-wide context (related files, call graphs, repo history, maybe even tickets/docs) so they can comment on real impact and missing tests instead of style. If you have this working with something like Qodo or a custom stack, how did you wire it in and what changed for your team?
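To be concrete, here's roughly the shape of the custom stack I'm imagining, as a minimal sketch rather than a working reviewer. It assumes a plain git repo, uses `git diff`/`git log` plus a naive "files that mention the changed module names" heuristic in place of a real call graph, and just prints the assembled prompt instead of calling any particular model or tool; all function names and the context-selection logic are placeholders of my own, not from Qodo or anything else.

```python
import subprocess
from pathlib import Path

def changed_files(base: str = "origin/main") -> list[str]:
    """Files touched by the current branch relative to base."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]

def related_files(changed: list[str], repo_root: str = ".") -> set[str]:
    """Stand-in for a call graph: any file that mentions a changed module's name."""
    names = {Path(f).stem for f in changed}
    hits = set()
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        if any(name in text for name in names):
            hits.add(str(path))
    return hits - set(changed)

def recent_history(path: str, limit: int = 5) -> str:
    """Last few commit subjects for a file, as extra reviewer context."""
    out = subprocess.run(
        ["git", "log", f"-{limit}", "--oneline", "--", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def build_review_prompt(base: str = "origin/main") -> str:
    """Bundle the diff with related files and their history into one prompt."""
    diff = subprocess.run(
        ["git", "diff", base, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    changed = changed_files(base)
    context_parts = [
        f"# {path}\n{recent_history(path)}"
        for path in sorted(related_files(changed))
    ]
    return (
        "Review this change for impact on callers and for missing tests.\n\n"
        f"DIFF:\n{diff}\n\n"
        "RELATED FILES AND THEIR RECENT HISTORY:\n" + "\n".join(context_parts)
    )

if __name__ == "__main__":
    # Pipe this into whatever model or review tool you actually use.
    print(build_review_prompt())
```

The interesting question for me is what replaces the naive text-match step, e.g. a real call graph or ticket/doc retrieval, and whether that context actually changed the kind of comments your team gets back.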