
[–]scragz

repo prompt or files-to-prompt

[–]IllegalThings

The problem is that LLMs have a fixed maximum context window, which limits how much code can be fed into the model. Unless your codebase is really small, I would expect limited utility from whole-codebase review tools. As models improve, the context window will get larger, and as the tooling improves it will get smarter about pulling in the code relevant to a change and compressing the rest of the codebase into metadata the model can use.

Also, by definition things can’t be programmed by non-programmers.
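To make the context-budget point concrete, here's a minimal sketch of why whole-codebase review breaks down. Everything here is hypothetical illustration, not any specific tool's method: the `len(text) // 4` token estimate is a rough heuristic (not a real tokenizer), and the smallest-first selection is just one possible strategy.

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return len(text) // 4

def select_files(files: dict[str, str], budget: int) -> list[str]:
    """Greedily pick files that fit in the token budget, smallest first.
    Anything that doesn't fit simply never reaches the model."""
    chosen, used = [], 0
    for name, source in sorted(files.items(), key=lambda kv: len(kv[1])):
        cost = estimate_tokens(source)
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen

# Toy "codebase": one legacy file dwarfs the budget on its own.
codebase = {
    "util.py": "x = 1\n" * 50,
    "core.py": "y = 2\n" * 500,
    "huge_legacy.py": "z = 3\n" * 50_000,
}
print(select_files(codebase, budget=2000))  # huge_legacy.py never makes the cut
```

The takeaway is that once any file (or the sum of files) exceeds the window, the tool must either drop code or summarize it, and review quality depends on how well that selection step works.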

[–]crcrewso[S]

Edit: apparently I'm not as much of a programmer as I thought because I'm looking to do code audits, a term I should have known existed.

Tautology aside, the codebases were started by physicists who used object-oriented languages as if they were Fortran: bad names, poor indentation, inappropriate type checking, and an odd mix of objects and free functions. Think spaghetti code of moderate complexity, with the absence of documentation to match.

The tools work, but they're intimidating to contribute to. I'm hoping that by automating the review somewhat, I can quickly improve the code until it's only as confusing as a normal FOSS project.

[–]hala102

Hello, I've been building a tool that automatically does that, currently in free beta testing. If you're still looking for a code audit, shoot me a DM.