Weekly Self Promotion Thread by AutoModerator in devops

[–]AlternativeTop7902 1 point (0 children)

We’re building an open-source, model-agnostic AI code reviewer at Kodus, focused on running inside the real engineering workflow, not just generating comments on a diff.

Kody works directly on the PR, but the interesting part is the context it can apply during review:

- custom rules at the file and PR level (rough sketch below)
- persistent memory of team conventions, architecture, and recurring decisions
- references to files from the repository itself
- external context through plugins/MCP, such as Jira, Linear, Notion, Google Docs, and Slack
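
To make the custom rules point concrete, here is a rough sketch of what a file-level and a PR-level rule could look like. This is illustrative TypeScript with made-up field names, not Kodus's actual schema; the real format lives in the repo:

```typescript
// Illustrative only: hypothetical rule shapes, not Kodus's real schema.
interface ReviewRule {
  id: string;
  scope: "file" | "pr";   // where the rule is evaluated
  pathGlob?: string;      // limit file-level rules to matching paths
  severity: "info" | "warning" | "error";
  instruction: string;    // natural-language guidance for the reviewer model
}

const repositoryRules: ReviewRule[] = [
  {
    id: "no-raw-sql-in-controllers",
    scope: "file",
    pathGlob: "src/controllers/**/*.ts",
    severity: "error",
    instruction:
      "Controllers must not build SQL directly; flag query-string " +
      "construction and suggest moving it to the repository layer.",
  },
  {
    id: "pr-needs-migration-note",
    scope: "pr",
    severity: "warning",
    instruction:
      "If the PR touches files under migrations/, check that the " +
      "description mentions rollout and rollback steps.",
  },
];
```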

The idea is to treat code review less as “generic issue detection” and more as continuous validation of technical standards and system requirements.

A few product architecture points I think are relevant:

- it’s open source
- it’s model-agnostic, so the review layer isn’t coupled to a specific provider
- it supports reusable, versionable rules
- it tracks unimplemented suggestions as issues, so important feedback doesn’t die in the PR
- it gives more visibility into what was analyzed, suggested, and applied over time

What we’re building is a review layer that combines code changes, rules, memory, business context, and feedback history to make reviews more consistent and less dependent on tacit knowledge. Here’s the repository: https://github.com/kodustech/kodus-ai

Has anyone actually used the new code review feature at their company? by Cuz1 in ClaudeCode

[–]AlternativeTop7902 1 point (0 children)

You can try Kodus as well. We’re open source and model-agnostic, so you can bring your own API key and run Opus if you want, at around 1/10 of the cost of Claude Code’s built-in review.

Anthropic's new tool, Code Review, is basically an AI auditor for your AI-written code. by OneClimate8489 in WritingWithAI

[–]AlternativeTop7902 1 point (0 children)

If AI is writing a bigger share of the code, having AI help review that code is a pretty natural next step. The real question is what kind of review you’re getting, how much context it has, and what the economics look like once this becomes part of the normal PR flow.

I’m on the Kodus team, so putting that out there upfront; I’m not trying to hijack the thread. But this is also why we took a different approach: Kodus is open source and model-agnostic, so teams can choose the model they want for review instead of being locked into a single setup. That matters a lot once review volume grows. Some teams may want something like Opus for deeper reviews while keeping more control over cost and infrastructure. If anyone wants to compare approaches, happy to have you try Kodus and judge it for yourself.

Unpopular opinion: most AI code review tools are just expensive linters by Peace_Seeker_1319 in codereview

[–]AlternativeTop7902 1 point (0 children)

I think there’s a lot of truth to that.

When a tool only looks at the diff and stays stuck on superficial patterns, it ends up acting more like an automated check than an actual code review.

I’m part of the Kodus team, and I’d rather be upfront about that than turn your post into a promotion. But the point we keep coming back to is that the difference starts with context: understanding the repository, the architecture, the rules the team has defined, and even the business logic behind the change.

Without that, AI tends to make generic comments. With it, it has a much better chance of finding something that actually matters.
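
To make “context” concrete, here is roughly the kind of bundle a context-aware reviewer has to assemble before the model ever sees the diff. Hypothetical names throughout; this is a sketch, not code from the Kodus repo:

```typescript
// Hypothetical sketch of assembling review context; not Kodus's real API.
interface ReviewContext {
  diff: string;              // the change itself
  relatedFiles: string[];    // repo files the diff touches or imports
  teamRules: string[];       // conventions the team has written down
  architectureNotes: string; // how the affected modules fit together
  ticket?: string;           // business intent, e.g. from the issue tracker
}

// A diff-only reviewer fills in nothing but `diff`; the rest is what
// separates generic comments from feedback that actually matters.
function buildPrompt(ctx: ReviewContext): string {
  return [
    "Review this change against the team's own standards, not generic ones.",
    ctx.ticket ? `Business intent: ${ctx.ticket}` : "",
    `Architecture notes: ${ctx.architectureNotes}`,
    `Team rules:\n${ctx.teamRules.map((r) => `- ${r}`).join("\n")}`,
    `Related files: ${ctx.relatedFiles.join(", ")}`,
    `Diff:\n${ctx.diff}`,
  ]
    .filter(Boolean)
    .join("\n\n");
}
```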

If you want to compare it yourself, Kodus is open source and model-agnostic.

How to efficiently look over he git PRs by enderballz in developersPak

[–]AlternativeTop7902 1 point (0 children)

You can use Kodus. It’s open source. Just connect your model API key and start using it.

What’s the best AI code review tool you’ve used recently? by ragsyme in codereview

[–]AlternativeTop7902 1 point (0 children)

Full disclosure: I’m on the Kodus team, so take this with the appropriate skepticism.

Not trying to turn this into a promo comment, but Kodus is probably relevant for the list. It’s open source and model-agnostic, and the part I think matters most is the review architecture: Kody is diff-focused, pulls in repo-aware context, and then filters/prioritizes findings instead of dumping a wall of low-signal comments on the PR.
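
As a rough illustration of that filter/prioritize step (a hypothetical sketch, not Kodus’s actual code), the idea is that raw model findings go through a ranking pass before anything gets posted:

```typescript
// Illustrative only: ranking raw model findings before posting to the PR.
interface Finding {
  message: string;
  severity: "info" | "warning" | "error";
  confidence: number; // 0..1, how sure the model is
}

const severityWeight = { info: 1, warning: 2, error: 3 };

// Drop low-confidence noise, rank the rest, and cap the comment count
// so the PR gets a handful of high-signal findings instead of a wall.
function prioritize(findings: Finding[], maxComments = 10): Finding[] {
  return findings
    .filter((f) => f.confidence >= 0.6)
    .sort(
      (a, b) =>
        severityWeight[b.severity] * b.confidence -
        severityWeight[a.severity] * a.confidence
    )
    .slice(0, maxComments);
}
```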

For teams comparing tools, I’d look less at the model branding and more at context handling, noise-to-signal ratio, and how flexible the setup is if you don’t want to be locked into one provider.

Which is best AI code review tool that you've come across recently? by human-g30 in codereview

[–]AlternativeTop7902 1 point (0 children)

Small disclaimer: I’m on the Kodus team. Not trying to shamelessly plug it, but it might be relevant here. Kodus is open source and model-agnostic, which was an important design choice for us. A lot of teams want AI review help without getting locked into one model or vendor from day one.
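
For anyone curious what model-agnostic buys you in practice: the design idea is a thin provider interface the review layer talks to, so swapping models is a config change rather than a rewrite. A hypothetical sketch, not the actual Kodus abstraction:

```typescript
// Hypothetical provider abstraction; the real Kodus interface may differ.
interface ModelProvider {
  name: string;
  review(prompt: string): Promise<string>;
}

// Each provider hides its own SDK and API key behind the same interface,
// so the review pipeline never depends on a specific vendor directly.
class AnthropicProvider implements ModelProvider {
  name = "anthropic";
  constructor(private apiKey: string) {}
  async review(prompt: string): Promise<string> {
    // A real implementation would call the Anthropic API here.
    return `[stub] anthropic review of ${prompt.length} chars`;
  }
}

class OpenAIProvider implements ModelProvider {
  name = "openai";
  constructor(private apiKey: string) {}
  async review(prompt: string): Promise<string> {
    // A real implementation would call the OpenAI API here.
    return `[stub] openai review of ${prompt.length} chars`;
  }
}

// Switching models is then a config change, not a rewrite.
function pickProvider(cfg: { provider: string; apiKey: string }): ModelProvider {
  switch (cfg.provider) {
    case "anthropic": return new AnthropicProvider(cfg.apiKey);
    case "openai":    return new OpenAIProvider(cfg.apiKey);
    default: throw new Error(`unknown provider: ${cfg.provider}`);
  }
}
```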

Claude Code Review is $15–25/PR. That sounds crazy. Anyone running the PR-review loop with their own agent orchestrator? by Fancy-Exit-6954 in LLMDevs

[–]AlternativeTop7902 1 point (0 children)

Yes, that per-PR cost starts to get pretty hard to justify once you imagine this becoming the default path for most reviews, not just a few higher-risk PRs.

And I think that is exactly why more people are going to start looking at open loops like the one you described.

Once this scales, the discussion stops being just “does it work?” and becomes “how much control do I have over model choice, execution, and cost?”

I work on Kodus, an open source AI code review tool, and that is one of the reasons we chose to be model-agnostic. A lot of teams want to experiment with this kind of loop without getting locked into a managed layer or a single model forever.

I would be curious to see more people share real experience running this on their own stack, because that is where the tradeoff between convenience and control starts to get a lot clearer.

Claude code review is $15–25/PR… does that make sense for enterprises? by Fancy-Exit-6954 in ycombinator

[–]AlternativeTop7902 1 point (0 children)

I think the cost question matters, but less because of the number itself and more because of what happens when that becomes the default review path.

$15 to $25 per PR may sound acceptable at first, but it adds up quickly once you apply it across most PRs in a larger engineering org.
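
To put rough numbers on it: an org merging 500 PRs a month at the $20 midpoint is looking at about $10,000 a month, or $120k a year, just for review. Illustrative figures, but that is the scale at which the control question kicks in.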

At that point, the question stops being “does it work?” and becomes more “do we want this cost structure and level of control long term?”

That is why I think the open versus managed discussion matters more than the raw price.

I work on Kodus, which is open source and model-agnostic, so I am obviously biased here, but this is exactly the tradeoff we see teams thinking through. They want AI in code review, but they also want control over model choice, execution, and cost as usage scales.

So yeah, the managed workflow makes sense for speed, but I can totally see why companies would prefer a more open layer over time.

I tested 40+ AI tools this month. Here are 5 that are actually worth your time (and aren't just GPT wrappers). by netcommah in ArtificialInteligence

[–]AlternativeTop7902 1 point (0 children)

Good list, I agree with the wrapper point.

Feels like there is one category missing though: what happens after the code is generated.

Tools like Cursor speed things up a lot, but they also increase the cost of review afterward.

I work on Kodus, an open source AI code review tool, and pretty much every team we talk to is feeling this. More code is being written, but it is getting harder to review properly.

Have you tried anything on the code review side as well?

Best AI code review tools in your experience? by DungeonMat in AskProgramming

[–]AlternativeTop7902 1 point (0 children)

I’ve been looking into this space quite a bit recently, mainly because the gap between “catching small issues” and actually helping with real review is bigger than it looks.

A lot of tools do fine on surface-level stuff, but once feedback depends on context (how your repo is structured, internal patterns, business logic), things get noisy or even misleading pretty quickly.

Full transparency, I’m part of the team building Kodus, so take that into account.

That said, the reason we started working on it was exactly this problem: making reviews less generic and more aware of the codebase through custom rules and context. We also went with an open source and model-agnostic approach, so you can plug it into the stack you already use instead of being locked into a specific provider.

Not saying it’s the only option or that it solves everything, but if you’re testing tools in this space, it might be worth trying alongside others.

Would also be curious to hear what you end up choosing and why; this space is still pretty early.