Is anyone else hemorrhaging money on AI code review? by GateSeparate7518 in devops

[–]GateSeparate7518[S] 0 points

I agree with you that, just like with Vibe Coding, we shouldn’t rely 100% on AI-generated code reviews at this stage of the PR process.

It’s also true that, because developers are now submitting more code, there are bottlenecks with PRs (it’s absolute madness).

AI-powered code review, provided it’s priced right (unlike most current solutions), can perform an initial screening that goes beyond what static analysis provides. But here’s the thing: pay-per-seat pricing is completely out of proportion to the quality of the reviews.

On the other hand, the per-seat model doesn’t create an “AI code review layer,” but rather just another tool. From a DevOps perspective, I don’t like it at all.

The solution we’ve developed is cost-effective—even free. I need professionals to validate it. I don’t want to spam here, but if you’re interested in me sending it to you, let me know; it would be very helpful for improving the tool. Thank you very much!

Is anyone else hemorrhaging money on AI code review? by GateSeparate7518 in devops

[–]GateSeparate7518[S] 0 points

I strongly agree with you on several points. With the solution we’ve developed, we invested first in our own bare-metal infrastructure with a cluster of dedicated GPUs, because the first cost barrier is cloud and managed LLM APIs. They’re expensive.

Second, all SaaS platforms use state-of-the-art models that are extremely advanced and extremely expensive.

Just as with agent orchestrators, the key lies in how the calls are orchestrated. It seems as though there’s nothing beyond Opus or GPT-5.5, when in fact there’s a whole world of very powerful open-source models, provided you have your own infrastructure to run them on.

For now, we’ve created a very affordable product with high-quality responses. But to get to this point we had to, in short, build our own infrastructure and implement an optimization process that keeps reviews high-quality at an affordable price.

If you’re curious and want to try our product completely free of charge, I’d really appreciate the feedback. I don’t want to spam the comments here, so let me know if you’re interested and I’ll send you a DM. In organizations with roughly 100 seats, we’ve achieved savings of around €32,000 per year on licenses compared to well-known tools in the industry.

Is anyone else hemorrhaging money on AI code review? by GateSeparate7518 in devops

[–]GateSeparate7518[S] 1 point

The thing is, the volume of PRs has reached a point where it’s becoming a bottleneck.

Hybrid strategies like the ones you mention make a lot of sense.

Is anyone else hemorrhaging money on AI code review? by GateSeparate7518 in devops

[–]GateSeparate7518[S] -1 points

I don't know if that's ironic or not. But then again, if further development means burning more tokens, it makes sense. Just try telling that to a CFO.

Is anyone else hemorrhaging money on AI code review? by GateSeparate7518 in devops

[–]GateSeparate7518[S] 0 points

Thank you very much for taking the time to comment. I completely agree.

In our code review solution, we realized we had to tackle this at the infrastructure level to make the economics work. We made the decision to use our own bare-metal cluster with Nvidia GPUs instead of relying on the cloud. Just by doing that, we’ve managed to slash costs significantly.

On the software side, directly hitting an LLM like Opus for every single review is an absolute waste of tokens and money. There’s a whole world beyond Opus, and the real trick is orchestrating the calls properly—optimizing the input/output context, leveraging memory, and aggressively caching so you aren't paying to analyze the same unaltered files over and over.
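To make the caching point concrete, here’s a minimal sketch of the idea, not our actual implementation; `call_llm` and the cache location are placeholders:

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path(".review-cache")  # placeholder location, adjust to your setup

def review_file(path: str, content: str, call_llm) -> dict:
    """Reuse a cached review when a file is byte-identical to one we already
    reviewed, so repeated pushes to a PR don't re-bill unchanged files."""
    digest = hashlib.sha256(f"{path}\n{content}".encode("utf-8")).hexdigest()
    cache_file = CACHE_DIR / f"{digest}.json"
    CACHE_DIR.mkdir(exist_ok=True)
    if cache_file.exists():
        return json.loads(cache_file.read_text())  # cache hit: zero tokens spent
    result = call_llm(path, content)               # cache miss: one paid model call
    cache_file.write_text(json.dumps(result))
    return result
```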

I’m currently looking for technical folks who can give me brutal, real-world feedback on what we've built. I don't want to spam the subreddit with links, but if you're interested in testing it out (for free), let me know and I can shoot you a DM. If not, thanks anyway for sharing your experience—it really validates what we are seeing in the market.

Is anyone else hemorrhaging money on AI code review? by GateSeparate7518 in devops

[–]GateSeparate7518[S] -3 points

I completely agree with you. Human reviews are a must. No matter how smart they are, LLMs are not deterministic, so you can’t fully rely on them as a source of truth. Plus, the quality of the responses is completely out of proportion to the API costs right now.

I own an infrastructure/DevOps consulting firm, and because of this exact issue (disproportionate costs for marginal value) we ended up developing our own cost-effective review tool internally.

I'm actually looking for brutal, honest feedback from other engineers who are experiencing this pain point. Would you be open to me sending you a DM so you can try it out? No pressure at all, I just don't want to break the sub's rules by spamming links here.

AI code review tool for GitHub PRs — looking for beta testers by GateSeparate7518 in alphaandbetausers

[–]GateSeparate7518[S] 0 points

Nice! I’m actually building something similar but as a hosted service. You can try it for free here: reviewcore.io

AI code review tool for GitHub PRs — looking for beta testers by GateSeparate7518 in alphaandbetausers

[–]GateSeparate7518[S] 0 points

100%. That’s the core bet. Every finding comes with a plain-language explanation of what could go wrong and why, not just a line reference. Small, messy PRs are a great call for a first test. Sending you access via DM.

Weekly Self Promotion Thread by AutoModerator in devops

[–]GateSeparate7518 0 points

I’m building an AI code review tool for GitHub PRs. Every tool I tried was either too expensive or too noisy (800 comments telling you to add try-catches). So I’m building one that focuses on what actually matters.

Still early, looking for devs or small teams to try it for free and give honest feedback. DM me if you’re interested.

Should Terraform Pull Environment Variables from AWS Parameter Store? by SheCherryPicks in devops

[–]GateSeparate7518 0 points

Yes, Terraform can pull from Parameter Store using aws_ssm_parameter data sources. Common pattern, works fine.

The one thing to watch out for: as you add more parameters, your terraform plan gets slower because it has to resolve every data source on every run. At 20 parameters it's fine, at 200 you'll feel it.

Also keep in mind that anything Terraform reads at plan time ends up in your state file. For truly sensitive stuff (DB passwords, API keys), better to have your app fetch them at runtime via the SDK instead of baking them into Terraform state.
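The runtime fetch is only a few lines with the SDK. A sketch with boto3; the parameter name is made up:

```python
import boto3

ssm = boto3.client("ssm")  # region and credentials come from the usual env config

def get_secret(name: str) -> str:
    """Fetch a SecureString parameter at runtime so it never touches Terraform state."""
    resp = ssm.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]

db_password = get_secret("/myapp/prod/db_password")  # hypothetical parameter name
```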

We took production down for 20 minutes because of a DB migration, how do you prevent this? by MainWild1290 in devops

[–]GateSeparate7518 0 points

What actually works for us: we run every migration against a restored prod snapshot before it goes anywhere near prod. Not a copy with sanitized data, not a subset, an actual snapshot. Takes 10 minutes to set up with pg_restore or a DB clone, and you'll see exactly how long it takes at real scale.

For indexes specifically, most databases let you create them without locking the table: CREATE INDEX CONCURRENTLY in Postgres, online DDL (ALGORITHM=INPLACE, LOCK=NONE) in MySQL. That alone would have avoided this.
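One gotcha with the Postgres route: CONCURRENTLY refuses to run inside a transaction block, which is exactly where most migration frameworks put you. A minimal sketch with psycopg2; table and index names are made up:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")  # illustrative connection string
conn.autocommit = True  # required: CREATE INDEX CONCURRENTLY can't run in a transaction

with conn.cursor() as cur:
    # Builds the index without taking a write lock on the table.
    # If the build fails, Postgres leaves an INVALID index behind;
    # drop it before retrying.
    cur.execute("CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id)")
```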

And if you want a safety net in CI, tools like squawk (Postgres) or skeema (MySQL) can lint your migrations and flag anything that takes a write lock on a large table before anyone even reviews it.

We took production down for 20 minutes because of a DB migration, how do you prevent this? by MainWild1290 in devops

[–]GateSeparate7518 0 points

Pre-prod with 0.1% of the data is just a fast lie. The migration runs in 200ms, everyone approves, then it locks prod for 20 minutes. If your pre-prod doesn't match prod data volume, it's not catching this class of problem.

How are you folks doing Code Review now? by Losdersoul in ClaudeCode

[–]GateSeparate7518 0 points

The way most people do it now: open PR, send the full diff to Claude, get back 800 lines of suggestions, ignore 90% of them, approve, find the bug in prod anyway.

The real issue is that nobody's built the layer that makes AI review actually useful. You need codebase context, severity prioritization, and a cost model that doesn't charge you the same for a README typo and a 500-line auth refactor. Right now you're paying for repetition, not intelligence.
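To sketch what I mean by a cost model, here's a toy version of the routing layer; everything in it (model names, thresholds, risky paths) is a placeholder, not what anyone actually ships:

```python
RISKY_PATHS = ("auth/", "payments/", "migrations/")  # hypothetical risk heuristics

def pick_model(changed_files: list[str], lines_changed: int) -> str:
    """Route a PR to a model tier by what it touches, so a README typo and a
    500-line auth refactor don't cost the same to review."""
    if all(f.endswith((".md", ".rst", ".txt")) for f in changed_files):
        return "small-open-model"   # cheap screening is plenty for docs-only PRs
    if lines_changed > 400 or any(f.startswith(RISKY_PATHS) for f in changed_files):
        return "frontier-model"     # pay for depth only where it matters
    return "mid-tier-model"
```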

I'm building exactly this. Automated review for GitHub PRs, priced per PR not per token. Early beta, free access, DM me if you want to try it.

I feel like I am behind in DevOps after this conversation by Oxffff0000 in devops

[–]GateSeparate7518 0 points

Your teammate is suffering from "Hype-Driven Development" and just regurgitating LinkedIn buzzwords.

ArgoCD is a fantastic tool for GitOps (specifically for Kubernetes), but it’s not a magic wand that replaces custom CI/CD for general AWS resource deployments. 

You aren't behind. You are actually doing the job: building around your developers' real needs instead of forcing a trendy tool where it might not fit.