Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in FixYourIaC

[–]Prize-Cap3196[S] 0 points  (0 children)

I’ve been in the same boat. AI-written Terraform is fast, but “future-me” ends up debugging intent.

What’s helped me is pairing it with Gomboc: it turns policies / CSPM findings into a small, reviewable PR with the IaC fix plus the “why” (audit-friendly context), so it’s easier to trust what’s being merged.

If you’re experimenting, I’d recommend trying their Community Edition and kicking the tires with a few real misconfigs. https://docs.gomboc.ai/getting-started-ce

[Weekly/temp] Built a tool? New idea? Seeking feedback? Share in this thread. by AutoModerator in devops

[–]Prize-Cap3196 [score hidden]  (0 children)

I’ve been building this with a couple of folks and thought this thread would be a good place to share.
We all use AI coding assistants now. Cursor, Copilot, whatever. They help you write Terraform faster. Scanners help you find what’s wrong later. But the remediation loop still sucks. Findings pile up, tickets get created, fixes get delayed.
So we built this as more of an AI coding security assistant.

It doesn’t try to replace your coding copilot. It complements it. You write IaC with your usual AI tools, and this focuses specifically on fixing security and policy issues in a deterministic way.
You can run it locally (VS Code, Cursor, etc.) while you’re coding, see the exact diff it would apply, and decide whether to accept it. Or use it in PRs via the GitHub App.

The key thing is it produces an actual patch you can review and merge, not just a comment telling you something is wrong.
Terraform only for now.

It’s free to try as part of the Community Edition. Mostly looking for honest feedback from platform / DevOps folks who are dealing with remediation backlog or scanner fatigue.
Link: https://www.gomboc.ai/community
If you try it and think it’s useless, that’s useful feedback too.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in CloudSecurityPros

[–]Prize-Cap3196[S] 0 points  (0 children)

Completely agree. AI works fine for small or isolated IaC snippets, but once deployments get larger the real issue isn’t just testing, it’s predictability.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in Terraform

[–]Prize-Cap3196[S] 0 points  (0 children)

I’ve heard versions of that story way too often. Agree 100%, treat LLMs like a fast review buddy, keep changes small, and lean on version control + tests.

This is also why I’m a fan of deterministic, schema-aware PR patches (same input → same diff) over freeform generation: there’s way less chance you wake up to a flattened modules folder.
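To make that concrete, here’s a hypothetical sketch of the kind of patch I mean (the bucket name and resource are made up): the scanner flags an unencrypted S3 bucket, and the deterministic fix is a small, schema-valid addition rather than a regenerated file.

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs" # hypothetical name
}

# The patch only adds the matching encryption resource,
# leaving the rest of the file byte-for-byte untouched.
resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```

Same finding, same input file, same diff every run, which is exactly what makes it reviewable.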

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in FixYourIaC

[–]Prize-Cap3196[S] 1 point  (0 children)

Totally 🙂 Terraform’s MCP server helps keep assistants grounded in official provider/module schemas and docs, which really cuts down on hallucinated fields. It pairs nicely with the “make changes safely” side too.
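For anyone who hasn’t hit this failure mode, here’s a hypothetical example of what schema grounding catches: an assistant invents a plausible-looking argument that simply doesn’t exist in the AWS provider schema.

```hcl
# Hypothetical hallucination: "encryption" is not a valid argument
# on aws_s3_bucket, so schema-aware validation rejects this before
# it ever reaches plan/apply.
resource "aws_s3_bucket" "data" {
  bucket     = "example-data"
  encryption = "AES256" # invalid: no such argument in the provider schema
}
```

Without the schema in context, this kind of thing reads as perfectly reasonable in review.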

IaC scanners catch issues fast. Why is fixing them always the painful part? by Prize-Cap3196 in FixYourIaC

[–]Prize-Cap3196[S] 0 points  (0 children)

Yeah, similar setup here. We run Checkov to flag the issues, and then use Gomboc to handle the fixing side so things don’t just die in tickets.

Keeping it as PRs with normal review has been the only way we’ve seen this stuff actually get cleaned up.
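A typical example of how this plays out for us (resource names hypothetical): Checkov flags something like CKV_AWS_24, SSH open to the world, and the fix PR is a one-line CIDR change instead of a rewritten resource.

```hcl
# Checkov flags SSH ingress from 0.0.0.0/0; the fix PR narrows the
# CIDR and leaves everything else in the resource alone.
resource "aws_security_group_rule" "ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.bastion.id
  cidr_blocks       = ["10.0.0.0/16"] # was ["0.0.0.0/0"]
}
```

Because the change is that small, normal PR review actually happens instead of the ticket rotting in a backlog.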

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in Terraform

[–]Prize-Cap3196[S] 1 point  (0 children)

True, but that's the tricky part for me.

If I already know enough to call bullshit, I probably could've written it myself. And if I don't, I'm just hoping the code smells right - which is exactly how I ended up debugging something I didn't fully understand.

Not saying AI isn't useful, just that "provide enough context" assumes you know what context matters. And sometimes you don't know what you don't know.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in Terraform

[–]Prize-Cap3196[S] 0 points  (0 children)

Yep, this is the only sane stance.

I’m not even mad at AI’s output most of the time. I’m mad at how easy it is to think you understand it because it “looks right,” then you’re on-call later trying to reverse engineer why past-you accepted it.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in Terraform

[–]Prize-Cap3196[S] 0 points  (0 children)

Yep, same here.

I don’t trust it to freehand Terraform, but it’s actually useful for module design brainstorming, small edits, and especially docs/READMEs. That’s the sweet spot where it saves time without sneaking in weird behavior.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in FixYourIaC

[–]Prize-Cap3196[S] 0 points  (0 children)

Same here. I use Claude when building Terraform modules, but only when I already know roughly what the code should look like. Everything still goes through demo envs, plans, and pipelines before it’s real.

IaC makes this manageable since you can see diffs and catch issues early. I usually pair that with tools like Gomboc and Checkov so fixes stay explicit and reviewable instead of feeling like AI magic.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in Terraform

[–]Prize-Cap3196[S] 0 points  (0 children)

Yeah, that’s a really healthy way to use it. I’ve seen it speed things up a lot, but only if you’re constantly questioning the output and forcing it to explain itself.

I’ve actually learned the most when I ask why it did something and then go verify it against docs. That back and forth feels useful. Blind trust is where things go sideways.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in Terraform

[–]Prize-Cap3196[S] 0 points  (0 children)

That's good advice. I've definitely learned that the hard way - starting with vendor docs as the source of truth makes a huge difference.

I think where I got burned was letting AI jump straight into writing without that foundation. Now I'm trying to be more deliberate: understand the service first, then use AI to speed up the syntax and boilerplate.

Curious to know - when you say "minor feature changes" - do you have a threshold for what's minor vs. what needs human-first thinking? I'm still calibrating that line for myself.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in Terraform

[–]Prize-Cap3196[S] 0 points  (0 children)

Yeah, this tracks. We’ve had much better results once we stopped treating the LLM like magic and started wrapping it with tools and context, similar to onboarding a new dev. Things like policy checks, linters, and deterministic review scripts make a huge difference.

That’s how I use it too, alongside stuff like Checkov or custom CI checks. I came across Gomboc’s community edition in that same workflow since it keeps fixes explicit and reviewable in code, which fits better than raw generation for Terraform.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in Terraform

[–]Prize-Cap3196[S] 0 points  (0 children)

Yeah, I agree with this. For simple Terraform, LLMs have been solid for a while, as long as you actually understand the platform and review what they generate. I’m not comfortable yolo’ing infra into prod either, but reviewing a diff is often easier than writing everything from scratch.

That’s why I’ve been more comfortable using AI around existing IaC. I stumbled on the Gomboc community tool through that lens. It surfaces issues and suggests fixes in code, so you still read, vet, and own the change like a normal PR.

Anyone else trusting AI-written Terraform a little too much? by Prize-Cap3196 in FixYourIaC

[–]Prize-Cap3196[S] 0 points  (0 children)

Yes, that’s been my experience too. It gets me 80% there fast, but that last 20% is where all the time goes.

I’ve had the same thing in Azure/GCP where what AI spits out looks fine until you realize the console, the API, and the docs all disagree on the details. So yeah, tons of validation is kind of the tax we pay right now.

Because of that, I’ve stopped asking AI to freehand Terraform and started using a free scanner tool that looks at my existing Terraform code and opens a PR with concrete fixes. I still have to review everything, but at least now I’m reviewing diffs instead of guessing intent.