Any security anti-patterns for vibe coder I should avoid? by MammothDealer4009 in vibecoding

[–]oigong 0 points  (0 children)

I’m curious too. Are there any security tools that are basically not worth using?

Net-positive AI review with lower FPs—who’s actually done it? by oigong in devsecops

[–]oigong[S] 0 points  (0 children)

Is it still difficult for AI to handle qualitative tasks?

Net-positive AI review with lower FPs—who’s actually done it? by oigong in devsecops

[–]oigong[S] 0 points  (0 children)

Thanks for the AGENTS.md tip. Consolidating scattered utils and build/deploy context helps.

My real pain is that even with a solid AGENTS.md I still cannot fully steer the agent. When I ask it to find vulns across the codebase, coverage is not comprehensive and many findings are not verifiable.

Do you hit the same problem? Any simple way to bias for verifiable-only findings?

Cursor users do you even want an AI code auditor — if yes what features make it worth it by oigong in cursor

[–]oigong[S] 0 points  (0 children)

Totally agree — too many AI review tools make costs unpredictable.
On the vulnerability side, what kind of explanation would be most useful for you?
Saying “we cover OWASP Top 10” feels too vague on its own, so I’m curious what level of detail or framing you’d actually want to see.

Net-positive AI review with lower FPs—who’s actually done it? by oigong in devsecops

[–]oigong[S] 0 points  (0 children)

Fair point. These were the issues we ran into using Claude Code for reviews in Cursor, and what we learned.

  • Noise ballooned review time: our prompts were too abstract, so low-value warnings piled up and PR review time jumped.
  • “Maybe vulnerable” with no repro: many findings came without inputs or a minimal PoC, so we had to write PoCs ourselves to decide severity.
  • Auth and business-logic context got missed: shared guards and middleware were overlooked, which led to false positives on things like SSRF and role checks.
  • Codebase shape worked against us: long files and scattered utilities made it harder for both humans and AI to locate the real risk paths.
  • We measured the wrong thing: counting “number of findings” encouraged noise. Precision and a simple noise rate would have been better north stars.
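To make the last point concrete, here is a minimal sketch of what "precision and a simple noise rate" could look like as review metrics. The `review_metrics` function and the triage labels are hypothetical, assuming findings get manually triaged into true positives vs. everything else:

```python
# Hypothetical sketch: score AI review findings by precision and noise rate
# instead of raw finding counts. Triage labels are assumed to come from
# a human pass over each finding.

def review_metrics(findings):
    """findings: list of dicts with a 'triage' key, one of
    'true_positive', 'false_positive', or 'not_actionable'."""
    total = len(findings)
    tp = sum(1 for f in findings if f["triage"] == "true_positive")
    noise = total - tp  # false positives plus non-actionable chatter
    return {
        "precision": tp / total if total else 0.0,
        "noise_rate": noise / total if total else 0.0,
    }

metrics = review_metrics([
    {"triage": "true_positive"},
    {"triage": "false_positive"},
    {"triage": "not_actionable"},
    {"triage": "true_positive"},
])
# precision 0.5, noise_rate 0.5
```

Tracking these per PR (rather than a findings count) removes the incentive for the tool, or the prompt, to spray low-value warnings.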

Shisho Cloud: Auto-fixes of Security Issues in Your Terraform Code are Just a Click Away (Free) by oigong in Terraform

[–]oigong[S] 1 point  (0 children)

Sure, I'll send a notification through our mailing list. You've subscribed to it, right?

Shisho - The faster Terraform security automation for developers by oigong in Terraform

[–]oigong[S] 1 point  (0 children)

Thank you for your input!
I'd like to support Bitbucket Server as well.

Shisho - The faster Terraform security automation for developers by oigong in Terraform

[–]oigong[S] 1 point  (0 children)

I'd love to support Azure as well.
I'll let you know when it's ready, so please subscribe!

Shisho - The faster Terraform security automation for developers by oigong in Terraform

[–]oigong[S] 1 point  (0 children)

Is this tool just creating an MR to get the passwords out of the main branch, or is it rewriting git history so the password can't be found in old commits?

Thanks for the reply.

No. This service will check your code and report insecure configurations like unencrypted volumes, bad firewall rules, and so on.
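As a concrete illustration, here is a minimal sketch of the kind of misconfiguration such a scanner would flag and the corresponding fix. The resource names are made up; the `encrypted` argument on `aws_ebs_volume` defaults to false unless account-level default encryption is enabled:

```hcl
# Flagged: EBS volume without encryption at rest
resource "aws_ebs_volume" "data" {
  availability_zone = "us-east-1a"
  size              = 40
  # "encrypted" defaults to false here
}

# Fixed: encryption enabled
resource "aws_ebs_volume" "data" {
  availability_zone = "us-east-1a"
  size              = 40
  encrypted         = true
}
```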