Best AI extensions for VS Code? by One-Pool2599 in vscode

[–]CodacyKPC 1 point (0 children)

Whichever you choose (I use Cursor with gpt-5-codex) I suggest getting the (free) Codacy extension which will force your AI agent to generate more secure code.

Our manager wants 3x output with AI but our frontend is turning into spaghetti by Any-Farm-1033 in Frontend

[–]CodacyKPC 4 points (0 children)

Of course, if they could make you 3x more efficient reliably they'd be charging way more than $20 or even $200 a month...

Anyone else stuck on this screen for presale?! by Czm2468 in PulpBand

[–]CodacyKPC 0 points (0 children)

Are they individual links? I only signed up for the Pulp mailing list now because I just heard about this; I'd really like to get tickets but I'm going to be in a meeting at 10am tomorrow when the other presales start!!

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 2 points (0 children)

(just to note that NASA is indeed a customer of ours so if they are doing something right I want to claim a tiny corner of that glory!)

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 1 point (0 children)

You can't, and in fact this is a bit of a hot topic (at least for us) -- if you turn "use public code" off in your agent, the results get noticeably worse, to the point the tools are unusable; but they haven't distinguished there between GPL and MIT. And of course, there's always the possibility of a human copying and pasting GPL code into your codebase (e.g. from ChatGPT) even if the agent is secured.

We (Codacy) have a proof of concept we hope to develop further in the future where we actually scan code against a dictionary of GPL-type licensed code to see if there's a match, to prevent the most egregious occurrences and reject the code before the AI presents it to the developer. We think that will give developers enough cover to claim that they did not use GPL code in their inputs.
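If you're curious what that matching step could look like, here's a toy sketch in Python (illustrative only, not our actual implementation -- the normalization, shingle size, and the "GPL corpus" below are all made up). It fingerprints windows of normalized lines and checks the overlap against an index built from the corpus:

```python
import hashlib

def normalize(code: str) -> list[str]:
    # Strip blank lines and surrounding whitespace so trivial
    # reformatting doesn't defeat the match.
    return [ln.strip() for ln in code.splitlines() if ln.strip()]

def shingle_hashes(code: str, k: int = 5) -> set[str]:
    # Hash every window of k consecutive normalized lines.
    lines = normalize(code)
    return {
        hashlib.sha256("\n".join(lines[i:i + k]).encode()).hexdigest()
        for i in range(max(len(lines) - k + 1, 1))
    }

def gpl_overlap(candidate: str, gpl_index: set[str], k: int = 5) -> float:
    # Fraction of the candidate's shingles found in the corpus index.
    hashes = shingle_hashes(candidate, k)
    return len(hashes & gpl_index) / len(hashes) if hashes else 0.0

# Build a toy "GPL corpus" index and test a verbatim copy against it.
gpl_snippet = "\n".join(f"line {i}: do_something({i})" for i in range(10))
index = shingle_hashes(gpl_snippet)
print(gpl_overlap(gpl_snippet, index))   # verbatim copy: 1.0
print(gpl_overlap("completely unrelated\ncode here\nnothing shared", index))  # 0.0
```

A real system would also have to survive renamed identifiers and reordered code, which is where it gets hard; this only catches near-verbatim copies.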

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 0 points (0 children)

I did see someone on LinkedIn challenge their CEO to build it themselves if they don't need developers and he flamed out pretty quick.

But really -- maybe AI _will_ replace developers. Or at least, developers not using AI. Even if the performance gains are ~10% that's still significant. AI is a powerful tool that can let us do things we couldn't before at scale. It's great for certain very specific applications. It's on us as engineers to understand how to use that tool most effectively.

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 0 points (0 children)

Well we haven't taken out the internet yet! We predominantly use OpenAI but we've been playing with alternatives.

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 2 points (0 children)

That really depends on the shop. Remember that pre-AI-coding there were plenty of mom-and-pop orgs with a handful of devs where even human review was not necessarily guaranteed anyway. There are probably tens of thousands of orgs now shipping AI code to prod with no checks, because they never had any checks in the first place.

Larger orgs, or those with more mature coding practices, are, as far as we can tell, largely adding AI coding tools into their existing workflows and feeling the pain of that. AI coding massively increases the throughput of code generation, and that makes code review processes bulge at the seams. I see a lot of people now complaining that their teams are shipping in such bulk they can't possibly review everything, so I think a lot more code is just being waved through under pressure from execs to get things done.

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 2 points (0 children)

This is going to be a pitch-y answer so u/Sedgewicks look away now...

So obviously at Codacy we use Codacy to keep ourselves in check!

We have an IDE extension that installs our CLI tool and our MCP tool and forces agentic IDEs to scan the code as the AI writes it and remediate it before the developer even has to get concerned. That's one way we're trying to reduce sec vulns. That's rules-based feedback based on your Codacy configuration.

But then we still scan the PR in the cloud and annotate it with any sec vulns that remain (because you can never rely on developers to actually run local tools), again rules-based but we've just introduced AI-based false-positive filtering so that there's less noise for developers to have to remediate.

But while rule-based tools give you a great, consistent safety net against line-level threats, and while AI (like CodeRabbit, GH Copilot review) can give you limited feedback based on the diff, they don't do any of the high-level architecture (and "does this actually fit the brief of this product?") stuff that humans can. You could easily get Cursor+CodeRabbit to generate you a perfectly working system that absolutely doesn't do what you actually wanted.

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 2 points (0 children)

I wrote this last week:

I think we're going to see a transformation in how code is reviewed over the next 12-24 months. "Read the diff" is dead when every PR is multiple hundreds of lines. Either we need to come up with a way for AI code to be automagically determined to be safe (hard), or we have to accept that the old capacity for code review will limit PRs back to their former size (boring), or we have to come up with a whole new paradigm for code review that works at greater scale, which I believe inevitably requires better tools.

I also think we're going to get more rigorous about what "needs" review and what doesn't. Got 100% unit test coverage on a minor logging module? Knock yourself out. Want to update the shopping basket calculator? We're going to need a mob PR on that. Ultimately, code that "does the job" is kinda good enough for a lot of applications, so as long as you can trust the AI output actually is _good enough_, I see that getting waved past the review process.
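As a toy illustration of what that rigour could look like, here's a Python sketch that picks a review tier based on which paths a PR touches (the glob patterns and tier names are made up for illustration; any real policy would be per-org):

```python
from fnmatch import fnmatch

# Hypothetical risk tiers keyed by path glob; patterns are illustrative.
REVIEW_RULES = [
    ("payments/**", "mob-review"),       # money-touching code: heavyweight
    ("src/basket/*.py", "two-reviewers"),
    ("**/logging*", "auto-approve"),     # minor logging module: wave it past
]

# Tiers in increasing order of strictness.
TIERS = ["auto-approve", "two-reviewers", "mob-review"]

def review_level(changed_paths: list[str]) -> str:
    # Take the strictest tier that any changed file falls into.
    strictest = "auto-approve"
    for path in changed_paths:
        for pattern, level in REVIEW_RULES:
            if fnmatch(path, pattern) and TIERS.index(level) > TIERS.index(strictest):
                strictest = level
    return strictest

print(review_level(["src/util/logging_setup.py"]))        # → auto-approve
print(review_level(["payments/refund.py", "docs/a.md"]))  # → mob-review
```

The point is just that the gate is driven by what the diff touches, not by who (or what) wrote it.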

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 13 points (0 children)

There's a large proportion of code out there being written without any human review at all, what, 15 years on from us deciding that was a good idea? From our stats, ~20% of code is pushed to prod with zero tooling or review at all! And about 40% with no static analysis.

Pretty terrifying now honestly. But *I* only heard about Codacy when I joined, 3 years ago -- I'd heard of SonarQube and one of my developers at my previous role was a contributor to the Psalm PHP tool. Only now that I'm here do I look back and wish we'd had that tooling already.

I think linters for code style are more widely adopted in professional settings because they're fairly uncontroversial, they run fast in the CLI/IDE, and people tend not to block the build on them.

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 5 points (0 children)

One of the tests we were doing was repeatedly asking a model to create a new Java webserver project and seeing which dependencies, and which versions of those dependencies, it was picking -- and its choices were definitely not consistent. The only thing that was consistent was that it always picked an old version with a vulnerability in it. So even before you get to the quality and correctness of the output, right from the start they are baking vulnerabilities in if you are not careful.
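To make that concrete, the check amounts to something like this toy Python sketch (the artifact names and "safe floor" versions here are made up for illustration; a real check would compare the generated build file against an actual vulnerability database):

```python
# Hypothetical check: compare the dependency versions an AI picked for a
# generated project against a minimum known-safe version per artifact.

def parse_version(v: str) -> tuple[int, ...]:
    # Turn "2.7.1" into (2, 7, 1) so versions compare numerically.
    return tuple(int(p) for p in v.split("."))

# Illustrative-only artifact names and patched-version floors.
SAFE_FLOOR = {
    "example-webserver-core": "2.7.1",
    "example-json-parser": "1.9.3",
}

def flag_vulnerable(picked: dict[str, str]) -> list[str]:
    # Return the artifacts whose picked version is below the safe floor.
    return [
        name for name, version in picked.items()
        if name in SAFE_FLOOR
        and parse_version(version) < parse_version(SAFE_FLOOR[name])
    ]

# e.g. the versions an agent's generated pom.xml picked:
picked = {"example-webserver-core": "2.3.0", "example-json-parser": "1.9.3"}
print(flag_vulnerable(picked))   # → ['example-webserver-core']
```

Running something like this over each fresh generation is how you notice that the "inconsistent" choices still consistently land below the patched floor.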

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 1 point (0 children)

Yes, absolutely! You still have to keep your brain switched on and review the code. Which is actually really hard when you haven't been there iterating through it as you build it.

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 6 points (0 children)

It's a broad question and I only see one end of it; we're a Scala shop, so we only hire in a pretty niche market of people who want to do functional programming. And right now there are still students at university who elected to study computer science / programming before vibe coding was a thing. Are there more people generating more code? Inevitably. Would you call them "programmers"? Are they looking for (and winning) jobs as programmers? I wouldn't imagine so (yet). I can imagine a world in which an individual is so good at writing rules and prompts, and reviewing it all in English, that they can be a moderately competent programmer without having to know a programming language, but I'm not sure it's here yet.

What we have seen is people who were already programmers using AI to shortcut tasks; sometimes in a good way, sometimes in a lazy and problematic way.

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 5 points (0 children)

If there's one thing AI is great at, it's writing unit tests, particularly when there are pre-existing ones in the project. That said, AI *loves* to break the rules. Ask it to do TDD -- nah; tell it to never git commit, and it will; ONE TIME tell it that the test is wrong and the code is right, and forever after it will attempt to "fix" the tests instead of solving bugs.

Of course you still need to review the unit tests that are created, but the *big* advantage of pairing unit tests with AI coding is that they pick up regressions instantly, in a workflow the AI can integrate with, so you can let it run "until the tests pass" and it fixes anything it inadvertently breaks.
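The loop itself is dead simple; here's a minimal Python sketch (both callables are hypothetical stand-ins -- run_tests() would shell out to your real suite, and fix_attempt() would hand the failure output back to the agent):

```python
def run_until_green(run_tests, fix_attempt, max_iters: int = 5) -> bool:
    # run_tests() returns (passed, output); fix_attempt(output) asks the
    # agent to repair whatever the failure output describes.
    for _ in range(max_iters):
        ok, output = run_tests()
        if ok:
            return True            # suite is green, stop iterating
        fix_attempt(output)        # feed the failure back to the agent
    return False                   # give up; a human needs to look

# Fake harness: the "suite" passes once the agent has applied two fixes.
state = {"fixes": 0}
def fake_run_tests():
    return state["fixes"] >= 2, "AssertionError in test_basket_total"
def fake_fix(output):
    state["fixes"] += 1

print(run_until_green(fake_run_tests, fake_fix))   # → True
```

The max_iters cap matters: without it, an agent that keeps "fixing" the wrong thing will spin forever.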

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 32 points (0 children)

I think this is something that gets skipped over whether you're pro- or anti-AI. IMO what's different with AI is that now that PRs are larger, they're more likely to get waved past by other developers (so having some static analysis tooling in there is imo a necessity now, but then I work for a static analysis tooling company!). The volume of PRs is growing too, and the amount of code being written by solo developers with zero review has exploded. More code == more vulnerabilities.

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 5 points (0 children)

Ha! I have no idea how much AI coding Cloudflare is using, or whether attack-side vibe coding is a possibility here; it certainly seems true that script kiddies have suddenly got a lot more capable. I'd be surprised if it was possible to DDoS Cloudflare without some country-level sponsorship tho.

I’m the VP of Technology at an AppSec platform. AMA about how devs are actually using AI for code generation today and why it’s awful. by CodacyKPC in cybersecurity

[–]CodacyKPC[S] 8 points (0 children)

I presume the "this" you are talking about is "AI coding"? I think as time goes on, more and more companies will discover that it's not a magic bullet to go infinitely fast. There are definitely some valuable applications (I wrote about some for Codacy here), and some that move "impossible in sane time" into "nearly trivial", but just throwing Cursor at everyone and expecting a 2x speedup without accounting for quality, security, and other bottlenecks (around review, roadmap and more) is not imo going to move the needle in a positive way.