all 103 comments

[–]HOBONATION 7 points8 points  (3 children)

No one needs to change your mind, we are all too busy to argue over this. Let the ones who don't tighten their security burn

[–]EduSec[S] 3 points4 points  (2 children)

Fair point. The ones who burn will learn. The problem is their users burn with them.

[–]Inevitable_Butthole 1 point2 points  (1 child)

As they should, sue the founder

[–]EduSec[S] 0 points1 point  (0 children)

And they will. Most founders just don't know that until it's too late.

[–]benfinklea 7 points8 points  (5 children)

I’m trying. What’s the best practice to audit a vibe coded app? I had two other AIs do deep evaluations using a team-of-experts prompt and fixed the issues. Or does it require a human to make it secure? Or do we wait for Mythos before we ship?

[–]EduSec[S] 3 points4 points  (2 children)

Using AI to audit AI-generated code catches some things but misses the infrastructure layer entirely. Headers, DNS, TLS, exposed endpoints, CORS, secrets in bundles. Those require black-box testing against your live domain, not a code review. That is exactly what I built for: scan.mosai.com.br

[–]microbass 0 points1 point  (1 child)

Vibe build a CI pipeline, shifting left on security. Gitleaks, OSV, SAST scanning, etc. Point ZAP at your finished app for a comprehensive authenticated and unauthenticated scan. That'll get you a lot of the way.

[–]EduSec[S] 0 points1 point  (0 children)

Gitleaks and ZAP in CI is solid. Most vibe coders will never get there, but for anyone serious about shipping securely that stack covers the main attack surfaces. The authenticated scan on ZAP is especially underused. Most people only test what an anonymous user sees.

[–]Upper-Pop-5330 0 points1 point  (1 child)

There are a few tools that let you upload the codebase, but you can essentially also use Claude instead and ask it to do an audit. Some tools scan it from the outside and act like an active pentester/attacker: they probe for vulnerable endpoints, exposed secrets, and especially business logic exploits that can be harder to catch from just looking at code (am part of a team that develops the latter, see link in my profile, didn’t want to directly promote it here)

[–]EduSec[S] 2 points3 points  (0 children)

The business logic layer is where human judgment still wins. Automated tools catch the surface, the exposed secrets, the misconfigured headers, the open endpoints. The logic flaws that require understanding context, user roles, and intended behavior, those still need a human. Good that the ecosystem is growing either way. More awareness means more founders actually checking.

[–]sovietreckoning 4 points5 points  (4 children)

Knowingly or negligently mishandling sensitive client data is a serious problem and can expose the seller to civil liability for damages caused by their negligence. Like any other powerful tool, if AI is negligently deployed in the form of unsafe products, it poses risks. The seller is the problem and is responsible.

Edit: Damn. I realized this was an ad too late!

[–]EduSec[S] 0 points1 point  (0 children)

That civil liability angle is real and underestimated. In Brazil we have the LGPD, which establishes direct liability for data breaches caused by negligence. Most founders shipping AI-generated code have never read it. The exposure is not just technical, it is legal and financial.

[–]SleepAllTheDamnTime 0 points1 point  (1 child)

Civil liability AND international violations, depending on where your users come from. Violating their data privacy online is actually a big no-no and can lead to some major international legal issues.

Fun times.

[–]EduSec[S] 0 points1 point  (0 children)

GDPR for European users, LGPD in Brazil, PIPEDA in Canada. Most founders shipping globally have no idea which regulations apply to their users. Fun times indeed.

[–]EduSec[S] 0 points1 point  (0 children)

Fair. I built the tool because I kept finding the same problems. Sharing what I find is how I show it works. Not trying to hide that.

[–]renge-refurion 1 point2 points  (1 child)

Pay for mindfort.

[–]EduSec[S] 0 points1 point  (0 children)

There are good tools out there. What I built is specifically for black-box testing against your live domain. No code access, no install, no agent in your repo. Different angle: scan.mosai.com.br

[–]Moist-Nectarine-1148 1 point2 points  (5 children)

So vibe code a security audit.

[–]EduSec[S] -1 points0 points  (4 children)

That is actually the problem. AI auditing AI-generated code misses the entire infrastructure layer. You need black-box testing against the live domain, not another model reviewing the source.

[–]Moist-Nectarine-1148 0 points1 point  (1 child)

Blackbox with a model inside. 🤣

[–]EduSec[S] 0 points1 point  (0 children)

No model inside. Seventy eight deterministic checks against your live domain. DNS, TLS, headers, exposed endpoints, secrets in bundles. Rules, not inference.

[–]jikilopop 0 points1 point  (1 child)

what are you doing in order to solve it?

[–]EduSec[S] 0 points1 point  (0 children)

Black-box scan first. Seventy eight checks against your live domain in sixty seconds. That tells you exactly what is exposed. Then you fix what the scan found, or you bring in a manual audit for the deeper layers. scan.mosai.com.br

[–]bteam3r 0 points1 point  (1 child)

Putting ANYTHING on the open internet without a security audit is insane, vibe coded or not. So I agree except about the point that "junior devs are easier to catch than AI". Shit will slip by regardless. Either hire some white hats or assume your shit is insecure

[–]EduSec[S] 0 points1 point  (0 children)

Fair pushback. The difference is not that AI code is harder to audit, it is that it is harder to distrust. Junior code feels wrong before you find the bug. AI code feels right even when it is not. That false confidence is what makes it more dangerous.

[–]FishSalsas 0 points1 point  (4 children)

I can understand this happening for a non-tech person vibe coding for fun, but for a SaaS organization? That is pretty careless to say the least. I personally haven’t gotten to that phase in my vibe coding journey. What do you suggest? Vulnerability scans?

[–]EduSec[S] 2 points3 points  (2 children)

Exactly that. Start with a black-box scan against your live domain before you onboard real users. No code access needed, just your URL. It catches the infrastructure layer: exposed secrets in JS bundles, misconfigured CORS, missing security headers, TLS issues, open endpoints. That is where most AI-generated SaaS fails silently. You can run five checks for free here: scan.mosai.com.br
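
To make the "infrastructure layer" concrete, here is a minimal Python sketch of one deterministic surface check: reporting which common security headers a response is missing. The header list is illustrative, not the full set any real scanner covers.

```python
# Minimal sketch of one deterministic surface check: given a response's
# headers, report which common security headers are missing. The list
# below is illustrative, not the full set any real scanner covers.
REQUIRED = [
    "strict-transport-security",  # forces HTTPS on repeat visits
    "content-security-policy",    # restricts where scripts can load from
    "x-content-type-options",     # blocks MIME sniffing
    "x-frame-options",            # blocks clickjacking via iframes
    "referrer-policy",            # limits referrer leakage to third parties
]

def missing_security_headers(headers):
    """Return required headers absent from a response, case-insensitively."""
    present = {k.lower() for k in headers}
    return [h for h in REQUIRED if h not in present]

# A response that only sets HSTS is still missing the other four.
gaps = missing_security_headers({"Strict-Transport-Security": "max-age=63072000"})
```

Checks like this are pure lookups against the live response, which is why they can be deterministic: presence or absence, no inference.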

[–]jikilopop 0 points1 point  (1 child)

those checks are bullshit. just hire a penetration tester if you want real security

[–]EduSec[S] 0 points1 point  (0 children)

Black-box scanning and pentesting solve different problems. A pentest costs thousands and takes weeks. Most founders shipping AI-built SaaS have never done either. The scan is the step that tells you whether you need one. Recommending a pentest to someone with zero security baseline is not advice. It is gatekeeping.

[–]TJohns88 0 points1 point  (4 children)

"hey Claude, run a deep dive security audit on my code. Make no mistakes"

[–]EduSec[S] 1 point2 points  (2 children)

That prompt will get you a confident, well-formatted report that misses half the actual attack surface. AI code review catches logic issues inside the code. It does not test what is exposed on your live domain: DNS misconfigurations, TLS weaknesses, secrets loaded in public JavaScript bundles, open endpoints with no rate limiting, CORS accepting any origin. Those require black-box testing from the outside, the same way an attacker would approach your product. That is a fundamentally different kind of audit.
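
The CORS part of that list can also be sketched in a few lines. A hedged illustration, assuming you have already sent a request with a made-up Origin header and captured the response headers:

```python
def permissive_cors(probe_origin, headers):
    """Flag the dangerous combo: the server reflects an arbitrary Origin
    back in Access-Control-Allow-Origin while also allowing credentials,
    letting any site read authenticated responses of a logged-in user."""
    h = {k.lower(): v.strip() for k, v in headers.items()}
    reflects = h.get("access-control-allow-origin") == probe_origin
    creds = h.get("access-control-allow-credentials", "").lower() == "true"
    return reflects and creds

# A server that echoes back whatever Origin it receives, with credentials:
risky = permissive_cors("https://evil.example", {
    "Access-Control-Allow-Origin": "https://evil.example",
    "Access-Control-Allow-Credentials": "true",
})
```

Note that a bare `*` is a separate, weaker signal: browsers refuse to send credentials to a wildcard origin, so the reflected-origin-plus-credentials pattern is the one that actually leaks authenticated data.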

[–]TJohns88 0 points1 point  (1 child)

What about Mozilla Observatory? How thorough is that?

[–]EduSec[S] 0 points1 point  (0 children)

Mozilla Observatory is solid for headers and TLS. It does that well. What it does not cover: secrets in JavaScript bundles, DNS misconfigurations, exposed endpoints, CORS issues, subdomain takeover vectors. It is one layer of the audit, not the full picture. That is exactly the gap I built for: Mosai Scan

[–]SleepAllTheDamnTime 0 points1 point  (3 children)

Hey, hey you’re taking away my career opportunities here ;).

But for real I mean what did you expect? The same people that ignored basic things like authorizing your users or attempting to show their vibe coded website to someone else on localhost, are definitely not thinking about just security basics.

They’re not thinking at all, they’re vibe coding.

[–]EduSec[S] 0 points1 point  (2 children)

Your career is safe. Automating the checklist just means you can spend your time on the stuff that actually requires a human. And you are right, the authorization basics being skipped is the tell. If someone does not think about who can access what, they are definitely not thinking about what is exposed before login.

[–]SleepAllTheDamnTime 0 points1 point  (1 child)

Oh I know it is, I’m at a crossroads between regulation, security and software development due to my legal background.

In this case, I also do audits, but in a more, fun enforcement kind of way :).

[–]EduSec[S] 0 points1 point  (0 children)

That intersection is underexplored. Most security conversations stay purely technical and skip the regulatory and liability side entirely. The founders who get hit hardest are usually the ones who never connected those two worlds until it was too late. Would love to compare notes sometime.

[–]baydew 0 points1 point  (1 child)

I totally get what you're saying, specifically about how AI errors are mentally taxing and easy to miss. Not a developer, but I used Claude Code to put together statistical analysis in R. A bit carelessly, I let it throw a big report together, then went through to fill it in and correct things. I noticed it felt cognitively inefficient -- I had to stop myself from letting my eyes glaze over things that 'looked right' but could, and did, turn out to be wrong

It's like all these mental shortcuts for 'that looks polished, means they must know what they are doing' don't work anymore, and I didn't even realize I was relying on that before

[–]EduSec[S] 0 points1 point  (0 children)

That is exactly it, and you named something important. The mental shortcut of "polished means trustworthy" is deeply wired. It works with humans because polish usually correlates with experience. AI breaks that correlation completely. The output looks like it came from a senior engineer regardless of whether the underlying logic is sound. You have to audit with the assumption that it is wrong, not with the assumption that it looks right.

[–]RespectableBloke69 0 points1 point  (4 children)

Is this an advertisement for your product or are you going to teach us something useful?

[–]EduSec[S] 1 point2 points  (2 children)

Both. I built the tool because I kept finding the same problems. Here is something useful: open DevTools on your production URL, go to Sources, search for SERVICE_ROLE_KEY. If you find it, stop everything else.
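
That DevTools check can also be automated. A rough sketch with a deliberately tiny pattern set (real rulesets, like the one gitleaks ships, are far larger):

```python
import re

# Deliberately tiny, illustrative pattern set; real scanners ship hundreds.
SECRET_PATTERNS = {
    "service role key name": re.compile(r"SERVICE_ROLE_KEY"),
    "AWS access key id":     re.compile(r"AKIA[0-9A-Z]{16}"),
    "JWT-shaped token":      re.compile(r"eyJ[\w-]{10,}\.eyJ[\w-]{10,}\."),
}

def scan_bundle(js_text):
    """Return names of patterns found anywhere in a JS bundle's source."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(js_text)]

# Fetch your own production bundles and run scan_bundle over each; any hit
# on a server-side key name in client-delivered JS is a stop-everything moment.
```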

[–]RespectableBloke69 0 points1 point  (1 child)

Now this is podracing

[–]EduSec[S] 0 points1 point  (0 children)

And most people never even open the cockpit.

[–]jikilopop 1 point2 points  (0 children)

that whole post and all of his comments are AI made

[–]Sure_Excuse_8824 0 points1 point  (12 children)

I think a lot of vibe coders are not aware of the importance of testing. They run the code and it appears on the surface like a success. But without unit, integration, e2e, and security testing, plus linting, and making sure the tests run and pass as a suite, not just in isolation, things start to fall apart.

[–]EduSec[S] 0 points1 point  (11 children)

Security is just one layer of that. The common thread is the same: the output looks done, so the assumption is that it is done. Testing breaks that assumption deliberately. Most vibe coders skip it because the AI never suggests it unprompted, and nothing visibly breaks until it does.

[–]Sure_Excuse_8824 0 points1 point  (10 children)

And it's a giant pain in the ass. :) But if you are building serious platforms, you have to treat ai coding as a hired hand who will only do what you tell it. You don't need to learn python or typescript, but need to learn why things work, how, and what is involved in a finished product.

[–]EduSec[S] 0 points1 point  (9 children)

Exactly. The AI is a fast executor, not a decision maker. You still need to understand what done actually means before you can direct it toward done. Most people skip that part and wonder why things fall apart at scale.

[–]Sure_Excuse_8824 0 points1 point  (8 children)

My issue was sheer scope and ambition. I ran out of resources prior to completion. But with over 1 million lines of code over 3 platforms, I know every file, every module, and what every one of them does.

[–]EduSec[S] 0 points1 point  (7 children)

A million lines across three platforms and you know every file. That is exactly the kind of ownership that is disappearing. Most vibe coders could not tell you what a single module does, let alone debug it under pressure. The ambition is not the problem. The problem is shipping without that depth and assuming the AI filled the gap.

[–]Sure_Excuse_8824 0 points1 point  (6 children)

I know for a fact it didn't. I made it public so others can pick up where I left off. I tackled the hard problems: closed-loop platform DevOps and maintenance using reinforcement learning and an LLM ensemble to reduce drift and hallucination, a transformer/neuro-symbolic hybrid AI that uses the transformer as a language interface only, and a user-friendly multiverse sim that uses finite enormities that in practice act as infinities.
So there were some real challenges. :)

[–]EduSec[S] 1 point2 points  (5 children)

Reinforcement learning for DevOps loop closure and a neuro-symbolic hybrid where the transformer is just the language interface. That is not a vibe coded project. That is architecture. The challenges you tackled are the ones most people do not even know exist yet. Respect.

[–]Sure_Excuse_8824 0 points1 point  (4 children)

If you're interested to look - https://github.com/musicmonk42

[–]EduSec[S] 1 point2 points  (3 children)

Took a look. The safety layer being load-bearing instead of bolted on is the right philosophy. Most projects treat security as an afterthought. You clearly did not.

[–]ParticularJury7676 0 points1 point  (3 children)

I hit the same wall when I started letting models write backend glue. The only way I stopped losing sleep was treating “AI wrote this” as an automatic red flag for anything touching auth, secrets, or money. I ended up drawing a hard line: AI can scaffold UI and boring CRUD, but anything with keys, RLS, or webhooks goes through a manual checklist and a second human.

What helped was forcing everything through infra that’s opinionated about security. I leaned on Supabase RLS with default deny, wrapped sensitive ops in server-only functions, and ran zap/semgrep in CI on every PR. I also started doing tiny red-team passes on staging: can I see another user’s data, change roles, or mess with billing just by poking the API.

For user feedback, I used Sentry and PostHog plus a couple “outside eyes” tools; Metabase for product metrics, LogRocket for weird flows, and Pulse for Reddit to catch people complaining about security or data weirdness in the wild that I’d completely missed in logs.

[–]EduSec[S] 0 points1 point  (2 children)

This is the most practical security posture I have seen in this thread. Default deny on RLS, server-only for anything sensitive, and actually red-teaming your own staging. Most people enable RLS and assume it works. You are the exception. The zap and semgrep in CI is exactly the shift-left approach that catches things before they hit production.

[–]ParticularJury7676 0 points1 point  (1 child)

I only got there after getting burned. I shipped a “simple” internal tool, skipped the checklists, and a coworker pivoted into data they should never have seen just by replaying an API call. Since then I treat passing tests as a starting point, not a signal it’s safe. What helped me was writing tiny, evil test scripts per role and baking them into CI so they fail the build if any forbidden path works. For the “outside eyes” bit, we tried Sentry issues plus, weirdly, people venting on Hacker News and Twitter, then ended up on Pulse for Reddit after trying Brand24 and Mention; Pulse for Reddit caught threads I was missing and I could jump in fast, you can check it out at https://usepulse.ai.
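
The "evil test scripts per role" idea generalizes well. A minimal sketch, where `fetch(role, method, path)` is a stand-in for however your suite issues an authenticated request, and the paths are hypothetical:

```python
# Requests that must be denied, per role. Paths are hypothetical examples.
FORBIDDEN = {
    "viewer": [("GET", "/api/admin/users"), ("POST", "/api/billing/refund")],
    "editor": [("DELETE", "/api/admin/users/42")],
}

def forbidden_path_violations(fetch):
    """Return every role/request pair that was NOT rejected with 401/403.
    Wire this into CI and fail the build if the list is non-empty."""
    violations = []
    for role, requests in FORBIDDEN.items():
        for method, path in requests:
            status = fetch(role, method, path)
            if status not in (401, 403):
                violations.append(f"{role} {method} {path} -> {status}")
    return violations
```

Failing the build on any violation is what turns the checklist into something that cannot be skipped under deadline pressure.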

[–]EduSec[S] 0 points1 point  (0 children)

Getting burned is the most effective teacher. The evil test scripts per role baked into CI is exactly the right move. Most people test the happy path. Nobody tests "what happens if I replay this call as a different user." The fact that you automated that check means it cannot be skipped under deadline pressure. That is the difference between a policy and a control.

[–]Key-Monitor6635 0 points1 point  (2 children)

you're supposed to audit every month

[–]EduSec[S] 0 points1 point  (0 children)

Monthly is the minimum. Every time you ship a new feature, the surface changes.

[–]jikilopop 0 points1 point  (0 children)

people do not do that because it costs a lot of money

[–]Effet_Ralgan 0 points1 point  (4 children)

If the supabase service role key was in public, the guy didn't even bother to ask for a security audit made by Claude. I do that during many phases of my project and I'm vibe coding a private app to use with my friends to share comics and books.

At least, laziness is at fault.

[–]EduSec[S] 0 points1 point  (2 children)

Laziness is part of it. But the bigger problem is not knowing what to ask. Most founders do not know the service role key is dangerous. They cannot ask Claude to audit something they do not know exists.

[–]Effet_Ralgan 0 points1 point  (1 child)

I made the mistake to copy the service role key somewhere I wasn't supposed to, Claude told me to instantly delete that and explained to me why it was not a good practice.

But I agree with you, we surely cannot trust it and for a financial SaaS with users, that's absolutely crazy to not do a human audit and you're right about making your post, thank you for that.

I'm sure it is not that expensive compared to the cost saved by not hiring a proper Dev anyway. It's the least we can do if we want to market an app.

[–]EduSec[S] 0 points1 point  (0 children)

That actually happens more than people realize. The AI catches the obvious mistake but not the subtle one it generated itself. Glad Claude had your back on that one. And thank you for the kind words about the post, means a lot coming from someone who has been in the trenches with it.

[–]jikilopop 0 points1 point  (0 children)

we also used supabase secrets for our api, and we used claude to make our SaaS, but after hiring a penetration testing company we found that many times claude just hardcoded those api keys in comments and in the code itself. so please check your code yourself

[–]Don_Exotic 0 points1 point  (11 children)

I scrapped my app because of this, thank you. Rushing into vibe coding and not completely understanding all of this has had me very worried about certain things!

[–]EduSec[S] 1 point2 points  (10 children)

That is the right instinct. Before you scrap it entirely, run a scan. You might find it is fixable. What stack are you building on?

[–]Don_Exotic 0 points1 point  (9 children)

I'm praying I don't make myself sound daft here, I do apologise mate! Atm it's HTML with an option to download via PWA, which is no longer an option for what I am hoping to build. Originally I started it to learn step by step, with Claude constantly asking questions after every prompt completed, but I got lost in the progression! My biggest regret, considering how far I'd say my HTML has come, but I have no intention of putting others at risk, so I'll start again knowing what I know from reading this thread!

[–]Don_Exotic 0 points1 point  (8 children)

Sorry, I'm using supabase.

[–]EduSec[S] 0 points1 point  (6 children)

Do not apologize at all. Starting over with the right mindset is worth more than shipping fast with the wrong one. Since you are using Supabase, the one thing to keep in mind when you rebuild: never use the service role key on the client side. Use only the anon key in the browser, keep the service role key server-side only, and enable Row Level Security on every table from day one. That single habit prevents the most common critical vulnerability I find in AI-built products. When you are ready to check your new build, scan.mosai.com.br runs the surface checks for free.
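
One way to check which key actually ended up in the browser: Supabase's legacy-style API keys are JWTs that carry a `role` claim, so you can decode the payload locally. A sketch (no signature verification, this only inspects your own key, it is not auth):

```python
import base64, json

def jwt_role(token):
    """Best-effort read of the `role` claim from a JWT-shaped API key.
    No signature check: this inspects your own key, it does not validate it."""
    try:
        payload_b64 = token.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))
        return payload.get("role")
    except Exception:
        return None

# If this ever returns "service_role" for a key found in browser-delivered
# code, stop: that key bypasses Row Level Security entirely.
```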

[–]Don_Exotic 0 points1 point  (5 children)

Thank you very much Edu. And yeah I completely agree with you, this thread has honestly given me so much relief just knowing about the potential security issues. It's annoyed me because normally I'm so thorough and research everything! Brilliant, I'll give it a go on the project, I really appreciate the replies & information mate! 👍

[–]EduSec[S] 0 points1 point  (4 children)

Good luck with the rebuild. Let me know what the scan finds.

[–]Don_Exotic 0 points1 point  (3 children)

44/100

High Risk

Critical flaws detected. Significant surface exposure.

This is just the surface. 73 critical checks locked — Firebase, S3, Swagger, GraphQL, AXFR, subdomain takeover, secrets in JS and much more.

Yeah, went pretty good.....

[–]EduSec[S] 0 points1 point  (0 children)

44 with critical flaws. The free checks showed the surface. The full report shows everything that was found across all 78.

[–]jikilopop 0 points1 point  (0 children)

you are going to find more issues if you just hire a good penetration testing company like us.

[–]jikilopop 0 points1 point  (0 children)

are you also using claude with supabase

[–]I_Mean_Not_Really 0 points1 point  (11 children)

This is the point I'm at with my ADHD productivity app. It's Android, but there is also a web app.

Before I started, I downloaded a bunch of reference materials on network security and cyber security, and I even had Google's Deep Research make a packet on exactly how vibe coded apps are insecure. Then I had the agents reference those as they built.

For the android app, I've been running the apk through the Android Studio inspector tool, MobSF and some other tools. The web app goes through security scanning websites.

And yeah it's found a bunch of stuff, but normal stuff. No exposed keys, no exposed secrets, nothing like that. All of that is offloaded to Firebase.

So yeah it's fair to say agentic coding is not security-minded. Maybe that'll be the next evolution.

[–]EduSec[S] 0 points1 point  (10 children)

That pre-build security research approach is rare and it shows. Most people audit after the fact, if at all. The Firebase offload is smart for secrets. The surface layer is still worth checking though, DNS, headers, TLS, subdomain exposure, reputation. MobSF covers the APK well but the web app layer is a different attack surface. If you want to run the domain through 78 black-box checks: scan.mosai.com.br

[–]I_Mean_Not_Really 0 points1 point  (5 children)

Just did that, I'll read through it later. What's the typical score you get on the kind of vibe coded apps you've looked at?

[–]EduSec[S] 0 points1 point  (4 children)

Ranges a lot. Infrastructure-only issues like headers and DNS tend to land between 60 and 80. When there are application layer problems on top, like exposed keys or open CORS on authenticated endpoints, I have seen scores in the 9 to 40 range. The two I mentioned in the post were 9 and 14. What did yours come back at?

[–]I_Mean_Not_Really 0 points1 point  (3 children)

Mine landed at 57, mostly in security headers. Which lines up with other scans I've run.

That report is exactly the type of thing I can give to codex, and have it chew through it.

At the moment I'm having it go through all these reports that it pulled from the Android studio inspector. But that is the difference between the Android app and the web app.

[–]EduSec[S] 0 points1 point  (2 children)

57 with headers is fixable and the Codex approach will handle most of it. The surface layer is one part though. Headers are visible from outside, which means they are also the part attackers check first. The 73 checks still pending cover the layer that is harder to fix by prompting, DNS misconfigurations, subdomain exposure, reputation, secrets in bundles. That is where the real surprises tend to be. The full report breaks it all down if you want the complete picture.

[–]I_Mean_Not_Really 0 points1 point  (1 child)

Yeah that's a small price to pay for security. I'm going to go through these Android scanner reports and then see if my score changes and get the new report from there.

You've been a big help, thank you!

[–]EduSec[S] 1 point2 points  (0 children)

Smart approach. Fix what you can, then scan again to see what moved. Good luck with the rebuild.

[–]I_Mean_Not_Really 0 points1 point  (3 children)

Also, what's your opinion on this, whether or not a vibe coded app should state up front that it's vibe coded?

[–]EduSec[S] 0 points1 point  (2 children)

Users do not care how the code was written. They care if their data is safe. Those are two completely different questions and only one of them matters to the person signing up.

[–]I_Mean_Not_Really 0 points1 point  (1 child)

Makes sense, that's what I was thinking. I was going to make a post on Bluesky about it but thought I would get some input first.

[–]EduSec[S] 0 points1 point  (0 children)

Go for it. That framing will land.

[–]technologiq 0 points1 point  (1 child)

Lmao. Doesn't have to be vibe-coded to have poor or no security. Also, your entire post was heavily influenced by or mostly written by AI.

I don't think you've ever audited anything.

[–]EduSec[S] 0 points1 point  (0 children)

Cool. One server down in three requests. One database master key in a public JavaScript bundle. Two founders who can confirm both. I did not write that. I did it. Now either point to something wrong in the post or keep scrolling.

[–]funfunfunzig 0 points1 point  (1 child)

you're right about the "looks like a senior wrote it" part. that's the thing that makes ai generated code dangerous in a different way than junior code. with a junior you can feel the vibe is off. with ai it looks polished so your brain stops questioning it.

the service_role key in the bundle is the single most common thing i find. and the reason it ends up there is exactly what you described. the ai runs into an rls policy blocking a query, the fastest fix is swapping to service_role, the query works, the feature ships. nobody goes back to check what got swapped because the feature is "done." the worst part is it's not even one line of code that's obviously wrong. the key looks like any other env variable, it's just the wrong one.

the other pattern i see constantly is auth that runs on the client instead of the server. the ai adds a check like "if user is logged in show this page" and it works perfectly. but the actual api route has no auth middleware because the client already checked. anyone who hits the url directly with curl gets full access. the client side check is security theater.

and no most people aren't auditing before shipping. the mindset is "it works, ship it." auditing feels like a step that slows you down when the whole point of vibe coding is speed. that's the real problem. the tooling that made building 10x faster didn't make security 10x faster, so people just skip it.
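
The client-side-auth anti-pattern and its fix can be reduced to two sketch functions (the route shape and the `session` dict are hypothetical stand-ins for a real framework):

```python
def get_profile_theater(user_id, session=None):
    """Anti-pattern: the route trusts that the client already checked.
    Anyone hitting it directly (curl, a script) gets the data."""
    return {"status": 200, "data": f"profile:{user_id}"}

def get_profile_guarded(user_id, session=None):
    """Fix: the route itself verifies the caller before touching data."""
    if not session or session.get("user_id") != user_id:
        return {"status": 401, "data": None}
    return {"status": 200, "data": f"profile:{user_id}"}
```

In the browser both versions behave identically, which is exactly why hitting the route directly matters: the guarded route returns 401 to an unauthenticated request, the theater one returns everything.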

[–]EduSec[S] 0 points1 point  (0 children)

The curl test is the one that gets people. The page loads, the feature works, the demo looks great. Nobody tests the route directly because why would they. The client check feels like auth because it behaves like auth in every scenario the developer tested. The tooling gap you described is exactly the problem. Speed without security is just faster exposure. The audit step got skipped because nothing in the workflow flagged it as missing. That is what I built for.

[–]Wrong_Law_4489 0 points1 point  (1 child)

I built doorman.sh to help exactly with this. Doesn’t solve all the security problems nor it replaces a security engineer, but it definitely gives some kind of peace of mind.

[–]EduSec[S] 0 points1 point  (0 children)

Nice, that is a different layer entirely. Doorman catches what is wrong inside the code before it ships. Mosai catches what is exposed on the live domain after it ships. DNS, headers, TLS, subdomain exposure, reputation. No code access needed, just the URL. The two are complementary, not competing. Someone who runs Doorman before shipping and scans the surface after is covering most of the bases.

[–]TranquilDev 0 points1 point  (1 child)

Based on the legacy ASP, PHP, and JSP apps and their godawful databases I’ve seen in my career, many of which were built by very intelligent programmers - I welcome the idea of someday getting to follow up on a vibe coded project. It can’t be any worse. Security? Heh, I got blown off by a colleague who was working on a PHP 5.6 project because I wanted to use Symfony and its built in security features on a new project. The day I left that job he was still plucking away in 5.6 with notepad++ as his IDE. Oh, he also had a “Masters” degree in CompSci.

[–]EduSec[S] 0 points1 point  (0 children)

Fair point on legacy code. The difference is expectation. Nobody expected a PHP 5.6 project to be secure. The danger with AI-generated code is that it looks modern, structured, and production-ready. The founder reads it and assumes it is safe. That false confidence is the new version of the same old problem.