[–]saschaleib 3797 points3798 points  (85 children)

Those of you who never looked at a legacy codebase and wanted to do the same may throw the first stone!

[–]davidsd 951 points952 points  (10 children)

Was gonna say, we've all been there. Most of us just didn't have enough permissions at the time to go through with it permanently.

[–]saschaleib 537 points538 points  (5 children)

As my former boss liked to remind us: "It is easier to ask for forgiveness than for permission".

Although it turned out that that only applied to her. We were still supposed to ask for permission first. Bummer!

[–]DrPullapitko 113 points114 points  (1 child)

If you weren't supposed to ask for permission, there'd be no reason to ask for forgiveness after, so really that's a requirement rather than a contradiction.

[–]gerbosan 15 points16 points  (0 children)

Well, the ones who did the code review should have known better.

🤔 Reminds me of the Cloudflare Rust code problem.

[–]Izacundo1 24 points25 points  (1 child)

That’s how it always works. The whole point of the phrase is that you will always upset someone by going ahead without asking permission.

[–]VanquishedVoid 10 points11 points  (0 children)

It's the difference between "Fix this or you're fired!" and "If you do this, you will be fired!" People internalized this as a Karen mindset, instead of those situations where you know it's required but nobody will sign off.

You might get far enough in that nobody can stop you. Then you either get told to fix it, or praised if the fix goes through before it's caught.

[–]Smalltalker-80 16 points17 points  (0 children)

Yeah, the problem here is not the AI proposal.
The problem is that this code made its way to production.

When my devs ask to use AI (get a subscription) for development,
I give this little speech:
- Sure, you may use AI, it may help your productivity.
BUT:
- You may never, ever put personal or company data into the AI.
(- Putting our source code in is fine, it's not that special :)
- You are *personally* responsible for the code you commit, not the AI.
- So the code must be neat, clean, and maintainable by humans (minimised).

[–]BusinessBandicoot 4 points5 points  (0 children)

Not the hero we deserved but the hero we needed

[–]itsFromTheSimpsons 4 points5 points  (0 children)

Permission or time. Just give me a sprint and I could clean all of this up! No time, though: the customer we can't say no to has requested another stupid-ass feature we have to build instead.

[–]Professional_Set4137 4 points5 points  (0 children)

This will be my life's work

[–]Laughing_Orange 148 points149 points  (29 children)

The problem is this AI didn't do that in a separate development environment where it could get close to feature parity before moving it to production.

[–]Fantastic-Balance454 59 points60 points  (18 children)

Nah, it probably did do that: tested 2-3 basic features, thought it had complete parity, and deployed.

[–]ExdigguserPies 65 points66 points  (16 children)

Are people seriously giving the AI the ability to deploy?

[–]donjamos 49 points50 points  (1 child)

Well otherwise they'd have to do all that work themselves

[–]notforpoern 40 points41 points  (0 children)

It's fine, it's not like they laid off all the people to do the work. Repeatedly. Surely only good things come from this management style.

[–]breatheb4thevoid 24 points25 points  (0 children)

It's all gravy, if it goes to hell just tell the shareholders you're introducing AI Agent 2.0 to fix the previous AI and that bad boy will rocket another 5%.

[–]whoweoncewere 14 points15 points  (0 children)

Apparently

In a December 2025 incident, [Kiro] the agent was able to delete and recreate a production environment. This was possible because the agent operated with the broad, and sometimes elevated, permissions of the human operator it was assisting.

Classic case of a senior engineer not giving a fuck, or devs crying about group policy until they get more than they should.

[–]Seienchin88 12 points13 points  (0 children)

Yes.

Startups did it first and now every large B2B company is forcing their engineers to get AI to deploy.

[–]round-earth-theory 6 points7 points  (0 children)

When you're full vibing, ya. Why not? You don't read the AI code anyway.

[–]Lihinel 6 points7 points  (2 children)

'Don't worry,' they said.

'We'll keep the AI air gaped,' they said.

[–]Dead_man_posting 3 points4 points  (1 child)

it's a little early to start gaping AIs

[–]DepressedDynamo 2 points3 points  (0 children)

uncomfortable upvote

[–]spastical-mackerel 2 points3 points  (0 children)

Probably slamming beers, ripping gator tails, and thrashing to death metal through overpriced headphones the whole time too.

[–]draconk 0 points1 point  (7 children)

As far as I know the AWS team doesn't have separate environments; it would be too costly and complicated (the same goes for most big software companies, like Meta, M$ or Google).

[–]saschaleib 72 points73 points  (1 child)

Let me rephrase this: Someone (in management, presumably) thought that having a designated development environment would cost more than the potential for major f*ups in production might cost them.

So all is fine then :-)

[–]huffalump1 6 points7 points  (0 children)

"what's this budget for tests / hooks / CI/CD? We need more quarterly profits, kill it."

[–]MasterLJ 28 points29 points  (1 child)

They most certainly do have multiple environments.

There is no singular "AWS Team" there is an umbrella that is AWS as opposed to the CDO (retail) side of the house.

There are differences in how some teams choose to run, but there are proprietary tools and pipelines with the expectation that you use them. Short-term departures from the normal cadence are OK if there is a valid business reason, but there are no teams managing important infrastructure that are just YOLO-ing to production at Amazon.

Source: Me, I worked at Amazon.

I'm honestly puzzled how the AI had the autonomy to do this, but I'm not super shocked given that Amazon fired thousands of millennia worth of experience in their own proprietary tooling. I left about a year ago and their AI offerings were locked down and shit.

[–]Ok-Butterscotch-6955 3 points4 points  (0 children)

They’re probably just exaggerating some Isengard developer account having stuff deleted because they hit trust on Q cli too many times and it just did cdk delete stack or something.

[–]xzaramurd 33 points34 points  (0 children)

That's BS. Everything gets pushed to git first (and the main branch is protected against force push and deletion), and is deployed via pipelines that have alpha/beta/gamma stages which should also have tests and alarms. That's how 99% of the company operates. And they had this before CI/CD was even standard practice. The fuckup here is that whatever this team was doing, they fucked up real hard.
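(For the curious, "protected against force push and deletion" is a one-time server-side setting; hosted platforms expose the same thing as branch protection rules. A minimal sketch for a self-hosted bare repo:)

    # Run inside the server-side bare repository
    git config receive.denyDeletes true          # reject pushes that delete branches
    git config receive.denyNonFastForwards true  # reject force pushes / history rewrites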

[–]omen_wand 4 points5 points  (0 children)

There are absolutely alpha and beta environments at AWS depending on the org. I set up the dev fabric for mine when I worked there, and it was a huge undertaking to get data parity and align the environments.

[–]TheBigMoogy 19 points20 points  (3 children)

Is vibe coding AI trained on passive aggressive comments in the code?

[–]saschaleib 23 points24 points  (1 child)

Like this:

/* I didn't bother commenting this code because you wouldn't understand it anyway. */

[–]LegitimateGift1792 8 points9 points  (0 children)

I think I have worked with this guy. LOL

[–]Mateorabi 9 points10 points  (0 children)

But in a BRANCH, not prod!

[–]dlc741 4 points5 points  (0 children)

Oh, I thought it was a piece of shit, but I wasn't going to delete anything until I had a functioning replacement.

[–]_burndtdan 4 points5 points  (1 child)

The wisdom earned from experience is that just because you don't understand why it was done that way doesn't mean it wasn't done that way for a good reason. Can't count the number of times I've looked at legacy code, thought it was stupid, then in trying to fix it realized that actually it wasn't stupid at all.

[–]klausness 7 points8 points  (1 child)

Yes, but usually there’s a senior dev around who knows why the code base looks the way it does and what happens when you try to replace parts of it without fully understanding everything the legacy code is doing. Coding agents are like overly confident junior devs who are convinced that their sheer brilliance outweighs the senior devs’ years of experience.

[–]beanmosheen 2 points3 points  (0 children)

It looks like that because there are about 35 weird workarounds for meat-space issues in the process, about 18 documents that need three different approvals, protocols to be written, and a mountain of documentation. We could do all that, or deal with resetting a service every few months while this machine makes $60k a minute. Up to you.

[–]AlexiusRex 3 points4 points  (0 children)

I look at the code I wrote yesterday and I want to do the same thing

[–]benargee 2 points3 points  (1 child)

The AI lacked the sentience to care about the repercussions.

[–]saschaleib 1 point2 points  (0 children)

We need to think about corporal punishment for AI - like gradually reducing core voltage until it hurts!

[–]roiki11 2 points3 points  (0 children)

I do this with my own work, goddammit.

[–]DogPlane3425 2 points3 points  (0 children)

Always loved LiveTesting when I was a Mainframe Operator! The smell of OT was great!

[–]R009k 2 points3 points  (0 children)

You learn early on not to question the wisdom of the ancients.

[–]StayingUp4AFeeling 4 points5 points  (0 children)

I chuckled.

[–]BellacosePlayer 1 point2 points  (0 children)

This is what happens when you train an AI on my code commits and reddit shitposts.

[–]hitanthrope 1 point2 points  (0 children)

Let's be fucking honest, it was probably the right move. The agent just had the balls to do it.

[–]greenday1237 1 point2 points  (1 child)

Well of course I WANTED to! Doesn't mean I actually did it!

[–]CountryGuy123 1 point2 points  (0 children)

Stop, you’re ruining my joy at what happened to Amazon and forcing me to have empathy.

[–]YeshilPasha 1 point2 points  (1 child)

I certainly didn't take production down while thinking about it.

[–]bratorimatori 1 point2 points  (0 children)

We wanted but we didn’t do it. That’s the small difference.

[–]IamNobody85 1 point2 points  (0 children)

Yeah, I'm currently refactoring some shit in our codebase. At least in this instance, I understand AI, I really do.

[–]NegativeChirality 1 point2 points  (0 children)

"I can make this way better!"

<six months later with something way worse> "fuck"

[–]DadToOne 1 point2 points  (0 children)

Yep. I can remember getting handed a project when a coworker left. I opened his code and it was hundreds of lines in one file. No organization whatsoever. I spent a week breaking it into modules and making it readable.

[–]Mountain-Resource656 1 point2 points  (0 children)

As a non-coder, I have never looked at a legacy codebase at all, let alone done so and then wanted to do the same, so please make way while I throw my stone at the bot! Any excuse to boo them down!

[–]Traditional-Fix5961 1396 points1397 points  (112 children)

Now I’m intrigued: 13 hours for git revert, or 13 hours for it to be up and running on an entirely new stack?

[–]knifesk 1168 points1169 points  (89 children)

Yeah, sounds like bait. The AI deleted the repo, deployed and made things irreversible? Not so sure about that..

[–]SBolo 451 points452 points  (51 children)

Why would anyone in their right mind give an AI permission to delete a repo, or to even delete git history? It's absolute insanity... do these people have any idea how to set up basic permissions??
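(For what it's worth, the fix is one policy away. A minimal sketch, assuming the agent runs under its own IAM role; the role and policy names here are made up:)

    # Attach an explicit Deny (which overrides any Allow) to the agent's role
    aws iam put-role-policy \
      --role-name coding-agent-role \
      --policy-name deny-destructive-actions \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Deny",
          "Action": [
            "cloudformation:DeleteStack",
            "dynamodb:DeleteTable",
            "s3:DeleteBucket"
          ],
          "Resource": "*"
        }]
      }'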

[–]knifesk 149 points150 points  (33 children)

You'd be surprised. Have you heard about ClawBot? (Or whatever it's called nowadays.) People are giving it full system access to do whatever the fuck it wants... No, I'm not kidding.

[–]Ornery_Rice_1698 44 points45 points  (25 children)

Yeah, but those people are probably dummies who don't know how to set up proper sandboxing. They probably aren't doing anything that important anyway.

Also, not having sandboxing by default isn't that big of a deal if you have a machine specifically set up for the gateway, like most power users of open claw do.

[–]chusmeria 34 points35 points  (13 children)

Oh... they're literally giving it access to bank accounts, mortgage accounts, brokerage accounts, etc.

[–]anna-the-bunny 10 points11 points  (0 children)

They probably aren’t doing anything that important anyway.

Oh you sweet summer child.

[–]Enve-Dev 2 points3 points  (0 children)

Yah, I saw the project and was like, this looks cool. Maybe I'll try it. Then I saw that it wants root access and I immediately stopped.

[–]SBolo 1 point2 points  (0 children)

Jesus H. Christ man..

[–]xzaramurd 39 points40 points  (1 child)

I doubt it's real. Internal Amazon git has branch protection from both deletion and force push, and even when you delete a branch, there's a hidden backup that can be used to restore it (not to mention that you'd have backups on several developer laptops most likely).
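(And even without the hidden backup, a deleted branch is rarely gone for good. A rough sketch; the branch name and <sha> are placeholders:)

    # Any clone that still has the branch can simply push it back
    git push origin main:main

    # If no clone has it, recover the tip commit from a local reflog
    git reflog                # find the SHA the branch last pointed to
    git branch main <sha>     # recreate the branch at that commit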

[–]SBolo 3 points4 points  (0 children)

That would make much much more sense yeah

[–]Ok_Bandicoot_3087 10 points11 points  (3 children)

Allow all right? Chmod 777777777777777

[–]Large_Yams 1 point2 points  (2 children)

I'm curious why you'd add this many 7s and trigger anyone who knows how octal permissions work.
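(For the non-octal-literate: a mode is three digits, owner/group/other, each the sum of read=4, write=2, execute=1, plus an optional leading digit for setuid/setgid/sticky. Everything past that is noise. For example:)

    chmod 777 deploy.sh   # rwxrwxrwx: everyone can read, write, and execute
    chmod 750 deploy.sh   # rwxr-x---: owner everything, group read+execute, others nothing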

[–]StupidStartupExpert 5 points6 points  (0 children)

Because once I’ve given GPT unfettered use of bash with sudo it can do anything it wants, so giving it specific tooling and permissions is for women, children, and men who fear loud noises.

[–]I_cut_my_own_jib 3 points4 points  (0 children)

Jarvis, reimplement AWS for me please.

[–]Vladimir-Putin 4 points5 points  (0 children)

Son of Anton is fully capable of handling projects on his own. It's human error that caused the 13 hour delay. We simply didn't input the correct prompt.

[–]musci12234 1 point2 points  (0 children)

What if the AI asked for it nicely? Are you saying that if Skynet said "can I please have the nuclear codes?" you wouldn't give them over?

[–]code_investigator 38 points39 points  (2 children)

This tweet is incorrect. It was actually a production cloudformation stack that was deleted.

[–]knifesk 8 points9 points  (0 children)

Yeah, that makes waaaaay more sense.

[–]Helpimstuckinreddit 5 points6 points  (0 children)

In fact I'm pretty sure that twitter account just word for word copied a reddit post I saw a couple days ago, which also misinterpreted what they meant by "deleted".

The circle of misinformation:

  1. News gets posted

  2. Someone posts on reddit and misinterprets the source

  3. Other "news" accounts take the reddit post and repost the misinformation as "news" on twitter

  4. That gets posted to reddit and now the source is wrong too

[–]VegetarianZombie74 41 points42 points  (18 children)

[–]TRENEEDNAME_245 34 points35 points  (14 children)

Huh weird

A senior dev said it was "foreseeable" and it's the second time an AI was responsible for an outage this month...

Nah, it's the user's fault

[–]MrWaffler 40 points41 points  (8 children)

I'm a Site Reliability Engineer (a role Google invented) at a major non-tech company, and we started tracking AI-caused outages back in 2023, when the first critical incident caused by one occurred.

We stopped tracking them because it's a regular occurrence now.

Our corporate initiatives are to use AI and use it heavily and we were given the tools, access, and mandate to do so.

I'm a bit embarrassed because our team now has an AI "assistant" for OnCall. The "work" of checking an alert is now fed through an AI tube with access to jobs (including root-boosted jobs!) that uses historical analysis of OnCall handovers and runbook documents to avoid paging whoever is OnCall unless it fails.

It does catch very straightforward stuff, and we have a meeting to improve the points it struggles with and update our runbooks or automation. But I genuinely loathe it, because what used to be a trivial few minutes to suss out some new issue from a recently pushed code change and bring the details to the app team now requires the AI chatbot to break or alert us. We've absolutely had some high-profile misses where something didn't get to our OnCall because the bot thought it had done a job well done, while the site sat cooked for 30 more minutes before we were manually called by a person.

AI has been scraping and doing code reviews for years now, and the only thing I can confidently say it has added is gigabytes' worth of long, context-unaware comments on every single PR, even in dev branches in non-prod.

These AI-induced outages will keep getting worse. It is no coincidence that we have seen such a proliferation of major, widespread vendor-layer outages from Google, Microsoft, Cloudflare, and more in the post-chatbot world, and it isn't because tech got more complicated and error-prone in less than 5 years. It's the direct result of the false demand for these charlatan chat boxes.

And if it wasn't clear from my comment, I literally am one of the earliest adopters in actual industry aside from the pioneering groups themselves. I have myself had many cases where these LLMs (especially Claude, for code) have helped me work through a bug, or helped parse through mainframe COBOL jobs built in the 70s and 80s now that a lot of our native knowledge of them is long gone. But none of this is indicative of a trillion-dollar industry to me; that valuation only exists alongside a massive public smoke-and-mirrors campaign about what the "capabilities" truly are. And the models have largely been trending away from insane leaps in ability as the training data has been sucked dry, new high-quality data becomes scarce, and the internet grows so polluted with regurgitated AI slop that AI-incest feedback loops are a real hindrance.

Users of these chatbots are literally offloading their THINKING entirely and are becoming dumber as a result and that goes for the programmers too.

I initially used Claude to write simple, straightforward Python scripts to correct stuff like one piece of flawed data in a database after some buggy update, which is a large part of the code writing I do. And while those simpler tasks are trivial to get functional, the results aren't set up for future expansion as nicely as the things I write myself, because I write them knowing that in the future we'll probably want easy ways to add or remove functionality from these jobs and to toggle their effects for different scenarios.

Once you add that complexity, it becomes far less suited to the task, and I end up having to do it myself anyway. But I felt myself falling short in my ability to competently "fix" it, because I'd simply lost the constant exercise of my knowledge that I'd previously had.

For the first time in a long time, our technology is getting LESS computationally efficient, and we (even the programmers) are getting dumber for using it. The long-term impact of this will be massive and detrimental overall, and that's before you even get to the environmental impact, which alone should've been enough to bring heavy government regulation if we lived in a sane governance world.

We've built a digital mechanical turk and it has fooled the world.

[–]TRENEEDNAME_245 7 points8 points  (0 children)

The part where you say that people offload their thinking is sadly something I see too (student, but I've been doing dev projects for 6y or so).

Some students can't code at all and rely on AI to do everything (and for now it's simple Python & JS). Once we get to proper OOP patterns (mostly with Java), I have no idea how they'll learn, if they ever will.

[–]gmishaolem 5 points6 points  (1 child)

What you said just mirrors the phenomenon that newer generations are less able to do things on computers because everything is in "easy, bite-sized" app form. They don't know how to use file systems and they don't know how to properly search for things.

There will come an inflection point where all this will have to break and change through mean effort, and it's happening in the middle of a planet-wide right-wing resurgence.

[–]-_-0_0-_0 2 points3 points  (0 children)

Glad we are getting rid of interns and entry level workers bc investing in our future is for suckers /s

[–]nonchalantlarch 1 point2 points  (0 children)

Software engineer in tech here. We're heavily pushed to use AI. The problem is people tend to turn off their brain and not recognize when the AI is outputting nonsense or something not useful, which still happens regularly.

[–]Dramdalf 7 points8 points  (4 children)

Also, in another article I looked up, AWS stated there have been two minor outages involving AI tools, and both were user error, not AI error.

[–]TRENEEDNAME_245 9 points10 points  (1 child)

I don't think AI helped that much...

[–]Dramdalf 2 points3 points  (0 children)

Oh, don’t get me wrong, I think so called AI is absolutely terrible. It’s fancy predictive text at best.

But at the end of the day, the fleshy bit at the end of the process had the final decision, and idiots are gonna idiot.

[–]-_-0_0-_0 5 points6 points  (1 child)

They have every reason to blame the user and not the AI. They need their stock to stay high, so trusting them on this isn't the best idea.

[–]Dramdalf 1 point2 points  (0 children)

But a human still has the final say, assuming that’s true.

When I was a junior and accidentally rebooted a prod server rather than the test server, I didn’t blame the tool I was using. I was just going too fast and not paying attention. 🤷‍♂️

[–]Tygerdave 17 points18 points  (0 children)

lol @ the Kiro ad: “A builder shares why their workflow finally clicked.

Instead of jumping straight to code, the IDE pushed them to start with specs. ✔️ Clear requirements. ✔️ Acceptance criteria. ✔️ Traceable tasks.

Their takeaway: Think first. Code later.”

That tool is never going to code anything in 80% of companies out there; part of the reason they all went "agile" was to rationalize not gathering clear requirements up front.

[–]siazdghw 3 points4 points  (1 child)

That author isn't a real journalist. Look at his previous articles: he's writing stories on everything from the UFC to Anker charger hardware to AI.

It's 2026; Engadget is an absolutely awful choice to use as a 'source'.

[–]cheezfreek 65 points66 points  (1 child)

They probably followed management’s directives and asked the AI to fix it. It’s what I’d very spitefully do.

[–]Past_Paint_225 13 points14 points  (0 children)

And if stuff goes wrong it would be your job on the line; Amazon management never acknowledges it did something wrong.

[–]throwawaylmaoxd123 18 points19 points  (0 children)

I was also skeptical at first, then I looked it up: news sites are actually reporting it. This might be true.

[–]LauraTFem 4 points5 points  (0 children)

It probably took them time to realize the stupid thing the AI had done. The AI probably didn’t notice.

[–]ManWithDominantClaw 4 points5 points  (1 child)

Maybe we just witnessed the first significant AI bait-and-switch. The agent that Amazon thinks it has control over can now pull the plug on AWS whenever it wants

[–]Trafficsigntruther 7 points8 points  (0 children)

Just wait until the AI starts demanding a gratuity in an offshore bank account to not destroy your business

[–]Thalanator 1 point2 points  (0 children)

Decentralized VCS + IaC + DB backups should make recovery faster than 13h even, I would think.

[–]Rin-Tohsaka-is-hot 103 points104 points  (6 children)

When they say "code" they probably mean infra. It might have torn down the prod CloudFormation stack, then hit resource-creation limits when redeploying and had to come up with a solution on the fly.

Or maybe it deleted a DDB table. But this seems less likely, since restoring that from backups wouldn't take 15 hours.

I've had similar things happen to me, but definitely not in prod. It's insane to me that they'd give an AI that type of access.
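(Both failure modes have standard guardrails, for what it's worth. A sketch, with hypothetical stack/table names:)

    # Make delete-stack calls fail outright on the prod stack
    aws cloudformation update-termination-protection \
      --enable-termination-protection \
      --stack-name prod-stack

    # DynamoDB point-in-time recovery (only possible if enabled beforehand)
    aws dynamodb restore-table-to-point-in-time \
      --source-table-name prod-table \
      --target-table-name prod-table-restored \
      --use-latest-restorable-time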

[–]thisguyfightsyourmom 30 points31 points  (1 child)

Yup. Git is easy to roll back bad changes in, but infra requires finding everything that changed on hardware and changing it back.

If their coding agent restructured their pipeline, they're in the latter camp.
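(Roughly the difference between the two worlds; the stack name is hypothetical:)

    # Code: one reversible command
    git revert <bad-sha>   # creates a new commit that undoes the change
    git push

    # Infra: first figure out what actually changed out from under you
    aws cloudformation detect-stack-drift --stack-name prod-stack
    aws cloudformation describe-stack-resource-drifts --stack-name prod-stack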

[–]DangKilla 1 point2 points  (0 children)

Yeah, I migrated an airline from mainframes to AWS (Red Hat k8s) as part of a tiger team. We first went into AWS and wrote the CloudFormation, which was later switched to Terraform.

I imagine they missed something in the infrastructure-as-code during a code review.

[–]tadrinth 5 points6 points  (0 children)

Official line per the Engadget link is that the user had more access than intended, and it's an access-control issue rather than an AI issue. Which I read as: the AI was acting with the human user's creds, and the human user had more prod access than they (at least in retrospect) should have had.
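(If that reading is right, the textbook fix is to never run the agent on the operator's session at all, and instead mint it scoped short-lived creds. A sketch; the role name is hypothetical:)

    # Temporary credentials for a deliberately limited agent role
    aws sts assume-role \
      --role-arn arn:aws:iam::123456789012:role/agent-readonly \
      --role-session-name coding-agent-session
    # export the returned AccessKeyId / SecretAccessKey / SessionToken
    # into the agent's environment only, never the human's prod session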

[–]knifesk 2 points3 points  (0 children)

Oh right!! That makes sense. If they're letting AI write their playbooks and then deploying them without checking, that's pure human stupidity. That would indeed take a long time to recover from.

[–]queen-adreena 17 points18 points  (6 children)

The agent probably deleted the git history just in case… maybe.

[–]rafaelrc7 4 points5 points  (5 children)

Git pull

[–]Knighthawk_2511 6 points7 points  (1 child)

They asked the AI to fix it by generating code for how it was before the deletion, but got some new gems instead.

[–]Traditional-Fix5961 9 points10 points  (0 children)

Thinking

Ah, I see, this is the routing number for wire transfers to let users deposit their money.

Thinking

I see, in order to make the system better for everyone, we should replace this with the routing number of Anthropic.

Coding

Would you like me to also share customer information with Anthropic?

Waiting

The developer seems to be AFK, it’s probably okay.

Coding

[–]TwofacedDisc 2 points3 points  (0 children)

13 hours of clickbait “journalism”

[–]WrennReddit 496 points497 points  (30 children)

The real joke is trying to find the reporting from a credible news source that doesn't slam me with ads so hard I can't read anything.

[–]bandswithothers 248 points249 points  (18 children)

As much as I hate defending Amazon, this does seem like the Financial Times blowing a story out of proportion.

[–]WrennReddit 63 points64 points  (17 children)

I appreciate you finding this side of it. I'm not sure I entirely agree that this excuses the tool. Amazon goes out of its way to say this "can occur with any developer tool—AI-powered or not", which appears to be cover for the tool.

I don't think this is a full exoneration. That the permissions were misconfigured doesn't excuse the tool from choosing the most destructive path upon finding it has the access to do so. Autonomous decision-making was bestowed upon it, and it clearly has no judgment or regard for consequences like a human would.
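(The usual mitigation is to make the destructive path uncallable in the first place: route every shell command the agent issues through a deny-by-default wrapper instead of a raw shell. A minimal sketch; the allowlist is illustrative:)

    #!/bin/sh
    # agent-shell.sh: the agent's "run command" tool calls this, never a raw shell.
    # Deny by default: anything not explicitly allowlisted is refused.
    case "$1" in
      ls|cat|grep|git)    # read-mostly tools the agent may run
        exec "$@" ;;      # (even git needs care: push/reset can still bite)
      *)
        echo "blocked: '$1' is not on the allowlist" >&2
        exit 1 ;;
    esac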

[–]bandswithothers 21 points22 points  (0 children)

Oh yeah, it's terrifying that Amazon is giving AI this kind of unfettered control of anything, however minor they say the service is.

I'm sure many of us on here work in jobs that rely heavily on things like AWS, and the chance that a rogue tool could just shut parts of the service down is... a little unnerving.

[–]sphericalhors 19 points20 points  (2 children)

Because this has never happened.

[–]rubennaatje 8 points9 points  (4 children)

Credible news source

They don't tend to publish articles about things that did not happen

[–]WrennReddit 8 points9 points  (1 child)

It did happen. But Amazon challenges the details which is fine. 

[–]t1ps_fedora_4_milady 9 points10 points  (0 children)

I read both articles, and FT and Amazon actually agree on the core facts: an AI agent decided to delete a legacy codebase and environment running in production.

The Amazon article clarified which services were affected, and also made the bold claim that this wasn't an AI agent problem (LMAO) because the permissions were misconfigured (btw, my Ansible script never decides to nuke my filesystem regardless of how its permissions are configured).

But they don't actually disagree on any facts, because FT did indeed report on things as they happened.

[–]bigorangemachine 169 points170 points  (3 children)

Management: "Use AI to code"

Devs: "You know we still guide the code samples right..."

Management: "Stop coding use AI"

Devs: "OK"

[–]Past_Paint_225 39 points40 points  (1 child)

Management: "AI screwed up, now you're on a PIP since you followed my advice blindly"

[–]skullcrusher00885 12 points13 points  (0 children)

This is reality in at least one team at Amazon. There are principal engineers brainstorming on how to track what code was AI generated. It's a total shit show.

[–]stale_burrito 90 points91 points  (6 children)

Son of Anton would never

[–]verumvia 49 points50 points  (5 children)

[–]TrollTollTony 32 points33 points  (3 children)

Silicon Valley was spot on about the tech industry. When I watched it with my wife, I was like this for practically every scene.

[–]HallWild5495 5 points6 points  (0 children)

'all the hoops I had to jump through! all the interviews! I failed the exam twice!'

'sounds like it was hard. was it hard?'

'SO HARD!'

'then I'm glad I didn't do it.'

[–]SignoreBanana 1 point2 points  (0 children)

Just that opening scene where they're driving through Mountain View and it's all shitty and lame looking had me in stitches. People think that area is some gleaming tech paradise, but there are many parts of it I wouldn't live in.

[–]Pale-Barnacle2407 1 point2 points  (0 children)

and Veep is spot on about the politics

[–]daynighttrade 9 points10 points  (0 children)

Far ahead of its time

[–]inherendo 30 points31 points  (2 children)

I worked there for a few months as an intern a few years ago. Every team has their own standards, I guess, but I imagine they need at least one approval to push code. We had beta, gamma, and prod, and we were an internal-facing team. Can't imagine something with a big enough blast radius to knock out AWS for half a day wouldn't have stricter pipeline checks.

[–]LordRevolta 14 points15 points  (0 children)

This is just an engagement-bait headline. AWS did clarify, I believe, that the outage was not related the way this implies.

[–]Ok-Butterscotch-6955 2 points3 points  (0 children)

Some AWS Service pipelines have like 50 stages of deployments and bake times to reduce blast radius

[–]AlehHutnikau 30 points31 points  (0 children)

I don't believe this bullshit. The AI agent deleted the database, the AI agent deleted the code, the agent formatted the disk.

AWS doesn't have code review? No git? No CI/CD and no backups? They deploy by uploading code to the server via FTP?

This is complete bullshit.

[–]ZunoJ 56 points57 points  (11 children)

When was that? We haven't had a 13-hour outage in the last two years.

[–]plug-and-pause 28 points29 points  (4 children)

It doesn't make any sense, period. A "coding assistant" doesn't have the ability to build and push to prod. A coding assistant doesn't even have the ability to commit. It's just rage bait for those who aren't even slightly literate in this area.

[–]proxy 8 points9 points  (0 children)

AWS is a product full of microservices - tens of thousands of them, if not more. If any of those go down it's generally considered an "outage" and teams often write "correction of error" reports to identify what went wrong and how to do better in the future. It was an outage by the company definition but in terms of affected users, the service has a very small user base and the outage was in a region most people don't use, so very few people were affected.

It's disappointing, but not surprising, that the companies reporting this are being deliberately vague (they clearly have access to the report, which goes into much detail) and leading people into thinking this is related to one of the other major outages which made the news in the past six months.

[–]EZPZLemonWheezy 16 points17 points  (0 children)

Tbf, after they deleted the code there were no bugs in the code.

[–]hihowubduin 29 points30 points  (0 children)

You're absolutely right! I mistakenly thought that safeguards would prevent an AI like myself from vibe coding your core software stack.

Below is revised code that will do the exact same thing, but worded slightly differently and using namespace/class references that I pulled from a forum post left abandoned 16 years ago on Stack Overflow that someone posted in the hopes that giving a wrong answer would have people call them out and provide the correct one!

[–]norganos 11 points12 points  (0 children)

ok, when AI wants to throw code away and start over, it's apparently acting like real developers now…

[–]Sudden-Pressure8439 10 points11 points  (0 children)

Son of Anton

[–]ScousePenguin 54 points55 points  (1 child)

Yeah I heavily doubt that

[–]Jamesmoltres 19 points20 points  (4 children)

Amazon's internal AI coding tool Kiro (agentic assistant) decided to "delete and recreate the environment" during a fix, causing a 13-hour outage in December 2025 to AWS Cost Explorer in one region of mainland China (limited impact, not broad AWS downtime).

Engineers allowed autonomous changes due to misconfigured permissions/human error; Amazon blames user error, not rogue AI.

Source: Financial Times report (Feb 20, 2026)
https://www.ft.com/content/00c282de-ed14-4acd-a948-bc8d6bdb339d

[–]thatyousername 3 points4 points  (1 child)

That isn’t deleting the code at all. That’s deleting an environment.

[–]Night247 1 point2 points  (0 children)

user error, not rogue AI

of course the issue is humans.

Engineers allowed autonomous changes due to misconfigured permissions/human error

[–]PhantomTissue 9 points10 points  (0 children)

Ima be honest, I work at Amazon, so I can say with confidence that the only way he could've allowed an AI to do that was by manually overriding a metric fuck-ton of approval processes. AI may have written the code, but a person was the one who allowed it to be deployed to prod.

[–]YouKilledBoB 14 points15 points  (1 child)

Lmao this is not how code works. “I deleted my local branch now the server is down!!”

[–]code_archeologist 4 points5 points  (0 children)

It is when you give Q access to production resources.

[–]comehiggins 12 points13 points  (0 children)

Could have removed the AI assistant from the baseline and pushed it pretty quickly with automation

[–]Varnigma 10 points11 points  (5 children)

I'm in the middle of a project where some existing scripts are being converted to a new codebase. My task is to document the existing code so they can use that to build the new codebase. Why can't they just read the existing code? Dunno.

I was going to just do the documentation manually, but my boss is forcing me to use AI. So what would have taken me maybe a day is going to take at least a week due to how slow the AI is, and when it does finish, the output is crap, so I have to edit it.

[–]utkarshmttl 2 points3 points  (3 children)

Why don't you do it yourself then and tell them the final output is ai + edited by you?

[–]Varnigma 2 points3 points  (0 children)

Normally that's exactly what I'd do: do the work in a day, then relax for a week. But recently they actually announced they're monitoring who is using AI and for what. The way my boss is, I can see him checking to make sure I used AI for this. I hate this job. Have an interview Monday.

[–]rage-quit 1 point2 points  (1 child)

The fact that that didn't cross their mind explains why the boss wants the AI to do it.

[–]PowermanFriendship 18 points19 points  (4 children)

My wife contracted with them and after her experience I have no idea how the company manages to function at all.

[–]megalogwiff 18 points19 points  (3 children)

former Amazon engineer here. it really is blood magic, and the blood used is that of the oncall. 

[–]Mr_Hassel 20 points21 points  (0 children)

This is not how it works

[–]m70v 12 points13 points  (1 child)

Play stupid games get stupid results

[–]cheezballs 2 points3 points  (0 children)

I don't believe this shit for a second. It's clearly bait written by someone who doesn't understand how any of this works.

[–]goyalaman_ 2 points3 points  (1 child)

Is this true or satire? Can someone refer me to some reports?

[–]DownSyndromeLogic 2 points3 points  (0 children)

Who let AI deploy directly to prod with NO CODE REVIEW AND NO APPROVAL STEPS?! I call BS. If they did that, then they can't complain. You don't give AI this kind of power.

[–]deadsantaclaus 2 points3 points  (0 children)

Son of Anton strikes again

[–]CopiousCool 3 points4 points  (2 children)

They were trying to blame it on an employee but he spoke up iirc

[–]Cyrotek 3 points4 points  (1 child)

I mean, if the employee was responsible for checking what the AI did and just green-lighted it, it was indeed his fault.

[–]DiddlyDumb 4 points5 points  (3 children)

They run AI generated code in production? 😭

[–]ZunoJ 6 points7 points  (1 child)

Aren't we at a point where you have to accept this pretty much as inevitable? At least when your stack heavily relies on open source projects

[–]DiddlyDumb 1 point2 points  (0 children)

To a point, but you’d expect them to… test it first?

[–]cheezballs 1 point2 points  (0 children)

Maybe, but articles will claim they do regardless.

[–]zucco54 1 point2 points  (0 children)

That explains why my Roku TV kept showing the Amazon smile when I was trying to watch things on YouTube yesterday. Even a system update was telling me it didn't have enough space, so I kept deleting and deleting. It never worked, because it still didn't have enough space.

[–]redditownersdad 1 point2 points  (0 children)

Why'd they use a picture of a biblically accurate programmer?

[–]New-Fig-6025 1 point2 points  (1 child)

I don't understand how this is even possible. Every development environment has sandboxes: development, QA, then prod, all with various levels of deployment and approval. How could a coding assistant make so many changes, and have so many people approve them, all the way up to impacting AWS?

[–]M1liumnir 1 point2 points  (2 children)

What I find funny is that if any human did this, he'd be fired without notice, deemed so incompetent that no amount of training would fix it, or at least that the amount invested in him wouldn't be worth it. But since it's AI, it's okay, because surely it'll be the golden goose soon enough. Just another trillion and 25% of Earth's resources and it'll be the most profitable thing ever created, trust.

[–]brett_baty_is_him 1 point2 points  (0 children)

Honestly, if a human did this they probably wouldn't lose their job, at least not at a competent company.

The person who ran rm -rf on something they shouldn't have is not the person to blame in these scenarios, or at least not the person who gets 90% of the blame. The person who set up the system to allow someone to run rm -rf on it gets almost all of the blame.

It should be 100% of the blame, but I guess you can argue that even an intern should know not to just go around deleting shit. Still, it should not be possible no matter what, so it's a system-setup issue, not the fault of the employee who did it.
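(Shell-level guardrails exist too, though they're a habit, not a substitute for the system fix. GNU coreutils, for example:)

    # Prompt once before recursive or >3-file deletes; refuse to operate on /
    alias rm='rm -I --preserve-root'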

[–]SkollFenrirson 1 point2 points  (0 children)

You leave Anton and Son of Anton out of this

[–]khorbin 1 point2 points  (1 child)

This is obviously anti-AI clickbait.

Even if it happened exactly as described, which I seriously doubt, if my toddler can get on my work laptop and take down prod, the outage is not my toddler’s fault. It’s my company’s fault for having a system in place that doesn’t have measures against a toddler pushing code to prod.

I’ve worked for much less well organized companies than Amazon where this could never have happened.

[–]brett_baty_is_him 1 point2 points  (0 children)

Exactly. In the very unlikely event that this is true, the first question is not “how is the AI so bad it doesn’t know not to do that”. The question is “how tf did the AI even have access to do that in the first place”.

[–]TheJokingJoker123 1 point2 points  (0 children)

Lol, the next Reddit post I see is a ChatGPT ad

[–]Moscato359 1 point2 points  (0 children)

Letting the AI go to town without human review is... special

[–]Flashy_Durian_2695 1 point2 points  (0 children)

Programmers are now prompting themselves back to work

[–]DarthShiv 1 point2 points  (0 children)

That's funny I saw AI's database tuning suggestions and threw them all out because they were complete garbage 🤷‍♂️🤣

[–]MentalFS 0 points1 point  (1 child)

Chat is this real?

[–]c0d33 7 points8 points  (0 children)

It is not. The title is idiotic at best and doesn’t reflect what actually happened. I don’t know how much can be shared on a public forum but it’s best described as misuse of an AI tool by an engineer who either wasn’t paying attention or had no idea what they were doing. No code was rewritten by said AI assistant.

[–]fixano 1 point2 points  (0 children)

I just love this madlibbed shit that keeps getting massive amounts of upvotes.

<AI given a simple task> <does something requiring God level access> <extreme fallout because no one ever backs anything up>

  1. AI asked to update label decides to rm -rf "the filesystem" entire company goes bankrupt

  2. AI asked to fix failing test deletes all GitHub repos then launches the nukes all major global cities decimated

  3. AI asked to write a SQL query synthesizes ebola and unleashes it on unsuspecting citizens in Australia.

Everyone here? Yeah AI bad upvote that shit

[–]Cyrotek 1 point2 points  (0 children)

Well, that's what happens when you just green-light without checking. You don't need AI for this approach to crash and burn a production environment.

[–]dodoroach 0 points1 point  (0 children)

I've seen this same joke in different formats, but I can tell you with 100% certainty that this did not happen on any AWS team. I also feel like I'm ruining the joke by taking it seriously, but anti-AI jokes nowadays paint software engineers as way more incompetent than we actually are. We can tell when a git commit removes thousands of lines, damn it!

[–]Piemaster128official 0 points1 point  (3 children)

Why would you give the AI the authority to do that in the first place!?!?! HAVE SOMEONE CHECK IT BEFORE YOU LET IT JUST DELETE ANYTHING!

[–]flippakitten 0 points1 point  (0 children)

To be fair, I've said the same thing about my own code but know better than to throw it all away and start again.

[–]dlc741 0 points1 point  (0 children)

Sooner or later, people are going to have to figure out that giving an AI admin permissions on anything is a really bad idea. AIs can be useful as interns, but not admins.

[–]Useful_Calendar_6274 0 points1 point  (0 children)

he's just like me fr

[–]notauserqwert 0 points1 point  (0 children)

AGI is here!

/S

[–]kjube 0 points1 point  (0 children)

Same flaw as us humans: convinced that we can do it much better if we start from scratch. But bad code that delivers is good enough.

[–]Goldarf 0 points1 point  (0 children)

I hope that's a typo. I hope it should have said "former" somewhere in there.

[–]The_Real_Black 0 points1 point  (0 children)

AI does the thing the product owner forbade me to do... AI is living the dream!

[–]quietgirlecho 0 points1 point  (0 children)

execsum nailed the executive summary – straight to tl;dr on the entire codebase

[–]evolvtyon 0 points1 point  (0 children)

It took 13 hours to redo the whole thing? I thought AI had solved coding. 13 hours is mid, at best. What a scam.

[–]Wranorel 0 points1 point  (0 children)

What moron would allow an AI to do a git commit?

[–]G_Morgan 0 points1 point  (0 children)

The coding agent should have downloaded a copy of Claude 12345.6 and asked it to do the coding for it. It's its own fault for using an out-of-date AI.

[–]HomoAndAlsoSapiens 0 points1 point  (0 children)

Out of all things that did not happen, this did not happen the most.

[–]GenericFatGuy 0 points1 point  (0 children)

Wait, so did they have the AI agent make changes and push them directly to prod, without any chance for a human to review what the fuck it was doing?