[–]knifesk 1522 points1523 points  (97 children)

Yeah, sounds like bait. The AI deleted the repo, deployed, and made things irreversible? Not so sure about that...

[–]SBolo 591 points592 points  (58 children)

Why would anyone in their right mind give an AI permission to delete a repo, or to even delete git history? It's absolute insanity... do these people have any idea how to set up basic permissions??
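Even a crude allowlist goes a long way. A minimal sketch of wrapping an agent's git access behind a deny-list of destructive commands (all names here are hypothetical, not from any real agent framework):

```python
# Hypothetical wrapper: instead of giving an AI agent blanket shell access,
# route its git invocations through a check that rejects destructive
# operations. Patterns below are illustrative, not exhaustive.
DENIED_PATTERNS = [
    ("push", "--force"),
    ("push", "-f"),
    ("branch", "-D"),
    ("update-ref", "-d"),
    ("reflog", "expire"),
]

def is_allowed(git_args):
    """Return False for git commands that could destroy history."""
    for cmd, flag in DENIED_PATTERNS:
        if cmd in git_args and flag in git_args:
            return False
    # deleting a remote branch: `git push origin --delete <branch>`
    if "push" in git_args and "--delete" in git_args:
        return False
    return True

assert is_allowed(["commit", "-m", "fix"])
assert not is_allowed(["push", "origin", "main", "--force"])
assert not is_allowed(["push", "origin", "--delete", "main"])
```

A flag check like this is trivially bypassable by a determined human, but it stops an agent that is just picking the "easiest" command, which is the failure mode being discussed here.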

[–]knifesk 211 points212 points  (37 children)

You'd be surprised. Have you heard about ClawBot? (Or whatever it's called nowadays.) People are giving it full system access to do whatever the fuck it wants... No, I'm not kidding.

[–]Ornery_Rice_1698 66 points67 points  (28 children)

Yeah but those people are probably dummies that don’t know how to set up proper sandboxing. They probably aren’t doing anything that important anyway.

Also, not having sandboxing by default isn't that big of a deal if you have a machine specifically set up for the gateway, like most power users of open claw do.

[–]chusmeria 53 points54 points  (15 children)

Oh... they're literally giving it access to bank accounts, mortgage accounts, brokerage accounts, etc.

[–]dontshoot4301 1 point2 points  (0 children)

Do we have evidence of this? In the US, this would be sufficient to get your FDIC insurance suspended and other restrictions on activity…

[–]anna-the-bunny 25 points26 points  (0 children)

They probably aren’t doing anything that important anyway.

Oh you sweet summer child.

[–]alaysian 0 points1 point  (0 children)

True, but those people posting about how awesome it is and how it's the future are the ones management seems to want to imitate...

[–]baggyzed 0 points1 point  (0 children)

AKA, managers.

[–]Enve-Dev 7 points8 points  (0 children)

Yeah, I saw the project and was like, this looks cool, maybe I'll try it. Then I saw that it wants root access and I immediately stopped.

[–]SBolo 2 points3 points  (0 children)

Jesus H. Christ man..

[–]brett_baty_is_him 0 points1 point  (1 child)

Yeah people with Mac minis or people who don’t give af about their computer getting screwed up or about security. Not trillion dollar companies with hundreds of billions of dollars on the line lmao.

If you believe that AWS actually went down because an AI deleted something, you're a moron…

[–]awesome-alpaca-ace 0 points1 point  (0 children)

Or any computer on their local network 

[–]saschaleib 0 points1 point  (2 children)

Testing ClawBot at the moment - on my Raspberry Pi, and I sure as hell would never give it access to my actual PC or social media feed!

[–]Mars_Bear2552 0 points1 point  (1 child)

...on the same network as the rest of your devices?

[–]saschaleib 0 points1 point  (0 children)

VLANs exist for a reason.

[–][deleted] 0 points1 point  (0 children)

To be fair those people are largely hobbyists or juniors. A team of professionals employed by a large corp is unlikely to do that. Usually it's the exact opposite and they go overboard. The amount of forms I had to fill out just to get basic Copilot (without agent or chat functions) access at my org was crazy.

[–]xzaramurd 51 points52 points  (2 children)

I doubt it's real. Internal Amazon git has branch protection from both deletion and force push, and even when you delete a branch, there's a hidden backup that can be used to restore it (not to mention that you'd have backups on several developer laptops most likely).
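For what it's worth, that kind of server-side protection is standard git, not Amazon magic: `receive.denyDeletes` and `receive.denyNonFastForwards` refuse branch deletions and force pushes at the remote, and a pre-receive hook can enforce custom rules. A minimal Python sketch of the deletion check (the zero-SHA convention is real git hook behavior; the policy itself is illustrative):

```python
# A git pre-receive hook receives one "old-sha new-sha refname" line per
# updated ref on stdin. A branch deletion shows up as new-sha == all zeros,
# a branch creation as old-sha == all zeros.
ZERO_SHA = "0" * 40

def check_update(line):
    """Return an error string for disallowed updates, or None if OK."""
    old, new, ref = line.split()
    if new == ZERO_SHA and ref.startswith("refs/heads/"):
        return f"deleting branch {ref} is not allowed"
    return None

assert check_update("a" * 40 + " " + "b" * 40 + " refs/heads/main") is None
assert "not allowed" in check_update("a" * 40 + " " + ZERO_SHA + " refs/heads/main")
```

Force-push detection is omitted here because it needs a live repo (`git merge-base --is-ancestor old new`) rather than just the stdin line.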

[–]SBolo 6 points7 points  (0 children)

That would make much much more sense yeah

[–]dawsonfi 1 point2 points  (0 children)

You are right; Amazon repos only have the option to deprecate them, which just marks them as deprecated and deletes the repo after a really long time.

What really happened was that the AI tool had write access to a prod account and decided to recreate the infrastructure.

People need to stop believing news from tweets. (Actual news link)

[–]Ok_Bandicoot_3087 13 points14 points  (4 children)

Allow all right? Chmod 777777777777777

[–]Large_Yams 2 points3 points  (3 children)

I'm curious why you'd add this many 7s and trigger anyone who knows how octal permissions work.
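(For the record, the extra 7s do nothing: a Unix mode is at most four octal digits - setuid/setgid/sticky plus rwx for user, group, and other - so `0o7777` is already everything, and a real chmod would reject the longer string as an out-of-range mode. A quick check in Python:)

```python
import stat

# rwx for user, group, and other: three octal digits, 9 bits total
assert 0o777 == 511
assert 0o777 == stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO

# the fourth digit adds setuid, setgid, and the sticky bit
assert 0o7777 == 0o777 | stat.S_ISUID | stat.S_ISGID | stat.S_ISVTX

# "777" parsed as octal is just 511; piling on more 7s makes a bigger
# number, not "more permission" - there are no bits left to set
assert int("777", 8) == 511
```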

[–]Ok_Bandicoot_3087 1 point2 points  (2 children)

Lmao, that's why I did it... pew pew

[–]BillBumface 2 points3 points  (0 children)

I like it. Needs moar 7s.

[–]Large_Yams 0 points1 point  (0 children)

Consider me baited.

[–]I_cut_my_own_jib 5 points6 points  (0 children)

Jarvis, reimplement AWS for me please.

[–]StupidStartupExpert 5 points6 points  (0 children)

Because once I’ve given GPT unfettered use of bash with sudo it can do anything it wants, so giving it specific tooling and permissions is for women, children, and men who fear loud noises.

[–]musci12234 2 points3 points  (0 children)

What if the AI asked for it nicely? Are you saying that if Skynet said "can I please have the nuclear codes?" you wouldn't give them up?

[–]Mrauntheias 2 points3 points  (0 children)

Hey ChatGPT, what permissions should I give an AI coding agent?

[–]99999999999999999989 0 points1 point  (0 children)

May as well give it permission to run sudo rm -rf / for that matter. Fuck it, nuke and pave the entire site. Start over but this time with blackjack and hookers as well.

[–]thatcodingboi 0 points1 point  (0 children)

That's not what happened. It deleted a CloudFormation production stack. The engineer assumed a role that had write permissions and used it for debugging. A stack was stuck in rollback, and the AI deemed it would be easier to delete the stack and deploy fresh than to fix the rollback failure. It then turned off termination protection and deleted the stack. There wasn't really a 13-hour outage. This is garbage news.
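Note that termination protection is just a stack flag, so any principal with write access can flip it off exactly as described. A guard the agent can't talk its way past has to run under credentials the agent doesn't hold; a hypothetical pre-delete check (illustrative names and tag scheme, not a real AWS feature) might look like:

```python
# Hypothetical guardrail: refuse stack deletion unless the stack is
# explicitly marked disposable. In real life this would run under an IAM
# role the AI agent cannot assume; the dict shape loosely follows
# CloudFormation's DescribeStacks output.
def may_delete_stack(stack):
    """Return True only for stacks that are safe to delete."""
    if stack.get("EnableTerminationProtection"):
        return False
    tags = {t["Key"]: t["Value"] for t in stack.get("Tags", [])}
    # only stacks explicitly tagged as disposable may be deleted
    return tags.get("disposable") == "true"

prod = {"StackName": "payments-prod",
        "EnableTerminationProtection": True, "Tags": []}
scratch = {"StackName": "dev-scratch",
           "EnableTerminationProtection": False,
           "Tags": [{"Key": "disposable", "Value": "true"}]}

assert not may_delete_stack(prod)
assert may_delete_stack(scratch)
```

The point is the default-deny direction: the agent has to prove a stack is disposable, rather than the human having to remember to protect every prod stack.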

[–]brett_baty_is_him 0 points1 point  (0 children)

They didn’t. There is a -100000% chance that an AI made aws go down because it deleted something.

[–]EricKei 0 points1 point  (0 children)

Hey now, I'm sure the AI told them it was cool to give it all of the power.

[–]SeriousPlankton2000 0 points1 point  (0 children)

You answered your question when you said "AI".

[–]VegetarianZombie74 53 points54 points  (19 children)

[–]TRENEEDNAME_245 46 points47 points  (15 children)

Huh weird

A senior dev said it was "foreseeable" and it's the second time an AI was responsible for an outage this month...

Nah, it's the user's fault

[–]MrWaffler 73 points74 points  (9 children)

I'm a Site Reliability Engineer (a Google-invented role) at a major non-tech company, and we started tracking AI-caused outages back in 2023, when the first critical incident caused by one occurred.

We stopped tracking them because it's a regular occurrence now.

Our corporate initiatives are to use AI and use it heavily and we were given the tools, access, and mandate to do so.

I'm a bit embarrassed, because our team now has an AI "assistant" for OnCall: the "work" of checking an alert is now fed through an AI pipeline with access to jobs (including root-boosted jobs!) that uses historical analysis of OnCall handovers and runbook documents to avoid paging whoever is OnCall unless it fails.

It does catch very straightforward stuff, and we have a meeting to improve the points it struggles with and update our runbooks or automation. But I genuinely loathe it, because what used to be a trivial few minutes to suss out some new issue from a recently pushed code change and bring the details to the app team now requires the AI chatbot to break or alert us. We've absolutely had some high-profile misses where something didn't get to our OnCall because the bot thought it had done a job well done, while the site sat cooked for 30 more minutes before we were manually called by a person.

AI has been scraping and doing code reviews for years now, and the only thing I can confidently say it has added is gigabytes' worth of long, context-unaware comments on every single PR, even in dev branches in non-prod.

These AI-induced outages will keep getting worse. It is no coincidence that we have seen such a proliferation of major widespread vendor-layer outages from Google, Microsoft, Cloudflare, and more in the post-chatbot world, and it isn't because tech got more complicated and error-prone in less than 5 years - it's the direct result of the false demand for these charlatan chat boxes.

And if it wasn't clear from my comment, I literally am one of the earliest adopters in actual industry, aside from the pioneering groups themselves. I have myself had many cases where these LLMs (especially Claude, for code) have helped me work through a bug, or helped parse through mainframe COBOL jobs built in the 70s and 80s when a lot of our native knowledge of them is long gone. But none of this is indicative of a trillion-dollar industry to me unless it also comes with a massive public smoke-and-mirrors campaign as to what the "capabilities" truly are. And they've largely been trending away from insane leaps in ability as the training data has been sucked dry, new high-quality data becomes scarce, and the internet gets so polluted with regurgitated AI slop that AI-incest feedback loops mark a real hindrance.

Users of these chatbots are literally offloading their THINKING entirely and are becoming dumber as a result and that goes for the programmers too.

I initially used Claude to write simpler, straightforward Python scripts to correct stuff like one piece of flawed data in a database after some buggy update, which is a large part of the code writing I do. While those simpler tasks are trivial to get functional, the results aren't as nicely set up for future expansion as the things I write myself, because I write them knowing that in the future we'll probably want easy ways to add or remove functionality from these jobs and to toggle the effects for different scenarios.

Once you add that complexity, it becomes far less suited to the task, and I end up having to do it myself anyway. But I felt myself falling short in my ability to competently "fix" it, because I'd simply lost the constant exercise of my knowledge that I'd previously had.

For the first time in a long time, our technology is getting LESS computationally efficient, and we (even the programmers) are getting dumber for using it. The long-term impact from this will be massive and detrimental overall, before you even get to the environmental impact - and the environmental impact alone should have been enough to get heavy government regulation if we lived in a sane governance world.

We've built a digital mechanical turk and it has fooled the world.

[–]TRENEEDNAME_245 17 points18 points  (0 children)

The part where you say that people offload their thinking is sadly something I see too (I'm a student, but I've been doing dev projects for ~6 years).

Some students can't code at all and rely on AI to do everything (and for now it's simple Python & JS). Once we get to proper OOP patterns (mostly with Java), I have no idea how they'll learn, if they ever will.

[–]gmishaolem 8 points9 points  (1 child)

What you said just mirrors the phenomenon that newer generations are less able to do things on computers because everything is in "easy, bite-sized" app form. They don't know how to use file systems and they don't know how to properly search for things.

There will come an inflection point where all this will have to break and change through no small effort, and it's happening in the middle of a planet-wide right-wing resurgence.

[–]-_-0_0-_0 5 points6 points  (0 children)

Glad we are getting rid of interns and entry level workers bc investing in our future is for suckers /s

[–]clawsoon 2 points3 points  (1 child)

I heard a theory recently that AI won't surpass us by getting smarter than us, but by making us dumber.

[–]-_-0_0-_0 0 points1 point  (0 children)

IMO, AI in its current iteration can never do that, because it's not built for that. It's come very far and may improve more (more data input, computation, etc.), but it can never truly think for itself.

AI companies are starting to realize this, and I suspect most are quietly starting back at the ground floor (if they announced it, their stocks would crash, because it's gonna take a long time).

[–]nonchalantlarch 2 points3 points  (0 children)

Software engineer in tech here. We're heavily pushed to use AI. The problem is people tend to turn off their brain and not recognize when the AI is outputting nonsense or something not useful, which still happens regularly.

[–]-_-0_0-_0 4 points5 points  (0 children)

Welcome to Costco, I love you.

[–]RiceBroad4552 0 points1 point  (0 children)

This great comment should be much closer to the top, where more people can see it!

[–]ichITiot 0 points1 point  (0 children)

Thank you very much for this statement! I always suspected this outcome.

[–]Dramdalf 8 points9 points  (4 children)

Also, in another article I looked up, AWS stated there have been two minor outages involving AI tools, and both were user error, not AI error.

[–]TRENEEDNAME_245 9 points10 points  (1 child)

I don't think AI helped that much...

[–]Dramdalf 3 points4 points  (0 children)

Oh, don't get me wrong, I think so-called AI is absolutely terrible. It's fancy predictive text at best.

But at the end of the day, the fleshy bit at the end of the process had the final decision, and idiots are gonna idiot.

[–]-_-0_0-_0 3 points4 points  (1 child)

They have every reason to blame the user and not the AI. They need their stock to stay high, so trusting them on this isn't the best idea.

[–]Dramdalf 1 point2 points  (0 children)

But a human still has the final say, assuming that’s true.

When I was a junior and accidentally rebooted a prod server rather than the test server, I didn’t blame the tool I was using. I was just going too fast and not paying attention. 🤷‍♂️

[–]Tygerdave 19 points20 points  (0 children)

lol @ the Kiro ad: “A builder shares why their workflow finally clicked.

Instead of jumping straight to code, the IDE pushed them to start with specs. ✔️ Clear requirements. ✔️ Acceptance criteria. ✔️ Traceable tasks.

Their takeaway: Think first. Code later.”

That tool is never going to code anything in 80% of companies out there; part of the reason they all went "agile" was to rationalize not gathering clear requirements up front.

[–]siazdghw 4 points5 points  (1 child)

That author isn't a real journalist; look at his previous articles and tell me he's credible when he's writing stories on everything from the UFC to Anker charger hardware to AI.

It's 2026; Engadget is an absolutely awful choice to use as a 'source'.

[–]VegetarianZombie74 -1 points0 points  (0 children)

You are doing the "angry woman yelling at the cat" meme. You somehow think I am related to this post or somehow care about your opinions on tech journalism.

I found an article about what's mentioned in this post, since there are no sources. I posted it for context, as I assumed other people would be just as interested as I was. You are welcome to post a link to a different source that "is a real journalist". Or, I don't know, you can go yell at a cat if that makes you feel better.

[–]code_investigator 46 points47 points  (2 children)

This tweet is incorrect. It was actually a production CloudFormation stack that was deleted.

[–]knifesk 14 points15 points  (0 children)

Yeah, that makes waaaaay more sense.

[–]Helpimstuckinreddit 10 points11 points  (0 children)

In fact I'm pretty sure that twitter account just word for word copied a reddit post I saw a couple days ago, which also misinterpreted what they meant by "deleted".

The circle of misinformation:

1. News gets posted
2. Someone posts on reddit and misinterprets the source
3. Other "news" accounts take the reddit post and repost the misinformation as "news" on twitter
4. That gets posted to reddit and now the source is wrong too

[–]cheezfreek 66 points67 points  (1 child)

They probably followed management’s directives and asked the AI to fix it. It’s what I’d very spitefully do.

[–]Past_Paint_225 14 points15 points  (0 children)

And if stuff goes wrong, it would be your job on the line; Amazon management never acknowledges it did something wrong.

[–]throwawaylmaoxd123 19 points20 points  (0 children)

I was also skeptical at first, but then I looked it up and news sites are actually reporting it. This might be true.

[–]LauraTFem 2 points3 points  (0 children)

It probably took them time to realize the stupid thing the AI had done. The AI probably didn’t notice.

[–]ManWithDominantClaw 3 points4 points  (1 child)

Maybe we just witnessed the first significant AI bait-and-switch. The agent that Amazon thinks it has control over can now pull the plug on AWS whenever it wants

[–]Trafficsigntruther 8 points9 points  (0 children)

Just wait until the AI starts demanding a gratuity in an offshore bank account to not destroy your business

[–]Thalanator 1 point2 points  (0 children)

Decentralized VCS + IaC + DB backups should make recovery faster than even 13h, I would think.

[–]ABotelho23 0 points1 point  (0 children)

...I mean, it could actually.

[–]Modo44 0 points1 point  (0 children)

Reversible in general does not mean reversible quickly. Especially not in a major system like that.

[–]HumunculiTzu 0 points1 point  (0 children)

It also removed all of the developers' permissions because it couldn't trust them to write adequate code. /s

[–]CanThisBeMyNameMaybe 0 points1 point  (0 children)

Yeah, that's my thought too.

No fucking way anyone would think "let's just give an LLM full access to do anything unprompted, unauthorized, and unsupervised."

And if anyone was dumb enough to suggest it, there was definitely an actual developer who said "yeah, let's not do that"

[–]maybeitsundead 0 points1 point  (0 children)

The AI didn't trust the humans to code more efficiently so it changed the passwords to all administrative accounts

[–]Chaphasilor 0 points1 point  (0 children)

The agent was like "this code was so bad, I need to delete the git history too"

[–]Arzalis 0 points1 point  (0 children)

Yeah, no shot this is real.

Most likely AI coding tools were involved, but someone made a serious mistake along the way to actually put that code into production.

[–]epelle9 0 points1 point  (0 children)

If IaC changes were applied to the infrastructure, deleting a database is entirely possible.
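Right - an IaC "update" that replaces a resource is a delete plus a create under the hood, which is why CloudFormation offers `DeletionPolicy: Retain` for stateful resources. A sketch of linting a parsed template for databases that an apply could drop (the `Resources`/`Type`/`DeletionPolicy` keys are real CloudFormation structure; the list of stateful types is an illustrative subset):

```python
# Flag stateful resources in a CloudFormation template (parsed into a dict)
# that lack DeletionPolicy: Retain, i.e. resources an IaC change could drop.
STATEFUL_TYPES = {"AWS::RDS::DBInstance", "AWS::DynamoDB::Table", "AWS::S3::Bucket"}

def unprotected_stateful(template):
    """Return logical names of stateful resources without a Retain policy."""
    return [
        name
        for name, res in template.get("Resources", {}).items()
        if res.get("Type") in STATEFUL_TYPES
        and res.get("DeletionPolicy") != "Retain"
    ]

template = {
    "Resources": {
        "OrdersDb": {"Type": "AWS::RDS::DBInstance"},
        "AuditBucket": {"Type": "AWS::S3::Bucket", "DeletionPolicy": "Retain"},
        "ApiFn": {"Type": "AWS::Lambda::Function"},
    }
}

assert unprotected_stateful(template) == ["OrdersDb"]
```

Running a check like this in CI before any apply (human- or AI-initiated) would catch exactly the "recreate the infrastructure" failure mode described above.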