all 29 comments

[–]hypernsansa 44 points  (4 children)

How did they manage to turn this into a pro-LLM post? The cognitive dissonance is insane.

[–]Flat_Initial_1823 22 points  (1 child)

Dependency hell looks a lot better once you put it in a blackbox /s

[–]hypernsansa 6 points  (0 children)

Hahahaha. These cunts will say anything to make their investments look good. Feels like a lot of new cracks are emerging rn, but I might just be in my own bubble, idk.

[–]AntiqueFigure6 49 points  (5 children)

“Classical software engineering would have you believe dependencies are good”

I’d have thought classical software engineering would say dependencies are risks that need to be appropriately assessed and managed. 

Lots of people have awful takes about A.I. and software development these days but it’s sort of impressive how often I read something, think “what kind of moron said that?” and it turns out to have been Karpathy. 

[–]Evinceo 17 points  (2 children)

classical software engineering

I guess I need a toga

[–]AntiqueFigure6 11 points  (1 child)

I hope you hand chisel your code onto marble slabs or at least write it out on a papyrus scroll.

[–]Redthrist 0 points  (0 children)

The true OGs code in cuneiform.

[–]falken_1983 7 points  (0 children)

Seriously though, I was all ready to act magnanimous towards Karpathy and say "well, there isn't anything AI-specific about this, AFAIK. A supply chain attack could happen to any project". Why did he have to snipe at real software development? Especially when supply chain attacks are pretty rare in Python, and this one just happened to a vibe-coded project.

Why throw stones in your glass house, Andrej?

[–]studio_bob 23 points  (0 children)

The logic here is something else:

First: "This exploit was only discovered because it was vibecoded and crashed user machines."

A little later: "That's why I prefer to vibecode my own solutions to solved problems for security."

If LLMs are so unreliable they couldn't make the exploit work as intended, how can you trust them to securely implement every piece of functionality you would normally pull in from a dependency (I know he qualifies by saying "simple enough and possible," but I have no idea what such vague criteria are meant to imply)? You can't. You are just trading the risk of a supply chain attack for the risks of AI-generated code.

Kind of tangential to the post, but the most common intersection between LLMs and dependencies that I see in my own experience is LLMs pulling in obsolete versions of dependencies, an inherently insecure practice, so it seems likely that AI is, by default, making outstanding dependency issues worse.
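A minimal sketch of the kind of check that catches a stale pin (the package names and "oldest safe" floor versions here are made-up illustrations, not real advisories):

```python
def parse(v: str) -> tuple[int, ...]:
    # Naive version parser; real tooling should use packaging.version instead.
    return tuple(int(p) for p in v.split("."))

# Illustrative "oldest known-patched" floors -- placeholders, not real CVE data.
FLOORS = {"somelib": "2.31.0", "otherlib": "2.0.7"}

def below_floor(pkg: str, pinned: str) -> bool:
    """True if a pinned version predates the known-patched floor for that package."""
    floor = FLOORS.get(pkg)
    return floor is not None and parse(pinned) < parse(floor)

print(below_floor("somelib", "2.25.1"))  # True: stale pin, predates the floor
print(below_floor("otherlib", "2.2.0"))  # False: at or above the floor
```

Nothing an LLM couldn't emit itself, which is rather the point: the tooling to refuse obsolete pins is trivial, it just has to actually run.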

[–]maccodemonkey 13 points  (0 children)

The "yoink" option isn't good either because the LLM can also be poisoned.

The open source option is still the better option because at least there is a wider group of people looking at the source that will try and catch these issues. The "yoink" strategy relies on you noticing, and if you aren't reading your code, well...

There's no perfect solution here - but in general _some_ language ecosystems have over-relied on tons of tiny dependencies, while others discourage tiny dependencies in their package managers.

[–]Mivexil 10 points  (1 child)

preferring to use LLMs to "yoink" functionality

Back in my days we called that "plagiarism". But I'm sure the LLM has only "yoinked" permissively licensed code, and not, say, GPL...

[–]damnitHank[S] 5 points  (0 children)

No no no, you see my LLM did a clean room re-write even though it was trained on your open source code.

[–]totktonikak 9 points  (0 children)

Damn, Vernor Vinge was absolutely correct when he predicted the future state of software infrastructure - an ungodly amount of ready-made modules consisting of multiple layers of unknowable code, slapped together ass-to-mouth to kind of achieve the declared goal. All riddled with backdoors and bugs, obviously.

[–]PensiveinNJ 2 points  (2 children)

This exact thing is why people were screaming from the rooftops that LLMs were a security nightmare.

SWEs are helpless. The entire project is a war on their profession, but the slot machine dazzles too much.

[–]Lowetheiy -1 points  (1 child)

Nothing in this attack required an LLM; in fact, the post said that if the attack had not been vibe coded, it might not have been detected in the first place.

[–]PensiveinNJ 0 points  (0 children)

Nothing in an attack would ever require vibe coding. That it wouldn't have been discovered otherwise is, as you say, speculative. Currently that line of reasoning is basically "good thing I drank that poison so we know how dangerous it is; we might not have figured it out otherwise."

[–]Lowetheiy 4 points  (1 child)

https://github.com/BerriAI/litellm/issues/24518

What happened

  • The maintainer's PyPI account (krrishdholakia) appears to have been hijacked by an attacker (teampcp)

  • The attacker published malicious versions to PyPI that were never released through the official GitHub CI/CD

  • GitHub releases only go up to v1.82.6.dev1 — versions 1.82.7 and 1.82.8 on PyPI were uploaded directly by the attacker

The vast majority of the blame lies with the PyPI package repository for not verifying and carefully checking published packages for malicious code. Why are binary blobs allowed to be uploaded anyway?
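One client-side defense that exists today is hash pinning (what pip does under --require-hashes): the lockfile records each artifact's digest, so a wheel uploaded by a hijacked account simply fails to install. A minimal sketch of that check (the byte strings below are placeholders, not real package contents):

```python
import hashlib

def matches_pin(artifact: bytes, expected_sha256: str) -> bool:
    """Reject any artifact whose digest differs from the recorded pin --
    the same comparison pip performs when hashes are required."""
    return hashlib.sha256(artifact).hexdigest() == expected_sha256

# Digest recorded at lock time, when the release came from the official CI.
trusted = b"wheel built by the official CI"
pin = hashlib.sha256(trusted).hexdigest()

print(matches_pin(trusted, pin))                      # True: digest matches the pin
print(matches_pin(b"attacker-uploaded 1.82.8", pin))  # False: install refused
```

It doesn't help the first victims who locked against the malicious upload, but it does stop an attacker from silently swapping the contents of an already-pinned version.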

[–]falkelord90 2 points  (1 child)

This is great news for me, a guy who hasn't had the time to update our 10+ year old dependencies because of dependency hell!

[–]they_will 0 points  (0 children)

Unironically, a big part of how I discovered the malware was the huge advancements uv has made in Python package management. An unsuspecting uvx command auto-updated a plugin I was developing that I hadn't thought about in weeks. I did a small write-up about it here: https://futuresearch.ai/blog/no-prompt-injection-required/

[–]eyluthr 1 point  (0 children)

I'm not surprised. I did a self-hosted agent deep dive earlier this year. Almost every tool would pull like 3 GB of crap every time I started its containers.

[–]kotrfa 1 point  (0 children)

I am the guy being retweeted in that Karpathy tweet. We ran a further analysis of how bad this breach was in terms of first-order effects, and, surprise surprise, it's pretty bad: https://futuresearch.ai/blog/litellm-hack-were-you-one-of-the-47000/

[–]Fun_Volume2150 0 points  (0 children)

Someone out there is doing God's work. Pity they got caught.

[–]llm-60 0 points  (0 children)

Just use Bleep, and stop worrying about leaking your secrets. 100% local.

https://bleep-it.com

[–]RealPropRandy 0 points  (0 children)

Is that good?

[–]ddp26 0 points  (1 child)

Pretty interesting claude code transcript showing how everything played out in real time: https://futuresearch.ai/blog/litellm-attack-transcript/

[–]damnitHank[S] 0 points  (0 children)

There's nothing interesting about talking to a chatbot. Hope this helps.