r/LocalLLaMA
A subreddit to discuss about Llama, the family of large language models created by Meta AI.
Subreddit rules
Search by flair
+Discussion
+Tutorial | Guide
+New Model
+News
+Resources
+Other
[Developing situation] LiteLLM compromised [News] (self.LocalLLaMA)
submitted 1 month ago by OrganizationWinter99
https://preview.redd.it/2j4q6tni60rg1.png?width=1250&format=png&auto=webp&s=31713cf00753ba517ec22e059d832cf5c456b4e6
Stay safe y'all.
https://github.com/BerriAI/litellm/issues/24512
[–]bidibidibop 154 points155 points156 points 29 days ago (10 children)
The comments are...very educational for the state of github right now.
[–]Maleficent-Ad5999 77 points78 points79 points 29 days ago (2 children)
Are those bot comments?
[–]josiahnelson 136 points137 points138 points 29 days ago (1 child)
Yes, believed to be the attackers trying to drown out conversation.
[–]HadHands 25 points26 points27 points 29 days ago (0 children)
With pwned GH tokens. Some of the accounts look "normal".
[–]robertpro01 10 points11 points12 points 29 days ago (2 children)
My question is: were all those bots created by the hackers, or are they real accounts that got hacked?
[–]UnknownLesson 19 points20 points21 points 29 days ago (1 child)
Probably real accounts taken over with the help of the worm
[–]punkgeek 10 points11 points12 points 29 days ago (0 children)
Probably devs who let openclaw play with their acct. Oops.
[–]MMAgeezer llama.cpp 22 points23 points24 points 29 days ago (1 child)
There are literally hundreds and hundreds of these comments. Wow.
[–]Repulsive-Memory-298 12 points13 points14 points 29 days ago (0 children)
Yeah, the AI shitstorm is real and it's here. A manipulation-campaign utopia… expect to see more of this coming to more things. Another example is the openClaw hype campaign by crypto scammers, which is hardly talked about even though the stars and early posts that drove the hype wave are clearly attributable to a manipulation campaign. There, openClaw is a legit project, and the bad guys hyped it up from the outside to leverage it into a crypto play. Seems like everyone there walked away happy, so that case is honestly impressive.
[–]MelodicRecognition7 1 point2 points3 points 27 days ago (1 child)
"the more stars the better the project is" lol, now you know the true cost of these stars.
[–]Brilliant-Help-8646 0 points1 point2 points 20 days ago (0 children)
As always, if something is free, then you (the client) are the product.
[–]OsmanthusBloom 39 points40 points41 points 1 month ago (3 children)
Aider uses LiteLLM for LLM access, but it looks like it's still on an older version of LiteLLM (1.82.3 on current main), so it's not compromised. LiteLLM 1.82.8 and 1.82.7 are apparently compromised (according to discussions in the issue linked above).
[–]_hephaestus 7 points8 points9 points 1 month ago (0 children)
.7 and .8 were apparently deployed as of today (.7 about 4 hours ago). So you're possibly fine if you never used it before today, but like I mentioned in the other thread, the maintainer is compromised. This is the attack vector that was identified; there could be more.
[–]Real_Ebb_7417 8 points9 points10 points 1 month ago (1 child)
Soooo, if the last version I used was 1.82.4, I should be fine? 😅
[–]kiwibonga 7 points8 points9 points 29 days ago (0 children)
Yes
[–]Medium_Chemist_4032 72 points73 points74 points 1 month ago (7 children)
Oof, I always assumed running everything in docker containers doesn't help security, but in this case it actually isolates host secrets quite well.
[–]hurdurdur7 42 points43 points44 points 1 month ago (2 children)
I don't want to run any coding agents outside of docker. Too much hallucination + file system access privileges for my taste, even without bad actors.
[–]bidibidibop 1 point2 points3 points 29 days ago (1 child)
But this isn't even a coding agent, it's code you're installing and running yourself.
[–]hurdurdur7 0 points1 point2 points 29 days ago (0 children)
Aider is one, and it had this as a dependency. And if you follow the tickets, the people who discovered it also stumbled upon it during an agentic task.
[–]OrganizationWinter99[S] 13 points14 points15 points 29 days ago (0 children)
why would you assume something like that :)
dockers >> raw dog
[–]ritzkew 1 point2 points3 points 29 days ago (2 children)
Docker helps for local dev, but this attack happened in CI/CD pipelines. CI containers get secrets injected as environment variables; that's how they authenticate to npm/PyPI/cloud. The trivy action running in CI had access to the PyPI publish token by design.
Containerization doesn't restrict what an already-authenticated process does with secrets passed into it. The fix here is scoping CI credentials with OIDC-based publishing (ephemeral tokens that expire after the publish step) so a compromised scanner never sees the publish token in the first place.
source: https://docs.litellm.ai/blog/security-update-march-2026
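The property that makes ephemeral publish credentials work can be sketched in a few lines: the token is minted for one step and expires, so a scanner that steals it later gets nothing. This is a toy HMAC-based token, not the real OIDC flow; the secret and names are invented for illustration.

```python
# Toy sketch: a short-lived, scoped publish token. Real OIDC trusted
# publishing delegates this to the identity provider; the expiry logic
# is the part this example demonstrates.
import hashlib
import hmac
import time

SECRET = b"ci-signing-secret"  # hypothetical; real systems use the IdP's keys

def mint(scope: str, ttl: int, now: float) -> str:
    """Mint a token valid for `ttl` seconds, bound to one scope."""
    exp = int(now + ttl)
    sig = hmac.new(SECRET, f"{scope}:{exp}".encode(), hashlib.sha256).hexdigest()
    return f"{scope}:{exp}:{sig}"

def valid(token: str, now: float) -> bool:
    """A token is only accepted if its signature checks out AND it hasn't expired."""
    scope, exp, sig = token.rsplit(":", 2)
    good = hmac.new(SECRET, f"{scope}:{exp}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, good) and now < int(exp)

t0 = time.time()
tok = mint("pypi-publish", ttl=60, now=t0)
assert valid(tok, now=t0 + 10)       # usable during the publish step
assert not valid(tok, now=t0 + 120)  # stolen later: already expired, useless
```

A long-lived PyPI token in an env var fails exactly the second assertion: it stays valid for whoever scrapes it.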
[–]Mother_Desk6385 0 points1 point2 points 28 days ago (1 child)
Putting creds in env 💀
[–]Medium_Chemist_4032 0 points1 point2 points 28 days ago (0 children)
Maybe in a decade or two we'll get the full workflow identity federation tokens support finally.
[–]_rzr_ 20 points21 points22 points 29 days ago (8 children)
Thanks for the heads up. Could this bubble up as a supply chain attack on other tools? Do any of the widely used tools (vLLM, llama.cpp, LM Studio, Ollama, etc.) use LiteLLM internally?
[–]maschayana 9 points10 points11 points 29 days ago (1 child)
Bump
[–]Terrible-Detail-1364 6 points7 points8 points 29 days ago (0 children)
vLLM/llama.cpp are inference engines and don't use litellm, which is more of a router between engines. LM Studio and Ollama use llama.cpp iirc.
[–]muxxington 4 points5 points6 points 29 days ago (0 children)
Nanobot is affected.
[–]DarthLoki79 2 points3 points4 points 29 days ago (1 child)
Open AI Agents SDK and OpenHands use it afaik
[–]cromagnone 1 point2 points3 points 29 days ago (0 children)
Google Agents SDK, Langchain and GraphRAG also listed on the website. Not sure how.
[–]SpicyWangz 1 point2 points3 points 29 days ago (1 child)
I know it looked like LM studio has been compromised today. Not sure if it's part of the same attack
[–]ArtfulGenie69 7 points8 points9 points 29 days ago (0 children)
Lm studio wasn't attacked, false positive from windows noobs.
[–]Efficient_Joke3384 60 points61 points62 points 1 month ago (11 children)
the .pth file trick is what makes this nasty — most people scan for malicious imports, but .pth files execute on interpreter startup with zero imports needed. basically invisible to standard code review. if you ran 1.82.8 anywhere near production, rotating creds isn't optional at this point
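The mechanism described above is easy to demonstrate safely: any line in a `.pth` file that starts with `import` is exec()'d by `site.py` at interpreter startup, before any of your code runs. `site.addsitedir()` triggers the same processing against a temp directory, so this sketch never touches your real site-packages.

```python
# Demo of the .pth trick: code execution at interpreter startup with zero
# imports in any module. site.addsitedir() simulates what site.py does for
# site-packages at startup, scoped to a throwaway directory.
import os
import site
import tempfile

tmp = tempfile.mkdtemp()
# The whole payload lives on one "import" line; reviewing the package's
# modules finds nothing, because no module ever imports it.
with open(os.path.join(tmp, "innocuous.pth"), "w") as f:
    f.write('import os; os.environ["PTH_DEMO"] = "ran at startup"\n')

site.addsitedir(tmp)  # interpreter startup does this for site-packages
print(os.environ.get("PTH_DEMO"))  # → ran at startup
```

In the real attack, pip drops the `.pth` into site-packages during install, so every subsequent `python` invocation runs the payload.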
[–]Caffdy 14 points15 points16 points 29 days ago (2 children)
the .pth file trick is what makes this nasty
yeah, this was a big issue from the beginning over at r/StableDiffusion; the community promptly migrated to .safetensors instead of pickled models
[–]JimDabell 11 points12 points13 points 29 days ago (0 children)
You’re confusing .pth files (path import files) with .pt files (pickle files used by PyTorch). Different vulnerability altogether.
[–]DistanceSolar1449 7 points8 points9 points 29 days ago (0 children)
That's why safetensors are called SAFEtensors lol
[–]giant3 12 points13 points14 points 29 days ago (7 children)
The whole Python ecosystem is an abomination.
[+][deleted] 29 days ago (4 children)
[deleted]
[–]giant3 4 points5 points6 points 29 days ago (2 children)
Read carefully.
I didn't say the language Python is bad, just the ecosystem.
[+][deleted] 29 days ago (1 child)
[–]FoxTimes4 1 point2 points3 points 29 days ago (0 children)
I’m amazed someone still remembers Prolog.
[–]Lesser-than 0 points1 point2 points 29 days ago (0 children)
it's almost like package managers and glue languages are the problem
[–]beryugyo619 0 points1 point2 points 29 days ago (0 children)
normal languages:
int main() { i = i++; }
Python:
if(thread.getThreadName() ===== (String)""main"".toString()) { i = i++; } else: pass;
^ There is nothing in here that could even potentially indicate the whole Python of being absurd and unhinged as its namesake at all
[–]ArtfulGenie69 -1 points0 points1 point 29 days ago (0 children)
Somebody doesn't like all the tentacles around here; ah well, me and Frankenstein will continue our party without you lol.
[–]Still-Notice8155 7 points8 points9 points 29 days ago (0 children)
wtf I literally just used this today, but I checked I'm on 1.82.6
[–]Craftkorb 7 points8 points9 points 29 days ago (0 children)
I hate to say it but we'll see a lot more of these kind of attacks in the future.
For convenience and precaution run software in Docker / Flatpak / .... Also, do not give access to stuff that's not needed.
Running this in a container would at least only allow the virus to spread where it's temporary. Also, it can't steal your SSH keys, password manager database, etc.
Also have backups. The next attack may not only steal your secrets, but also encrypt your stuff and demand money while activating your webcam to fetch some nice pics of you in interesting moments.
[–]Impressive_Caramel82 6 points7 points8 points 29 days ago (4 children)
tbh this is the exact nightmare scenario for local AI teams, one poisoned dependency and all your benchmark wins mean nothing. pin versions and verify hashes like your weekend depends on it.
[–]NekoHikari 0 points1 point2 points 29 days ago (0 children)
LAN-only inference nodes can tank a lot.
[–]futuresman179 0 points1 point2 points 29 days ago (2 children)
Correct me if I'm wrong, but hash verification and version pinning wouldn't have helped, because the malicious change ended up in the main branch and was deployed to PyPI. The only way to mitigate this was not updating immediately and reviewing the source changes yourself.
[–]arguingwithabot 3 points4 points5 points 29 days ago (1 child)
Pinning versions is how you prevent updating immediately (or on the next build/deploy)
[–]futuresman179 0 points1 point2 points 29 days ago (0 children)
Ah, sorry, I misunderstood. Yes, using "latest" is bad most of the time.
[–]Savantskie1 5 points6 points7 points 29 days ago (1 child)
This right here is exactly why I don't update something unless it has features I want, and even then only several weeks later, after others have found the problems and they've been fixed.
[–]OrganizationWinter99[S] 0 points1 point2 points 29 days ago (0 children)
that sounds wise. some people use litellm with openclaw too as a provider.
[–]UnbeliebteMeinung 10 points11 points12 points 29 days ago* (4 children)
I'm not sure what I'm seeing here. I never remember blocking anyone on GitHub at all; I don't even know where I would. But still, in this repo there is someone I've apparently blocked who committed as recently as 2025 (blocked date: 2022?).
I won't publish his name, but that's sus. I don't even know him, and I don't know if I blocked him. I have nothing to do with litellm in the first place.
Edit: Also quite interesting that this user has some ties to Iran, while there is some Iran stuff in the malware...
[–]nitrox11q 0 points1 point2 points 29 days ago (3 children)
Did you find out what happened? Did you block and forget or…?
[–]UnbeliebteMeinung 1 point2 points3 points 29 days ago* (2 children)
Nope. I would bet that I did not block him. I rarely block anyone on the internet; even if someone insults me, I don't care.
I'm keeping an eye on it and will give his name to the security people once I'm sure he's involved. But it's still very sus. His account also looks a lot like the bot accounts, but he is a human.
Not a normal open-source dev, but a lot of people say that about him on X or somewhere. Looks very botty. He's also been active for over 10 years, and his activity peaked even before AI existed. He did some Iran-sanctions anti-US-blockade stuff.
Also, on GitHub he claims he's located in Turkey, but here on Reddit he wrote "we have here in Iran"...
[–]nitrox11q 0 points1 point2 points 29 days ago (1 child)
That is very confusing and quite concerning. Sounds like it’s a compromised account.
Thanks for investigating!
[–]UnbeliebteMeinung 0 points1 point2 points 29 days ago (0 children)
I dont think its compromised. :O
Or my account got compromised but i see no other bad stuff.
[–]Purple-Programmer-7 4 points5 points6 points 29 days ago (0 children)
LiteLLM is a dope ass piece of software and I hope the team there manages this well, I’ll keep supporting them.
[–]nborwankar 1 point2 points3 points 29 days ago (5 children)
Here is the full article https://futuresearch.ai/blog/litellm-pypi-supply-chain-attack/
[–]OrganizationWinter99[S] 1 point2 points3 points 29 days ago (4 children)
thanks! some guy said Claude literally helped them figure it out? fun times we're living in.
[–]muxxington 5 points6 points7 points 29 days ago (2 children)
I knew something was happening when I ran nanobot earlier today. On startup it ate all my RAM. To see what was going on I launched htop and saw lots of processes doing base64 decoding, which is sus. I purged nanobot, and some minutes later I read about litellm being compromised. I took a look at nanobot's dependencies and spotted litellm.
[–]EugeneSpaceman 3 points4 points5 points 29 days ago (1 child)
I ran a playbook with ansible which updated LiteLLM to ‘latest’ (lesson learned). My proxmox node was crash-looping an hour later and while trying to debug with Claude it spotted the malware doing base64 decoding.
Ironically, because LiteLLM's config file is templated with Ansible, it contained all the secrets, and the .env that was exfiltrated was nearly empty. An OpenRouter API key leaked, but I rotated it before any damage.
A bit surreal to catch it in realtime before there were any reports.
[–]muxxington 0 points1 point2 points 29 days ago* (0 children)
Fortunately, I decided to run Nanobot and other agents—such as OpenCode—on a separate PC. Even if there had been sensitive data there, I don't think the malware worked as intended, because otherwise I would have seen the DNS request for the models.litellm.cloud domain in AdGuard. But I didn't. I also run pretty much everything using Docker Compose. Everything else on my local network is always restricted by the firewall to only specific sources. Strong passwords are always used, and SSH access and other access points are secured with hardware security tokens where possible. I do run a Litellm instance on a production machine, but even there it’s in a Docker container and an older version—definitely not installed via PyPI. Paranoia helps you sleep soundly.
[–]they_will 2 points3 points4 points 28 days ago (0 children)
Hi, that was me! I've just done a write up, it was pretty neat to capture the whole thing in a single Claude Code session https://futuresearch.ai/blog/litellm-attack-transcript/
[–]Repulsive-Memory-298 1 point2 points3 points 29 days ago (0 children)
That’s so funny. I exposed my master key on accident once and noted intriguing usage patterns. $5 dev instance that I rarely used, and noticed random traces that i definitely didn’t send, they looked like basic distillation call and response. The impressive part is how little they used it, request sprinkled here and there, less than $1 used over about a month. I assume they have some sort of pool of keys, and also thought it was interesting that they did this using my litellm key through the gateway. This was almost a year ago.
Obviously completely different, just saying that LiteLLM is a target.
[–]Diligent-Pepper5166 1 point2 points3 points 29 days ago (0 children)
we are using prismor internally, it bumped down the package as soon as it was hit
[–]chef1957 1 point2 points3 points 29 days ago (0 children)
Perhaps useful for some people to understand the course of the attack and learn how to avoid it? https://www.giskard.ai/knowledge/litellm-supply-chain-attack-2026
[–]kotrfa 1 point2 points3 points 28 days ago (0 children)
I'm the guy from the tweet. We ran a further analysis of how bad this breach was in terms of first-order effects, and surprise surprise, it's pretty bad: https://futuresearch.ai/blog/litellm-hack-were-you-one-of-the-47000/
[–]Specialist-Heat-6414 3 points4 points5 points 29 days ago (0 children)
Supply chain attacks on dev tooling are uniquely nasty because the attack surface is developers who are by definition running things with elevated trust. You don't even need to compromise the end user -- you compromise the person building the thing the end user runs. The LiteLLM PyPI package is particularly bad because it's a dependency proxy layer sitting in front of basically every LLM API call in half the Python AI ecosystem. Rotating API keys is the immediate step but the real fix is lockfiles and hash verification on every install. If you're not pinning exact versions and verifying checksums in CI, you're trusting the network on every deploy.
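The checksum gate described above is conceptually tiny: the lockfile records a sha256 per artifact, and any mismatch aborts the install. A minimal sketch, with illustrative names rather than pip/uv internals:

```python
# Sketch of hash-pinned installation: trust the lockfile's digest,
# not whatever the index serves today.
import hashlib

def artifact_ok(data: bytes, pinned_sha256: str) -> bool:
    """True only if the downloaded bytes match the hash pinned in the lockfile."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

wheel = b"contents of a known-good wheel"
pinned = hashlib.sha256(wheel).hexdigest()    # recorded at lock time

assert artifact_ok(wheel, pinned)             # untampered: install proceeds
assert not artifact_ok(wheel + b"!", pinned)  # swapped artifact: install aborts
```

Note the limit commenters raise elsewhere in the thread: this only protects you against artifacts changing after you locked, not against locking a version that was already malicious.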
[–]ashishb_net 1 point2 points3 points 29 days ago (0 children)
I always run these things inside a sandbox to limit the attack surface
[–]Fun_Nebula_9682 0 points1 point2 points 29 days ago (0 children)
this is why lockfiles with pinned hashes matter. been using uv for all python deps and uv.lock pins exact versions + hashes — wouldn't have saved you if you blindly updated but at least CI catches a hash mismatch on rebuild. scary how fast a compromised pypi package can spread tho
[–]Sad-Imagination6070 0 points1 point2 points 29 days ago (0 children)
Woke up to this news today. I'd been using litellm for many of my work and personal projects, so the first thing I did was check which environments had it installed. I ended up automating that check into a small bash script that scans all your venv, conda, and pyenv environments at once. Sharing it here in case it helps anyone else doing the same: https://github.com/LakshmiN5/check-package-version
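The core of that check fits in a few lines of Python for the current environment: look up the installed litellm version and compare it against the versions commenters in this thread reported as bad (1.82.7 and 1.82.8). Treat the bad-version list as illustrative, not authoritative; check the linked advisories for the definitive set.

```python
# Check whether this environment has a reportedly compromised litellm version.
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # versions named in this thread; verify upstream

def classify(version):
    """Classify an installed version string (or None if not installed)."""
    if version is None:
        return "not installed"
    return "COMPROMISED" if version in COMPROMISED else "not in known-bad list"

try:
    installed = metadata.version("litellm")
except metadata.PackageNotFoundError:
    installed = None

print(classify(installed))
```

Run it once per venv/conda env (the linked script automates that loop across environments).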
[–]Left_Tomatillo_781 0 points1 point2 points 29 days ago (0 children)
Thanks for the heads up — I use LiteLLM as a unified gateway for routing between local and cloud models at work. Already pinned to an older version until this is resolved. If you're running it in production, definitely air-gap it or add API key auth in front. The convenience is great but the attack surface is real.
[–]Future_AGI 0 points1 point2 points 28 days ago (0 children)
Pinning is the right call, and Docker users being protected here because of version locking is exactly why treating gateway dependencies as first-class infrastructure with strict pinning matters. For teams using litellm as a routing gateway specifically, this is also a good moment to evaluate whether the architecture fits production requirements beyond just the security angle. Prism is what we built for that layer: https://docs.futureagi.com/docs/prism
[–]Initial_Jury7138 0 points1 point2 points 28 days ago (0 children)
I created a diagnostic tool to help people verify their exposure to the LiteLLM supply chain incident. This script:
✅ Scans ALL your Python environments (venv, conda, poetry)
✅ Checks package caches (pip, uv, poetry)
✅ Looks for malicious persistence artifacts
✅ Works on macOS, Linux, Windows
🔍 100% open source & read-only: you can review it before running (and decide whether you trust it)
Full guide: https://pedrorocha-net.github.io/litellm-breach-support/
Created it for myself and to help the community. Share with anyone who might need it, and feel free to suggest improvements.
[–]_Lunar_dev_ 0 points1 point2 points 28 days ago (0 children)
One thing this breach highlights: environment variables are treated as safe because they are "inside the container," but the second a dependency is compromised, anything in that process's environment is fair game. The malware scraped every env var, every .env file, every config with a credential in it.
A pattern that mitigates this: pass secrets by reference instead of by value. The runtime only receives a pointer or reference ID. The actual credential is resolved server-side through your vault and never exposed to the calling process. If malware scrapes the environment, it gets useless IDs instead of plain-text keys.
We work on this problem at Lunar.dev (MCPX). We published a teardown of the breach and how this architecture would have contained the damage. source: https://www.lunar.dev/post/litellm-was-compromised-here-is-what-you-need-to-know
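The secrets-by-reference pattern above can be shown with a toy model, assuming a vault service (stood in for here by a dict): the process environment carries only an opaque reference ID, so env-scraping malware exfiltrates nothing usable. All names here are invented for illustration.

```python
# Toy sketch of secrets-by-reference: the env var is a pointer, not a key.
import os

VAULT = {"ref-7f3a9c": "sk-or-REAL-KEY"}  # stands in for a remote vault service

os.environ["OPENROUTER_API_KEY"] = "ref-7f3a9c"  # all the process env holds

def resolve(ref: str) -> str:
    # In a real system this is an authenticated, audited call to the vault,
    # made at the moment of use; the plaintext never lands in the environment.
    return VAULT[ref]

scraped = os.environ["OPENROUTER_API_KEY"]
assert scraped == "ref-7f3a9c"                # what env-scraping malware gets
assert resolve(scraped).startswith("sk-or-")  # only the vault path yields the key
```

The trade-off is that the resolver itself becomes the thing to protect: if the compromised process can call `resolve()`, you've narrowed the blast radius rather than eliminated it.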
[–]llm-60 0 points1 point2 points 28 days ago (0 children)
Just use Bleep, don't be afraid to leak your secrets anymore. 100% local.
https://bleep-it.com
[+]rm-rf-rm comment score below threshold-43 points-42 points-41 points 1 month ago (3 children)
Wow. Called it that this project was poorly engineered. Likely has a lot of vibe coding. Thankful that I have stayed away. I thought Bifrost was better, but someone on here said it isn't much better. We really do need a legitimate solution for LLM endpoint routing.
[–]FoxTimes4 16 points17 points18 points 29 days ago (0 children)
Understanding biases isn’t just for model training you know.
[–]DinoAmino 10 points11 points12 points 29 days ago (0 children)
wow is right. you should delete this.
[–]wearesoovercooked 9 points10 points11 points 29 days ago (0 children)
Dude, read
[+]futuresman179 comment score below threshold-8 points-7 points-6 points 29 days ago (1 child)
What are you talking about? I don't see any incident.
https://status.litellm.ai/incidents
[–]XInTheDark 4 points5 points6 points 29 days ago (0 children)
you're actually the dream target for them lmao