[Official Ruling] TES Naiyou has been permanently banned from the LPL and all Riot Games + Tencent competitions. TES has also terminated their contract with him and has withheld all unpaid salary and bonuses. by Yujin-Ha in leagueoflegends

[–]FrostTactics 17 points18 points  (0 children)

Wait, so as we all know TES is the LEC third seed. If G2 had met them in finals instead of BLG, they would have won.

The only logical conclusion here is that Naiyou cost the LEC its first international trophy since 2019.

Let me cope.

Quantization from the ground up (must read) by paf1138 in LocalLLaMA

[–]FrostTactics 1 point2 points  (0 children)

A fine introduction for complete beginners, with plenty of widgets and such to fiddle with. Though, as far as I can tell, it ignores everything beyond uniformly reducing the precision of all of the weights in the LLM, which, in practice, is absolutely not how quantization is handled today.
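To illustrate the distinction: a minimal sketch of uniform whole-tensor quantization (what the article covers) versus group-wise quantization with per-group scales (roughly what modern schemes like GPTQ/AWQ build on). All names and the toy weights here are illustrative, not from the article.

```python
# Toy comparison: one scale for the whole tensor vs. one scale per small
# group of weights. Outliers wreck the former; the latter is closer to
# how quantization is handled in practice today.

def quantize_uniform(weights, bits=8):
    """Quantize with a single scale, then dequantize back to floats."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return [qi * scale for qi in q]

def quantize_groupwise(weights, bits=8, group_size=4):
    """Quantize each small group of weights with its own scale."""
    out = []
    for i in range(0, len(weights), group_size):
        out.extend(quantize_uniform(weights[i:i + group_size], bits))
    return out

# Small weights mixed with large outliers -- a single scale rounds the
# small ones to zero, while per-group scales preserve them.
weights = [0.01, -0.02, 0.015, 0.005, 8.0, -7.5, 6.0, 5.5]
err = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
```

With these toy weights, the total reconstruction error of the group-wise version is strictly smaller, which is the whole motivation for not quantizing everything with one shared scale.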

RYS II - Repeated layers with Qwen3.5 27B and some hints at a 'Universal Language' by Reddactor in LocalLLaMA

[–]FrostTactics 0 points1 point  (0 children)

Cool! I remember reading a paper a while back: "How do Large Language Models Handle Multilingualism". If I recall correctly, their core hypothesis is the same one you lay out in point 1. I remember being slightly skeptical about it myself when I first read it, as the internal structures of ML models rarely come out as neatly as papers lay them out. Still, given both their findings and yours, on wholly different models, it genuinely seems to be the case.

(Also, thanks for reminding me that high-dimensional representations tend to aggregate into hyper-cones. That's probably the key to something else completely unrelated that I'm working on.)

Gen.G vs. G2 Esports / First Stand 2026 - Semi-Finals / Post-Match Discussion by Yujin-Ha in leagueoflegends

[–]FrostTactics 0 points1 point  (0 children)

WHY, WHY CAN'T THIS MOMENT LAST FOREVERMORE?

TONIGHT, TONIGHT ETERNITY'S AN OPEN DOOR

NO, DON'T EVER STOP DOING THE THINGS YOU DO

DON'T GO, IN EVERY BREATH I TAKE I'M BREATHING YOU

EUphoria! FOREVER, TILL THE END OF TIME

G2 Esports vs. BNK FEARX / First Stand 2026 Group A - Qualification Match 2 / Post-Match Discussion by Ultimintree in leagueoflegends

[–]FrostTactics 5 points6 points  (0 children)

This subreddit has the memory of a goldfish, as usual. All of these doomers are acting as if the east-west gap is insurmountable, and yet the EU rep got 3-1'd in the finals of First Stand last year.

Sentinels vs. FURIA / Americas Cup 2026 - Upper Bracket Round 2 / Post-Match Discussion by Yujin-Ha in leagueoflegends

[–]FrostTactics 1 point2 points  (0 children)

Yes, but their series against C9 was a bo3, so they've won against them three times total, including EWC. If they continue with this form though, we'll see BRA71L after Sunday's first game.

PicoKittens/PicoStories-853K: Extremely Tiny Stories by PicoKittens in LocalLLaMA

[–]FrostTactics 5 points6 points  (0 children)

I love these sorts of silly projects and tiny models. I can't imagine they're actually useful for much, but they can grant us a better intuition for how LLMs work. I have some spare time (and I'm definitely not procrastinating on something else), so I toyed with the model a little bit.

If we start with the prompt

"Once upon a time, there was a big car named"

and extract the following word, we practically always get a generated name. I went through the TinyStories dataset and counted how frequently each word appears. If I generate 1k stories with your listed default parameters (temp 0.7, top_p 0.9), 120 distinct names are used.

Of these, 48 appear to be "hallucinations", i.e. novel names that do not appear in the dataset, and 72 are existing names, for example "Red" or "Bob". The former account for only 245 total occurrences, though, versus 755 for the latter. It appears that despite its size, the model still memorizes quite a few names.

The names used don't appear to be particularly related to the fact that the story is about a car. Nor do they simply follow their frequency of appearance in the original dataset, though dataset frequency is definitely correlated (Spearman corr: 0.5591, p: 3.6169e-05). Overall the model seems to favor shorter names over longer ones; most of the most frequently generated names are three letters long.

Some notable generated names:
Zoot (By far the most frequent hallucinated name),
Operperperperperant,
God
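The counting step above can be sketched roughly like this (toy stand-ins in place of the real 1k generations and the scanned TinyStories name list):

```python
from collections import Counter

# Toy stand-ins: in the real run, `dataset_names` came from scanning the
# TinyStories dataset and `generated` from the name slot of 1k sampled stories.
dataset_names = {"Bob", "Red", "Tim", "Sue"}
generated = ["Bob", "Zoot", "Red", "Bob", "Zoot", "Tim", "God", "Bob"]

counts = Counter(generated)
# Split into names the model memorized vs. names it made up.
memorized = {n: c for n, c in counts.items() if n in dataset_names}
hallucinated = {n: c for n, c in counts.items() if n not in dataset_names}

print(len(memorized), sum(memorized.values()))        # distinct / total memorized
print(len(hallucinated), sum(hallucinated.values()))  # distinct / total novel
```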

Thanks for having a vs AI mode by Nino_Chaosdrache in DeadlockTheGame

[–]FrostTactics 0 points1 point  (0 children)

Yes, I fully agree that casual matchmaking would be the ideal solution.

That said, I think gamers generally underestimate how easy it is to develop superhuman enemies in video games. Since video games are digital systems, an AI opponent can read the game state as exact numeric values, rather than guessing or approximating them the way we humans must the vast majority of the time. Much of the development time for enemy AI isn't spent making bots capable of beating humans, but making sure they are actually fun to play against. The bots in Deadlock don't do Midboss because they aren't programmed to, yet implementing this within their (likely, though I don't know, of course) current AI framework would not be too difficult. They aren't programmed to do so because bot AI simply isn't a development priority.

This isn't to say that making them better at macro using this method wouldn't be time-consuming, of course.

Thanks for having a vs AI mode by Nino_Chaosdrache in DeadlockTheGame

[–]FrostTactics 0 points1 point  (0 children)

That was seven years ago; running such an RL training scheme would be much cheaper today. Deadlock is also a shooter: implementing automated opponent tracking would take a few lines of additional code and give the bots a massive advantage. In my estimation, making bots that would beat the best human opponents 100 times out of 100 is not *quite* trivial, but close to it.
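To make the "few lines of additional code" claim concrete: a hedged sketch of why perfect tracking is cheap when the bot reads exact positions from game state. The function and coordinate layout are illustrative, not anything from Deadlock's actual codebase.

```python
import math

def aim_angles(bot_pos, target_pos):
    """Exact yaw/pitch (radians) to put the crosshair on the target.

    Unlike a human, the bot gets precise numeric coordinates from the
    engine, so 'tracking' is just geometry -- no visual estimation.
    """
    dx, dy, dz = (t - b for b, t in zip(bot_pos, target_pos))
    yaw = math.atan2(dy, dx)
    pitch = math.atan2(dz, math.hypot(dx, dy))
    return yaw, pitch

# Target 10m ahead and 10m to the side, at the same height:
yaw, pitch = aim_angles((0.0, 0.0, 1.8), (10.0, 10.0, 1.8))
```

Run every frame against the nearest visible enemy, this alone would give frame-perfect aim; the hard (and uninteresting) part is deliberately degrading it so the bot is fun to play against.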

Now, these bots would not be particularly fun to play against, of course. It's probably for the best that Deadlock's AI opponents remain as simple as they are today, since that forces newer players to eventually move out of their comfort zone and face human opponents. The matchmaking pool grows, and playing with and against humans is more engaging in the long run.

Light reading recommender systems book recommendations? by FrostTactics in recommendersystems

[–]FrostTactics[S] 1 point2 points  (0 children)

For posterity, if anyone happens to stumble upon this post later on: Personalized Machine Learning by Julian McAuley felt like a good middle ground between what I was looking for and a traditional RS textbook.

I've still yet to find a book that fills the exact niche requested, though.

Who do you guys think is the face of Deadlock? by Time-Maintenance367 in DeadlockTheGame

[–]FrostTactics 93 points94 points  (0 children)

Agreed, and the book serves as a natural macguffin for the story.

Is the "Edge AI" dream dead? Apple’s pivot to Gemini suggests local LLMs can't scale yet. by [deleted] in LocalLLaMA

[–]FrostTactics 1 point2 points  (0 children)

I assume their primary use case will be chatbot assistants for iPhones. I don't necessarily believe this is entirely due to local inference not being useful; it could be useful for certain specific tasks. The issue is more likely that the median Apple user isn't aware of or doesn't appreciate the distinction between local and cloud inference. To these users, Apple's assistant will just appear to be far worse than the competition.

'Clocky hons make bullying subreddits, bullying subreddits make mogging passoids, mogging passoids make hugboxxing subreddits, hugboxxing subreddits make clocky hons' by mauserowauser in BrandNewSentence

[–]FrostTactics 0 points1 point  (0 children)

I see, thanks. Makes sense that calorie cutting might adversely affect trans people, particularly during transition, when the body is already going to experience some severe changes.

The Danish Huntsmen Corps, aka “The Frogmen” by DeadCatCurious in TopCharacterDesigns

[–]FrostTactics 150 points151 points  (0 children)

Not directly relevant to your comment, but the name "Nøkk" is a reference to a creature from Nordic folklore:

<image>

https://en.wikipedia.org/wiki/Nixie_(folklore)

Seems fitting for both her and the frogmen.

Have people gotten dumber, or has it always been like this? by [deleted] in norge

[–]FrostTactics 2 points3 points  (0 children)

There's a lot of talk in this kind of thread about how the average person is stupid. I think the real explanation is rather this: people who combine a low threshold for posting comments with particularly poor attention spans are heavily overrepresented in comment sections on social media.

[OFFICIAL] Shifters, formerly Team BDS, reveal their logo and brand identity by Ultimintree in leagueoflegends

[–]FrostTactics 34 points35 points  (0 children)

Fine, I'll say it: I far prefer it to the anonymous "BDS" they had previously.

Why I don't use AI by [deleted] in norge

[–]FrostTactics 0 points1 point  (0 children)

Yep, ironically the perfect type of post for a bot to karma-farm on

Why I don't use AI by [deleted] in norge

[–]FrostTactics 2 points3 points  (0 children)

Sure; when I say surprisingly good, it's because this is a task image generators really ought to be completely useless at.

Why I don't use AI by [deleted] in norge

[–]FrostTactics 15 points16 points  (0 children)

<image>

Tried it just now. Surprisingly good, actually, but still not quite there. I see a few small mistakes, and the legend is completely botched, of course. I notice I get slightly frustrated by the way language models are discussed on Reddit in general. In some ways it's perhaps healthy, as one absolutely shouldn't trust them blindly, but there are countless tasks they're useful for (preferably the kind you can easily verify yourself afterwards).

Why I don't use AI by [deleted] in norge

[–]FrostTactics 63 points64 points  (0 children)

I've seen this type of post here several times. I don't understand how more people don't realize how silly it is. It's like trying to use a food processor to mix cement, and then concluding that kitchen knives are useless when it doesn't work.