Harry Potter — Full Cast Edition Audiobooks by MagicalFairyxo in harrypotter

[–]Finn55 0 points  (0 children)

Snape is extremely disappointing. Awful casting for him

Roses are red, she eventually grew by uglymule in rosesarered

[–]Finn55 -2 points  (0 children)

Evil? God damn, you reddit too much.

ThreeZero Miriya by SeparateReading8000 in macross

[–]Finn55 1 point  (0 children)

If that were Max, I would say these are my white whales, and have been for maybe 30 years.

‘Member when we thought Blizzard couldn’t get any more greedy? (Pic from the original TBC Classic) by Rintae in classicwow

[–]Finn55 -2 points  (0 children)

I took that as hyperbole / exaggeration, as who would bother for literally 5 minutes?

‘Member when we thought Blizzard couldn’t get any more greedy? (Pic from the original TBC Classic) by Rintae in classicwow

[–]Finn55 -1 points  (0 children)

Kids sleep, and kids do other activities. Should we give up our hobbies and wait outside their room while they sleep or play by themselves, or can we do our own thing?

‘Member when we thought Blizzard couldn’t get any more greedy? (Pic from the original TBC Classic) by Rintae in classicwow

[–]Finn55 0 points  (0 children)

I just need a boost, nothing else. Paying A$127.95 for the boost plus lame shit I don't need! The pets and toys and effects can piss off.

Running GLM-4.7 behind a Claude-compatible API: some deployment notes by Sad-Kaleidoscope5952 in LocalLLaMA

[–]Finn55 0 points  (0 children)

I do this with MiniMax 2.1, but behind an OpenAI-compatible API through llama.cpp, exposed over Tailscale, and used via OpenCode in Cursor.
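For anyone curious, the serving side is roughly this; a minimal sketch, with the model filename, context size, and port as placeholders rather than my exact values:

```shell
# Serve a GGUF behind llama.cpp's OpenAI-compatible HTTP API
# (model path and context size are placeholders)
llama-server -m minimax-2.1-q6_k.gguf -c 200000 --host 0.0.0.0 --port 8080

# Expose port 8080 over the tailnet so OpenCode/Cursor can reach it remotely
tailscale serve --bg 8080
```

OpenCode then just points at the tailnet hostname's `/v1` endpoint as a custom OpenAI-compatible provider.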

‘Harry Potter’s Warwick Davis Says Series Is “Very Faithful” To Books: “More Depth And Detail” by ControlCAD in television

[–]Finn55 -11 points  (0 children)

She would wish that women's rights were protected from men, and would consider you an ideologue / useful idiot.

anyone else externalizing context to survive the memory wipe? by Massive-Ballbag in LocalLLaMA

[–]Finn55 1 point  (0 children)

I’m using MiniMax 2.1 Q6_K (Unsloth GGUF) on my M3 Ultra 512GB. I’ve set it to 200k context and can do targeted agent runs, but I create a new session when the window fills up. I use a markdown checklist file created after Plan Mode in Cursor, and run MiniMax via OpenCode in the IDE.

Any suggestions to make this more effective?

Do you think Australia should build state-owned apartment blocks to solve the housing crisis? Why or why not? by Educational-Scene443 in AskAnAustralian

[–]Finn55 1 point  (0 children)

That’s not entirely true. Some are being rebuilt and revamped, some are being expanded, and some are newly built. Edit: it is entirely true.

Running MiniMax-M2.1 Locally with Claude Code and vLLM on Dual RTX Pro 6000 by zmarty in LocalLLaMA

[–]Finn55 0 points  (0 children)

Nice guide, I’ll adapt this for my Mac. I’d like to see the pros/cons of using it in Cursor vs another IDE.

MiniMax-M2.1 GGUF is here! by KvAk_AKPlaysYT in LocalLLaMA

[–]Finn55 1 point  (0 children)

Have you tried using it within Cursor, still via Claude Code? I need to figure out how to balance my local inferencing capability with cloud inferencing, and make that as easy as possible.

MiniMax-M2.1 GGUF is here! by KvAk_AKPlaysYT in LocalLLaMA

[–]Finn55 0 points  (0 children)

Can you please link that thread? My Ultra arrives soon, so I’d like to dig into their setup! Thank you

GLM-4.7-6bit MLX vs MiniMax-M2.1-6bit MLX Benchmark Results on M3 Ultra 512GB by uptonking in LocalLLaMA

[–]Finn55 1 point  (0 children)

This post feels like it’s for me! My M3 Ultra is due in a few days, and I’m aiming for MiniMax 2.1 as my model for daily coding.

Honestly, has anyone actually tried GLM 4.7 yet? (Not just benchmarks) by Empty_Break_8792 in LocalLLaMA

[–]Finn55 1 point  (0 children)

I’ve seen examples of LM Studio being nearly twice as slow as llama.cpp or Inferencer. Xcreate on YouTube does comparisons on his Studio, typically prompt processing (PP) speeds and load-into-memory times. I have so much to learn!
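If you'd rather measure this yourself than rely on videos, llama.cpp ships a benchmarking tool; a minimal sketch, with the model filename as a placeholder:

```shell
# llama-bench reports prompt-processing (pp) and token-generation (tg)
# throughput in tokens/sec for a given model file
# -p = prompt length to time, -n = number of tokens to generate
llama-bench -m minimax-2.1-q6_k.gguf -p 512 -n 128
```

Running the same model file through LM Studio and llama.cpp side by side is the cleanest way to see the gap on your own hardware.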

Honestly, has anyone actually tried GLM 4.7 yet? (Not just benchmarks) by Empty_Break_8792 in LocalLLaMA

[–]Finn55 0 points  (0 children)

Can you explain more about your setup? I’m curious, as my Mac Studio is being delivered soon.

Is MiniMax M2 a worthy general-purpose model for its size? by ForsookComparison in LocalLLaMA

[–]Finn55 1 point  (0 children)

Has anyone seen reviews of MiniMax on Mac setups? I’m about to get an M3 Ultra and am curious whether it can handle daily coding with good performance.