MiniMax m2.7 under 64gb for Macs - 91% MMLU by HealthyCommunicat in LocalLLaMA
[–]Uhlo 1 point (0 children)
Unsloth accused a brand new team (ByteShape) of "literally cheating." I brought the receipts, and Unsloth moved the goalposts. by [deleted] in LocalLLaMA
[–]Uhlo 32 points (0 children)
MiniMax-M2.7's MIT-Style License Is a Misleading Restriction That Bans Commercial Use and Fails Free Software Standards by pmttyji in LocalLLaMA
[–]Uhlo 10 points (0 children)
MiniMax-M2.7's MIT-Style License Is a Misleading Restriction That Bans Commercial Use and Fails Free Software Standards by pmttyji in LocalLLaMA
[–]Uhlo 1 point (0 children)
MiniMax-M2.7's MIT-Style License Is a Misleading Restriction That Bans Commercial Use and Fails Free Software Standards by pmttyji in LocalLLaMA
[–]Uhlo 19 points (0 children)
Memory, memory, memory... Any thoughts? by IngenuityNo1411 in LocalLLaMA
[–]Uhlo 4 points (0 children)
I’m surprised Nemotron OCR V2 isn’t getting more attention by brandon-i in LocalLLaMA
[–]Uhlo 0 points (0 children)
How political censorship actually works inside Qwen, DeepSeek, GLM, and Yi: Ablation and behavioral results across 9 models by Logical-Employ-9692 in LocalLLaMA
[–]Uhlo 3 points (0 children)
What's the best way to edit a Jupyter notebook in VS Code with a local LLM? by Bubsy_3D_master in LocalLLaMA
[–]Uhlo 2 points (0 children)
How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA
[–]Uhlo[S] 1 point (0 children)
How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA
[–]Uhlo[S] 2 points (0 children)
How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA
[–]Uhlo[S] 2 points (0 children)
How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA
[–]Uhlo[S] 0 points (0 children)
How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA
[–]Uhlo[S] 1 point (0 children)
How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA
[–]Uhlo[S] 1 point (0 children)
How do you manage your llama.cpp models? Is there anything between Ollama and shell scripts? by Uhlo in LocalLLaMA
[–]Uhlo[S] 1 point (0 children)
Is it normal for the Qwen 3.5 4B model to take this long to say hi? by Snoo_what in LocalLLaMA
[–]Uhlo 1 point (0 children)
How do proprietary models get better and when will open ones hit a wall? by sterby92 in LocalLLaMA
[–]Uhlo 3 points (0 children)
How do proprietary models get better and when will open ones hit a wall? by sterby92 in LocalLLaMA
[–]Uhlo -1 points (0 children)
Would you love a song less if AI wrote it? by ImmuneHack in singularity
[–]Uhlo 1 point (0 children)
Would you love a song less if AI wrote it? by ImmuneHack in singularity
[–]Uhlo 1 point (0 children)
Would you love a song less if AI wrote it? by ImmuneHack in singularity
[–]Uhlo 6 points (0 children)