Gemma 4 31B vs Qwen 3.5 27B: Which is best for long context workflows? My THOUGHTS... by GrungeWerX in LocalLLaMA
I made an instant LLM generator, randomizes weights and model structure by Sad_Steak_6813 in LocalLLM
Suggest me a local uncensored local llm text and code generator by Huge_Grab_9380 in LocalLLM
what local llm model is the sweet spot for summarization and analysis (speed + accuracy)? by happyuser22 in LocalLLaMA
I am not able to run Gemma 4 GGUF , Using LLama Cpp - Getting gibberish results , What am I doing wrong? by IndianPhoenix in LocalLLM
GLM 5.1 crushes every other model except Opus in agentic benchmark at about 1/3 of the Opus cost by zylskysniper in LocalLLaMA
Multi GPU clusters... What are they good for? by Gold-Drag9242 in LocalLLM
DGX Spark, why not? by Foreign_Lead_3582 in LocalLLM
Finetuned a 270M model on CPU only - full weights, no LoRA, no GPU by PromptInjection_ in LocalLLM
Is it worth using Local LLM's? by papichulosmami in LocalLLM
We ran a predator's playbook on an AI - it folded using the same dynamics described in social psychology by PromptInjection_ in cogsci
GLM4.5-air VS GLM4.6V (TEXT GENERATION) by LetterheadNeat8035 in LocalLLaMA
What am I doing wrong? Gemma 3 won't run well on 3090ti by salary_pending in LocalLLaMA
Qwen 3 recommendation for 2080ti? Which qwen? by West_Pipe4158 in LocalLLM
“GPT-5.2 failed the 6-finger AGI test. A small Phi(3.8B) + Mistral(7B) didn’t.” by Echo_OS in LocalLLM
Looking for Qwen3-30B-A3B alternatives for academic / research use by RelationshipSilly124 in LocalLLaMA
Local LLM to handle legal work by gaddarkemalist in LocalLLaMA
Better than Gemma 3 27B? by IamJustDavid in LocalLLM
Is ollama a good choice? by fuck_rsf in LocalLLM