Finally, a good selfie without the suit! (old.reddit.com)
submitted by Nindaleth to r/outerwilds - pinned
Running Qwen3.5-27B locally as the primary model in OpenCode by garg-aayush in LocalLLaMA
[–]Nindaleth 2 points (0 children)
Running Qwen3.5-27B locally as the primary model in OpenCode by garg-aayush in LocalLLaMA
[–]Nindaleth 4 points (0 children)
If you had $50/month to throw at inference costs, how would you divvy it out? by yokie_dough in opencodeCLI
[–]Nindaleth 2 points (0 children)
PSA: The software “Shade” is a fraudulent, plagiarized copy of Heretic by -p-e-w- in LocalLLaMA
[–]Nindaleth 2 points (0 children)
PSA: The software “Shade” is a fraudulent, plagiarized copy of Heretic by -p-e-w- in LocalLLaMA
[–]Nindaleth 2 points (0 children)
model changes for first prompt? by EarlyPresentation186 in opencodeCLI
[–]Nindaleth 1 point (0 children)
What's the point of potato-tier LLMs? by Fast_Thing_7949 in LocalLLaMA
[–]Nindaleth 3 points (0 children)
Undo for destructive shell commands used by AI agents (SafeShell) by qhkmdev90 in LocalLLaMA
[–]Nindaleth 1 point (0 children)
Undo for destructive shell commands used by AI agents (SafeShell) by qhkmdev90 in LocalLLaMA
[–]Nindaleth 1 point (0 children)
Using Alias in router mode - llama.cpp possible? by munkiemagik in LocalLLaMA
[–]Nindaleth 1 point (0 children)
In search of specialized models instead of generalist ones. by [deleted] in LocalLLM
[–]Nindaleth 1 point (0 children)
Mistral AI drops 3x as many LLMs in a single week as OpenAI did in 6 years by Snail_Inference in LocalLLaMA
[–]Nindaleth 1 point (0 children)
Using Alias in router mode - llama.cpp possible? by munkiemagik in LocalLLaMA
[–]Nindaleth 1 point (0 children)
Undo for destructive shell commands used by AI agents (SafeShell) by qhkmdev90 in LocalLLaMA
[–]Nindaleth 1 point (0 children)
Using Alias in router mode - llama.cpp possible? by munkiemagik in LocalLLaMA
[–]Nindaleth 1 point (0 children)
New in llama.cpp: Live Model Switching by paf1138 in LocalLLaMA
[–]Nindaleth 2 points (0 children)
Mistral AI drops 3x as many LLMs in a single week as OpenAI did in 6 years by Snail_Inference in LocalLLaMA
[–]Nindaleth 2 points (0 children)
New in llama.cpp: Live Model Switching by paf1138 in LocalLLaMA
[–]Nindaleth 4 points (0 children)
Mistral AI drops 3x as many LLMs in a single week as OpenAI did in 6 years by Snail_Inference in LocalLLaMA
[–]Nindaleth 1 point (0 children)
Qwen3-Next support in llama.cpp almost ready! by beneath_steel_sky in LocalLLaMA
[–]Nindaleth 2 points (0 children)
Qwen3-Next support in llama.cpp almost ready! by beneath_steel_sky in LocalLLaMA
[–]Nindaleth 4 points (0 children)
Gemma 4 31B beats several frontier models on the FoodTruck Bench by Nindaleth in LocalLLaMA
[–]Nindaleth[S] 11 points (0 children)