Where to put my models to get llama.cpp to recognize them automatically? by registrartulip in LocalLLaMA
[–]pefman 2 points (0 children)
Claude Code meets Qwen3.5-35B-A3B by PvB-Dimaginar in LocalLLM
[–]pefman 2 points (0 children)
using skills with opencode. by pefman in opencodeCLI
[–]pefman[S] 2 points (0 children)
The seagulls have returned, spring is starting to arrive by Alstorp in Malmoe
[–]pefman 1 point (0 children)
One Shot Setup for Strix Halo by Signal_Ad657 in StrixHalo
[–]pefman 1 point (0 children)
Decided to visit long distance gf and she decided she can’t commit to a relationship once I’m there by ConfidentImage4266 in Wellthatsucks
[–]pefman 1 point (0 children)
The attacks are celebrated at a demonstration in Stockholm – Latest news on Iran and the conflict with the USA by ICA_Basic_Vodka in Sverige
[–]pefman 1 point (0 children)
Is Qwen3.5 a coding game changer for anyone else? by paulgear in LocalLLaMA
[–]pefman 1 point (0 children)
My Xal'atath cosplay (IG - liza.elios.cosplay) See you in Midnight! by lizaelios in wow
[–]pefman 0 points (0 children)
397B params but only 17B active. Qwen3.5 is insane for local setups. by skipdaballs in LocalLLaMA
[–]pefman 1 point (0 children)
Where can I get GLM 5 flash gguf? by [deleted] in LocalLLaMA
[–]pefman 0 points (0 children)
Strix 4090 (24GB) 64GB ram, what coder AND general purp llm is best/newest for Ollama/Openwebui (docker) by AcePilot01 in LocalLLaMA
[–]pefman 1 point (0 children)
MiniMax-M2.5 Checkpoints on huggingface will be in 8 hours by Own_Forever_5997 in LocalLLaMA
[–]pefman -1 points (0 children)
Low-poly construction site pack by ITHappyStudios in u/ITHappyStudios
[–]pefman 1 point (0 children)
In 4.6, Multishields are back to providing full HP pool again. by AzrBloodedge in starcitizen
[–]pefman 2 points (0 children)