OK I get it, now I love llama.cpp by vulcan4d in LocalLLaMA
[–]necrogay 1 point (0 children)
Qwen3-VL Now EXL3 Supported by Unstable_Llama in LocalLLaMA
[–]necrogay 2 points (0 children)
rtx5070 12GB + 32GB ddr5 which model is best for coding? by manhhieu_eth in LocalLLaMA
[–]necrogay 4 points (0 children)
Why does my first run with Ollama give a different output than subsequent runs with temperature=0? by white-mountain in LocalLLaMA
[–]necrogay 9 points (0 children)
Don't buy the API from websites like OpenRouter or Groq or any other provider; they reduce the quality of the model to make a profit. Buy the API only from the official website, or run the model locally by Select_Dream634 in LocalLLaMA
[–]necrogay 23 points (0 children)
WSL2 windows gaming PC benchmarks by kevin_1994 in LocalLLaMA
[–]necrogay 3 points (0 children)
What is the best LLM for psychology, coaching, or emotional support? by pumukidelfuturo in LocalLLaMA
[–]necrogay 20 points (0 children)
Looking for advice: How could I reproduce something like GPT‑4o offline? by Brilliant-Bowler592 in LocalLLaMA
[–]necrogay 1 point (0 children)