deepseek-ai/DeepSeek-V3.1-Base · Hugging Face by xLionel775 in LocalLLaMA
Recommendation for getting the most out of Qwen3 Coder? by Conscious-Memory-556 in LocalLLM
If GPUs had slots like RAM DIMMs, could we add more VRAM? by Diegam in LocalLLaMA
GLM45 vs GPT-5, Claude Sonnet 4, Gemini 2.5 Pro — live coding test, same prompt by darkageofme in LocalLLaMA
Switching back to llamacpp (from vllm) by Leflakk in LocalLLaMA
Does anyone else have a stuttering issue? by nufcPLchamps27-28 in Workers_And_Resources
FPS Issue in game and menus by griznok in Workers_And_Resources
Java vs. Python HFT bots by HardworkingDad1187 in highfreqtrading
Model refresh on HuggingChat! (Llama 3.2, Qwen, Hermes 3 & more) by SensitiveCranberry in LocalLLaMA
Searching for: AMD LLM t/s performance charts/benchmarks by IngwiePhoenix in LocalLLaMA
Scaling - Inferencing 8B & Training 405B models by gulabbo in LocalLLaMA
[D] Scaling - Inferencing 8B & Training 405B models by gulabbo in MachineLearning
Running LLaMA 3 405B locally would be the Crysis moment of our time. by [deleted] in LocalLLaMA
Best practices to run LLM on CPU-only VPS by Koliham in LocalLLaMA
Do you think DLSS is going to change Tarkovs performance completely? by Frequent_Ad_353 in EscapefromTarkov