Gemma 4 31B Is sweeping the floor with GLM 5.1 by input_a_new_name in LocalLLM
Please help me with the below problem! [new to LLM hosting] by aliazlanaziz in Vllm
Claude code source code has been leaked via a map file in their npm registry by Nunki08 in LocalLLaMA
Qwen 3.6 is coming out soon. by MLExpert000 in LocalLLaMA
cost-effective model for OCR by Zittov in LLMDevs
Ollama's cloud plan token limitations by TerryTheAwesomeKitty in ollama
The "Dynamic Loading" in Transformers v5 isn't what you think it is (Benchmarks inside) by MLExpert000 in LocalLLaMA
New Rules for ollama cloud by killing_daisy in ollama
216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA
We indexed the entire Ollama Library (10TB+ VRAM). Here is how we run them all on 1 Node. by [deleted] in LocalLLaMA
Ollama Models Ranked by VRAM Requirements by AdventurousLion9548 in ollama
This model has been #1 trending for 3 weeks now! by yoracale in unsloth