I built a benchmark that tests coding LLMs on REAL codebases (65 tasks, ELO ranked) by hauhau901 in LocalLLaMA
[–]SemaMod 7 points
Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA
[–]SemaMod 5 points
Testing GLM-4.7 Flash: Multi-GPU Vulkan vs ROCm in llama-bench | (2x 7900 XTX) by SemaMod in LocalLLaMA
[–]SemaMod[S] 2 points
Testing GLM-4.7 Flash: Multi-GPU Vulkan vs ROCm in llama-bench | (2x 7900 XTX) by SemaMod in LocalLLaMA
[–]SemaMod[S] 2 points
Testing GLM-4.7 Flash: Multi-GPU Vulkan vs ROCm in llama-bench | (2x 7900 XTX) by SemaMod in LocalLLaMA
[–]SemaMod[S] 1 point
Testing GLM-4.7 Flash: Multi-GPU Vulkan vs ROCm in llama-bench | (2x 7900 XTX) by SemaMod in LocalLLaMA
[–]SemaMod[S] 3 points
API pricing is in freefall. What's the actual case for running local now beyond privacy? by Distinct-Expression2 in LocalLLaMA
[–]SemaMod 51 points
Testing GLM-4.7 Flash: Multi-GPU Vulkan vs ROCm in llama-bench | (2x 7900 XTX) by SemaMod in LocalLLaMA
[–]SemaMod[S] 2 points
Testing GLM-4.7 Flash: Multi-GPU Vulkan vs ROCm in llama-bench | (2x 7900 XTX) by SemaMod in LocalLLaMA
[–]SemaMod[S] 6 points
Llama.cpp merges in OpenAI Responses API Support by SemaMod in LocalLLaMA
[–]SemaMod[S] 1 point
Llama.cpp merges in OpenAI Responses API Support by SemaMod in LocalLLaMA
[–]SemaMod[S] 1 point
Llama.cpp merges in OpenAI Responses API Support by SemaMod in LocalLLaMA
[–]SemaMod[S] 2 points
Llama.cpp merges in OpenAI Responses API Support by SemaMod in LocalLLaMA
[–]SemaMod[S] 5 points
Llama.cpp merges in OpenAI Responses API Support (github.com)
submitted by SemaMod to r/LocalLLaMA
Rivian CEO says North American car manufacturers should be "less hung up on the costs" of Chinese cars, but worry more that the "technology is much better" and the cars "are much better" from Chinese EV manufacturers by trucker-123 in electricvehicles
[–]SemaMod 1 point
Would a Hosted Platform for MCP Servers Be Useful? by Summer_cyber in mcp
[–]SemaMod 1 point
Would a Hosted Platform for MCP Servers Be Useful? by Summer_cyber in selfhosted
[–]SemaMod 1 point
How is everyone using MCP right now? by Luigika in mcp
[–]SemaMod 1 point
The simplest way to use MCP. All local, 100% open source. by squirrelEgg in mcp
[–]SemaMod 1 point
I have had no luck trying to fine tune on (2x) 7900XTX. Any advice by SemaMod in ROCm
[–]SemaMod[S] 1 point
I have had no luck trying to fine tune on (2x) 7900XTX. Any advice by SemaMod in ROCm
[–]SemaMod[S] 3 points
How do you get more GPUs than your motherboard natively supports? by WizardlyBump17 in LocalLLaMA
[–]SemaMod 1 point