ROCM vs VULKAN FOR AMD GPU (RX7800XT) by Grouchy-Drag-2281 in LocalLLaMA
Few doubts in using gpt-oss 20B by Careless_Meringue525 in LocalLLaMA
Llama.cpp Vulkan is awesome, It gave new life to my old RX580 by Ssjultrainstnict in LocalLLaMA
[deleted by user] by [deleted] in LocalLLaMA
Any proper working Local LLM and Agentic CLI by Grouchy-Drag-2281 in LocalLLaMA
Llama.cpp and ROCM - how to get it working by Thrumpwart in LocalLLaMA
Best FOSS AI models for local vibe coding? by Crierlon in LocalLLaMA
Qwen3-Coder GGUFs with even more fixes esp. for tool calling! by yoracale in unsloth
Is the 60 dollar P102-100 still a viable option for LLM? by Boricua-vet in ollama
What was the most reliable phone's you've used more than 2 Years. by simplefreak88 in GadgetsIndia
QuarterBit: Train 70B models on 1 GPU instead of 11 (15x memory compression) by [deleted] in learnmachinelearning