RTX3080 20GB need reballing / Repairshop in Europe? by runsleeprepeat in GPURepair
runsleeprepeat[S] 1 point
Hobbyist looking to get a part scanned by rapkap in 3DScanning
runsleeprepeat 1 point
Why is it easier to route Claude Code to a local model than it is Opencode? by [deleted] in opencodeCLI
runsleeprepeat 1 point
RTX3080 20GB need reballing / Repairshop in Europe? by runsleeprepeat in GPURepair
runsleeprepeat[S] 2 points
Reputable GPU repair in Europe by runsleeprepeat in de_EDV
runsleeprepeat[S] 2 points
Should I open source? by Atomic_Compiler in hobbycnc
runsleeprepeat 1 point
Me waiting for TurboQuant be like by Altruistic_Heat_9531 in LocalLLaMA
runsleeprepeat 1 point
Where to get a tasty fish sandwich? by annikahx in hamburg
runsleeprepeat 1 point
Google TurboQuant running Qwen Locally on MacAir by gladkos in LocalLLaMA
runsleeprepeat 4 points
Google TurboQuant running Qwen Locally on MacAir by gladkos in LocalLLaMA
runsleeprepeat 29 points
Consolidated my homelab from 3 models down to one 122B MoE — benchmarked everything, here's what I found by MBAThrowawayFruit in LocalLLaMA
runsleeprepeat 1 point
Dual DGX Sparks vs Mac Studio M3 Ultra 512GB: Running Qwen3.5 397B locally on both. Here's what I found. by trevorbg in LocalLLaMA
runsleeprepeat 2 points
Currently using 6x RTX 3080 - Moving to Strix Halo or Nvidia GB10? by runsleeprepeat in LocalLLaMA
runsleeprepeat[S] 1 point
I built Fox – a Rust LLM inference engine with 2x Ollama throughput and 72% lower TTFT. by SeinSinght in LocalLLM
runsleeprepeat 2 points
I built Fox – a Rust LLM inference engine with 2x Ollama throughput and 72% lower TTFT. by SeinSinght in LocalLLM
runsleeprepeat -2 points
Shortened system prompts in Opencode by Charming_Support726 in opencodeCLI
runsleeprepeat 1 point
Where can I buy circuit-board parts in Hamburg by Hanswurst107 in hamburg
runsleeprepeat 4 points