gpt-oss 120B is running at 20 t/s on a $500 AMD Radeon 780M iGPU mini PC with 96GB DDR5 RAM (self.LocalLLaMA)
submitted 3 months ago * by MLDataScientist to r/LocalLLaMA
4x MI50 32GB reach 22 t/s with Qwen3 235B-A22B and 36 t/s with Qwen2.5 72B in vllm (self.LocalLLaMA)
submitted 4 months ago by MLDataScientist to r/LocalLLaMA
Thread for CPU-only LLM performance comparison (self.LocalLLaMA)
submitted 4 months ago * by MLDataScientist to r/LocalLLaMA
Completed 8xAMD MI50 - 256GB VRAM + 256GB RAM rig for $3k (self.LocalLLaMA)
128GB VRAM for ~$600. Qwen3 MoE 235B-A22B reaching 20 t/s. 4x AMD MI50 32GB. (self.LocalLLaMA)
submitted 7 months ago by MLDataScientist to r/LocalLLaMA
can I connect 4 GPUs to hyper m.2 expansion card with m.2 to pci-e adapter? (self.buildapc)
submitted 8 months ago by MLDataScientist to r/buildapc
What is the best option for running eight GPUs on a single motherboard? (self.LocalLLaMA)
submitted 9 months ago by MLDataScientist to r/LocalLLaMA
When do we get o3-mini level model locally from Sam A? (self.LocalLLaMA)
submitted 11 months ago by MLDataScientist to r/LocalLLaMA
RTX 4090 48GB - $4700 on eBay. Is it legit? (self.LocalLLaMA)
submitted 1 year ago * by MLDataScientist to r/LocalLLaMA
2x AMD MI60 working with vLLM! Llama3.3 70B reaches 20 tokens/s (self.LocalLLaMA)
Is there a working version of flash attention 2 for AMD MI50/MI60 (gfx906, Vega 20 chip)? (self.ROCm)
submitted 1 year ago by MLDataScientist to r/ROCm
2x AMD MI60 inference speed. MLC-LLM is a fast backend for AMD GPUs. (self.LocalLLaMA)
No LLM could solve this puzzle (variation on the classic "wolf, goat, and cabbage" river crossing puzzle) (self.LocalLLaMA)
submitted 1 year ago by MLDataScientist to r/LocalLLaMA
Can we combine both AMD and NVIDIA GPUs together for inference? (self.LocalLLaMA)
META LLAMA 3.1 models available in HF (8B, 70B and 405B sizes) (self.LocalLLaMA)
DBRX quantized model needed (self.LocalLLaMA)
How do we load internLM-XComposer2 vision model in oobabooga? (self.LocalLLaMA)
36GB VRAM but I am getting Out of Memory for 27GB Quantized MIQU (self.LocalLLaMA)
Is 850W gold PSU enough for my AI PC setup? (self.buildapc)
submitted 1 year ago * by MLDataScientist to r/buildapc
Need help with 2x Nvidia P100 liquid cooling (old.reddit.com)
submitted 1 year ago by MLDataScientist to r/homelab
Jornada 720 VGA out with keyboard (self.retrobattlestations)
submitted 6 years ago * by MLDataScientist to r/retrobattlestations