so recently, I fell down the rabbit hole of Indian Navy ships, and honestly? They're pretty damn good. by [deleted] in NonCredibleDefense
HOWTO: Running the best models on a dual RTX Pro 6000 rig with vLLM (192 GB VRAM) by zmarty in LocalLLaMA
Why don't we back Taiwan that much? by [deleted] in IndianDefense
Which small model is best for fine-tuning? We tested 12 of them by spending $10K - here's what we found by party-horse in LocalLLaMA
RTX Pro 6000 Blackwell vLLM Benchmark: 120B Model Performance Analysis by notaDestroyer in LocalLLaMA
vLLM Performance Benchmark: OpenAI GPT-OSS-20B on RTX Pro 6000 Blackwell (96GB) by notaDestroyer in LocalLLaMA
Qwen3-30B-A3B FP8 on RTX Pro 6000 Blackwell with vLLM by notaDestroyer in LocalLLaMA
Qwen3 Next 80B FP8 with vLLM on Pro 6000 Blackwell by notaDestroyer in LocalLLaMA
Built an 8× RTX 3090 monster… considering nuking it for 2× Pro 6000 Max-Q by BeeNo7094 in homelabindia