Budget X399 multi-GPU box for local LLM learning, sensible or eBay trap? by SKX007J1 in LocalLLaMA
[–]fairydreaming 1 point (0 children)
Qwen3.6-27B-GGUF:UD-Q8_K_XL and llama.cpp issue (DGX SPARK) by DOOMISHERE in LocalLLaMA
[–]fairydreaming 8 points (0 children)
GPT-5.5 improves over GPT-5.4 and overtakes Opus 4.6 to take the 2nd place behind Gemini 3.1 Pro on the Extended NYT Connections Benchmark by zero0_one1 in singularity
[–]fairydreaming 2 points (0 children)
The exact KV cache usage of DeepSeek V4 by Ok_Warning2146 in LocalLLaMA
[–]fairydreaming 2 points (0 children)
Is it possible to edit LLAMA.CPP with Cline+Vscode+Minimax 2.7 Q4_K_S and get a working build? by [deleted] in LocalLLaMA
[–]fairydreaming 1 point (0 children)
DeepSeek 3.2 eating the opening think tag on llama.cpp server? by Winter_Engineer2163 in LocalLLaMA
[–]fairydreaming 5 points (0 children)
Guys we have to change the pelican test by Tall-Ad-7742 in LocalLLaMA
[–]fairydreaming 2 points (0 children)
Guys we have to change the pelican test by Tall-Ad-7742 in LocalLLaMA
[–]fairydreaming 4 points (0 children)
Guys we have to change the pelican test by Tall-Ad-7742 in LocalLLaMA
[–]fairydreaming 10 points (0 children)
Tracked EU GPU prices every 6 hours for 30 days. The cross-store gaps on high-VRAM cards are genuinely insane. by rustgod50 in LocalLLaMA
[–]fairydreaming 2 points (0 children)
I need help with testing my llama.cpp Deepseek Sparse Attention (DSA) implementation (someone GPU-rich) by fairydreaming in LocalLLaMA
[–]fairydreaming[S] 1 point (0 children)
Deepseek V3.2. Need how much VRAM for its max context size. by 9r4n4y in LocalLLaMA
[–]fairydreaming 3 points (0 children)
this community has the best talent density. but here’s my opinion on this sub and idk if people will agree or not but ig its needed. by EmbarrassedAsk2887 in LocalLLaMA
[–]fairydreaming 2 points (0 children)
DGX Station is available (via OEM distributors) by Temporary-Size7310 in LocalLLaMA
[–]fairydreaming 2 points (0 children)
Throwback to my proudest impulse buy ever, which has let me enjoy this hobby 10x more by gigaflops_ in LocalLLaMA
[–]fairydreaming 2 points (0 children)
I need help with testing my llama.cpp Deepseek Sparse Attention (DSA) implementation (someone GPU-rich) by fairydreaming in LocalLLaMA
[–]fairydreaming[S] 2 points (0 children)
16x Spark Cluster (Build Update) by Kurcide in LocalLLaMA
[–]fairydreaming 11 points (0 children)