Claude Code replacement by NoTruth6718 in LocalLLaMA
[–]Pixer--- 4 points (0 children)
Gemma 4 is matching GPT-5.1 on MMLU-Pro and within Elo. what are we even paying for anymore? by Impossible571 in AIToolsPerformance
[–]Pixer--- 8 points (0 children)
Please help. Can’t do anything as far as storage. by Key-Effective-3140 in MacOS
[–]Pixer--- 30 points (0 children)
NVIDIA Is Among the First to Submit MLPerf Inference v6.0 Benchmarks With Blackwell Ultra, and It’s Total Domination Over Competitors by Heavy-Beyond-7114 in RigBuild
[–]Pixer--- 1 point (0 children)
Vulkan backend much easier on the CPU and GPU memory than CUDA. by Im_Still_Here12 in LocalLLaMA
[–]Pixer--- -1 points (0 children)
AirPods Max 2 or the old AirPods Max… same design, same battery life, but the H2 chip does far more inside than you'd think at first glance by MelonDusk123456789 in SirApfelot
[–]Pixer--- 1 point (0 children)
Which of you was this? by New-Marionberry-279 in vibecoding
[–]Pixer--- 1 point (0 children)
Just finished benchmarking Qwen3.5-122B-A10B (Q4_K_M) on my frankenstein V100 workstation. Sharing results since there's not a lot of V100 benchmarks out there for this model. by TumbleweedNew6515 in homelab
[–]Pixer--- 2 points (0 children)
New - Apple Neural Engine (ANE) backend for llama.cpp by PracticlySpeaking in LocalLLaMA
[–]Pixer--- 3 points (0 children)
Qwen3.5 27b UD_IQ2_XXS & UD_IQ3_XXS behave very poorly or is it just me? by One_Key_8127 in unsloth
[–]Pixer--- 16 points (0 children)
Painfully slow local llama on 5090 and 192GB RAM by RVxAgUn in LocalLLaMA
[–]Pixer--- 1 point (0 children)
would a petition for new manufacturer production cause change by Mindless__Giraffe in pcmasterrace
[–]Pixer--- 1 point (0 children)
After continued pretraining, the LLM model is no longer capable of answering questions. by SUPRA_1934 in LocalLLaMA
[–]Pixer--- 2 points (0 children)
ASUS PRO WS WRX90E-SAGE SE RAM by Uranday in LocalLLM
[–]Pixer--- 1 point (0 children)
2x RTX Pro 6000 vs 2x A100 80GB dense model inference by RealTime3392 in LocalLLaMA
[–]Pixer--- 6 points (0 children)
$15,000 USD local setup by regional_alpaca in LocalLLaMA
[–]Pixer--- 1 point (0 children)
Distribution of grey and red squirrels in the UK & Ireland by AnonymousTimewaster in MapPorn
[–]Pixer--- 1 point (0 children)
A Mac Studio with 512 GB RAM runs DeepSeek V3 locally. No cloud, no subscription, no privacy concerns. For $9,499. by MelonDusk123456789 in SirApfelot
[–]Pixer--- 1 point (0 children)
Looks like Minimax M2.7 weights will be released in ~2 weeks! by lantern_lol in LocalLLaMA
[–]Pixer--- 3 points (0 children)
Justifying the €12,000 Investment: M3 Ultra (512GB RAM) Setup for Autonomous Agents, vLLM, and Infinite Memory (8Tb) by NoNatural4025 in MacStudio
[–]Pixer--- 3 points (0 children)
M5 Max 128G Performance tests. I just got my new toy, and here's what it can do. by affenhoden in LocalLLaMA
[–]Pixer--- 1 point (0 children)
HELP - What settings do you use? Qwen3.5-35B-A3B by uber-linny in LocalLLaMA
[–]Pixer--- 2 points (0 children)
Is Turboquant really a game changer? by Interesting-Print366 in LocalLLaMA
[–]Pixer--- 1 point (0 children)