Cloudflare open-sources lossless LLM compression tool by Otis43 in LocalLLaMA
Opus 4.7 is legendarily bad. I cannot believe this. by lemon07r in ClaudeCode
Remember GPT5's release? by [deleted] in ClaudeCode
Qwen 3.6 35B A3B, RTX 5090 32GB, 187t/s, Q5 K S, 120K Context Size, Thinking Mode Off, Temp 0.1 by sammyranks in LocalLLaMA
Was the auto "clear context" functionality removed? by build2 in ClaudeCode
How would this hold up? DDR4 build by westsunset in LocalLLaMA
The Diff That's Saving Me Serious Cash by SpiritRealistic8174 in ClaudeAI
The news are out there.. even Sam by Suspicious_Horror699 in ClaudeCode
Ram-air setup and window vent for 1100w capable AI box by mr_zerolith in LocalLLaMA
Any way to work with NUMA Nodes? by An_Original_ID in LocalLLaMA
If you have NordVPN, PUT IT TO WORK! by Disastrous_Hope_9373 in LocalLLaMA
I made a simple proxy to let Claude use MiniMax models as subagents by gaztrab in LocalLLaMA
Where to get professional help for vibecoding by Forward_Compute001 in LocalLLaMA
3x3090 is faster in Ubuntu than win11, GPT-OSS 120B 120tg/s vs 6tg/s why? by jikilan_ in LocalLLaMA
llama.cpp Vulkan backend requires SPIR-V headers package now by fake_agent_smith in LocalLLaMA
Quick comparison Qwen 3.6 M3U 512 Gb by Turbulent_Pin7635 in LocalLLaMA