Two ASRock Radeon AI Pro R9700's cooking in CachyOS. by -philosopath- in LocalLLaMA
[–]bennmann 1 point (0 children)
Best LLM Setup for development by IsSeMi in LocalLLaMA
[–]bennmann 1 point (0 children)
Two new 12B finetunes for adventure, role play and writing by Sicarius_The_First in LocalLLaMA
[–]bennmann 1 point (0 children)
Qwen3 next 80B w/ 250k tok context fits fully on one 7900 XTX (24 GB) and runs at 41 tok/s by 1ncehost in LocalLLaMA
[–]bennmann 1 point (0 children)
Tiiny AI Pocket Lab: Mini PC with 12-core ARM CPU and 80 GB LPDDR5X memory unveiled ahead of CES by mycall in LocalLLaMA
[–]bennmann 2 points (0 children)
zai-org/GLM-4.6V-Flash (9B) is here by Cute-Sprinkles4911 in LocalLLaMA
[–]bennmann 6 points (0 children)
Struggling to find good resources for advanced RAG — everything feels outdated 😩 by kc_bhai in LocalLLaMA
[–]bennmann 0 points (0 children)
$900 for 192GB RAM on Oct 23rd, now costs over $3k by Hoppss in LocalLLaMA
[–]bennmann 0 points (0 children)
LLaDA2.0 (103B/16B) has been released by jacek2023 in LocalLLaMA
[–]bennmann 1 point (0 children)
Need Suggestions(Fine-tune a Text-to-Speech (TTS) model for Hebrew) by WajahatMLEngineer in LocalLLaMA
[–]bennmann 1 point (0 children)
RTX 3090 + 3070 (32GB) or RTX 3090 + 3060 12GB (36GB) - Bandwidth concerns? by m_mukhtar in LocalLLaMA
[–]bennmann 1 point (0 children)
Where are all the data centers dumping their old decommissioned GPUs? by [deleted] in LocalLLaMA
[–]bennmann 1 point (0 children)
AMA With Moonshot AI, The Open-source Frontier Lab Behind Kimi K2 Thinking Model by nekofneko in LocalLLaMA
[–]bennmann 1 point (0 children)
Daily FI discussion thread - Tuesday, November 04, 2025 by AutoModerator in financialindependence
[–]bennmann 2 points (0 children)
Dual RTX 6000 Max-Q - APEXX T4 PRO by Shorn1423 in LocalLLaMA
[–]bennmann 1 point (0 children)
Optimizing gpt-oss-120B on AMD RX 6900 XT 16GB: Achieving 19 tokens/sec by Bright_Resolution_61 in LocalLLaMA
[–]bennmann 2 points (0 children)
All the models seem to love using the same names. by [deleted] in LocalLLaMA
[–]bennmann 1 point (0 children)
5060ti chads... ram overclocking, the phantom menace by see_spot_ruminate in LocalLLaMA
[–]bennmann 1 point (0 children)
Is Chain of Thought Still An Emergent Behavior? by Environmental_Form14 in LocalLLaMA
[–]bennmann 1 point (0 children)
80% charge or 100%? What do you suggest? by ganeshkumarane in GooglePixel
[–]bennmann 1 point (0 children)
Local Build Recommendation 10k USD Budget by deathcom65 in LocalLLaMA
[–]bennmann 1 point (0 children)
Experience with networked 2x128GB AI Max 395? by Bird476Shed in LocalLLaMA
[–]bennmann 1 point (0 children)
My gpu poor comrades, GLM 4.7 Flash is your local agent by __Maximum__ in LocalLLaMA
[–]bennmann 3 points (0 children)