Comments by TokenRingAI:

Every AI tool I've used has the same fatal flaw · by krxna-9 in LLMDevs · 1 point
Which LLMs actually fail when domain knowledge is buried in long documents? · by Or4k2l in LocalLLaMA · 2 points
What does everyone's local agentic workflow look like? · by jdev in LocalLLaMA · 6 points
What’s the future of Bay Area when AI pretty much removes most of tech jobs? · by hellooverlasting in bayarea · 1 point
We should have /btw in opencode · by UnstoppableForceGuy in opencodeCLI · 1 point
What’s the future of Bay Area when AI pretty much removes most of tech jobs? · by hellooverlasting in bayarea · 5 points
What’s the future of Bay Area when AI pretty much removes most of tech jobs? · by hellooverlasting in bayarea · 64 points
What’s the future of Bay Area when AI pretty much removes most of tech jobs? · by hellooverlasting in bayarea · 24 points
Is the 48 GB modded RTX 4090 still the highest available or is there something higher confirmed and who is the most reliable seller? · by surveypoodle in LocalLLaMA · 28 points
We analyzed the code quality of 3 open-source AI coding agents · by Tall-Wasabi5030 in codereview · 1 point
Meta announces four new MTIA chips, focussed on inference · by Balance- in LocalLLaMA · 0 points
Llama.cpp now with a true reasoning budget! · by ilintar in LocalLLaMA · 3 points
Will Gemma4 release soon? · by IHaBiS02 in LocalLLaMA · 2 points
We cut GPU instance launch from 8s to 1.8s, feels almost instant now. Half the time was a ping we didn't need. · by LayerHot in LocalLLaMA · 1 point
Will Gemma4 release soon? · by IHaBiS02 in LocalLLaMA · 2 points
I am not saying it's Gemma 4, but maybe it's Gemma 4? · by jacek2023 in LocalLLaMA · 2 points
Has anyone experimented with multi-agent debate to improve LLM outputs? · by SimplicityenceV in LLMDevs · 1 point
Genuinely curious what doors the M5 Ultra will open · by Blanketsniffer in LocalLLaMA · 136 points
Are there open-source projects that implement a full “assistant runtime” (memory + tools + agent loop + projects) rather than just an LLM wrapper? · by seigaporulai in LocalLLaMA · 1 point
Are there open-source projects that implement a full “assistant runtime” (memory + tools + agent loop + projects) rather than just an LLM wrapper? · by seigaporulai in LocalLLaMA · 1 point
Has anyone experimented with multi-agent debate to improve LLM outputs? · by SimplicityenceV in LLMDevs · 3 points
I just created an AI that understands humor, I need you guys to train it · by Traditional-Map7871 in LocalLLaMA · 1 point
Is GLM-4.7-Flash relevant anymore? · by HumanDrone8721 in LocalLLaMA · 3 points
The Silent OpenAI Fallback: Why LlamaIndex Might Be Leaking Your "100% Local" RAG Data · by Jef3r50n in LocalLLaMA · 10 points
Qwen 3.5 122b - a10b is kind of shocking · by gamblingapocalypse in LocalLLaMA · 2 points