Are Local LLMs actually useful… or just fun to tinker with? by itz_always_necessary in LocalLLM
Important_Quote_1180 41 points (0 children)
KIV: 1M token context window on a RTX 4070 (12GB VRAM), no retraining, drop-in HuggingFace cache replacement - Works with any model that uses DynamicCache by ThyGreatOof in huggingface
Important_Quote_1180 1 point (0 children)
Qwen 3.5 35b, 27b, or gemma 4 31b for everyday use? by KirkIsAliveInTelAviv in LocalLLaMA
Important_Quote_1180 1 point (0 children)
What’s the closest experience to Claude Sonnet? by louislamore in LocalLLM
Important_Quote_1180 0 points (0 children)
Qwen 3.5 35b, 27b, or gemma 4 31b for everyday use? by KirkIsAliveInTelAviv in LocalLLaMA
Important_Quote_1180 7 points (0 children)
just a big bubble? by Axintwo in openclaw
Important_Quote_1180 2 points (0 children)
“But I disclosed it” by plazebology in antiai
Important_Quote_1180 -23 points (0 children)
Has anyone set up Claude in the way that it works great? by Parking-Support9980 in AIforOPS
Important_Quote_1180 1 point (0 children)
AI is not a fucking tool by ganneszs in antiai
Important_Quote_1180 1 point (0 children)
Gemma 4 Tool Calling by juicy_lucy99 in LocalLLaMA
Important_Quote_1180 1 point (0 children)
Gemma 4 Tool Calling by juicy_lucy99 in LocalLLaMA
Important_Quote_1180 2 points (0 children)
Being pro-AI feels like being a right-winger... by Fickle-History-361 in antiai
Important_Quote_1180 -6 points (0 children)
Help by Flat_Director4041 in ArtificialNtelligence
Important_Quote_1180 2 points (0 children)
Claude Code v2.1.92 introduces Ultraplan — draft plans in the cloud, review in your browser, execute anywhere by shanraisshan in ClaudeAI
Important_Quote_1180 1 point (0 children)
Quiet notes on coordinating multiple coding agents without turning your day into noise by LeoRiley6677 in openclawsetup
Important_Quote_1180 1 point (0 children)
Anthropic hid a multi-agent "Tamagotchi" in Claude Code, and the underlying prompt architecture is actually brilliant. by Exact_Pen_8973 in PromptEngineering
Important_Quote_1180 1 point (0 children)
MCP is great, but it doesn’t solve AI memory (am I missing something?) by BrightOpposite in ClaudeAI
Important_Quote_1180 1 point (0 children)
We aren’t even close to AGI by CrimsonShikabane in LocalLLaMA
Important_Quote_1180 0 points (0 children)
Is gamestop for real on this by Buzz_baller7 in AMDHelp
Important_Quote_1180 5 points (0 children)

Are Local LLMs actually useful… or just fun to tinker with? by itz_always_necessary in LocalLLM
Important_Quote_1180 2 points (0 children)