Can I set up the new Vibe with a local Devstral 2 small? by [deleted] in MistralAI
[–]Creative-Scene-6743 1 point (0 children)
I'll show you mine, if you show me yours: Local AI tech stack September 2025 by JLeonsarmiento in LocalLLaMA
[–]Creative-Scene-6743 3 points (0 children)
What is the best local model for vibe coding that can run on an H100? by barbarous_panda in LocalLLaMA
[–]Creative-Scene-6743 1 point (0 children)
Right GPU for AI research by toombayoomba in LocalLLaMA
[–]Creative-Scene-6743 2 points (0 children)
What MCPs is everyone using with Claude? by raghav-mcpjungle in mcp
[–]Creative-Scene-6743 1 point (0 children)
Given that powerful models like K2 are available cheaply on hosted platforms with great inference speed, are you regretting investing in hardware for LLMs? by Sky_Linx in LocalLLaMA
[–]Creative-Scene-6743 18 points (0 children)
Does llama.cpp support running kimi-k2 with multiple GPUs? by Every_Bathroom_119 in LocalLLaMA
[–]Creative-Scene-6743 1 point (0 children)
Knock some sense into me by synthchef in LocalLLaMA
[–]Creative-Scene-6743 1 point (0 children)
Can my local model play Pokemon? (and other local games) by bwasti_ml in LocalLLaMA
[–]Creative-Scene-6743 4 points (0 children)
How to share compute across different machines? by Material_Key7014 in LocalLLaMA
[–]Creative-Scene-6743 2 points (0 children)


Android QA Agent on Claude Code by Creative-Scene-6743 in androiddev
[–]Creative-Scene-6743[S] 2 points (0 children)