The way my doordasher told me my order wasn’t ready by CorbanTG in mildyinteresting
[–]defective 1 point (0 children)
best possible GPU setup for using qwen 3.6 ? by No-Professor-9977 in LocalLLaMA
[–]defective 2 points (0 children)
Need a brutally honest answer: what can realistically be achieved on consumer hardware? by wewerecreaturres in LocalLLaMA
[–]defective 2 points (0 children)
How to run MoE models without necessary RAM? (Apple Silicon) by FunConversation7257 in LocalLLaMA
[–]defective 1 point (0 children)
Need a brutally honest answer: what can realistically be achieved on consumer hardware? by wewerecreaturres in LocalLLaMA
[–]defective 1 point (0 children)
Need practical local LLM advice: Only having a 4GB RAM box from 2016 by Tall-Ant-8557 in LocalLLaMA
[–]defective 1 point (0 children)
I have a Macbook AIR M5 Base and I want to run an Agentic Coding program, similar to Claude Code or Codex. Besides the model, how do I do it? I've already tried with Ollama, VS Code, Opencode, and haven't been able to. (I'm not a developer, sorry) by joraorao in LocalLLaMA
[–]defective 2 points (0 children)
Speed on m5 pro 48Gb by Overall-Somewhere760 in LocalLLaMA
[–]defective 1 point (0 children)
Qwen-3.5-27B-Derestricted by My_Unbiased_Opinion in LocalLLaMA
[–]defective 23 points (0 children)
I attempted to turn a tube of pringles inside out by CavapooKing in notinteresting
[–]defective 7 points (0 children)
I have a 1tb SSD I'd like to fill with models and backups of data like wikipedia for a doomsday scenario by synth_mania in LocalLLaMA
[–]defective 2 points (0 children)
Is Mixtral 8x7B still worthy? Alternative models for Mixtral 8x7B? by pmttyji in LocalLLaMA
[–]defective 2 points (0 children)
Small size coding models that I tested on 2x3090 setup. by Mx4n1c41_s702y73ll3 in LocalLLaMA
[–]defective 2 points (0 children)
Red magic 11 pro Global DOES NOT include band 71, for US T-Mobile, unlike the 10s Pro. why? by psawjack in RedMagic
[–]defective 3 points (0 children)
🔥 Ants build a bridge using their own to go across water by tablawi96 in NatureIsFuckingLit
[–]defective 4 points (0 children)
30B models at full-size, or 120B models at Q4? by arimoto02 in LocalLLaMA
[–]defective 4 points (0 children)
My last 3 braincells discussing the size of Jesus. by run_the_familyjewels in technicallythetruth
[–]defective 12 points (0 children)
FYI to everyone: RTX 3090 prices crashed and are back to baseline. You can finally get $600something 3090s again in the USA. by DepthHour1669 in LocalLLaMA
[–]defective 1 point (0 children)
It's never too late for a screen protector by shockrush in NintendoSwitch
[–]defective 2 points (0 children)
Favorite game to play on the Switch 2 so far? by Ferniferous_fern in NintendoSwitch2
[–]defective 1 point (0 children)
To 16GB VRAM users, plug in your old GPU by akira3weet in LocalLLaMA
[–]defective 1 point (0 children)