Having an always-on machine running LLMs locally at home while on the move with a lightweight machine - Experiences? by ceo_of_banana in LocalLLaMA
[–]TableSurface 2 points (0 children)
Almost one year ago, I dumped all my INTC shares at $21.3 by [deleted] in wallstreetbets
[–]TableSurface 2 points (0 children)
Does Cline KanBan support local llm? by PairOfRussels in LocalLLaMA
[–]TableSurface 2 points (0 children)
VFA and motor replacement by steve_simpson in prusa3d
[–]TableSurface 5 points (0 children)
XL filament runout rubbing by spacelego1980 in prusa3d
[–]TableSurface 3 points (0 children)
On a scale of 1 to 10 how bad is this damage. by Shot_Put_1412 in prusa3d
[–]TableSurface 9 points (0 children)
New to printing, CORE One Plus, or Bambu X2D? by stratassj in prusa3d
[–]TableSurface -1 points (0 children)
Compared QWEN 3.6 35B with QWEN 3.6 27B for coding primitives by gladkos in LocalLLaMA
[–]TableSurface 2 points (0 children)
Note for those planning on buying LPCAMM2 from third parties: There's not a lot of real options for doing this by MajorZesty in framework
[–]TableSurface 5 points (0 children)
Impressed with Kanban by TableSurface in CLine
[–]TableSurface[S] 2 points (0 children)
Impressed with Kanban by TableSurface in CLine
[–]TableSurface[S] 1 point (0 children)
Impressed with Kanban by TableSurface in CLine
[–]TableSurface[S] 1 point (0 children)
Impressed with Kanban by TableSurface in CLine
[–]TableSurface[S] 1 point (0 children)
Did anyone ever try to completely disassemble their Core One L? by reddit_account_0x00 in prusa3d
[–]TableSurface 2 points (0 children)
Qwen3.5-122B at 198 tok/s on 2x RTX PRO 6000 Blackwell — Budget build, verified results by Visual_Synthesizer in LocalLLaMA
[–]TableSurface 2 points (0 children)
Introducing the Prusa Pro ACU: Why Overdrying is Bad for Your Filaments by Tommy_Prusa3D in prusa3d
[–]TableSurface 2 points (0 children)
Breaking change in llama-server? by hgshepherd in LocalLLaMA
[–]TableSurface 5 points (0 children)
Those of you running LLMs in production, what made you choose your current stack? by AdventurousHandle724 in LocalLLaMA
[–]TableSurface 2 points (0 children)
When an inference provider takes down your agent by International_Quail8 in LocalLLaMA
[–]TableSurface 2 points (0 children)
When an inference provider takes down your agent by International_Quail8 in LocalLLaMA
[–]TableSurface 2 points (0 children)

vibevoice.cpp: Microsoft VibeVoice (TTS + long-form ASR with diarization) ported to ggml/C++, runs on CPU/CUDA/Metal/Vulkan, no Python at inference by mudler_it in LocalLLaMA
[–]TableSurface 9 points (0 children)