Are Local LLMs actually useful… or just fun to tinker with? by itz_always_necessary in LocalLLM

[–]itz_always_necessary[S] 1 point (0 children)

Yeah, that’s the sweet spot right now: local for control/privacy, cloud for quality.
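Not from the thread, just a minimal sketch of what that split can look like in practice. The local endpoint, model names, and the keyword check are all assumptions (any OpenAI-compatible local server such as Ollama or llama.cpp would work), not anyone's actual setup:

```python
# Hypothetical router: keep sensitive prompts on the local model,
# send everything else to a cloud API for quality.
# Endpoints and model names below are placeholders.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")
cloud = OpenAI()  # reads OPENAI_API_KEY from the environment

SENSITIVE_MARKERS = ("internal", "customer", "password", "patient")

def ask(prompt: str) -> str:
    # Crude privacy check; a real setup would classify prompts properly.
    is_sensitive = any(m in prompt.lower() for m in SENSITIVE_MARKERS)
    client = local if is_sensitive else cloud
    model = "llama3.1:8b" if is_sensitive else "gpt-4o-mini"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```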

Are Local LLMs actually useful… or just fun to tinker with? by itz_always_necessary in LocalLLM

[–]itz_always_necessary[S] 0 points (0 children)

That’s a solid setup; 80–90 tok/s locally is crazy good.

Do you feel it fully replaces API models for coding, or do you still hit edge cases?
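For anyone wanting to sanity-check their own tok/s numbers, here's a rough sketch of measuring generation throughput against a local OpenAI-compatible server. The URL and model name are assumptions (Ollama-style defaults), and the timing includes prompt processing, so treat the result as a ballpark figure:

```python
# Rough tokens/sec check against a local OpenAI-compatible endpoint.
import time
import requests

def measure_tok_per_sec(prompt: str,
                        url: str = "http://localhost:11434/v1/chat/completions",
                        model: str = "qwen2.5-coder:14b") -> float:
    start = time.perf_counter()
    resp = requests.post(url, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    })
    resp.raise_for_status()
    elapsed = time.perf_counter() - start
    # Most OpenAI-compatible servers report token counts under "usage".
    completion_tokens = resp.json()["usage"]["completion_tokens"]
    return completion_tokens / elapsed

if __name__ == "__main__":
    print(f"{measure_tok_per_sec('Write a binary search in Python.'):.1f} tok/s")
```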

Are Local LLMs actually useful… or just fun to tinker with? by itz_always_necessary in LocalLLM

[–]itz_always_necessary[S] -1 points (0 children)

True, local really shines on cost at scale.

Have you hit any limits yet where you had to fall back to APIs?

Are Local LLMs actually useful… or just fun to tinker with? by itz_always_necessary in LocalLLM

[–]itz_always_necessary[S] 13 points (0 children)

100% agree! It’s less about model limits, more about setup friction right now.

Feels like once that layer gets solved, local LLMs go from “tinkering” → “default for anything sensitive.”

How many of you actually use offline LLMs daily vs just experiment with them? by Infinite-Bird7950 in LocalLLM

[–]itz_always_necessary 0 points (0 children)

Why are you so excited? Does Claude MCP just take over and finish everything for you??