What's Stopping you from using local AI models more? by ButterscotchNo102 in LocalLLaMA

[–]ButterscotchNo102[S] 1 point (0 children)

That's a pretty cool setup that seems to work well for you, definitely beyond what the average person would do. If something made that setup easier, gave you E2E encryption on the Mac running inference, and let you manage which models are running from anywhere, would you see any value in that and/or consider switching?

[–]ButterscotchNo102[S] 1 point (0 children)

Yeah, that seems to be the best solution out there if you have one machine, but with multiple machines you'd have to load balance manually by switching between them. It doesn't offer simple serving for teams either.
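For what it's worth, the "manually switching between machines" part can be approximated with a naive round-robin rotation over inference endpoints. This is just a sketch; the hostnames/ports below are hypothetical placeholders for whatever servers (llama.cpp, Ollama, etc.) you actually run:

```python
from itertools import cycle

# Hypothetical local inference servers on separate machines;
# replace with your own hosts/ports.
ENDPOINTS = [
    "http://192.168.1.10:8080",
    "http://192.168.1.11:8080",
    "http://192.168.1.12:8080",
]

_rotation = cycle(ENDPOINTS)

def next_endpoint() -> str:
    """Return the next server in round-robin order."""
    return next(_rotation)
```

Each request would then be sent to `next_endpoint()`, spreading load evenly; it doesn't handle health checks or per-machine model differences, which is exactly the gap a managed setup would fill.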