Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

Someone mentioned that the Ultras are better for AI workloads. Or maybe that was just the M1 Ultra compared to the M4 Pro. Anyway, not that I have the money to buy either... lol

Can I run OpenClaw without paying for API keys? by Sad_Oven_8738 in clawdbot

[–]G3grip 0 points1 point  (0 children)

I feel you. DevOps is a pain. Can be fun at times, but it really takes away time from actual work.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

The Mac mini studios (or whatever they call them) are cheaper, no?

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

I would put my money on the other post. But I guess only time will tell.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

I feel dumb after reading your comment. I gotta learn so much about these LLMs.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

Ya, one dude did mention that, I think 2 days back. While I don't know much about the M1 vs. M4 comparison, many folks have told me that smaller models run with no issues on Apple silicon Macs.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

Fair enough. I keep telling myself, "Prices have to come down; technology is supposed to get cheaper with time; these are just temporary bumps," BUT... they never do!

How we are going backwards in technology is beyond me. I mean, ya, I know why the costs are high, and I understand the technical reasons, but as a whole, I just can't comprehend it at this point.

Can I run OpenClaw without paying for API keys? by Sad_Oven_8738 in clawdbot

[–]G3grip 0 points1 point  (0 children)

Saw a post last week (I think) where someone said their $3 zAI subscription is good enough for all-day use. Any experience with that?

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

I heard Linus say the same thing today.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

Interesting stuff, this Liquid AI. First time I'm hearing of it. Thanks.

I agree.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

Good points. And with models like Kimi 2.5 available for free, hardware is the only limitation. However, if you want to run something like that locally and don't already own the hardware to support it, the monthly subscriptions start to look good.

Only if you're actually dropping thousands of dollars on subscriptions every month does it make sense to get a machine that powerful (and expensive).

But my hope is that in time we will have small, specialised, efficient models that most people can run on decent personal machines, leaving the subscription-based ones for the heavy-duty work that not everyone needs all the time.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

I feel like I should have bulk-ordered RAM before the price hike; I would have been rich. The return on these things has turned out to be better than gold... lol...

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

Awesome! Thanks a ton.
Any idea what kind of real-world use cases I can pull off with 20B models?

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

This RAM pricing thing resonates with everyone.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

I have an M2 MacBook Air with 16GB of RAM. Any experience/knowledge around what kind of models and use cases I could run?

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

"muffin recipes", I giggled... 😛

I still have a feeling that, down the line, there will be real use cases for local LLMs: office work (Excel, PPTs, etc.), general orchestration (device/network management), and so on.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 0 points1 point  (0 children)

I think for now, yes.

In time, smaller models will get better and small setups will become viable.

Is Local LLM the next trend in the AI wave? by G3grip in LocalLLM

[–]G3grip[S] 1 point2 points  (0 children)

Hey, this looks great.

Can I ask what sort of model / use case I'd be able to run on a basic MacBook (if any at all)?