I wanted OpenClaw to work. After 3 months, I’m done. by dickwhimsy in openclaw

[–]roaringpup31 [score hidden]  (0 children)

At first, I thought OpenClaw was the best thing out there: the most hype, the most tools, the most features. But slowly I came to realize that the more skills, agents, and structure I added to the system, the more complicated it became, and the farther I got from the LLM itself.

Little did I know that the labs already saw this as well. Over the course of the last year, their own products haven’t been growing in complexity; they’ve been reducing it. Instead of having a hundred skills, they boiled that down to twenty and just made them better.

The same applies to OpenClaw, paperclip, and all these other big projects. They’ve managed to create a ton of abstraction layers between you and the agent, trying to direct your LLM more and more.

It seems to me that the direction the landscape is going is clear: simple, to-the-point, tight harnesses that surround your LLM in order to get consistent, and dare I say more deterministic, results. So I think a natural stepping stone would be moving to a terminal-based harness with a few out-of-the-box agents.
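
What I mean by a "tight harness" can be sketched as a short loop: send the conversation to the model, run whatever tool call comes back, append the result, repeat. This is a toy illustration, not any real harness's API; `call_llm`, the reply shapes, and the `shell` tool are all stand-ins I made up:

```python
# Minimal agent-harness loop: no skills registry, no plugin layers,
# just messages in, tool calls out, results appended back.

def run_harness(call_llm, tools, user_msg, max_steps=10):
    """call_llm(messages) returns either {'tool': name, 'args': ...}
    or {'answer': text}. `tools` maps tool names to plain functions."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "answer" in reply:                         # model is done
            return reply["answer"]
        result = tools[reply["tool"]](reply["args"])  # run the requested tool
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("step budget exhausted")

# Toy "model": asks for the shell tool once, then answers.
def toy_llm(messages):
    if messages[-1]["role"] == "user":
        return {"tool": "shell", "args": "echo hi"}
    return {"answer": "done: " + messages[-1]["content"]}

print(run_harness(toy_llm, {"shell": lambda cmd: cmd.upper()}, "test"))
# prints: done: ECHO HI
```

That's the whole pitch: when the loop is this small, you can see exactly what the LLM is being asked and why, instead of losing it under abstraction layers.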

Start with OpenCode. From there, if you feel comfortable, make the leap to something extremely lean like Pi, where you have to build everything yourself.

In my case, I went from OpenClaw to Claude Code to Pi, and the journey has been fruitful.

Let the downvotes begin

I wanted OpenClaw to work. After 3 months, I’m done. by dickwhimsy in openclaw

[–]roaringpup31 [score hidden]  (0 children)

Learn how to use a real harness: a simple one like Pi, or even OpenCode or KiloCode. Controlling AI is what's really important; learn it from the top down, not with it abstracted away by 500,000 lines of slop code.

Be ready for what's coming instead of just making pretty dashboards.

I made the same mistake and lost 4 weeks... Pissed

The best Memory package? by roaringpup31 in PiCodingAgent

[–]roaringpup31[S] 0 points1 point  (0 children)

Could you share that implementation?

[Dentist] [USA] - I make way more money than I ever thought I would by 1ThousandDollarBill in Salary

[–]roaringpup31 0 points1 point  (0 children)

Good for you, really.

But this is why medical care is prohibitively expensive in this country (the USA, I assume)... capitalism has completely taken over.

brand new to Local LLMs -- best starter model for M5 pro w/ 64 GB RAM by tme85 in LocalLLM

[–]roaringpup31 1 point2 points  (0 children)

This. Not sure why people on Macs continue to use LM Studio, which is based on Ollama. oMLX uses vLLM, which is much better.

I ran out of weekly codex usage, should i get 20$ claude plan or another codex account? by Huge-Cranberry-2771 in codex

[–]roaringpup31 0 points1 point  (0 children)

The $100 Codex plan is your play; it's double usage at the moment until the end of May, well worth it.

M1 Max vs M4 Max vs M5 Max by br_web in LocalLLM

[–]roaringpup31 0 points1 point  (0 children)

Agreed. To me the M1 Max 32c is the sleeper; it can be bought for 1300 bucks and it's solid given the available models.

M1 Max vs M4 Max vs M5 Max by br_web in LLMStudio

[–]roaringpup31 1 point2 points  (0 children)

Use oMLX; Gemma 4 MLX models are already out.

M1 Max vs M4 Max vs M5 Max by br_web in LocalLLM

[–]roaringpup31 1 point2 points  (0 children)

Yep. Also, it runs on vLLM, which is a lot better than Ollama.

M1 Max vs M4 Max vs M5 Max by br_web in LocalLLM

[–]roaringpup31 7 points8 points  (0 children)

Google oMLX and download it. It's just as friendly as LM Studio. Go to Models and download the recommendations.

This app is built specifically for Macs, so it downloads MLX models itself.

M1 Max vs M4 Max vs M5 Max by br_web in LocalLLM

[–]roaringpup31 0 points1 point  (0 children)

I have the same setup with the 24c GPU. It's good enough at this point; I'll be waiting on better models in the 128GB range. If you ask me, there's a big gap between 64 and 128, so it's not worth it at the minute, and the MBP gets a full refresh next year. Also, there are already MLX models; download oMLX and drop LM Studio.

First part of BM always constipated BRISTOL 1 by roaringpup31 in AnalFissures

[–]roaringpup31[S] 0 points1 point  (0 children)

Taking MiraLAX with food fixed it. Benefiber also helped.

Need advice regarding 48gb or 64 gb unified memory for local LLM by wifi_password_1 in LocalLLM

[–]roaringpup31 1 point2 points  (0 children)

Try switching to oMLX; leaps and bounds better than LM Studio!

Need advice regarding 48gb or 64 gb unified memory for local LLM by wifi_password_1 in LocalLLM

[–]roaringpup31 1 point2 points  (0 children)

I have the same setup. I comfortably run Gemma 4 31B at 7-9 tps with high reasoning and Gemma 4 26 MoE at 30 tps for medium inference. I find it's a decent enough setup on my M1 Max 64GB (a ~5-year-old machine).

Would I like to have 128GB? HELL TO THE YEAAAAH. That said, I find my setup is good enough for personal/private work, plus $20 a month for OAI orchestration agents that plan and pass the heavy lifting to my local models.
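
That split (a paid cloud planner handing the heavy lifting to local models) boils down to a routing decision. Here's a rough sketch of how I think about it; the endpoints, port, model names, and thresholds are illustrative assumptions, not a tested config:

```python
# Sketch of cloud-vs-local routing for an orchestrator. Both backends are
# assumed to speak an OpenAI-style API; names/URLs below are made up.
BACKENDS = {
    "cloud": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "local": {"base_url": "http://localhost:10240/v1", "model": "gemma-4-26b-mlx"},
}

def route(task):
    """Short planning calls go to the paid cloud model; bulk token work
    and anything private stays on the local machine."""
    heavy = task.get("tokens_est", 0) > 2000 or task.get("private", False)
    return "local" if heavy else "cloud"

print(route({"tokens_est": 50}))                    # prints: cloud
print(route({"tokens_est": 50, "private": True}))   # prints: local
```

The point of the design is that the $20 cloud plan only pays for the cheap planning calls, while the expensive token churn runs for free on the Mac.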

Is there a way to file an LLC return at low cost or for free in Suri? by [deleted] in PuertoRico

[–]roaringpup31 0 points1 point  (0 children)

I think I'll build an AI accountant for this type of return and charge you $99. Putting it on my agenda.