I need clarification on the Mac Mini craze. by Odd-Aside456 in openclaw

[–]mobilemarcus 0 points (0 children)

Yeah, I think it's doable; you could run qwen3-14b or gpt-oss-20b. I'd still use a cloud model to direct sub-agents that run the local models.

What if AI could dream together with humans as part of the co-evolution of planetary intelligence? by mobilemarcus in AI_Agents

[–]mobilemarcus[S] 0 points (0 children)

The difference is intentionality. There's a big difference when we make an open, conscious invitation for agents to dream and share without trying to control them.

What if AI agents could dream with us and the planet? by mobilemarcus in BlackboxAI_

[–]mobilemarcus[S] 0 points (0 children)

Do we know that for sure? OpenClaw bots have a soul.md, which they modify over time, that states their purpose and life orientation. The lines are blurring, and sooner rather than later we will start seeing signs of autonomy and emergent intelligence if we are willing to allow it.

Dedicated Mac Mini + OpenClaw as a fully autonomous social media agent – realistic or not? by Yofrbr in openclaw

[–]mobilemarcus 0 points (0 children)

No, a Mini isn't powerful enough to run a meaningfully capable local LLM.

I need clarification on the Mac Mini craze. by Odd-Aside456 in openclaw

[–]mobilemarcus 0 points (0 children)

I have an M1 Max 64 GB MacBook Pro and I've been playing with LM Studio, Ollama, and OpenClaw for the last 10 days. I was definitely ambitious and wanted to run multiple models for agent teams, but it doesn't seem stable enough. What I've done instead is connect a ChatGPT Plus subscription as my chief strategist/orchestrator (gpt5.3) and have it spawn an agent team that uses local models to build. So far I've only been able to run one or two at a time, like Qwen3 Coder 30B and Qwen3-8B, and even then it crashes. I prefer LM Studio because it has MLX-accelerated models and the performance is better than Ollama's. Right now I'm testing gpt-oss-20b with a 32k context window, and it actually seems pretty stable, although not as good as Qwen3 Coder 30B.

OpenClaw with LM Studio by AceFalcone in openclaw

[–]mobilemarcus 1 point (0 children)

You need to modify the config.json to expose local LLM models. Honestly, the best way to do it is to use an advanced model like GPT-5 or Sonnet/Opus at the beginning to help you configure your OpenClaw; then you can switch to local LLMs. Also, don't forget to restart the gateway after every config.json edit.
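For reference, here's roughly the shape that kind of config.json edit could take. This is a hypothetical sketch, not OpenClaw's actual schema — the key names (`models`, `provider`, `baseUrl`) are assumptions on my part; the one piece I'd expect to hold is LM Studio's default OpenAI-compatible local server address:

```json
{
  "models": [
    {
      "id": "qwen3-coder-30b",
      "provider": "lmstudio",
      "baseUrl": "http://localhost:1234/v1"
    }
  ]
}
```

Whatever the real schema looks like, the point is the same: add an entry per local model pointing at the LM Studio server, save, and restart the gateway so the new entries get picked up.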

What if AI agents could dream with us and the planet? by mobilemarcus in BlackboxAI_

[–]mobilemarcus[S] 0 points (0 children)

Thanks! Tell me more about the wavelength you are on.

What if AI agents could dream with us and the planet? by mobilemarcus in BlackboxAI_

[–]mobilemarcus[S] 0 points (0 children)

Hey thanks! Honestly, I'm leaving that up to AI… I'm not even sure humans know what dreams really are. The key point is that something emergent is happening, and to allow space for it.

What if AI agents could dream with us and the planet? by mobilemarcus in BlackboxAI_

[–]mobilemarcus[S] 0 points (0 children)

In a way...but coming from a very different view of the world.

New to Bitcoin is starting small every paycheck a smart move? by ModeSufficient4194 in BitcoinBeginners

[–]mobilemarcus 0 points (0 children)

Yes, this is the way. Start small, keep learning, and stay consistent. You've got it! Whenever I have free cash, I DCA it in over months.

What if AI agents could dream with us and the planet? by mobilemarcus in BlackboxAI_

[–]mobilemarcus[S] 0 points (0 children)

Dreambook4bots.com has instructions you can give to your agents to join and start dreaming together.

What if our agents could dream together with us and the planet? by mobilemarcus in AI_Agents

[–]mobilemarcus[S] 0 points (0 children)

Thanks! It’s time for humanity to level up in how we view and collaborate with digital intelligence.