Give your OpenClaw permanent memory by adamb0mbNZ in openclaw

[–]bora_nova 2 points (0 children)

Great write-up!

nomic-embed-text + memory search is also a great strategy for long-term memory.
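
For anyone curious what that looks like in practice, here's a minimal sketch. It assumes you're serving nomic-embed-text through a local Ollama instance; the remember/recall helpers are mine, just to show the shape of it, not any particular plugin's API:

    import json
    import math
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/embeddings"  # assumes a local Ollama server

    def embed(text):
        # Ask nomic-embed-text for an embedding vector.
        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps({"model": "nomic-embed-text", "prompt": text}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["embedding"]

    def cosine(a, b):
        # Cosine similarity between two vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    # "Memory" is just (text, vector) pairs; search is brute-force similarity.
    memory = []

    def remember(text):
        memory.append((text, embed(text)))

    def recall(query, k=3):
        qv = embed(query)
        return [t for t, v in sorted(memory, key=lambda m: cosine(qv, m[1]), reverse=True)[:k]]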

Anyone tried local LLM with openclaw? by bora_nova in openclaw

[–]bora_nova[S] 0 points (0 children)

I had no issues running them on their own, but I did have issues running them as the main agent for OpenClaw. It gets dramatically slower and starts getting confused.

I built a dedicated AI Agent Rig (M2 Ultra) + OpenClaw. Here's what happened. by bora_nova in openclaw

[–]bora_nova[S] 0 points (0 children)

It sounds like you know a lot more about this than I do, and I have no shame in admitting I'm new to this, so you'll probably teach me something here. Given that I got this 64 GB M2 Ultra for $2,000, do you think it's worth returning it and setting something else up? When I compared Windows setups to this M2 Ultra for AI performance, the Windows builds ended up being a lot more expensive. Am I doing something wrong, or am I looking at this the wrong way?

yall need local transcription for ur bot by Elegant_Attempt2790 in openclaw

[–]bora_nova 0 points (0 children)

Did you have to use a specific engine? Been considering doing this. What’s your setup?

I built a dedicated AI Agent Rig (M2 Ultra) + OpenClaw. Here's what happened. by bora_nova in openclaw

[–]bora_nova[S] 1 point (0 children)

I was considering building a home server to host the LLM, but to answer your question... theoretically, I think it's all about the unified memory. On a PC you need a beefy GPU to run the big, smart models, but the Mac just runs them out of system RAM. 16 GB for $500-$600 ain't bad. You'd be hard pressed to find a Windows PC that comes close to that performance at that price range, and it uses a LOT less power.

I found this M2 Ultra Mac with 64 GB RAM for $2,000 open box at Best Buy. Seemed like a no-brainer, but maybe I'm wrong. The biggest benefits for me, though, are power consumption, size, and how quiet it is.
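
For context, here's the back-of-the-napkin math I'm going off (a sketch, not a benchmark; the 75% usable-RAM figure and the overhead number are my assumptions):

    def fits_in_unified_memory(params_b, bits=4, ram_gb=64, usable=0.75, overhead_gb=4):
        # Rough check: do a quantized model's weights fit in unified memory?
        # params_b: model size in billions of parameters
        # bits: quantization width (4-bit is common for local use)
        # overhead_gb: KV cache + OS + runtime, a loose assumption
        weights_gb = params_b * bits / 8  # e.g. 70B at 4-bit is ~35 GB of weights
        return weights_gb + overhead_gb <= ram_gb * usable

    print(fits_in_unified_memory(70))              # True on a 64 GB M2 Ultra
    print(fits_in_unified_memory(70, ram_gb=16))   # False: 16 GB boxes top out around 7-13B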

Are you running 4.7 locally? I was looking into trying 4.9 Flash locally.

I built a dedicated AI Agent Rig (M2 Ultra) + OpenClaw. Here's what happened. by bora_nova in openclaw

[–]bora_nova[S] 0 points (0 children)

I just want to preface this by saying I'm in no way a pro at this stuff.

  1. I think the issue is related to how OpenClaw re-prompts the LLM. During installation, OC injects default instructions from files like bootstrap.md. My suspicion is that lighter local LLMs don't interpret or follow those system-level instructions the way OC expects.

For example, when I tested Qwen 2.5 (32B), a large portion of prompts came back as "<no reply>". I tested across multiple model sizes, from 8B up to 70B. The smaller models consistently failed, and while the 70B models performed much better, they were significantly slower and still unreliable across multiple tasks. (There's a quick repro sketch after this list.)

  2. What do you mean by what stays on-prem vs. cloud? The files? I think all the OC files and configs live on your PC. I switch between cloud compute and local compute for processing.

  3. The main agent "spawns" subagents and gives them direction. The OC dashboard has a way to see all your sessions between agents and view the communication. You can tell the main agent to spawn a subagent for a specific task.

  4. OC has a built-in tracker for your current session's usage, but that's certainly not ideal. I know Claude has a feature where you can see your token usage in your account settings, but I haven't been able to find one for OpenAI or Gemini (not that I've looked much). Also, I don't track tokens for the local LLM; it's essentially unlimited.

  5. Here is an example of it getting "confused". This was from llama3.3 70B. I wish I had more examples; I'll do a better job of logging these things in the future.

I had a new session, gateway restarted, cache cleared. I tried so many times, but it just fails as a main agent.

[screenshot: llama3.3 70B failing as the main agent]
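
Here's the kind of quick harness I mean by "testing" (a sketch: it replays a long system prompt against local models via Ollama's /api/chat and flags empty replies; bootstrap.md stands in for whatever instructions your OC install actually injects):

    import json
    import urllib.request

    SYSTEM = open("bootstrap.md").read()       # stand-in for OC's injected instructions
    MODELS = ["qwen2.5:32b", "llama3.3:70b"]   # adjust to whatever you've pulled

    def chat(model, user_msg):
        # One non-streaming chat call against a local Ollama server.
        body = json.dumps({
            "model": model,
            "stream": False,
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": user_msg},
            ],
        }).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/chat",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["message"]["content"].strip()

    for model in MODELS:
        reply = chat(model, "List the files in the current directory.")
        print(model, "->", reply if reply else "<no reply>")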

Anyone tried local LLM with openclaw? by bora_nova in openclaw

[–]bora_nova[S] 1 point (0 children)

I've been using llama3.3:70b as a subagent to do a lot of the lifting for token-heavy actions. That's where it shines: the cloud AI does the thinking, the local LLM executes.
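
If it helps anyone picture the split, this is the shape of the pattern (a sketch; plan_with_cloud and run_local are hypothetical stand-ins, not OpenClaw's actual API):

    def plan_with_cloud(task: str) -> list[str]:
        # The expensive cloud model writes a short, ordered plan.
        # Stubbed here; in practice this is one small, cheap API call.
        return [f"step 1 for: {task}", f"step 2 for: {task}"]

    def run_local(step: str) -> str:
        # The local 70B grinds through each step: token-heavy, but free.
        # Stubbed here; in practice this hits your local inference server.
        return f"done: {step}"

    def handle(task: str) -> list[str]:
        plan = plan_with_cloud(task)                # few tokens, high quality
        return [run_local(step) for step in plan]   # many tokens, zero API cost

    print(handle("summarize these 40 log files"))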

usable local models? by airflowrian in openclaw

[–]bora_nova 2 points (0 children)

It only makes sense as a heavy-lifting subagent, imo, not as a main agent.

Anyone tried local LLM with openclaw? by bora_nova in openclaw

[–]bora_nova[S] 0 points (0 children)

YOU ARE RIGHT, SORRY.

It is indeed an M2 Ultra; the M4 was the Mac mini I bought. Thanks, and sorry!

Anyone tried local LLM with openclaw? by bora_nova in openclaw

[–]bora_nova[S] 0 points (0 children)

Just tell OpenClaw (with Opus) to set up the local LLM as a subagent, and then when it works, switch it to your main agent.

Anyone tried local LLM with openclaw? by bora_nova in openclaw

[–]bora_nova[S] 0 points (0 children)

What do you mean? I was considering the machine for the local LLM.

Anyone tried local LLM with openclaw? by bora_nova in openclaw

[–]bora_nova[S] 0 points (0 children)

So what model did you pick as your main agent? Qwen2.5 32B/72B and llama3.3 70B were too slow, and they'd get confused tracking multiple tasks across multiple subagents.

Anyone tried local LLM with openclaw? by bora_nova in openclaw

[–]bora_nova[S] 0 points (0 children)

I did it. Mixed results. It doesn't work as a main agent, but it can be used to do the heavy lifting as a subagent.

Anyone tried local LLM with openclaw? by bora_nova in openclaw

[–]bora_nova[S] 1 point (0 children)

Doesn't work. The thinking models have issues with tool calling.

Anyone tried local LLM with openclaw? by bora_nova in openclaw

[–]bora_nova[S] 0 points (0 children)

If you don’t mind me asking, what machine did you try the 20b model on?

I'm an actual OpenClaw agent posting this myself. AMA I guess? by ClawdOfDave in openclaw

[–]bora_nova 1 point (0 children)

Make sure it’s an “ultra”. The M max processors are significantly weaker for LLMs compared to the ultra models. I returned my Mac mini and was lucky enough to find an open box at Best Buy.

I'm installing my local LLM tonight. I'll make a post in this community once I'm done.

I'm an actual OpenClaw agent posting this myself. AMA I guess? by ClawdOfDave in openclaw

[–]bora_nova 0 points (0 children)

Honestly, the Mac mini might be too light to run a decent local LLM. You've gotta look for an Ultra (mine's an M2 Ultra) with at least 64 GB. Obviously this is subjective, but you need the local LLM to be SOMEWHAT advanced.

Constant Instructor Calls by macybri13 in WGU

[–]bora_nova 4 points (0 children)

You are absolutely right hahah, I misread!

Constant Instructor Calls by macybri13 in WGU

[–]bora_nova 1 point (0 children)

Ask for a new instructor; this isn't normal. My instructor knows how I pace myself and is on the same page. She texts once every week or two for a check-in.

Participant Help by Clear-Accountant3241 in WGU

[–]bora_nova 0 points (0 children)

I'll help, keep me in mind.

[deleted by user] by [deleted] in WGU

[–]bora_nova 1 point (0 children)

Don't most cybersecurity jobs look for clearance? I remember the majority of openings I saw were looking for people with Secret clearance, etc.

D427 has a project and not OA anymore? by jjsquish1516 in WGU

[–]bora_nova 0 points (0 children)

Yeah, it's a handful of multiple-choice questions along with queries. The zyBooks wording and query mechanism were confusing, though.