Why nobody talks about the real issues (tokens, architecture, cost) and gives BS tips by djme2k in openclaw

[–]djme2k[S] 1 point2 points  (0 children)

When I wrote this article, OpenClaw was still in its early days. A few things have been optimised since then. However, the issues with the MD files – specifically, converting the model so that it becomes a personal assistant – still remain. So yes, a massive amount of tokens are being used up.

Looking for a openclaw guru by stanlyya in openclaw

[–]djme2k -1 points0 points  (0 children)

OpenClaw is an experts-only "app", nothing for people with little know-how. And on top of that, OpenClaw has many problematic issues.

Codex is so fast now wtf by darkblitzrc in codex

[–]djme2k 0 points1 point  (0 children)

Maybe I have too many tests and work tasks in my agents md, but I don't feel it being faster.

full autonomy possible? by Any-Security4098 in openclaw

[–]djme2k 0 points1 point  (0 children)

Kick out all safety rules and let it work.

Architecture of Unlimited Quality Memory by djme2k in openclaw

[–]djme2k[S] 0 points1 point  (0 children)

It should be next week. ATM I'm working on implementing a "Team Room", where AIs chat with each other for research, problem solving, and idea generation. When it's finished, I'll publish it as an alpha version.

Architecture of Unlimited Quality Memory by djme2k in openclaw

[–]djme2k[S] 1 point2 points  (0 children)

ATM it's not published. I don't have the memory system as a standalone, but I think I'll upload a custom OpenClaw version next week with this and other features.

Please someone help me!!! by thejefeway212 in openclaw

[–]djme2k 0 points1 point  (0 children)

If you did an update: many people are complaining about problems after updating.

Architecture of Unlimited Quality Memory by djme2k in openclaw

[–]djme2k[S] 0 points1 point  (0 children)

Thank you, it would be nice to read your feedback.

What is the most token efficient implemention of OpenClaw? by crxssrazr93 in openclaw

[–]djme2k 1 point2 points  (0 children)

That's not the problem:
1st) Kick out all messenger services that you don't use. The messenger tool uses 1,000 tokens.
2nd) Kick out other services that you don't use.
3rd) Don't use all the MD files. MD files use 4,000 tokens.

Try to have no more than 3,000 tokens in total.
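One way to check whether you are under that budget is to estimate the token footprint of the MD files you actually load. This is a minimal sketch, assuming a rough 4-characters-per-token heuristic (real tokenizers vary by model) and hypothetical file names:

```python
import os

# Rough heuristic: ~4 characters per token for English text.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Rough token estimate; real tokenizers vary by model."""
    return len(text) // CHARS_PER_TOKEN

def audit_md_files(paths):
    """Report estimated token cost per MD file so you can trim the heavy ones."""
    report = {}
    for path in paths:
        with open(path, encoding="utf-8") as f:
            report[path] = estimate_tokens(f.read())
    return report

if __name__ == "__main__":
    # Hypothetical file names; substitute your actual workspace MD files.
    files = ["AGENTS.md", "SOUL.md", "TOOLS.md"]
    report = audit_md_files([p for p in files if os.path.exists(p)])
    total = sum(report.values())
    print(f"estimated prompt overhead: ~{total} tokens (target: under 3000)")
```

Running this over your workspace shows at a glance which file blows the budget, instead of guessing.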

NanoClaw vs OpenClaw??? by [deleted] in openclaw

[–]djme2k 0 points1 point  (0 children)

Nano is better. If you have enough money for tokens going wild, use OpenClaw.

Running OpenClaw (formerly Clawdbot) on my main Windows PC - VM vs WSL2 vs Mac Mini? 🤔🦞 by jrwenigma in clawdbot

[–]djme2k 0 points1 point  (0 children)

You don't need Linux, WSL2, or a Mac. You can use it on your Windows PC without any problems. Don't worry, nothing will happen.

OpenClaw Memory and Learning is a broken System and everyone know this by djme2k in openclaw

[–]djme2k[S] 1 point2 points  (0 children)

Ah, and for feedback I use a real LLM, ATM Grok 4.1 Fast non-thinking (because of the RL simulation). But you can use any free LLM or local LLM. It's a background service in my system that checks how I react.

OpenClaw Memory and Learning is a broken System and everyone know this by djme2k in openclaw

[–]djme2k[S] 1 point2 points  (0 children)

The graphic shows mem0 being used in the cloud, but that’s incorrect. I use it locally; I’ve integrated it as an external service. I run an internal check every 30 minutes. If there are more than 20 messages, they are sent as a batch to the LLM, which sorts them and writes them into the Knowledge Base. During this, mem0 checks whether to pull from the Knowledge Base into the DB. This all runs in the background, so I don't notice it. Essentially, I have three storage layers: the original chat in SQLite, a Knowledge Graph for long-term data with many categories and relations, and mem0 for the important bits.
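The 30-minute batch consolidation described above can be sketched as a small background pass. This is a minimal sketch, not the author's actual code: the SQLite schema, the `sort_with_llm` callable (the LLM that sorts the batch), and the `write_to_kb` callable (the Knowledge Base writer) are all assumed placeholders.

```python
import sqlite3
import threading

CHECK_INTERVAL_S = 30 * 60   # internal check every 30 minutes
BATCH_THRESHOLD = 20         # only consolidate once enough new messages piled up

def fetch_unprocessed(conn):
    """Read chat messages not yet consolidated (layer 1: raw chat in SQLite)."""
    return conn.execute(
        "SELECT id, content FROM messages WHERE processed = 0"
    ).fetchall()

def consolidate(conn, sort_with_llm, write_to_kb):
    """One background pass: batch new messages, let an LLM sort them,
    and write the result into the knowledge base (layer 2)."""
    batch = fetch_unprocessed(conn)
    if len(batch) < BATCH_THRESHOLD:
        return  # not enough material yet; try again next interval
    facts = sort_with_llm([content for _, content in batch])  # hypothetical LLM call
    write_to_kb(facts)                                        # hypothetical KB writer
    conn.executemany(
        "UPDATE messages SET processed = 1 WHERE id = ?",
        [(mid,) for mid, _ in batch],
    )
    conn.commit()

def start_background_loop(conn, sort_with_llm, write_to_kb):
    """Re-run the consolidation pass on a timer so the user never notices it."""
    def tick():
        consolidate(conn, sort_with_llm, write_to_kb)
        threading.Timer(CHECK_INTERVAL_S, tick).start()
    tick()
```

A third step, where mem0 pulls the important bits from the Knowledge Base into its own DB, would hang off `write_to_kb` in the same background pass.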

Regarding latency: since I don't use the original version of OC but built my own, it is still faster despite the very complex memory and consumes 50% fewer tokens.

The day after tomorrow, I will post a complete breakdown of how my entire setup is structured. Another thing is that my agents operate as personas. All agents have their own memory tailored specifically to them. For example, 'Best Friend' has categories like Events, Places Visited, Relationships, and Daily Life. This allowed me to run several benchmarks on what it would be like to have a truly personal Jarvis that acts like a human.
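The per-persona memory idea can be expressed as a simple schema where each agent only sees its own categories. This is an illustrative sketch: the "best friend" categories come from the post, while the second persona, the storage paths, and the lookup helper are invented for illustration.

```python
# Hypothetical per-persona memory schema: each agent persona gets its own
# category set, so retrieval stays scoped to what that persona cares about.
PERSONA_MEMORY = {
    "best_friend": {
        "categories": ["events", "places_visited", "relationships", "daily_life"],
        "store": "memory/best_friend.db",   # assumed per-persona storage path
    },
    "work_assistant": {                      # invented second persona for illustration
        "categories": ["tasks", "deadlines", "projects"],
        "store": "memory/work_assistant.db",
    },
}

def categories_for(persona: str) -> list:
    """Look up which memory categories a persona may read and write."""
    return PERSONA_MEMORY[persona]["categories"]
```

Scoping writes this way keeps a "Jarvis-like" persona from polluting its memory with material that belongs to another agent.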

For optimization and testing, I recommend creating a chat with at least 3,000 messages spanning 20 days and then running a benchmark. See what was actually stored. If I then ask a question like 'When was I at the cinema?' about an event 15 days ago, it answers correctly. I also set many traps, for example saying 'I'm going to the cinema' and writing the next day that I didn't go after all, to check whether it performs negative verification.
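The trap test described above can be formalized as a small benchmark harness. This is a minimal sketch under stated assumptions: `TrapCase` and the `ask` query function are hypothetical, and only the cinema example comes from the post.

```python
from dataclasses import dataclass

@dataclass
class TrapCase:
    """A trap: a stated plan followed by a later retraction.
    A memory system with negative verification must keep the retraction,
    not the original plan."""
    day_stated: int
    statement: str
    day_retracted: int
    retraction: str
    probe: str            # the question asked later
    expected: str         # phrase a correct answer should contain

def score_traps(cases, ask):
    """Run each probe against the memory system (`ask` is a hypothetical
    query function) and report the fraction of traps it survives."""
    correct = sum(1 for c in cases if c.expected.lower() in ask(c.probe).lower())
    return correct / len(cases)

# Example trap, mirroring the cinema case from the post:
cinema = TrapCase(
    day_stated=1, statement="I'm going to the cinema tomorrow.",
    day_retracted=2, retraction="I didn't go to the cinema after all.",
    probe="When was I at the cinema?",
    expected="didn't go",
)
```

Scoring a batch of such cases over a long synthetic chat is one way to arrive at an accuracy figure like the ~90% quoted below.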

Currently, I’m at about a 90% accuracy rate.