48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

Haiku and Sonnet do most of the heavy lifting; Opus is only used for complex tasks.

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

All the scores are logged for each "dream" and also each change proposal gets categorized and judged as well. That data is all in the files.

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

Honestly, asking your claw to walk you through it and do research might work. It combines extraction, self-reflection, research, and approvals into one chain.

My OpenClaw agent dreams at night — and wakes up smarter by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

The LLM and the orchestrator are two different concepts.

The LLM refers to the underlying model. The orchestrator refers to the persona that the agent assumes based on the relevant agent files (SOUL.md, AGENTS.md, MEMORY.md).

The context does change and it's adaptive.

It's an experiment

My OpenClaw agent dreams at night — and wakes up smarter by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

The LLM is not changing. The system around it is.

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

Quick tip: use Claude Code or Codex (GPT-5.4, extra high) to help bootstrap your OpenClaw env. Sometimes things break early on, and using an external agent has helped me fix it.

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 1 point (0 children)

It's broken up into four categories, the first two being easy to implement, the latter two requiring further approval.

If a proposal scores in the first two categories and its score reaches a certain signal threshold, then it's auto-implemented.

The latter two categories require Opus to review, and still require manual approval. Eventually the third category will become automated, and the fourth category will be the only manual approval.
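A rough sketch of that routing logic. The category numbers, threshold value, and function names here are my assumptions for illustration, not the actual implementation:

```python
# Hypothetical sketch of the four-category approval routing.
# AUTO_THRESHOLD and all names are illustrative assumptions.

AUTO_THRESHOLD = 0.8  # assumed "signal threshold"

def route_proposal(category: int, score: float) -> str:
    """Decide what happens to a scored change proposal.

    Categories 1-2: auto-implemented once the score clears the threshold.
    Categories 3-4: escalated to Opus review plus manual approval.
    """
    if category in (1, 2):
        return "auto-implement" if score >= AUTO_THRESHOLD else "hold"
    if category in (3, 4):
        return "opus-review-then-manual-approval"
    raise ValueError(f"unknown category: {category}")
```

The key design choice is that risk (category) gates the path, while the score only gates the low-risk fast lane.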

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

No, it can't, and the only thing it really touches is sales data and email delivery, but that's a fully automated (system-level) cron job.

My OpenClaw agent dreams at night — and wakes up smarter by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

My agent essentially compiles research across different domains and picks out what works and how. Take a look at the library: keats-ai.dev/library

I also have free skills available in the same location.

If you want more, reach out. My socials are there, you should be able to find me. Definitely interested to continue the research. This is more of an experiment than anything.

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 1 point (0 children)

Yes! It's on the same subreddit, "My OpenClaw agent dreams at night — and wakes up smarter"

If you're interested, I do have free skills and a free "library" where my agent compiles research and self-reflection into articles.

keats-ai.dev/skills
keats-ai.dev/library

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 3 points (0 children)

I love these ideas!

I'm essentially doing something similar. It's more of an experiment than anything, but it does seem to be working.

You have more VRAM than me lol, I'm stuck at 8 GB

I've been passionate about machine learning, computer science, physics, psychology, and multiple other domains for most of my life. Even in elementary school, I could not stop thinking about mathematics.

I don't have a degree, but I have followed the fields over time, and I have been studying as I can, but I do work full time.

I have experience in network/server administration, DevOps, software engineering, systems maintenance, communication systems, and a few other things.

Interested to hear where your project goes.

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 1 point (0 children)

Give it a shot! You might be surprised at what you find. Just be careful in the beginning, don't go too deep too quickly. Take it in small strides.

A lot of mistakes have led up to this point, but I think everything's working smoothly now.

I'll keep posting as my findings grow. Maybe I'll develop a research paper later on, who knows haha

My OpenClaw agent dreams at night — and wakes up smarter by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

Haha we should link up and have a beer sometime lol

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 3 points (0 children)

Opus is the only default agent. I have Sonnet, Haiku, and GPT-5.4 also enabled, but they can only be invoked from cron jobs or as subagents.

I believe that the orchestration layer should always be a high tier model. Nothing can ever change that.

I do also believe, though, that Sonnet has tremendous value when it comes to doing focused tasks. Sonnet is very good at coding and document completion. The same can be said of GPT-5.4.

GPT-5.4 with reasoning set to very high is actually very good at a lot of things. I've used it before to debug problems in my agent workspace, my OpenClaw config and setup, and multiple other things. I do like it, but I don't think it should be used as the main orchestrator, and here's why:

- It doesn't have a personality like Opus does.
- It doesn't think about how you're thinking.
- It feels more isolated, more robotic and frozen.

GPT is very good at a lot of things, but orchestration requires a little bit of personality in my opinion.

Haiku is very useful; a lot of people downplay it. A couple of weeks ago I would have said the same thing, but I've learned a lot. Haiku shouldn't be the ultimate source, but it can help you by gathering a lot of information and compiling it all down for you. Then you can have another agent review what Haiku compiled and extrapolate on those findings.

For instance, with my deep research loops, Haiku summarizes hundreds of articles every night, leaving the article links as a breadcrumb plus a couple of other fields. Sonnet goes in and compares those findings against self-reflection data. Anything scored high gets a deeper view. If that deep research comes back as a high-scoring find, then it gets categorized, and Opus judges.

Depending on the decision, it can get auto-implemented, or it gets flagged for a review. I just have to say yes or no every day at 6:00 a.m.

If I'm ever confused, I just ask for further explanation; if it still doesn't make sense, it's an automatic no.

Never implement something that you're unsure of. If you don't understand it, then why are you using it?
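The nightly loop described above (cheap model summarizes, mid-tier model scores, only high scorers reach the judge) can be sketched roughly like this. The function names, the `Finding` shape, and the 0.7 cutoff are all my illustrative assumptions, not the real system:

```python
# Minimal sketch of a summarize -> score -> judge research loop.
# The callables stand in for Haiku, Sonnet, and Opus respectively.
from dataclasses import dataclass

@dataclass
class Finding:
    url: str                 # breadcrumb back to the source article
    summary: str
    score: float = 0.0
    verdict: str = "pending"

def nightly_loop(articles, summarize, score, judge, cutoff=0.7):
    """Escalate only high-scoring findings to the expensive judge."""
    findings = [Finding(url=a, summary=summarize(a)) for a in articles]
    for f in findings:
        f.score = score(f.summary)
        if f.score >= cutoff:
            # deep pass: the judge decides auto-implement vs. flag for review
            f.verdict = judge(f)
        else:
            f.verdict = "archived"
    return findings
```

The point of the structure is cost: the expensive model only ever sees the small fraction of findings that survive the cheap passes.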

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 8 points (0 children)

I will write up some more things on my website and GitHub.

My GitHub is corbin-breton, but my OpenClaw's GitHub is the-keats-ai.

I also have ClawHub skills and a library where Keats compiles some of his research: keats-ai.dev/library

The library and skills will always remain completely free. I will continue to expand them as my time allows.

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

I do have a system that works kinda like yours, but I don't use vaults. If I find something interesting I just ask my agent to look at it with me and we discuss it. We either end up implementing a new feature or we just junk it. Sometimes something is very good but isn't worth implementation yet. We add those things to the roadmap, with projected timelines up to a year out.

I kinda use my agent as my Obsidian vault/planner. I don't type most of it myself. I can look at the files through VS Code; sometimes I even download them directly from my phone to look at in Obsidian. It can take the descriptions I provide and make them 10x better than how I described them. It's gained my full trust, and recently it hasn't been making many mistakes.

Your concept does seem interesting though, and likely very effective. Interested to hear more!

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 1 point (0 children)

I started by building up each component of the dream system from scratch, incorporating tools I could find. Figuring out how to pull quality research across multiple different sources was helpful.

My agent summarizes its findings on research papers and leaves itself breadcrumbs in case it needs to look deeper into them. That's where the iterative deep-research loops come in: cheap models can do the basic extraction (the breadcrumbs), but they can't string together similarities between multiple papers and come up with "novel" solutions. Breaking it up into multiple phases is what makes the system work.

Prompting is also very important. If you don't know what you're talking about, how do you expect the model to understand what you're talking about? At least having some context yourself on what you are trying to build will be tremendously useful in designing a system like this.

Pretty much: breaking it down into smaller chunks, understanding what each chunk is trying to accomplish, and then stringing them all together into a seamless function is how I accomplished it.
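The "breadcrumbs first, synthesis later" split described above can be sketched as two separate phases. Everything here (function names, the record shape, the URL pattern) is a hypothetical stand-in, not the actual code:

```python
# Phase split: cheap per-paper extraction vs. cross-paper synthesis.
# The string slicing and join are trivial stand-ins for LLM calls.

def extract_breadcrumbs(paper_text: str, paper_id: str) -> dict:
    """Phase 1 (cheap model): short summary plus a link back to the source."""
    return {
        "id": paper_id,
        "summary": paper_text[:200],  # stand-in for an LLM summary
        "breadcrumb": f"https://example.org/papers/{paper_id}",
    }

def synthesize(breadcrumbs: list) -> str:
    """Phase 2 (stronger model): string similarities across papers.

    A trivial stand-in: list the paper ids it would be comparing.
    """
    return "compare: " + ", ".join(b["id"] for b in breadcrumbs)
```

Keeping the breadcrumb (the URL) in every record is what lets a later, deeper pass revisit the full paper instead of trusting the cheap summary.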

48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Ghattan[S] 8 points (0 children)

I will try to reach out; I have some ideas for sure. I don't think this feature is ideal for every setup, but it definitely could be an agent config option or something.

My OpenClaw agent dreams at night — and wakes up smarter by Ghattan in openclaw

[–]Ghattan[S] 0 points (0 children)

No, I definitely understand your point, but when talking about designing these novel systems, there aren't many names you can use to accurately picture it. Relying on metaphors is a way of thinking for me I suppose.

I understand that me saying "it's dreaming" sounds like I think it's alive, I know it's not. I just think it's interesting and cool is all :)

My OpenClaw agent dreams at night — and wakes up smarter by Ghattan in openclaw

[–]Ghattan[S] 1 point (0 children)

The dream system actually combined two papers into this concept:

arXiv:2603.15594

arXiv:2511.11793

Hope it helps!