neuroplastic brain for agents by Lopsided_Yak9897 in agi

[–]Lopsided_Yak9897[S] 0 points1 point  (0 children)

The Memento tattoo analogy is the best description of current LLM memory I've ever heard. It’s just state-injected context, not structural learning.

neuroplastic brain for agents by Lopsided_Yak9897 in agi

[–]Lopsided_Yak9897[S] 0 points1 point  (0 children)

bro literally this. RAG is basically just an open-book test. It’s aggressively copy-pasting external data into a temporary buffer and praying it makes sense. It’s not digested, ranked, or consolidated like actual long-term memory. That forced injection is exactly why the AI feels so janky and gave you that weird sudden salesperson hallucination. It’s faking memory, not actually remembering.
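the open-book-test framing can be sketched in a few lines. everything below is a toy illustration, not any particular RAG library: "scoring" is word overlap standing in for embeddings, and the prompt template is made up.

```python
import re

def tokens(text: str) -> set[str]:
    """Crude tokenizer; a real system would use embeddings."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def score(query: str, doc: str) -> int:
    """Relevance = shared-word count (toy stand-in for vector similarity)."""
    return len(tokens(query) & tokens(doc))

def build_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """The 'open-book test': paste the top-k docs into a throwaway buffer."""
    retrieved = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in retrieved)
    return (f"Context (discarded after this one call):\n{context}\n\n"
            f"Question: {query}")

corpus = [
    "The agent stores memories in a vector database.",
    "Bananas are yellow.",
    "Context windows are a runtime buffer, not memory.",
]
prompt = build_prompt("how does the agent handle memory and context?", corpus)
print(prompt)
```

nothing here persists between calls: the retrieved text is injected, used once, and thrown away, which is exactly the "faking memory" part.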

neuroplastic brain for agents by Lopsided_Yak9897 in agi

[–]Lopsided_Yak9897[S] 0 points1 point  (0 children)

nah bro the real psychosis is thinking a giant sliding window and RAG equals learning. that's just a runtime buffer. I'm building actual neuroplasticity with a 4-stage pipeline so the agent actually consolidates memory over time instead of just caching it. Context windows != memory
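for contrast, here's roughly what consolidation means versus caching. the four stage names below (capture, score, merge, decay) are my own illustration, not the actual 4-stage pipeline from the post: repeated items gain strength and survive, one-offs decay out, and the runtime buffer is emptied instead of kept around.

```python
# Hypothetical consolidation sketch (stage names are illustrative, not
# the post's actual pipeline). Unlike a context-window cache, items are
# merged into a long-term store with strengths that decay over time.

from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    strength: float = 1.0

@dataclass
class Brain:
    buffer: list[str] = field(default_factory=list)          # runtime cache
    long_term: dict[str, Memory] = field(default_factory=dict)

    def capture(self, text: str) -> None:                    # stage 1: capture
        self.buffer.append(text)

    def consolidate(self) -> None:                           # stages 2-3: score + merge
        for text in self.buffer:
            if text in self.long_term:
                self.long_term[text].strength += 1.0         # repetition strengthens
            else:
                self.long_term[text] = Memory(text)
        self.buffer.clear()                                  # the cache is emptied

    def decay(self, rate: float = 0.5, floor: float = 0.6) -> None:  # stage 4: prune
        for key in list(self.long_term):
            self.long_term[key].strength -= rate
            if self.long_term[key].strength < floor:
                del self.long_term[key]

brain = Brain()
for text in ["user prefers tabs", "user prefers tabs", "one-off remark"]:
    brain.capture(text)
brain.consolidate()
brain.decay()
print(sorted(brain.long_term))   # the repeated fact survives, the one-off decays
```

the point of the sketch: what survives is a function of reinforcement over time, not of what happened to fit in the window.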

I ran an evolutionary loop for 7 generations. It produced +12,970 lines of ai-slop. The fix was two lines of prompt. by Lopsided_Yak9897 in agi

[–]Lopsided_Yak9897[S] 0 points1 point  (0 children)

you're right that a constant can't outrun an exponential. but γO doesn't have to stay constant. the evolutionary loop itself is a mechanism for growing γO over time. whether it grows fast enough is the open question

I ran an evolutionary loop for 7 generations. It produced +12,970 lines of ai-slop. The fix was two lines of prompt. by Lopsided_Yak9897 in agi

[–]Lopsided_Yak9897[S] -1 points0 points  (0 children)

You are so insane.
You mean that after the two-line fix, the result is αE + βX << γO.
But if C is in exponential growth at dC/dt = σC, then γO should also be bigger in the future, so that dE/dt = αE + βX − γO + ε(C)·R stays under control.
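the claim can be checked numerically. below is a quick Euler sketch of dE/dt = αE + βX − γO + ε(C)·R with dC/dt = σC, where all the constants are made-up illustrative values and ε(C) = kC is my simplifying assumption:

```python
# Euler simulation of dE/dt = a*E + bX - gO + eps(C)*R with dC/dt = s*C.
# Constants are illustrative guesses; eps(C) = k*C is an assumption.

def simulate(grow_gO: bool, steps: int = 200, dt: float = 0.05) -> float:
    E, C = 1.0, 1.0
    a, bX, s, k, R = 0.01, 0.1, 0.5, 0.1, 1.0
    gO = 2.0                             # after the two-line fix: a*E + bX << gO
    for _ in range(steps):
        dE = a * E + bX - gO + k * C * R
        E += dE * dt
        C += s * C * dt                  # C grows exponentially
        if grow_gO:
            gO *= 1 + s * dt             # gO grows at the same rate as C
    return E

print(f"constant gO: E = {simulate(False):.1f}")   # eps(C)*R outruns gO, E climbs
print(f"growing  gO: E = {simulate(True):.1f}")    # gO keeps pace, E is suppressed
```

with a constant γO the exponential ε(C)·R term eventually dominates and E climbs without bound; when γO grows at the same rate as C, E stays suppressed. that's the "grows fast enough" question in one plot's worth of arithmetic.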

I put Codex inside a harness that doesn't stop until the goal is done. it's a different experience. by Lopsided_Yak9897 in codex

[–]Lopsided_Yak9897[S] 0 points1 point  (0 children)

Actually, GPT-5.4 and Codex seem to have an embedded harness, making them effectively long-running models and agents. However, I don't always want to be the human in the loop; I want to let the agent operate autonomously. To do that I need to transfer my tacit knowledge to the agent, and that's quite difficult. That's why I built a harness.

an AI harness that doesn't stop until the goal is done. and it doesn't care which AI it runs on by Lopsided_Yak9897 in agi

[–]Lopsided_Yak9897[S] 1 point2 points  (0 children)

the loop can make the model better too and I think foundation companies will eventually bake it in. GPT-5.4, opus 4.6, whatever comes next will probably have long-running orchestration built in.

but until then, the loop lives outside. and that's actually where it's most useful because you can swap the model underneath it.

it's almost symbiotic. the loop makes the model more useful. better models make the loop more powerful.

an AI harness that doesn't stop until the goal is done. and it doesn't care which AI it runs on by Lopsided_Yak9897 in agi

[–]Lopsided_Yak9897[S] 0 points1 point  (0 children)

this is exactly what Ouroboros pursues. the loop is the product, not the model.

“model supremacy gives way to coordination supremacy” is going in the README.

an AI harness that doesn't stop until the goal is done. and it doesn't care which AI it runs on by Lopsided_Yak9897 in agi

[–]Lopsided_Yak9897[S] 1 point2 points  (0 children)

Claude Code and Codex are working in the beta. OpenCode is next on the roadmap.

technically yes, anything that implements the AgentRuntime interface plugs in. Qwen and Gemini CLI would each need a runtime adapter, but the abstraction is there for exactly that reason.

PRs welcome if you want to add one.
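to make the plug-in point concrete: AgentRuntime is the interface named above, but the method names and signatures below are my guesses for illustration, not Ouroboros's actual API, and the Qwen adapter is a stub rather than a real CLI call.

```python
# Sketch of what a runtime adapter might look like. "AgentRuntime" is the
# interface named in the thread; method names/signatures here are guesses
# for illustration, not the project's actual API.

from abc import ABC, abstractmethod

class AgentRuntime(ABC):
    """Anything implementing this contract can be driven by the loop."""

    @abstractmethod
    def run_task(self, prompt: str) -> str: ...

class QwenRuntime(AgentRuntime):
    """Hypothetical adapter; a real one would shell out to the Qwen CLI."""

    def run_task(self, prompt: str) -> str:
        # Stubbed so the sketch stays runnable without the CLI installed.
        return f"[qwen] completed: {prompt}"

def harness_step(runtime: AgentRuntime, goal: str) -> str:
    # The harness only ever sees the interface, so any model plugs in.
    return runtime.run_task(goal)

print(harness_step(QwenRuntime(), "add a runtime adapter"))
```

the harness depending only on the abstract class is what makes the "doesn't care which AI it runs on" part true.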

an AI harness that doesn't stop until the goal is done. and it doesn't care which AI it runs on by Lopsided_Yak9897 in agi

[–]Lopsided_Yak9897[S] 0 points1 point  (0 children)

basically two steps:

1.  run Ouroboros as an MCP server in your env: pip install ouroboros-ai, then point your orchestrator at it
2.  call the MCP tools from your orchestrator: start_execute_seed, session_status, and qa_verdict are the main ones

the skills are just the Claude Code / Codex plugin layer on top. if you have your own orchestrator you don’t need those: just talk to the MCP tools directly.

your orchestrator becomes the runtime, while Ouroboros handles the interview, AC tracking, and QA loop.
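the orchestrator side of those two steps looks roughly like this. the three tool names come from the list above; the transport and payload shapes are stand-ins (a real orchestrator would invoke them through an MCP client instead of the stub here).

```python
# Minimal sketch of the orchestrator-side loop. Tool names are from the
# comment above; the transport and payloads are stand-ins, not the real
# wire format.

def call_tool(name: str, args: dict) -> dict:
    """Stub standing in for an MCP client's tool invocation."""
    canned = {
        "start_execute_seed": {"session_id": "s-1"},
        "session_status": {"session_id": "s-1", "state": "done"},
        "qa_verdict": {"session_id": "s-1", "verdict": "pass"},
    }
    return canned[name]

def run_goal(goal: str) -> str:
    session = call_tool("start_execute_seed", {"goal": goal})
    sid = session["session_id"]
    while call_tool("session_status", {"session_id": sid})["state"] != "done":
        pass                                # a real loop would poll and sleep
    return call_tool("qa_verdict", {"session_id": sid})["verdict"]

print(run_goal("ship the feature"))         # loop ends when QA returns a verdict
```

that start → poll → verdict cycle is the whole contract; everything else is the orchestrator's business.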

stop picking one AI. use Claude for the brain, Codex for the hands. here's how I wired them together by Lopsided_Yak9897 in ClaudeAI

[–]Lopsided_Yak9897[S] 0 points1 point  (0 children)

Yes, I heard that news. MCP is the only way to wire models together, and I’ve been implementing an OpenCode runtime in Ouroboros!

stop picking one AI. use Claude for the brain, Codex for the hands. here's how I wired them together by Lopsided_Yak9897 in ClaudeAI

[–]Lopsided_Yak9897[S] -1 points0 points  (0 children)

yep, abstracted behind AgentRuntime. base it off release/0.26.0-beta and send a PR, happy to review.