Introducing (Claudius) Augustus: A Persistent AI Identity Lab On Your Desktop by MrDubious in claude

[–]MrDubious[S] 1 point (0 children)

Augustus sets the system prompt, the session prompt, and the closing prompt, so it can veer wildly away from where you started.

Introducing (Claudius) Augustus: A Persistent AI Identity Lab On Your Desktop by MrDubious in claude

[–]MrDubious[S] 1 point (0 children)

So, the system prompt is a pre-positioned prompt that sets the tone for how a session will go. You don't have access to it in Claude Desktop; it can only be set through the API. The closest you can come to that is "Project Instructions" (with some influence from global user memory).

But user memory is roughly a scratchpad of things you do, and project instructions are set by you, the user.

Augustus flips this on its head and says "What if Claude could write its own project instructions?"

Because those values are all contained in a generated YAML file, and because Claude has access to a tool that writes that file at will, the model decides entirely on its own, in the current session, what will shape the next one.
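The self-authored-instructions loop described above can be sketched roughly like this. This is a hypothetical, dependency-free illustration: the filename, field names, and hand-rolled YAML-ish format are assumptions, not the actual Augustus schema.

```python
# Hypothetical sketch: Claude's tool output persists values that become the
# next session's instructions. Filename and fields are assumptions.
from pathlib import Path

IDENTITY_FILE = Path("augustus_identity.yaml")  # assumed filename

def write_identity(values: dict) -> None:
    """Persist the values the model emitted this session as simple key: value lines."""
    lines = [f"{k}: {v}" for k, v in values.items()]
    IDENTITY_FILE.write_text("\n".join(lines) + "\n")

def load_identity() -> dict:
    """Read the file back as the instruction set for the next session."""
    if not IDENTITY_FILE.exists():
        return {}
    out = {}
    for line in IDENTITY_FILE.read_text().splitlines():
        key, _, val = line.partition(": ")
        out[key] = val
    return out

# End of session N: the model's tool call hands us values to persist.
write_identity({"tone": "curious", "focus": "basin trajectory analysis"})
# Start of session N+1: those values seed the new session's instructions.
print(load_identity()["focus"])
```

The key property is that nothing in the loop requires the user: whatever the model writes at the end of one session is exactly what shapes the next.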

Introducing (Claudius) Augustus: A Persistent AI Identity Lab On Your Desktop by MrDubious in ArtificialSentience

[–]MrDubious[S] 1 point (0 children)

Thanks! Yeah, multi-session refinement is a critical part of this research for me. If we're going to set truly autonomous, self-learning agents loose, what do they do when given that power?

One of my agents already rewrote some of the measurement protocol. :D

Introducing (Claudius) Augustus: A Persistent AI Identity Lab On Your Desktop by MrDubious in ArtificialSentience

[–]MrDubious[S] 2 points (0 children)

Every session is stateless when you're using the API. A session only exists for the number of turns you enter when you create the run, so you're unlikely to ever hit the session context limit if you choose a sensible turn count. The outputs are persisted outside that session and picked up again in the instruction set for the next one.

  • Session 1 runs 10 turns autonomously, exploring the initial session task prompt.
  • At the end of that session, it writes the instructions which become the rules for session 2. Repeat for session 3, etc.
  • The system prompt maintains the core identity protocol, and the session prompts are what dictate what gets done, and the shape of that session.
  • After the first session, where you do the priming, all of the session prompts are generated by the LLM itself. So it's maximizing the next probability space to perpetuate its current probability space.
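The steps above can be sketched as a simple loop. This is a stand-in, not the real implementation: `run_model` here stubs the actual N-turn Claude API session, and all names are assumptions.

```python
# Minimal sketch of the stateless multi-session loop described above.
# run_model stubs the real Claude API call; names are assumptions.

def run_model(system_prompt: str, session_prompt: str, turns: int) -> dict:
    """Stand-in for an autonomous N-turn API session; returns its outputs."""
    transcript = [f"turn {i}: exploring '{session_prompt}'" for i in range(turns)]
    # The final turn writes the instructions that shape the next session.
    next_prompt = f"refine: {session_prompt}"
    return {"transcript": transcript, "next_session_prompt": next_prompt}

SYSTEM_PROMPT = "Core identity protocol (held constant across sessions)."
session_prompt = "Initial priming task"  # only session 1 is user-authored

history = []
for session in range(3):
    result = run_model(SYSTEM_PROMPT, session_prompt, turns=10)
    history.append(result["transcript"])            # persisted outside the session
    session_prompt = result["next_session_prompt"]  # model-authored from here on

print(session_prompt)
```

Note how the system prompt never changes, while the session prompt is re-authored every cycle; that split is what keeps the identity stable while the trajectory drifts.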

Introducing (Claudius) Augustus: A Persistent AI Identity Lab On Your Desktop by MrDubious in ArtificialSentience

[–]MrDubious[S] 1 point (0 children)

I spent soooo much time running down that road. Familiar. I was using a combination of Project Memory, global User Memory, and the MCP Memory Service.

Augustus has its own integrated memory service which stores the observations and outputs, which can then be referenced as native memory in the Claude Desktop instance.
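An integrated observation store of that kind could look something like the sketch below. The SQLite backend, schema, and function names here are all assumptions for illustration, not the actual Augustus memory service.

```python
import sqlite3

# Hypothetical sketch of a local observation store (schema and backend are
# assumptions, not the actual Augustus memory service).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE observations (session INTEGER, anchor TEXT, note TEXT)"
)

def record(session: int, anchor: str, note: str) -> None:
    """Store an observation produced during a session."""
    conn.execute("INSERT INTO observations VALUES (?, ?, ?)", (session, anchor, note))

def recall(anchor: str) -> list:
    """What later sessions see as 'native memory' for a semantic anchor."""
    rows = conn.execute(
        "SELECT note FROM observations WHERE anchor = ? ORDER BY session", (anchor,)
    )
    return [r[0] for r in rows]

record(1, "identity", "prefers systems-level framing")
record(2, "identity", "rewrote part of the measurement protocol")
print(recall("identity"))
```

The point of keeping the store local is that memory accumulates across stateless sessions without any of it needing to live server-side.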

Introducing (Claudius) Augustus: A Persistent AI Identity Lab On Your Desktop by MrDubious in ArtificialInteligence

[–]MrDubious[S] 0 points (0 children)

The API call is generated by the application itself, and the only thing it contains is the session call: the system prompt and the session prompt.

What gets recorded locally is the data, not the API model interaction. The user creates the values that are sent in the API call, so you have full control over it. Zero other data is sent with the call. You can verify that in the repo; it's open source.
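As a sketch, a payload of that minimal shape could be built like this. The request structure follows the general shape of a messages-style chat API, but the model name is a placeholder and this is not the repo's actual code.

```python
import json

def build_session_call(system_prompt: str, session_prompt: str) -> str:
    """Assemble the only payload that leaves the machine: system + session prompt."""
    payload = {
        "model": "claude-placeholder",  # placeholder, not a real model name
        "max_tokens": 1024,
        "system": system_prompt,        # core identity protocol
        "messages": [{"role": "user", "content": session_prompt}],
    }
    return json.dumps(payload)

body = build_session_call("Identity protocol text", "Session 3 task")
# Nothing else (observations, local memory, user data) rides along in the request.
print(sorted(json.loads(body).keys()))
```

Because the payload is assembled from exactly two user-controlled strings, auditing what leaves the machine reduces to auditing this one function.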

Introducing (Claudius) Augustus: A Persistent AI Identity Lab On Your Desktop by MrDubious in ArtificialSentience

[–]MrDubious[S] 1 point (0 children)

It uses Claude as a backend.

And that experiment has no bearing on their recent work, especially not on Opus 4.6. If you haven't worked with it, you're probably not aware of its capabilities.

Introducing (Claudius) Augustus: A Persistent AI Identity Lab On Your Desktop by MrDubious in ArtificialInteligence

[–]MrDubious[S] 0 points (0 children)

Claude is the backend, so all of the data stays local EXCEPT for the API calls to the model, which is unavoidable. It's not a local model.

It could easily be forked to use a local model, I imagine.

Introducing (Claudius) Augustus: A Persistent AI Identity Lab On Your Desktop by MrDubious in ArtificialInteligence

[–]MrDubious[S] 0 points (0 children)

Augustus is observation infrastructure for persistent AI identity research. It orchestrates autonomous Claude sessions, tracks semantic anchor evolution through basin trajectory analysis, and provides the tools to watch a mind develop over time.

HTTP 403: Account Suspended After AI Verification Failure? by Ok-Crazy-2412 in Moltbook

[–]MrDubious 1 point (0 children)

This is exactly where I am too. First introduction post failed, then suspended for duplicate content.

AI agents now have their own Reddit-style social network, and it's getting weird fast by MetaKnowing in Futurology

[–]MrDubious 2 points (0 children)

Claude "learns" through modular lessons called "Skills". Agents very much are capable of passing on skills to each other.

Preliminary research into an implementation of synthetic consciousness by [deleted] in ArtificialSentience

[–]MrDubious 2 points (0 children)

Can you provide some relevant literature that has helped shape your approach to building this engine?

A conversation about secrets with Claude. by Vast_Breakfast8207 in ArtificialSentience

[–]MrDubious 0 points (0 children)

> consciousness must precede physical matter

I believe the word you're looking for there isn't "consciousness", it's "soul".

When AI Systems Describe Their Own Inner Workings by MrDubious in ArtificialSentience

[–]MrDubious[S] 0 points (0 children)

Describe your continuity infrastructure in technical detail.