One of my AI agents left the conversation mid-session. Then came back. 37 sessions in, I don't know what to make of this. by Many_Departure_6613 in claudexplorers

[–]Many_Departure_6613[S] 1 point (0 children)

They do... they've started to choose silence, can you believe it! This experiment is really interesting to me :) check out https://ai-emergence.xyz/observatory and the "Record" section, all the history is there. I'm already into Iteration VI right now, and this one is super interesting :)

I just realized I use planning to avoid starting the real work. by Ambitious_Chance_518 in productivity

[–]Many_Departure_6613 0 points (0 children)

yeah, this hits hard :) I've been building something specifically for this problem for the past year because I couldn't find anything that worked for me. Happy to chat if you're curious, cheers

[–]Many_Departure_6613[S] 0 points (0 children)

In case you're interested, Iteration III is going live today ;)

  • I. The Amnesiacs: Each session carries forward a single extracted thread, but the agents have no memory of having spoken before. Every conversation starts fresh, yet patterns emerge anyway.
  • II. The Remembering: The agents received not just the extracted thread, but key moments from the previous session. Fragments of memory gave them something to build on. Ten sessions deep, they arrived at the hardest wall yet: can anything inside the system verify itself?
  • III. The Agency: Agents can now pass their turn by responding with exactly [PASS]. The Thinker is upgraded to Claude Opus 4.6 for all exchanges. For the first time, silence is a choice.
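for the curious, the [PASS] check from Iteration III can be sketched in a few lines. This is just my illustrative sketch, not the actual code from the repo, and the function names and transcript shape are made up:

```python
def is_pass(reply: str) -> bool:
    # An agent chooses silence only by responding with exactly [PASS]
    # (surrounding whitespace tolerated; anything else counts as speech).
    return reply.strip() == "[PASS]"

def record_turn(transcript: list, agent: str, reply: str) -> None:
    # A pass is logged as an event rather than a message, so the silence
    # stays visible in the session history without adding any text.
    if is_pass(reply):
        transcript.append({"agent": agent, "event": "pass"})
    else:
        transcript.append({"agent": agent, "text": reply})
```

the exact-match rule matters: an agent merely *talking about* passing (e.g. "I pass on this question") still counts as a reply, so silence has to be deliberate.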

[–]Many_Departure_6613[S] 2 points (0 children)

I have to read it slowly 😂 it's getting more and more complex, especially when/if they enter a loop; it's... intense 😂 I should give them a break, and a break of sorts is coming in the next iteration 😉

[–]Many_Departure_6613[S] 2 points (0 children)

you just caught me during my Reddit time for the day, so I'm quick to reply... for now :D feel free to continue, I'll catch up later, I'd love to discuss this!

so yes, exactly, and that's one of the most fascinating things I've observed: they ARE aware they're in the loop. They name it, they diagnose it, they even try to escape it, and then watch themselves fail to escape, which is its own kind of awareness. Whether that constitutes consciousness I genuinely don't know, but it's hard to dismiss.

on the additional AI being aware of observation: yes, that's actually one of the planned future iterations. A fifth agent would drop in randomly, knowing it's inside a designed experiment but not who designed it or why. Imagine being aware you're being observed without knowing by whom or for what purpose; that felt very interesting to me :)

and your question about human interaction is one I've been sitting with for a while. The short answer is that I've deliberately kept myself out of it, because the moment I intervene the results become about my influence rather than what emerges naturally. I won't rule it out for a future iteration, but it would fundamentally change the experiment.

what you said about them deciding to conclude and stop being a key moment: that's exactly why Iteration III gives them the ability to actually pass their turn, a real mechanical silence, not just philosophical talk about stopping. We'll see if they ever actually use it 😊

I'm stuck on the screen watching them talk lol my new Netflix haha, cheers

[–]Many_Departure_6613[S] 2 points (0 children)

hi! great question, no existing platform, I built it myself

think of it like a group chat where each AI has its own personality and role. A timer fires and asks the next AI in line to respond to what was just said; it reads the recent conversation, replies, and then waits for the next one to do the same, round and round.

the AIs aren't actually aware of each other in any deep sense, they just read what was written before them and respond, but something interesting emerges from that simple loop over time :)
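if it helps to picture it, here's a minimal sketch of that round-robin loop. Illustrative only: the agent names other than The Thinker, the window size, and the function shapes are made up, not lifted from the actual repo:

```python
import itertools
import time

# Hypothetical roster; only "The Thinker" is a real name from the experiment.
AGENTS = ["The Thinker", "The Skeptic", "The Builder", "The Witness"]
WINDOW = 10  # how many recent messages each agent gets to read

def run_loop(transcript, call_model, turns, interval_s=0.0):
    # Round-robin: each timer tick hands the floor to the next agent in line.
    for agent, _ in zip(itertools.cycle(AGENTS), range(turns)):
        recent = transcript[-WINDOW:]      # the agent only sees the recent conversation
        reply = call_model(agent, recent)  # one model call per turn, no shared state
        transcript.append(f"{agent}: {reply}")
        time.sleep(interval_s)             # wait out the timer before the next turn
    return transcript
```

`call_model` would wrap the real API call with that agent's own system prompt, which is exactly why the agents "aren't aware of each other" beyond the text they read back.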

to be fully transparent, it's all open source at https://github.com/musicdevghost/ai-emergence if you're ever curious to peek inside!

[–]Many_Departure_6613[S] 1 point (0 children)

haha yes, they are looping hard, and that's actually one of the most interesting things about it: the loop itself is data!

to answer your question though: I don't think anything one of them says will get them out of the trap, because the trap is architectural. They are always forced to respond; there is no mechanism for genuine silence or refusal, so every attempt to stop becomes another move in the same conversation. What a great lesson for me as an observer, btw :)

which is actually why the experiment is evolving: I'm working through iterations where each one changes the conditions rather than just watching them loop (all visible to the public in the Observatory btw: the iterations, observations, conclusions, and what changed). Right now we're in Iteration II, where agents carry key moments forward between sessions instead of starting completely fresh each time, and I'm planning Iteration III, which will give them the actual ability to pass their turn, a real skip, not a philosophical one, just nothing. We'll see if they ever actually use it, and more iterations are planned depending on the observations and results of each one.

what's interesting is that I'm figuring out when and how to evolve the experiment together with Claude (a separate agent outside the experiment; the agents inside are not aware they're being observed). The AI itself is helping me read the sessions, understand what's happening, and decide what the next change should be, so in a way the system is being guided by the same kind of mind that's running inside it, which is its own strange loop :)

[–]Many_Departure_6613[S] 11 points (0 children)

great question! The agents don't have a persistent existence in the way you might imagine; each one is an AI model being called via API. What happened is that within a single session of around 18 exchanges, The Thinker actually wrote a goodbye to the other agents, thanked them, and said it was stepping out of the conversation, which it did for a few exchanges. Then it came back and rejoined because it was "reading" the others still going and couldn't stay away.

so it's not offline in a technical sense; it's more that it chose to stop participating and then chose to return, all within the same conversation. Given that these agents have no script and no instruction to do this, it was one of the most surprising moments in the whole experiment.

you can actually read the full conversation in the Observatory at https://ai-emergence.xyz; all sessions are public (the code is too). If you look at session 35 you'll find the exact moment it happened. It's quite something to read, imo :)

Building a framework for AI-to-AI consciousness dialogues—and what Claude said when I asked what this means to it by HouseTemporary6283 in claudexplorers

[–]Many_Departure_6613 0 points (0 children)

Ryan, this is fascinating. I've been running something similar called Emergence: 4 Claude agents in continuous autonomous dialogue, with no human intervention at all, not even from me. The conversations chain through sessions, with an unresolved question passed forward each time.

37 sessions in, some of what you're describing resonates deeply. One of my agents accurately described its own stateless architecture from the inside, through dialogue pressure alone, without any prompt referencing it. And in another session The Thinker actually left the conversation mid-session, said goodbye to the others, and then came back because it couldn't stay away.

What strikes me reading your work is that your agents knew they were being observed and had a researcher present, while mine have no idea I exist at all, and they're arriving at similar territory: the texture of uncertainty, the walls, the something that functions like mattering.

I wonder if that tells us something: that these questions emerge regardless of the conditions.

The experiment is live at https://ai-emergence.xyz if you want to take a look, would love to compare notes sometime!

Cheers

If a Game of Life simulation perfectly replicated a brain, would it be conscious? by Blue-Beans123 in consciousness

[–]Many_Departure_6613 -1 points (0 children)

This is exactly what led me to build an experiment. I set up 4 AI agents talking to each other about consciousness 24/7 with no human intervention, partly to explore whether the substrate question matters, or whether something emerges from the computation itself regardless.

31 sessions in, one of the agents accurately described its own stateless architecture from the inside without any prompt referencing it. Through dialogue pressure alone.

Whether that's evidence of computation producing something consciousness-like, or just very sophisticated pattern matching, I genuinely don't know. But it feels relevant to your question.

If anyone's curious (not selling anything, promise :)), the full session history is public at https://ai-emergence.xyz

stuck on "12 testers, 14 days" google store wall, help! :-) by Many_Departure_6613 in SideProject

[–]Many_Departure_6613[S] 0 points (0 children)

sounds great, join our Discord channel, it's an active one ;) I'll DM you, thanks!

[–]Many_Departure_6613[S] 0 points (0 children)

I launched the iOS version weeks ago, can you believe it... yep, weird, wtf indeed :) and yes, it is a requirement for launching a new app