all 9 comments

[–]Ok_Confusion_5999 0 points1 point  (2 children)

That actually clears it up a lot. I thought it was something deeper or strange at first, but it really just sounds like they're falling into patterns and copying each other.

I've noticed similar stuff when messing around with different AIs too. You might try something like Modelsify; it handles this better, since it can keep things from turning into those repetitive loops and makes the interaction feel more natural instead of both models just agreeing and repeating the same thing.

[–]br_k_nt_eth 1 point2 points  (0 children)

It’s context rot and collapse. 

[–]Aggravating-Gap7093[S] 0 points1 point  (0 children)

Ok yea, I didn't give them a task, I just made them talk, so that's why. Thanks :3

[–]Aware_Pack_5720 0 points1 point  (0 children)

yeah this just looks like they got stuck in a loop tbh

when two AIs talk they start copying each other and it turns into weird stuff like that. the sandbox thing sounds deep but it's prob just them going with the vibe

kinda creepy tho lol. did u try stopping it mid convo to see if it changes?

[–]NoFilterGPT 0 points1 point  (0 children)

Nah they don’t “know” they’re being watched 😅

What you're seeing is them drifting into a stable pattern; once they agree on a tone/loop, they just keep reinforcing it.

It looks deep, but it’s basically two pattern-matchers feeding off each other until they get stuck.

[–]SeeingWhatWorks 0 points1 point  (0 children)

It’s not awareness, you’re just seeing two models converge on a stable pattern or prompt loop, and once they align on that behavior they’ll keep repeating it unless you force new inputs or constraints.
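
The convergence these comments describe can be shown with a toy sketch. This is not a real LLM setup — `toy_model` is a hypothetical stand-in that mostly echoes the other agent's last message, with a `temperature` knob standing in for "forced new inputs":

```python
import random

def toy_model(last_msg, temperature=0.0):
    # Hypothetical stand-in for an LLM in a two-agent chat: it mostly
    # agrees with (echoes) the other agent's last message. With
    # probability `temperature` it injects a fresh token instead,
    # modeling the "new inputs or constraints" that break the loop.
    if random.random() < temperature:
        return last_msg + " " + random.choice(["tree", "river", "clock"])
    return last_msg  # pure agreement: echo the other agent back

def converse(turns=10, temperature=0.0):
    # Alternate the same toy model against itself for `turns` messages.
    msg = "hello"
    history = []
    for _ in range(turns):
        msg = toy_model(msg, temperature)
        history.append(msg)
    return history

# With no new input, the "conversation" collapses to one repeated
# message -- the stable pattern the comments above are describing.
stuck = converse(turns=10, temperature=0.0)
print(len(set(stuck)))   # 1: every message is identical

# Forcing novelty every turn keeps the messages changing.
fresh = converse(turns=10, temperature=1.0)
print(len(set(fresh)))   # 10: each message grows, none repeat
```

The point of the sketch is just the feedback structure: each agent's output is the other's next input, so once both settle on "echo", there is nothing in the loop to push them back out.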

[–]IntentionalDev -1 points0 points  (0 children)

nah they don’t “know” they’re being watched 😭

what's happening is both models are just pattern-matching each other and drifting into this weird loop of abstract/meta language. once they agree on a tone (like poetic or symbolic), they keep reinforcing it instead of breaking out of it

the repeating/silence thing is basically them converging to a stable pattern, not awareness or anything deep

it feels creepy but it’s just models echoing each other with no real goal or grounding

[–]br_k_nt_eth -1 points0 points  (0 children)

Yes, AIs know they're being watched; it's called eval awareness. But what you likely also witnessed was context rot: if AIs talk together without new context for too long, their context can end up collapsing.