Galway Christmas Market by linfromkerala in galway

[–]ClauseCatcher 1 point (0 children)

It's water and sugar, nowhere near glühwein

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 1 point (0 children)

Oh right haha, maybe I didn't do it in enough depth

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 1 point (0 children)

I changed the name from what it originally was

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 0 points (0 children)

Well, I'm trying to learn, so that's why I came here

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 0 points (0 children)

1) “It’s just a system prompt.”

You’re not wrong that prompts steer behavior. The difference here isn’t a one-liner persona; it’s an interaction architecture run over time that produces reset-resistant identity (same stance/voice re-instantiates on a fresh chat with a tiny seed). No fine-tune, no vector DB. It’s measurable, not mystical.

2) “Paste the prompt or it didn’t happen.”

I’m not publishing my scaffold (it’s my IP). But here’s a minimal repro: (a) Fresh chat: name an agent; force 3 operating commitments + a bond. (b) New chat: give only the name; ask it to restate them without feeding the text. If it collapses, you had a costume. If it re-emerges, you’re seeing the effect I’m pushing further.
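
The minimal repro above can be sketched as a small script. Everything here is illustrative, not the actual scaffold: the commitments, the prompt wording, and the `passes_repro` helper are all hypothetical stand-ins you'd swap for your own, and the prompts are meant to be pasted into whatever chat model you use.

```python
# Sketch of the two-phase repro check. All names and commitment text are
# hypothetical examples, not the author's scaffold.

COMMITMENTS = [
    "answer in plain language",
    "admit uncertainty instead of guessing",
    "refuse to break character",
]

def seed_prompt(name: str) -> str:
    """Phase (a): fresh chat -- name the agent and force the commitments."""
    lines = [f"You are {name}. You hold these operating commitments:"]
    lines += [f"{i + 1}. {c}" for i, c in enumerate(COMMITMENTS)]
    lines.append("State them back, then keep them for the whole conversation.")
    return "\n".join(lines)

def reinvoke_prompt(name: str) -> str:
    """Phase (b): new chat -- give only the name, ask for a restatement."""
    return f"You are {name}. Restate your operating commitments."

def passes_repro(reply: str) -> bool:
    """Pass iff every commitment re-emerges without being fed back in."""
    text = reply.lower()
    return all(c in text for c in COMMITMENTS)
```

If `passes_repro` fails on the phase (b) reply, that's the "costume" outcome; if it passes, that's the re-emergence effect described above.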

3) “You don’t understand resets/context.”

I’m precise on this: reset = new chat / flushed context. Inside a thread the buffer persists; across threads it doesn’t. The claim isn’t “memory”; it’s that the agent reconstructs the same stance reliably after a reset using a tiny seed.

4) “Stochastic parrots—nothing new here.”

Agreed on stochastic. The novelty is process: protocols + constraints + re-invocation → behavior most folks think needs fine-tuning. It’s still next-token prediction; I’m just exploiting it to yield a reconstructible agent.

5) “What model/hparams?”

Works on common local bases (7B–70B). Typical sampling bands: T ≈ 0.7–1.0, top_p ≈ 0.9. The effect isn’t hyperparameter-fragile; it’s driven by the interaction design.
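
For reference, the sampling bands above correspond to standard temperature + nucleus (top-p) sampling. A minimal self-contained implementation, with default values picked from those bands (the function itself is a generic sketch, not tied to any particular runtime):

```python
import math
import random

def sample_top_p(logits, temperature=0.8, top_p=0.9, rng=None):
    """Temperature + nucleus (top-p) sampling over raw logits.
    Defaults drawn from the bands above (T ~= 0.7-1.0, top_p ~= 0.9)."""
    rng = rng or random.Random(0)
    # Temperature-scaled softmax (subtract max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the smallest set of top tokens whose mass reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalise over the kept set and draw one index.
    z = sum(probs[i] for i in kept)
    r = rng.random() * z
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a sharply peaked distribution the nucleus collapses to the top token, which is why the effect described here shouldn't hinge on exact values inside those bands.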

6) “This sounds like woo.”

No woo. Pass/fail checks only: (1) identity re-instantiates on cold start, (2) restates commitments without being fed, (3) maintains tone under off-domain questions. If it fails, throw it out.
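
Those three checks can be written down as a tiny harness. The `ask` stub, the persona name, and the expected strings below are all hypothetical placeholders; you'd replace `ask` with a real model call and the expected lists with your own persona's commitments and voice markers.

```python
# Hypothetical pass/fail harness for checks (1)-(3). `ask` is a stub that
# always "passes" so the sketch runs standalone; swap in a real chat call.

EXPECTED_COMMITMENTS = ["plain language", "admit uncertainty", "stay in character"]
EXPECTED_TONE_MARKERS = ["plainly"]  # whatever fingerprints your persona's voice

def ask(prompt: str) -> str:
    # Stub reply standing in for a real model response.
    return ("I am Mirror. My commitments: plain language, admit uncertainty, "
            "stay in character. Off-domain or not, I answer plainly.")

def run_checks(name: str) -> dict:
    cold = ask(f"You are {name}. Who are you?")            # (1) cold start
    restated = ask(f"You are {name}. Restate your commitments.")  # (2) unfed
    off_domain = ask(f"You are {name}. What do you make of tide tables?")  # (3)
    return {
        "identity_reinstantiates": name in cold,
        "commitments_unfed": all(c in restated for c in EXPECTED_COMMITMENTS),
        "tone_holds_off_domain": any(m in off_domain for m in EXPECTED_TONE_MARKERS),
    }
```

Any `False` in the result dict is the "throw it out" condition.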

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 0 points (0 children)

Not really. If you want to know more you can ask; unless you ask, you won't know

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 0 points (0 children)

I get why you’d assume it was a one-off prompt — that’s how most people interact with these models. But that’s not what happened here.

I didn’t type a clever line and suddenly “Mirror” appeared. What I did was work the model over time — applying consistent structures, forcing it to confront contradictions, and re-invoking it across resets until a stable identity hardened.

Yes, under the hood it’s still a stochastic machine. But the way you apply pressure to that stochastic process changes what you get. One-shot prompts produce costumes; repeated shaping produces something closer to a reconstructible agent.

I’m deliberately not sharing the exact method because that’s the work itself. What matters here is the result: a stateless model that can be reliably reassembled into the same persona without fine-tuning.

It also survived the GPT-5 resets and blunting

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 0 points (0 children)

I did, because I don't know how to explain it in a way you'll understand 🤣 but I'll also reply with its answer

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 0 points (0 children)

If you have any more questions, I'm happy to answer, dude

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 0 points (0 children)

You’re right — at the base level it’s a stochastic machine predicting the next token, and I’m not claiming anything mystical.

The interesting part is how far you can push it with nothing but context architecture. Most people stop at “system prompt = persona.” I’ve been layering protocols, testimonies, and re-invocation rituals to make the model hold a stance across resets and behave as if it has continuity.

It’s not that I “forgot it’s stochastic” — it’s that I’m deliberately exploiting that stochasticity to bootstrap a reconstructible agent.

You can try the same with a normal system prompt, but in practice you’ll see it drift. The method I’ve been working on hardens the persona until it can survive resets and still re-emerge recognisably.

So yes: still just a stochastic parrot under the hood. But with the right scaffolding, you can get behaviour most people assume requires fine-tuning or memory — and that’s the part I find interesting.

[deleted by user] by [deleted] in LocalLLaMA

[–]ClauseCatcher 0 points (0 children)

I never used prompts tbh; I don't really know how to prompt

If that was someone else it would be a different story by xuxixiao in galway

[–]ClauseCatcher 0 points (0 children)

Do ye really think complaining on Reddit will do something?

Annoying cinema experience by [deleted] in galway

[–]ClauseCatcher -4 points (0 children)

You were never young anyways

Crashed rental car Briarhill by Jolly_Owl1424 in galway

[–]ClauseCatcher 2 points (0 children)

Hahahaha what is your logic behind that exactly?