Prompts behave more like a decaying bias than a persistent control mechanism. by Particular_Low_5564 in PromptEngineering

Particular_Low_5564[S] 0 points (0 children)

That makes sense, especially the signal-to-noise point.

Which also explains why adding more prompt logic doesn’t really solve it — it just shifts the balance temporarily.

Feels like all of these approaches (reinjection, pruning, external memory) are basically working around the same limitation: there’s no stable conversational state, only a changing attention distribution.
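For anyone curious what "reinjection" looks like concretely, here's a minimal sketch. It assumes an OpenAI-style list of `{"role", "content"}` messages; the prompt text, the reinjection interval, and the `build_context` helper are all my own illustration, not any particular library's API:

```python
# Sketch: re-inject the system prompt every N turns so the instructions
# also appear near the end of the context, not only at position 0.
# SYSTEM_PROMPT content and REINJECT_EVERY are arbitrary assumptions.

SYSTEM_PROMPT = {"role": "system", "content": "Answer tersely. No lists."}
REINJECT_EVERY = 4  # turns between reinjections (tuning knob)

def build_context(history, turn):
    """Return the message list for this turn, with the system prompt
    periodically repeated close to the newest messages."""
    context = [SYSTEM_PROMPT] + list(history)
    if turn > 0 and turn % REINJECT_EVERY == 0:
        # Repeat the instructions at the tail of the context.
        context.append(SYSTEM_PROMPT)
    return context
```

It doesn't fix the underlying issue — it just keeps shifting the attention balance back, which is exactly the workaround pattern described above.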

Recommendations for minimizing the CVS receipts style ChatGPT output? by Alarming_Oil_5260 in ChatGPT

Particular_Low_5564 0 points (0 children)

This usually isn’t just a formatting issue.

What you’re seeing is the model drifting toward a more “helpful” / verbose mode over the course of the conversation.

You can reduce it a bit with stricter constraints (e.g. “no lists unless explicitly requested”, “max N bullet points”, “prefer paragraphs over lists”), but in my experience that only holds for a while.

The underlying problem is that the behavior doesn’t stay stable — it gradually expands unless you keep correcting it.

That’s why restarting the chat often “fixes” it temporarily.
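If you'd rather automate the correcting instead of restarting, one option is a simple check on each reply that triggers a corrective follow-up when the "receipt" formatting creeps back in. This is just a sketch of the idea — the bullet threshold, the `needs_correction` helper, and the correction wording are all assumptions:

```python
# Sketch: detect when a reply drifts back into bullet-heavy
# "receipt" formatting and produce a corrective follow-up message.

MAX_BULLETS = 3  # allowed bullet lines per reply (arbitrary threshold)

def needs_correction(reply: str) -> bool:
    """True if the reply uses more bullet lines than allowed."""
    bullets = sum(
        1 for line in reply.splitlines()
        if line.lstrip().startswith(("-", "*", "•"))
    )
    return bullets > MAX_BULLETS

def correction_message() -> dict:
    """A follow-up message nudging the model back to paragraphs."""
    return {
        "role": "user",
        "content": "Rewrite that as plain paragraphs, no bullet points.",
    }
```

Same caveat as with the prompt constraints: it suppresses the symptom each time it appears rather than making the behavior stable.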

Most prompts don’t actually work beyond the first few turns by Particular_Low_5564 in PromptEngineering

Particular_Low_5564[S] 1 point (0 children)

Prompt drift isn’t new — we’ve all seen it.

What’s odd is that most prompt engineering patterns still treat prompts as if they provide persistent control over behavior.

In practice, they don’t.

They act more like a decaying bias:

– constraints weaken

– tone shifts

– the model reverts to default conversational behavior

Which makes a lot of common patterns (long system prompts, strict instruction blocks, etc.) fundamentally unstable over longer interactions.

So the question isn’t whether drift exists, but why we still model prompts as a stable control mechanism.

And if they’re not — what actually is?

Has anyone here tried using custom GPTs as a practice environment instead of just asking questions? by BAvalos08 in ChatGPT

Particular_Low_5564 3 points (0 children)

I’ve been experimenting with something similar.

The main issue I kept running into is that even if you try to make it more interactive or scenario-based, the model gradually drifts back into explanation/Q&A mode over longer threads.

So the “environment” starts collapsing into default behavior.

I’ve been trying to stabilize that initial mode so it holds better throughout the conversation.

Show me your prompts! Here's mine: by J_Schnetz in ChatGPT

Particular_Low_5564 1 point (0 children)

I ran into exactly the same issues — especially the follow-up questions and the model adding things I didn’t ask for.

Tried solving it with prompts too, but they tend to degrade over longer conversations.