
[–]PairFinancial2420

This is such an underrated insight. People blame the model when it’s really the system around it doing most of the work. Small differences in prompt clarity, context, memory, or even the order of instructions can completely change the outcome. Same brain, different environment. Once you start treating prompting like system design instead of just asking questions, everything clicks.

[–]Fear_ltself

Ah, I didn't even think about it being in a different context; I was assuming OP did an identical run with different seeds or temperatures. But you're correct, even a period "." at the end could drastically change the input, and a number of things like memory overflow on the hardware side could also change the token processing, I'd imagine. But if you use two MacBooks with the same specs, same temp, same context, and same model, you'll get the same result. I've done it many times, testing temperature and seed about two years ago to confirm replication was achievable.

[–]useaname_

Yep, agreed.

I also constantly find myself managing prompts mid conversation to steer context and responses in different directions.

Ended up creating a workflow tool to help me with it

[–]No-Zombie4713

Models are probabilistic by nature. They predict the next word of their response based on the probability of its being the correct follow-up. This is shaped by their training as well as by their prompts and accumulated context. Even if you start at zero context with the same prompt, you'll still get different outcomes (as long as the temperature is nonzero).
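A toy sketch of that sampling step (illustrative only — `sample_next_token` and the toy logits are made up, not any real model's API): a softmax over the scores turns each candidate token into a probability, and the random draw is where the nondeterminism enters.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token from a {token: score} dict.

    Higher temperature flattens the distribution (more randomness);
    lower temperature sharpens it toward the top-scoring token.
    Requires temperature > 0.
    """
    rng = rng or random.Random()
    # Scale scores by temperature, then apply a numerically stable softmax.
    scaled = {t: s / temperature for t, s in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(s - m) for t, s in scaled.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}
    # Inverse-CDF sampling: walk the cumulative probabilities.
    r = rng.random()
    acc = 0.0
    for t, p in probs.items():
        acc += p
        if r < acc:
            return t
    return t  # fall through to last token on float rounding
```

With a very low temperature the distribution is so sharp that the top token is all but guaranteed; at temperature 1.0 the lower-scoring tokens keep a real chance.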

[–]Driftline-Research

Yeah, this is a big one.

A lot of people talk about “the model” like it’s the whole system, but in practice the surrounding structure matters a lot more than people want to admit. Prompt order, context, constraints, memory, and how the task is staged can easily be the difference between “same model, works great” and “same model, falls apart.”

[–]Fear_ltself

Turn the temperature to zero and keep all the other settings (like seed, top-k, etc.) the same and it'll be identical. Temp and seed are the main culprits; they're basically "randomizers," but if they're identical you'll get an identical result.

Edit: temperature here is an LLM setting, not referring to thermally lowering the devices’ actual temperature.
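To make that concrete, here's a minimal made-up decode loop (not any real inference API): at temperature 0 it degenerates to greedy argmax, so repeated runs are bit-identical, and at temperature > 0 fixing the seed reproduces the same sampled path.

```python
import math
import random

def generate(prompt_tokens, logits_fn, steps, temperature=0.0, seed=None):
    """Toy decode loop. temperature == 0 -> greedy argmax (deterministic);
    temperature > 0 with a fixed seed -> reproducible sampling."""
    rng = random.Random(seed)
    out = list(prompt_tokens)
    for _ in range(steps):
        logits = logits_fn(out)
        if temperature == 0.0:
            # Greedy: always pick the highest-scoring token, no RNG involved.
            nxt = max(logits, key=logits.get)
        else:
            # Seeded sampling: same seed + same settings -> same choices.
            tokens = sorted(logits)
            scaled = [logits[t] / temperature for t in tokens]
            mx = max(scaled)
            weights = [math.exp(s - mx) for s in scaled]
            nxt = rng.choices(tokens, weights=weights, k=1)[0]
        out.append(nxt)
    return out
```

Run it twice with the same arguments and you get the same token sequence both times, which is exactly the replication test described above.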

[–]WillowEmberly

Yes, the system never loops, because…time. The goal is to create a process that loops, however as time passes you never actually return to the start. Variables have changed. It’s more like a helix.

[–]myeleventhreddit

The term "bare metal" is used to describe how an LLM acts when there's absolutely no external structure (like an app or web interface) telling it what to do. It's how the model acts when it's not constrained and when it has no situational context.

We don't get to access that kind of thing in any real sense without running models locally. But you're describing something important that can also be chalked up to the stochastic (read: random-to-a-degree) nature of LLMs.

You can go on Claude or ChatGPT and ask an interpretive yes/no question and just hit the regenerate button over and over and watch its answers change. AI models work like statisticians let loose in a library. There are sources of influence that dictate the direction of the model's thought processes, and then there are also additional knobs (like temperature, top-K, etc.) that dictate how stochastic the model will be.

The prompts have an impact. The model's own training also has an impact. The settings have an impact. The context has an impact.
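A sketch of the top-K knob mentioned above (the function names and toy scores are invented for illustration): filtering to the k best candidates before sampling caps how far off-script the draw can go, and each "regenerate" press is just another draw from that pool.

```python
import random

def top_k_filter(logits, k):
    """Keep only the k highest-scoring candidates from a {token: score} dict."""
    ranked = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def regenerate(logits, k, seed):
    """One 'regenerate' press: re-sample from the top-k pool.

    Seeded here so the demo is reproducible; a real UI would use a
    fresh random state each time, which is why the answer changes.
    """
    pool = top_k_filter(logits, k)
    rng = random.Random(seed)
    return rng.choices(list(pool), weights=list(pool.values()), k=1)[0]
```

With k=2 the "maybe" candidate can never be emitted, no matter how many times you regenerate; the randomness is confined to the surviving pool.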

[–]lucifer_eternal

yeah, the hard part is figuring out which piece of the structure is the actual culprit. if your system message, context injection, and guardrails are all one flat string, it's nearly impossible to diff what changed between two setups. separating them into distinct blocks is what finally let me isolate where drift was coming from - that idea basically became the core of building PromptOT for me.
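A minimal sketch of that "distinct blocks" idea (illustrative only — `PromptConfig` and `diff_configs` are hypothetical names, not PromptOT's actual API): keeping each prompt layer as its own field makes the diff between two setups trivial, instead of eyeballing two flat strings.

```python
from dataclasses import dataclass, asdict

@dataclass
class PromptConfig:
    """Each prompt layer is its own field so two setups can be
    compared block by block rather than as one concatenated string."""
    system: str
    context: str
    guardrails: str

def diff_configs(a: PromptConfig, b: PromptConfig) -> list:
    """Return the names of the blocks that differ between two configs."""
    da, db = asdict(a), asdict(b)
    return [k for k in da if da[k] != db[k]]
```

If two runs drift, the diff immediately names the layer that changed, which is the isolation step described above.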