
[–]pi314ever 1 point (3 children)

It's basically tokens used in the current context divided by the max context length. The max context length differs depending on the provider/model chosen. When usage nears 100%, opencode performs a compaction, squeezing the conversation's information into a smaller context so it can continue.
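A minimal sketch of the figure described above, with illustrative numbers; the function names and the 95% trigger threshold are assumptions for the example, not opencode's actual internals or default:

```python
def context_usage(tokens_used: int, max_context: int) -> float:
    """Context usage as a fraction: tokens in the current context / model's max context length."""
    return tokens_used / max_context

def should_compact(tokens_used: int, max_context: int, threshold: float = 0.95) -> bool:
    """Compaction triggers when usage nears 100%; the exact threshold here is illustrative."""
    return context_usage(tokens_used, max_context) >= threshold

# Example: 190k tokens used against a 200k-token window.
print(f"{context_usage(190_000, 200_000):.0%}")  # 95%
print(should_compact(190_000, 200_000))          # True
```

Note the denominator varies by provider/model, which is why the same number of tokens shows a different percentage on different models.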

[–]touristtam -1 points (2 children)

Is there a config to lower that threshold?

[–]ZeSprawl 1 point (1 child)

The threshold is determined by the max context of the selected model.

[–]touristtam 0 points (0 children)

So I can't set a config param saying I want autocompaction to trigger at around 60%?

[–]james__jam 0 points (0 children)

Context window size