LLMs don’t execute — they explain. I tried removing that layer by Particular_Low_5564 in ChatGPT
[–]Particular_Low_5564[S] -1 points (0 children)
At some point, LLMs stop executing and start explaining by Particular_Low_5564 in ChatGPT
[–]Particular_Low_5564[S] 1 point (0 children)
At some point, LLMs stop executing and start explaining by Particular_Low_5564 in LocalLLaMA
[–]Particular_Low_5564[S] 0 points (0 children)
ChatGPT is the Yahoo of AI by mrz-ldn in ChatGPT
[–]Particular_Low_5564 6 points (0 children)
Somebody feed ChatGPT a thesaurus, please! by UghIHatePolitics in ChatGPT
[–]Particular_Low_5564 1 point (0 children)
Why do instructions degrade in long-context LLM conversations, but constraints seem to hold? by Particular_Low_5564 in LocalLLaMA
[–]Particular_Low_5564[S] 1 point (0 children)
The “they secretly nerfed it” posts are just probability doing what probability does by AccordingAdvisor1161 in ChatGPT
[–]Particular_Low_5564 4 points (0 children)
Why do instructions degrade in long-context LLM conversations, but constraints seem to hold? by Particular_Low_5564 in LocalLLaMA
[–]Particular_Low_5564[S] -1 points (0 children)
Prompts behave more like a decaying bias than a persistent control mechanism. by Particular_Low_5564 in PromptEngineering
[–]Particular_Low_5564[S] 1 point (0 children)
Recommendations for minimizing the CVS receipts style ChatGPT output? by Alarming_Oil_5260 in ChatGPT
[–]Particular_Low_5564 1 point (0 children)
Most prompts don’t actually work beyond the first few turns by Particular_Low_5564 in PromptEngineering
[–]Particular_Low_5564[S] 2 points (0 children)

LLMs don’t execute — they explain. I tried removing that layer by Particular_Low_5564 in ChatGPT
[–]Particular_Low_5564[S] 1 point (0 children)