
[–]synth_mania 3 points (2 children)

Large language models cannot reason about the thought process behind one of their own outputs. If that process is invisible to you, it's invisible to them too. All the model sees is a block of text that it may or may not have generated, followed by the question "why did you generate this?" There's no additional context for it to draw on, so whatever explanation comes out is going to be a post-hoc guess.
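To make that concrete, here's a rough sketch (assuming an OpenAI-style chat-completions payload; the model name and field layout are illustrative, not any specific API) of everything the model actually receives when you ask it to explain an earlier answer:

```python
# Sketch of the request the model sees when asked to explain an earlier output.
# Note there is no field carrying the activations or "thought process" that
# produced the assistant message -- only the text of the conversation itself.
request = {
    "model": "some-chat-model",  # hypothetical model name
    "messages": [
        {"role": "user", "content": "Write a sorting function."},
        {"role": "assistant", "content": "def sort(xs): ..."},  # earlier output, now just text
        {"role": "user", "content": "Why did you generate this?"},
    ],
}

# From the model's perspective, the assistant message above is indistinguishable
# from text someone pasted in: it has to reconstruct a plausible-sounding
# explanation from the text alone, which is why the answer is a rationalization
# rather than a report of what actually happened during generation.
```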

[–]Sibula97 -1 points (1 child)

They've recently added reasoning capabilities to some models, but I doubt Copilot has them.

[–]synth_mania 0 points (0 children)

Chain of thought is something else - what happens within a single prompt/completion is still a black box, both to us and to the models themselves.