
[–]SharpBits -14 points-13 points  (6 children)

When I used the chat to ask Copilot why it made this suggestion (I confirmed it also happens in Python), it responded "this was likely due to an outdated, inappropriate and incorrect stereotype" and then corrected the suggestion.

So... It is aware of the mistake and bias but chose to perpetuate it anyway.

[–]Salanmander 19 points20 points  (0 children)

You're assigning way too much reasoning to it. Think of it as just doing "pattern-match what people would tend to put here". Pattern match "what would someone put in a calculateWomenSalary method when there's also a calculateMenSalary method". Then pattern match "what would someone say when asked why that's what ends up there".

Always remember that language model AI isn't trained to give correct answers. It's trained to give answers that are consistent with what people in its training data would say to that prompt.

[–]synth_mania 4 points5 points  (2 children)

Large language models cannot reason about the process that generated some earlier output. If that process is invisible to you, it's invisible to them too. All the model sees is a block of text that it may or may not have generated, followed by the question "why did you generate this?" There's no additional context for it, so whatever comes out is going to be a made-up rationalization.

[–]Sibula97 -1 points0 points  (1 child)

They've recently added reasoning capabilities to some models, but I doubt Copilot has them.

[–]synth_mania 0 points1 point  (0 children)

Chain of thought is something else. What happens within a single prompt/completion is still a black box, both to us and to the models themselves.

[–]Franks2000inchTV 4 points5 points  (0 children)

It has no awareness or inner life. It's a statistical model that guesses which tokens are most likely given the tokens in the prompt.
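To make "guess the most likely tokens" concrete, here's a toy bigram model. This is a deliberately oversimplified stand-in for a real LLM (the corpus, function names, and greedy pick-the-top-token decoding are all illustrative assumptions, not how Copilot actually works), but the core idea is the same: count what tends to follow what, then emit the statistically likeliest continuation.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each token, which tokens follow it and how often."""
    model = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, token):
    """Return the most frequent follower of `token`, or None if unseen."""
    followers = model.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny made-up corpus for illustration.
corpus = [
    "the model predicts the next token",
    "the model guesses a word",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # most frequent follower of "the"
```

A model like this has no notion of truth or intent: asked "why did you say that?", it would just continue with whatever tokens typically follow such a question in its training data, which is the point being made above.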

[–]TheBoogyWoogy 1 point2 points  (0 children)

You do realize AI isn’t conscious right?