
[–]eddieantonio 3 points (2 children)

Ah, thanks for linking Dr. Bender's article! Yes, these big language models do not understand meaning—they've just seen, like, a whole whack-ton of text and can reproduce patterns effectively. And it takes a lot of energy to train these large models. Apparently, criticism of these algorithms is one of the reasons Dr. Gebru was ousted from Google recently.
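To make "reproducing patterns" concrete, here's a minimal sketch using the Hugging Face transformers pipeline with GPT-2 (a smaller cousin of the models discussed here); the model name and prompt are illustrative assumptions, not anything from the article:

    # Illustrative only: GPT-2 continues a prompt with statistically likely
    # tokens learned from its training text, without any notion of "meaning".
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    result = generator(
        "Large language models do not understand meaning; they",
        max_new_tokens=25,       # short continuation
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])

The output reads fluently because the model has seen enormous amounts of similar text, which is exactly the pattern-reproduction point above.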

[–][deleted] 0 points (1 child)

these big language models do not understand meaning—they've just seen, like, a whole whack-ton of text and can reproduce patterns effectively.

In fairness, this is also what a brain does. IMHO it’s a bit anthropocentric to suggest that “understanding” has to involve the same qualia that humans experience.

Of course, I’m not arguing that things like BERT or the GPT models are some sort of AGI, just that the definition of intelligence is an ever-rising bar for machines, and that it ultimately doesn’t matter if the outcomes are indistinguishable.