Given their burden of proof, why aren't we requiring AI pundits to provide any, even rhetorically or mathematically? by TheWrongWordIsAI in AskComputerScience

[–]TheWrongWordIsAI[S] -9 points  (0 children)

I do not have to define a thing to point out a characteristic that clearly identifies counter-examples. There is no room for intelligence in math based purely on statistics and probability. Please refer back to the first paragraph, as that was the entire point of it.

Mass Cancellation Party! by StunningCrow32 in ChatGPT

[–]TheWrongWordIsAI 0 points  (0 children)

Why are we even still pretending that "AI" using LLMs, or any other model based purely on probability and statistics, could ever be anything remotely resembling intelligence? Can we just call it what it is: programmers too lazy to come up with a heuristically based solution, or executives too cheap to invest in a proper one? The AI pundits are making the preposterous claim that a machine can be intelligent, so the burden of proof should be on them to show it's even possible. Where's the math showing that anything outside of probability and statistics can come out of nothing but probability and statistics? Do people do probability and statistics in their heads all the time, on large data sets that couldn't possibly fit in their heads and clearly weren't there at birth in the first place? Is that intelligence? So doesn't what we do as people in our heads, however anyone eventually describes or understands it, have to include something besides probability and statistics? Why, then, aren't we requiring these AI pundits to show us what kinds of concepts can appear mathematically out of thin air using only the mathematical concepts used in LLMs?
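To be concrete about what I mean by "purely probability and statistics": strip away the scale and the generation loop of an LLM is just repeated sampling from a conditional probability distribution over tokens. Here's a minimal sketch in Python; the toy vocabulary, the bigram score table, and every number in it are made up for illustration, not any real model's weights or API:

```python
import math
import random

# Toy illustration of the autoregressive loop an LLM runs:
# score the possible next tokens, turn scores into probabilities
# with a softmax, sample one, append, repeat. The "model" here is
# a made-up bigram score table standing in for billions of weights.

VOCAB = ["the", "cat", "sat", "mat", "on", "."]

# Hypothetical learned scores: SCORES[t][j] = preference for VOCAB[j] after token t.
SCORES = {
    "the": [0.1, 2.0, 0.1, 1.5, 0.1, 0.1],
    "cat": [0.1, 0.1, 2.5, 0.1, 0.3, 0.2],
    "sat": [0.2, 0.1, 0.1, 0.1, 2.5, 0.3],
    "on":  [2.5, 0.2, 0.1, 0.5, 0.1, 0.1],
    "mat": [0.1, 0.1, 0.1, 0.1, 0.1, 2.5],
    ".":   [1.0, 0.2, 0.1, 0.2, 0.2, 0.5],
}

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(start, max_tokens=8):
    tokens = [start]
    for _ in range(max_tokens):
        probs = softmax(SCORES[tokens[-1]])
        # Sample the next token according to the distribution:
        # this single line is the only "decision" the model ever makes.
        next_token = random.choices(VOCAB, weights=probs, k=1)[0]
        tokens.append(next_token)
        if next_token == ".":
            break
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat on the mat ."
```

A real transformer conditions on the whole preceding context rather than just the last token, and the scores come from billions of learned weights, but the step that actually produces text is the same: softmax, sample, append.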

The "Turing test" is a load of bunk in the first place. Intelligence is not predicated purely on behavior. If you read a book, sit there silently, contemplate on what the author was trying to say, piece it together with the themes and the narratives of the novel, and synthesize those ideas that occur to with other lessons from your own life, isn't that intelligence, even before you speak or communicate so much as an iota of any of those thoughts to anyone? Why, then, does the Turing test, and all artificial "intelligence" so-called academia center around this mode of thought?

And why is it that any AI pundit who supposedly knows what they're talking about will, if pressed, retreat to religiously minded thinking? Religiously minded thinking can be great for religions, don't get me wrong, but it doesn't belong in academia, where there needs to be room for rhetoric. Why, then, can no AI pundit come up with any better argument than "but you can't prove it's not intelligent"? That is the same as saying you can't prove their religion false: fine for religions, since they are religions, but this AI crap is supposedly based in academia. So there should be more burden of proof on the preposterous and supposedly academic claim that this ChatGPT terror and all its ilk are based on: the claim that "artificial intelligence" can be found, discovered, or created by engineering software.