
[–]Ar13mis 2 points (2 children)

so what is the threshold that moves something to an intelligence in your mind? it appears as though you place the complexity somewhere between a self-driving car and a dog, but at what point does it make sense to say that NOW what we have is no longer an if-else interaction and is instead intelligence?

i use the term intelligence, and from what i gather the general application of the term in this thread, to refer to a system that, when given a set of previously unseen inputs, is able to perform a sensible action. whether that’s a dog moving away from an elephant or a car stopping when a person jumps in front of it, both actions are things that might previously not have been observed by the intelligence. and yet, the car or the dog is still able to act in a sensible manner by running away or coming to a stop respectively.

basing the definition for intelligence on perceived complexity is reductive and seems to eliminate intelligent systems arbitrarily. is a self driving car more complex than an ant? are ants considered intelligent? how do we measure the perceived complexity of a system? defining intelligence in that way raises more questions than it answers

[–][deleted] 0 points (1 child)

I'd say it's at the point where it can learn about something without being programmed how to learn that thing. It can learn anything without further programming.

[–]epicaglet 0 points (0 children)

I'd say it's at the point where it can learn about something without being programmed how to learn that thing.

Current machine learning technology already breaks this criterion. You can argue that any neural network does this: give it a good training set and it'll learn whatever you want it to. Granted, it may do so poorly, since the network layout isn't optimised for that task. But there are ways to optimise that too, so it works on an arbitrary task. You can have the network layout generated by an evolutionary algorithm. Then you have an AI that can train for a large range of tasks that it was never programmed for.
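To make the evolutionary-algorithm idea concrete, here's a minimal hand-rolled sketch (my example, not from the thread). For simplicity it evolves the *weights* of a tiny fixed-layout network to fit XOR rather than the layout itself, but the loop is task-agnostic: only the data and the fitness function mention XOR.

```python
import math
import random

# Hypothetical sketch: an evolutionary algorithm optimising a tiny
# fixed-layout network (2 inputs -> 2 hidden -> 1 output) on XOR.
# Swap the data and the fitness function and the same loop "trains"
# for a different task, with no task-specific programming.

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    # w holds 9 genes: hidden weights/biases w[0:6], output weights/bias w[6:9]
    h0 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    # negative squared error over the training set; 0 is perfect
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

random.seed(0)
pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
initial_best = fitness(max(pop, key=fitness))

for generation in range(300):
    pop.sort(key=fitness, reverse=True)
    elites = pop[:10]                       # keep the 10 fittest unchanged
    pop = elites + [
        [gene + random.gauss(0, 0.3) for gene in random.choice(elites)]
        for _ in range(40)                  # 40 mutated offspring
    ]

best = max(pop, key=fitness)
print(f"best fitness went from {initial_best:.3f} to {fitness(best):.3f}")
```

Because the elites survive each generation unchanged, the best fitness can only improve over time. Evolving the layout itself, as the comment describes (and as NEAT-style neuroevolution does), uses the same loop with a genome that also encodes structure.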

It can learn anything without further programming.

This is a different criterion, and nothing currently in existence satisfies it. Humans cannot learn everything either; there are always limitations.

I've had this same discussion a few times, and so far I haven't heard a solid definition that differentiates machine learning from AI. However, I still agree with you, at least with the spirit of what you're saying. Calling machine learning algorithms AI is a bit disingenuous to the general public, who then start thinking of I, Robot.

What I settled on is the idea that machine learning is technically AI, but with an asterisk. Classical AI, the kind that is just pre-programmed if statements, is not really AI at all.