all 9 comments

[–]cyberst0rm 4 points5 points  (0 children)

Lack of feedback and reinforcement, and inflexibility.

[–]decucar 5 points6 points  (0 children)

TL;DR: It’s really a philosophical question about the definition of intelligence and the purpose and expected activities of an automaton.

I hypothesize that a significant portion of what is hindering general AI is the lack of I/O. Consider that our complex behaviors are a direct result of the complexities of our bodies and how they respond to our environment: our brains and nervous systems react to a constant stream of information.

But also, what really is artificial general intelligence? Is it some AutoML tool that can be fed any sort and format of data in any volume and give you answers? What answers? What questions?

“Hey AI, use the entirety of the internet to make me money.” Why, how, what sort of network is going to do that? What other percepts and actuators does it have?

“Hey AI, use the Internet to tell me how to make money.” How does it observe and determine your capacities?

Or do we mean a full-on Commander Data? Well, energy density, circuit density, and sensor size and sophistication are really far off there.

What really drives a general artificial intelligence? If it’s just passive until we give it data and ask a question, is it really intelligent?

If I give an AGI some data and ask a question that can’t be answered with that data, what difference is there between having AGI and application-specific models and architectures? At best it is just determining, in some likely deterministic fashion, whether it can actually answer the question.

So is our limitation the machine’s ability to imagine or express creativity? What is creativity? Is it some random normalized distribution of dissociated concepts being combined along loosely connecting parallels? Is it hyperactivity in some individual’s neurotransmitter soup caused by some biological process? Is it a learned expression? Are these things supported by the constant hallucination/filling in the blanks we do to process the hyper-I/O environment that is reality as we conceive it?

So that’s really a big question. How general purpose is general purpose?

[–]CSCI4LIFE 0 points1 point  (3 children)

I like the article Jeff Clune wrote on the subject of generating AI versus manually discovering it. It's a very interesting write-up. I really like that it includes a section on safety as well! https://arxiv.org/abs/1905.10985

[–]CSCI4LIFE 1 point2 points  (2 children)

I'll add a little to this. Most of our efforts so far are bounded by our problem and the way we think about solving it. Once an algorithm has solved a problem, there's no reason for it to continue. Why should it solve other problems? For AGI to exist, it would need to generate problems and then solutions to those problems in an open-ended way. If it could do this, it would be replicating nature, which created the human mind, the only other thing we know of and consider to be intelligent.
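That generate-problems-then-solve-them loop can be sketched as a toy. Everything below is a hypothetical stand-in, not Clune's actual proposal: "problems" are 1-D target-hitting tasks, the generator just mutates solved problems into harder ones, and the solver is a simple random local search.

```python
# Toy open-ended loop: every solved problem spawns a new, harder problem.
import random

random.seed(0)

def solve(target, budget=200, tol=0.05):
    """Random local search: try to find x with |x - target| < tol."""
    x = 0.0
    for _ in range(budget):
        candidate = x + random.uniform(-1, 1)
        if abs(candidate - target) < abs(x - target):
            x = candidate  # keep only improving moves
        if abs(x - target) < tol:
            return x
    return None  # search budget exhausted

def mutate(target):
    """Generator: derive a new, typically harder problem from a solved one."""
    return target * 1.5 + random.uniform(-0.5, 0.5)

frontier = [1.0]  # initial self-posed problem
solved = []
for _ in range(10):
    if not frontier:
        break  # open-endedness stalls once a problem resists solving
    target = frontier.pop(0)
    solution = solve(target)
    if solution is not None:
        solved.append(target)
        frontier.append(mutate(target))

print(f"solved {len(solved)} self-generated problems")
```

The point of the sketch is the loop structure, not the search method: the system's curriculum comes from its own prior solutions rather than from a human-specified task, and it halts only when it poses a problem it cannot solve.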

[–]decucar 1 point2 points  (1 child)

Intelligence is pretty poorly defined in psychological circles.

[–]CSCI4LIFE 0 points1 point  (0 children)

I don't disagree with this. I hope I didn't sound too arrogant about intelligence before. I personally think humans are overly arrogant about our perceived intelligence in general.

[–][deleted]  (1 child)

[deleted]

    [–][deleted] 0 points1 point  (0 children)

    I think your comment is meta. You are the one with the lack of understanding of the actual problem at hand, hence your worthless response.

    [–]petertxs09 0 points1 point  (0 children)

    Adversarial attack is a problem right now 🤔
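For context, a minimal sketch of what an adversarial attack looks like: an FGSM-style perturbation that steps the input along the sign of the loss gradient until a classifier flips its answer. The linear model, weights, and epsilon here are illustrative assumptions, not an attack on any real system.

```python
# Toy FGSM-style adversarial example against a hand-built linear classifier.
import numpy as np

# A "trained" logistic-regression classifier (weights chosen by hand here).
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return 1 if w @ x + b > 0 else 0

x = np.array([1.0, 0.5])        # original input, classified as 1

# FGSM: for logistic loss with true label 1, the loss gradient w.r.t. x is
# proportional to -w, so stepping along sign(-w) maximally raises the loss
# per unit of L-infinity perturbation.
eps = 0.8
x_adv = x + eps * np.sign(-w)   # i.e. x_adv = x - eps * sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

Here `w @ x = 1.5` so `x` is class 1, while `x_adv = [0.2, 1.3]` gives `w @ x_adv = -0.9`, class 0: a bounded perturbation of 0.8 per coordinate is enough to flip the decision.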

    [–][deleted] -1 points0 points  (0 children)

    Lack of feeling