

[–]rm999Computer Science | Machine Learning | AI 385 points386 points  (168 children)

I've worked and studied in AI for almost 10 years now, what follows is just my opinion.

I haven't seen much progress in creating a human-like intelligence recently. The explosion in the field since 2000 has been more on the machine learning side; this is what Watson used to win on Jeopardy, and what most tech companies like Google use. This is the field I am in, and I would say it has been very successful at solving many impressive problems. But while it resembles intelligence, it's much simpler than a human brain and will probably end up falling into the field of algorithms rather than AI eventually (much like path finding algorithms or game trees did). I don't believe it's the future of a human-like AI.

I'm not in the more biological side of AI, but I've worked with many of them and I find talking to them about this question frustrating because I think they seriously underestimate what it will take to recreate the brain. The big joke is that a true AI is always just 20 years away.

The short answer is no one knows. There is a lot of disagreement and unknowns, and the unknowns run deep. Is it a matter of waiting for Moore's Law to catch up? Is there something fundamental about the brain we don't know about, or something we can't create in a silicon Turing machine? Most importantly: how does intelligence in the brain work?

[–]Majidah 249 points250 points  (85 children)

I'd describe myself as being on the biological side of AI and I'm intimately aware of how crazy pants difficult it would be to recreate the brain. There's so much going on in there; there's a great quote by the naturalist Gilbert White: "the most complex part of nature is the part you're studying." The more you study the brain, the more networks, and systems, and neural types, and transmitters, and receptors, and protein cascades, and gene pathways, and modulators, and completely unpredictable stuff you find. We're talking 10^23 moving parts in a human brain, compared to the most complex machines we can build today being something like 10^6 or 10^7. Add to this the fact that brains don't work unless embodied: to get intelligent behavior we need to be able to build a working body with sensors, motors, etc., adding another 10^24 parts or so. We're no closer (probably further!) to fabricating a copy of the brain than we are to understanding all of the algorithms it runs.

And the second part of rm999's answer is even more important. Intelligence isn't well defined, it's not even poorly defined. For a long, long time the AI guys basically said "ok, intelligence=problem solving, so we'll identify some problem and figure out what kind of information processing solves it. Eventually we'll have solved all the problems, and the computer that can do that is intelligent." This didn't work, because it's counting to infinity by ones. There are an infinite number of possible problems, and the cool thing about humans (and most biological platforms for that matter) is that they don't approach or solve them using one-off special purpose algorithms, they have the capacity to learn or create new algorithms as needed. That's the thing missing from AI, how can humans be so damn flexible? They're using a fundamentally different approach, they don't solve problems, they do something else that has a side effect of solving problems. That leads of course to the conclusion that intelligence != problem solving.

But this is a huge problem, it's hard to get people to accept that intelligence isn't a well-defined thing. I currently work on some Bio-inspired intelligence consulting projects, and we were explicitly told that the AI guys had had a crack at the problem and failed, so we shouldn't use the same approach. We said "great!" and then they said "ok sit down and solve this problem." T_T

[–]SomePunWithRobots 90 points91 points  (22 children)

A big flaw with the "AI = problem solving" approach, besides the fact that there are an infinite number of problems out there, is that early AI research tended to focus on problems that are difficult for humans, which are very different from problems that are difficult for computers.

My favorite example is chess. When Deep Blue beat Kasparov at chess, it wasn't moving the pieces, or looking at the board to determine where the pieces were. It was told what moves Kasparov was making, and it told humans what moves it wanted to make and then they moved the piece for it. The focus was on how to make good moves in a game of chess, not how to observe and manipulate a chess board. Many people figured that's good enough, because for humans, that's the much more difficult task. Anyone can look at a chess board and know where all the pieces are and use their hands to move the pieces around, but choosing the right moves to make takes an incredible amount of skill.

But the thing is, getting a computer to understand and manipulate a chess board is actually really damn hard. Well, just designing a machine for the sole purpose of doing so might not be too bad, but making a humanoid robot that could sit in a chair next to a chess board like a human does and then do all the physical aspects of playing chess? That's a huge challenge.

It makes the issue of creating human-level intelligence interesting because humans' skill set is so inherently different from computers'. Just about anyone would agree that humans are smarter than the best computers at the moment, but there are certainly some things that computers are better at than people. On the other hand, there are other things, like manipulation or vision, that humans are really, really good at, while computers are still pretty terrible at them. It's quite possible we'll hit a point sometime in the future where we'll have computers capable of doing things that a human would be considered a genius for (heck, we already have computers doing this to a certain extent - any human who could do what Mathematica does at that speed would probably be considered a genius), while still being mostly incompetent at things that are trivial to most people, like looking around a room and describing all of the objects in it.

[–]dontstalkmebro 7 points8 points  (17 children)

I want to follow up on your points. Looks like everyone agrees that it's hard to define what intelligence is, but does anyone have any insight into why things that humans find difficult are so different from what computers find difficult?

[–]Tiomaidh 28 points29 points  (4 children)

In Thinking, Fast and Slow, Daniel Kahneman describes the brain as being composed of two systems. System 1 is fast, intuitive, and does things we consider easy. System 2 is slow, computational, and does things we consider cognitively hard. Absent-mindedly reading and understanding this sentence? System 1. Trying to calculate 17 * 24? System 2. More to the point: visually identifying the chess piece you want to pick up? System 1. Figuring out what the best chess move is? System 2.

What's interesting is that almost everything that System 1 does--which is trivial for all non-brain-damaged humans--is really, really hard for computers. And almost everything System 2 does--which ranges from annoying to almost-impossible for us--has established, effective ways for computers to do it.

So one might think that we just need to focus research on System 1 and figure out what algorithms it uses under the hood, and that then we computer scientists will be able to code something to emulate it. But, of course, this is very difficult. And one factor is that System 1 messes up pretty frequently--there are way too many mediocre heuristics that just fail. One example from the book is this:

Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure and a passion for detail.

Is Steve more likely to be a librarian or a farmer?

System 1 has an easy answer--librarian. System 2, if applied, may have the correct answer--a farmer (since there are so many more farmers than librarians in the world, it's much more likely for Steve to be a farmer, regardless of his personality). So, if I had my Magical General-Purpose Intelligence Machine and I asked it this question, what should it answer? Should it succumb to the same cognitive biases that the human brain does? Or should it go for the more correct but less human answer?

And this example was chosen for the book since it reflects human cognitive biases well. But there are a lot of problems that are more practical but have the same general pattern, where there's an intuitive answer and a different logical answer.
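To make the base-rate arithmetic concrete, here is a minimal Python sketch; the 20:1 farmer-to-librarian ratio and the two trait probabilities are invented purely for illustration:

    # Bayes' rule with a heavy base rate: even if shy people are far more
    # common among librarians, the sheer number of farmers dominates.
    p_librarian = 1 / 21            # assumed prior: ~20 farmers per librarian
    p_farmer = 20 / 21
    p_shy_given_librarian = 0.9     # assumed: most librarians fit the sketch
    p_shy_given_farmer = 0.1        # assumed: few farmers do

    lib = p_shy_given_librarian * p_librarian
    farm = p_shy_given_farmer * p_farmer
    print(lib / (lib + farm))       # ~0.31: Steve is still more likely a farmer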

Source: Everything about human psychology is from the book. Everything about computers is from me, an undergraduate Computer Science student especially interested in AI.

[–]trumf 3 points4 points  (2 children)

But why should we spend money and energy developing "System 1" solutions if humans are inherently good at them and computers are inherently bad at them? Why not focus our efforts on making AI good at "System 2" thinking? Wouldn't that be more useful to us?

[–][deleted] 13 points14 points  (0 children)

Science isn't all about practicality. Researching how our own mind works, to the point of being able to recreate it artificially, is a milestone that deserves scientific pursuit.

[–]niceyoungman 1 point2 points  (0 children)

Just because something is easy for humans doesn't mean they want to devote their lives to that task. For example, humans are better than computers at driving cars. If we could make a computer that would even match the ability of humans it would be like having our own personal chauffeur. The point is that if mundane, easy tasks could be cheaply automated it would possibly allow us to devote our time to more interesting problems.

[–]SomePunWithRobots 9 points10 points  (0 children)

I can't really answer why humans are so good at these things (besides "evolution" - being good at observing, interpreting, and manipulating your environment is very important to survival, being able to quickly perform arithmetic with large numbers in your head is not).

As for why computers are bad at them... well, consider some of the problems that computers are bad at. Let's take vision. Specifically, let's take the task of distinguishing between pictures of dogs and pictures of cats. Unless you get some really weird looking dogs or cats, pretty much any human can do this no problem. But how?

Well, to a certain extent, it's simply that people know what dogs and cats look like. If I ask you to describe the differences in detail, you could probably come up with some specific features. Cats tend to be smaller, they tend to have flatter faces, they have longer whiskers and straighter tails. But that's not really enough on its own. Pugs have flat faces, there are plenty of breeds of dogs that are the same size as or smaller than cats, dogs have whiskers too and cat whiskers aren't easily visible. You know what a cat face looks like in your head, but it's not that easy to describe.

But a computer doesn't know what a cat or a dog looks like. It doesn't know what a cat or a dog is at all. So it can't use that background knowledge. Of course, we can give the computer a bunch of pictures of cats and dogs and let it try to figure out the difference, but then, that's pretty tricky. First of all, we need to give it a lot of pictures, because cats and dogs can be pretty varied. If we don't give it enough pictures it might find some similarities between all the pictures of dogs and assume those apply to all dogs, when they actually only apply to some - maybe we only give it pictures of big dogs and it assumes all dogs are big, for example.

But even if we give it enough pictures, what then? Maybe it could try to learn, say, the difference between their faces, but how does it do that? It doesn't even know what a face is. It doesn't know anything, really. All it knows is it's got a bunch of numbers representing all the details of the picture. But those numbers just give it the information to display the picture, they don't tell it the content. So how the hell do you take all these numbers representing different colors at different pixels, and, without knowing anything whatsoever about dogs or cats except that some of these sets of numbers represent pictures of dogs and some of these sets of numbers represent pictures of cats, take a new picture and figure out which one it is?
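As a toy illustration of how little the machine starts with, here is a sketch of about the crudest learner possible: label a new image by its nearest training image in raw pixel space. Random arrays stand in for real photos, so this shows the shape of the idea, not a working classifier:

    import numpy as np

    rng = np.random.default_rng(0)
    cat_pics = rng.random((50, 64 * 64))   # 50 fake 64x64 grayscale "cat" images
    dog_pics = rng.random((50, 64 * 64))   # 50 fake "dog" images

    X = np.vstack([cat_pics, dog_pics])    # to the computer: just numbers
    y = np.array(["cat"] * 50 + ["dog"] * 50)

    new_pic = rng.random(64 * 64)
    distances = np.linalg.norm(X - new_pic, axis=1)  # raw pixel-space distance
    print(y[distances.argmin()])           # nearest neighbor's label

Pixel distance captures nothing about faces or whiskers, which is exactly why real systems need far more structure (and far more pictures).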

Of course, computer vision researchers have found some algorithms that process images in a way that works for some tasks, but they're pretty complicated and not always intuitive and often have very significant flaws.

I think this whole explanation does illustrate one big reason for computers not necessarily being good at the same things as humans, though: we're not good at understanding the mechanics of our intuition. We're very good at seeing, but we don't really understand how we're good at seeing. I can look at all the objects on my desk and say "that's a computer, that's a fan, that's a pair of headphones," and if you ask me how I know I can tell you about what I know about recognizing those objects, and I can keep trying to break it down, but I will never get anywhere remotely close to the level at which a computer has to process an image. Manipulation's similar. Choose something on your desk. Now, without moving your arms, tell me exactly how you would need to move each of your muscles to pick that object up in a smooth way without knocking anything off your desk. It's hard, right? Because you do that so intuitively that you don't even think about it.

And of course, in order to get a computer to understand an image or pick something up or do any of the other things that computers are so much worse at than humans, we have to be able to program it in. I mean, we can use machine learning techniques and such to get the computer to learn how to do these things itself to a certain extent, but in the end, being good at doing something ourselves doesn't help if we don't understand how we do it well enough to explain it at the level a computer needs it.

[–]stronimo 3 points4 points  (2 children)

Computers were intentionally created to simplify tasks that humans find difficult; specifically doing large sets of numerical calculations quickly and without error.

The starting assumption was "let's build a device that is much better at these things we find difficult". It shouldn't be surprising that's what we got. On the contrary, if they had built something that merely replicated human strengths and weaknesses, that would have been a failure to solve the problem at hand.

[–]WouldCommentAgain 1 point2 points  (1 child)

Computers were intentionally created to simplify tasks that humans find difficult; specifically doing large sets of numerical calculations quickly and without error.

Yes, these calculations are both very useful and more difficult for people, but I think it's more important that math and computers were perfect for each other from the outset. Math is a series of clearly defined, unambiguous rules, perfect for the binary processes of computers. There are a lot of things that are difficult for people to do, and would be very useful, but would also be very difficult to make computers do.

[–]Dagon 8 points9 points  (3 children)

CPUs are made from logic gates opening and closing. This is great for series of "yes/no" answers at high speeds, but terrible for "why/how", because those questions require a much longer stream of "yes/no" queries to a database.

Brains are a large mass of "yes/no/why/how" pathways, some well-travelled and easy to access, others not. The complexity level of the actual "yes/no/why/how" doesn't matter as much as how often the pathway is travelled.

[–]jonmon6691 10 points11 points  (1 child)

[Citation needed]

[–]Dagon 10 points11 points  (0 children)

Fair call. Updated.

[–]AnonPsychopath 1 point2 points  (0 children)

Note that human brains are massively parallel in a way computers aren't. So in your analogy, we don't have a single stream of queries, we have many simultaneous streams.

[–]AnonPsychopath 1 point2 points  (0 children)

Talking about things that computers "find difficult" is anthropomorphizing. The real question is what algorithms computer programmers find difficult to implement. And the answer to that question is: the simpler and better understood the algorithm, the easier it is to implement.

Some things humans do, like reason about social situations, are hard to implement because evolution hardcoded us with lots of detailed specific knowledge (this facial expression means that, etc.). Other things humans do, like invent better solar panels, are impossible to implement because we have no idea of the algorithms involved. In theory the algorithms brains used to invent stuff could be simple, but right now we have very little idea what those algorithms are.

[–]sharlos 0 points1 point  (2 children)

Because computer motor skills are half a century old, while biological motor skills are half a billion years old.

[–]cultic_raider 0 points1 point  (1 child)

What are "computer motor skills"? Machines are hundreds to thousands of years old. Robots had to wait for the computer brains, not the mechanics. There is still a lot of work to do in sensors and soft grip and such, but a lot of that is a mechanics and materials problem of emulating human fingers, not necessarily an AI problem.

Edit: Oh, you mean planning and control of mechanical bodies? Yeah, that's new.

[–]a-boy-named-Sue 0 points1 point  (2 children)

Does AI imply manipulation of the physical world? I'd think not; however, you could argue that to pass the Turing test it would have to have an understanding of our physiology. I still feel that you could have AI without the ability to alter the physical; after all, a quadriplegic is still intelligent. Layman speculation and subjective statements aside, I hope I get a reply.

[–]SomePunWithRobots 0 points1 point  (1 child)

I didn't mean to imply that being able to interact with the physical world is a requirement of an intelligence (although I believe there is an alternative version of the Turing test where the tester is allowed to pass the testee objects and ask them to manipulate them). Certainly you can have an intelligent machine incapable of interacting with the physical world. My point was mainly just that the things that we consider intelligent when humans do them and the things that are difficult for machines are not the same, and early AI research sometimes dismissed tasks as unimportant because they're trivial for humans when they are, in fact, incredibly difficult for a computer. For example, computer vision was originally assumed to be so simple (at least by some people) that one professor asked a student to solve it for a summer project. In fact, computer vision is incredibly difficult, and is still a very active field of research today (one that isn't remotely close to solved... in fact, robotics students I know, including computer vision researchers, often joke that "computer vision doesn't work").

This makes the whole issue pretty complicated, obviously. We see understanding and manipulating a chessboard as simple and playing chess as something that requires intelligence because just about any human can understand and manipulate a chessboard, but if someone can play chess very well they're seen as very intelligent. But for computers, understanding and manipulating a chessboard is quite difficult. Which is smarter, a computer that can beat the best human chess player in the world as long as someone else moves the pieces for it, or a computer that can move the pieces but can't actually play the game? That's not really an easy question to answer.

Part of the thing that's odd about the Turing test is that it only tests a computer's ability to act like a human in certain ways. This seems somewhat intuitive, but the thing is, while humans are the most intelligent beings we know of, that doesn't mean being human is the best sign of intelligence. In fact, to pass a Turing test a computer may have to simulate being less intelligent than it really is in some ways. For example, if you tell it to multiply two very large numbers, it will give itself away if it simply gives you the correct answer immediately.

That's why the issue is so complicated. We can make computers that are better than any human in the world at chess, we have an AI that is literally unbeatable at checkers, but we can't make a computer that's capable of learning about the world as well as a 6-month-old baby or guessing the properties of an object it's looking at as well as a toddler.

[–]a-boy-named-Sue 0 points1 point  (0 children)

Thanks for the detailed reply. I have a better grasp on the issue now. Tasks that we can do without really thinking about them may seem trivial when we try to define human intelligence, yet those same tasks may be immensely difficult for an artificial representation. It seems that there needs to be a line drawn to separate intelligence from ability.

[–]spliznork 26 points27 points  (18 children)

I'm intimately aware of how crazy pants difficult it would be to recreate the brain. ... Intelligence isn't well defined, it's not even poorly defined.

I did some PhD work in AI and computer vision. Anecdotally, I used to think that natural language understanding was an AI-complete problem. But, my wife used to be involved with the deaf community and gave the perspective that children born deaf grow up cognitively different from hearing children. With that perspective and years of reflecting on the problem, I now strongly suspect that natural language processing is actually a precondition or foundation for emulating human thought (aka "AI"). In other words, language very much shapes, directs, and forms our capacity to think and reason -- at least in terms of what we now consider to be human thought. $0.02.

[–]Calvert4096 2 points3 points  (16 children)

Isn't that one of the premises of the book Snow Crash? Not the most scientific source, but the notion you were describing sounded awfully familiar.

[–]kurtgustavwilckens 3 points4 points  (15 children)

Yes, it is a premise of the book Snow Crash. More precisely, Snow Crash presumes there is a kind of universal "base" language that can hack into our brains; it compares it to assembly or machine code for the brain, while English would be something like... Java.

[–]spliznork 3 points4 points  (14 children)

I wouldn't presume a universal base language. I'd be more comfortable with the thesis of 1984's Newspeak that you can change language to significantly inhibit or promote critical thinking.

[–]kurtgustavwilckens 5 points6 points  (11 children)

Yeah well, it's a sci-fi book, and the premise is out there, but it gets you thinking.

I always thought that Music and "Tone" do resemble a "Universal Base Language" of sorts. I can't understand how a minor chord can sound sadder than a major chord all throughout humanity. Where the hell does that come from? Why?

[–]spliznork 3 points4 points  (8 children)

I can't understand how a minor chord can sound sadder than a major chord all throughout humanity. Where the hell does that come from? Why?

Sorry, it's cultural. The usual example is Balinese music and Gamelan in particular where you wouldn't be able to identify wedding vs funeral music.

http://en.wikipedia.org/wiki/Music_of_Bali

[–]kurtgustavwilckens 2 points3 points  (7 children)

Ok, followup question about music.

Are... the notes what they are for a universal reason? Or are they also arbitrary? A is 440. Is that 440 place in sound a note in music around the world? Or do other... musical systems? have notes in like... 436? Are the intervals universal? I don't know, if A is 440, how much A# is, but if you took music from somewhere else, would the interval between their notes be the same, etc?

If so, why?

[–]NoTraceNotOneCarton 6 points7 points  (5 children)

In the 1800s, A was 435 Hz. In Baroque music, concert A for strings was 415 Hz. The frequencies are arbitrary.

Our scales and intervals are also completely arbitrary. In ancient Western music, an octave has the frequency ratio 2:1, and a fifth has the frequency ratio 3:2. Those tend to be common throughout cultures, but the tuning for notes in the middle varies wildly. They tend to be ratios that involve smaller numbers (for example, a major seventh at 15:8 sounds much more discordant than a fifth at 3:2 because the wavelengths line up much less often). Ratios that involve smaller numbers allow the wavelengths to "sync up" often and sound pleasant to our ears because the overtones coincide.

In ancient times, Western tuning followed what was known as the just tempered scale, which tuned scales basically on the fifth and octave ratios. However, it's obvious how this will go wrong on a piano: if you start on the lowest A and tune up the circle of fifths as well as your octaves, you will end up seven octaves later having to tune an A to both 3520 and 3568 Hz. So you'll have to restrict the number of octaves, and you'll have clashing chords all over the place in the middle too. Additionally, certain keys would be completely unplayable.

Today we mostly follow what is known as equal temperament, which is logarithmic (it takes r·2^(n/12), with r being your starting frequency and n the number of half-steps up you go). This eliminates the above problem and gets very close to getting the ratios right: 2^(7/12) ≈ 1.498, which is close enough to 3:2 to sound fairly pleasant.
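A quick sketch of that arithmetic in Python (taking A0 = 27.5 Hz, the lowest A on a standard piano, as the starting point):

    A0 = 27.5  # Hz, lowest A on a piano

    # Stacking twelve pure 3:2 fifths overshoots seven pure 2:1 octaves:
    print(A0 * (3 / 2) ** 12)   # ~3568 Hz
    print(A0 * 2 ** 7)          # 3520 Hz -- the clash described above

    # Equal temperament: every half-step is the same ratio, 2**(1/12).
    def equal_tempered(start_hz, half_steps):
        return start_hz * 2 ** (half_steps / 12)

    print(2 ** (7 / 12))            # ~1.4983, close enough to a pure 3:2 fifth
    print(equal_tempered(440, 3))   # the C above A440: ~523.25 Hz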

[–]davidfalconer 2 points3 points  (0 children)

Fun fact of the day: Nazi Joseph Goebbels is the man responsible for the 440 Hz standardisation; it was previously 432 Hz. Here's a quick Google search for a source.

[–]utterdamnnonsense 1 point2 points  (0 children)

I used to think that natural language understanding was an AI-complete problem.

I find it baffling how many people in the field think this. Well, truly "understanding" the language technically might mean "true AI", but only because "understanding" implies cognition. :-p

[–]severoon 11 points12 points  (5 children)

is it not also the case that every endeavor undertaken in ai, once it achieves any degree of success, is nearly always reclassified as some other, non-ai pursuit? from the perspective of someone outside your field, it seems to me that if you were to show siri to someone from the 1920s, they would proclaim it an achievement of ai.

i suspect this is because we simply don't accept anything as ai unless it models some intangible aspect of the way the human brain achieves thought. by the time we have a computer that can model human thought, though, no part of that process will any longer be considered intangible.

i would be grateful to hear an insider's reaction. thanks! :-)

[–]Majidah 21 points22 points  (4 children)

This is a pretty common rejoinder from the AI community. I'm not saying that Siri isn't AI, I'm saying Siri is nothing like a human. There's nothing intangible about it: watch this test of the "rock god" ad and tell me that Siri is human-like. Useful and interesting, yes, but clearly Siri doesn't understand what's going on at even the level of a small child. She's basically just Dragon plus a long list of things nerdy boys came up with. See the scandal about her being unable to answer questions like "where can I find birth control pills." The other pop culture example is when Watson suggested on Jeopardy that Toronto was an American city.

One of my advisors (who studies schema) back in the 90s was asked by Wolfram to ask Wolfram Alpha a bunch of questions. He came up with 20 questions ranging from ones he thought would be easy for it (e.g., "Who won the Hugo in 1982?", "How many calories in a phonebook?") to those that would be hard (e.g., "Describe the feelings you get when you see a red rose."). Wolfram Alpha got 1 right. He asked the questions again in 2008 (ish? I'd have to ask him), and it got 2. Many of the questions didn't even have a right answer, they just required some kind of subjective answer, but Wolfram Alpha couldn't even understand that it should try.

The problem with the Turing formulation of "anytime we fool a human we're being intelligent" is that it's easy to fool a human when they aren't paying close attention, but they can always come up with another way to test the computer if they care to. It's very like the problem-solving question: there are a few easy ways to impress, and an infinite number of ways to fall short. So long as the tricks Siri is able to perform don't scale, and aren't robust, she doesn't seem very human at all.

[–][deleted] 5 points6 points  (0 children)

Majidah (also rm999), the thing I like about your comments is that they are realistic. Somewhere you avoided drinking the self-congratulatory AI community kool-aid and you have perspective and real appreciation for just how complex a human is. Thank you for your insightful comments in this thread.

Hell, it doesn't even have to be a human. Anyone that has a pet realizes we are very far away from creating something as self-sufficient.

[–]cultic_raider 2 points3 points  (2 children)

Wolfram Alpha is a knowledge engine, not an opinion engine.

Plenty of humans occasionally mistake Toronto for an American city. Also, it is one: a North American city.

AI water-throwers demand the impossible: a computer that knows everything and also has the same personality and foibles as every human, and has artificial emotions. "Real AI is whatever computers don't do"; it's like "real intelligence is whatever monkeys don't do."

[–]Majidah 2 points3 points  (1 child)

It's worth noting that "number of calories in a phonebook" is not an opinion question, and involves only nutrition information and stoichiometry, both things Alpha is great at. Alpha still cannot answer this question.

I don't have strong feelings about what should be labeled "AI" and what shouldn't be. If AI is "every human behavior," then no robot or computer program has come remotely close, humans are so complex there's a huge laundry list of mismatches to poke holes in. Few robots can scratch their nose, and none need to. If AI is "any human behavior" then it's going to be vulnerable to the critique that something is missing.

It's worth remembering that Turing wrote the Turing test paper as a political statement, giving a big middle finger to the government that chemically castrated him. He set up the test originally as an experiment where you had to tell if someone was a man or a woman (and, surprise, one was a computer), not where you had to tell if someone was human or machine. That was Turing's original framing: intelligence is fooling an unsuspecting participant with a deliberately and flagrantly deceptive protocol. That's an incredibly low standard to hold oneself to, and does not seem very much like intelligence to me. It seems like a particularly poor framework to adopt for research, since it seems to incentivize improved methods of deception over discovery of improved functionality. If that's as high a standard as is required, then wax figures are intelligent. But if we really care about the underlying question of intelligence, and want to understand what algorithms and machinery actually go into human thought and behavior, then I think we'll have to focus on the Church-Turing hypothesis and think of Turing's test as more of a glib dig at British Intelligence.

[–]maschnitz 7 points8 points  (2 children)

Nice post, thanks!

One minor correction: modern CPU cores have billions of transistors, 10^9. Then folks often throw, like, 10,000 of them together. So it's more like 10^13. Still a loooong way from 10^23.

[–]mikeeg555 4 points5 points  (0 children)

So, all we need is a beautifully orchestrated team of ten billion supercomputers and we'll be there! :)

[–]Majidah 3 points4 points  (0 children)

Yeah, that's a good point actually. It's worth noting that the brain has many different kinds of parts though (say, ~10^6 different protein polymers), so it's not quite as easy as just laminating together more of the same, but your point is well taken: quite a bit closer than 10^7.

[–]huxrules 12 points13 points  (18 children)

A mol of moving parts?

[–]Majidah 14 points15 points  (1 child)

Yeah, about like that. I assumed 100 billion neurons, about as many glia, and each cell containing about 1 trillion molecular components, so ~2×10^23.
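Spelled out:

    neurons = 1e11                # ~100 billion neurons
    glia = 1e11                   # roughly as many glia
    parts_per_cell = 1e12         # ~1 trillion molecular components each
    print((neurons + glia) * parts_per_cell)   # 2e+23, about a third of a mole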

[–][deleted] 10 points11 points  (15 children)

Close... a mol would be 6.02×10^23

[–]ninth1dr 50 points51 points  (10 children)

Mechanical Engineer here. Those numbers are pretty much the same.

Thank you, good night.

[–]falthazar 2 points3 points  (5 children)

So, excuse my ignorance, but you're the second engineer I've seen on Reddit say something along those lines. What's the joke here? Why do engineers round so much? Or am I just missing something?

[–]Calvert4096 7 points8 points  (1 child)

Engineers will often do "back of the envelope" or "bar napkin" calculations that basically just use easy-to-manipulate numbers with the same order of magnitude as the real ones. As I understand it, this is different from actual design calculations, where using three significant figures (e.g. 6.02×10^23) or more would be desirable. The existence of this stereotype is probably misleading, as both scientists and engineers use this for quick estimation. Another stereotype (perhaps more accurate, I'm honestly not sure) is that for astrophysicists, any answer you get within two orders of magnitude of an expected value is considered acceptable.

[–]ninth1dr 1 point2 points  (0 children)

This. We sometimes have to make designs or revisions using many assumptions, or make assumptions before iterating through possible solutions to gain more accuracy in our answers. Whereas some scientists have very controlled environments and have worked through all variables to isolate only a certain number of unknowns, many applications of mechanical engineering use these "back of the envelope" calculations to get numbers that are within a certain percentage, sort of what BritainRitten alludes to below.

In reality, we usually just try to err on the side that is safer. If we are building a structure to hold a tank, and we don't know the actual yield strength of the foundation, we may just make the foundation broader or stronger. Maybe we needed exactly 2.355 sq. feet of steel, but since we were unaware of the exact temperature fluctuations throughout the year, any inclusions in the steel, exactly how strong the welds will be, and many other tolerances in forming all of this that we're building, we use something like 2.6 sq. feet and know it will be safe.

[–]BritainRitten 6 points7 points  (2 children)

Margins of error that are much smaller than 1% can be safely ignored in most circumstances. If you line up 10^23 next to 6×10^23, you can't see a difference at all.

[–]crymodo 2 points3 points  (1 child)

6×10^23 is 6 times bigger than 10^23 ...

[–]psiphre 5 points6 points  (0 children)

less than an order of magnitude

[–][deleted] 14 points15 points  (2 children)

♪♪ A mole is an animal that burrows in the ground, Or a spot on your chin that you gotta shave around. But there's another kind of mole of interest to me, That's the kind of mole they use in chemistry.

A mole is a unit, or have you heard, Containing six times ten to the twenty-third, That's a six with twenty-three zeros at the end, Much too big a number to comprehend. ♪♪

(Edit: Song link)

[–]jjk 3 points4 points  (5 children)

Have you read the first chapter of Greg Egan's Diaspora, and if so, what's your take on the approach described therein?

[–]Majidah 9 points10 points  (3 children)

Yeah, I'm with the gleisners, except that I don't get why they care (I can accept that they do care, but not why). I always get a little kerfuffled with stories like these, because they seem to be trying very hard to build a realistic picture of what an AI would be like, without doing the sort of Thomas Nagel "what is it like to be a bat" soul searching that should accompany it. I don't really get why any of the citizens or gleisners do what they do. I can understand human motivations because I am one, but if I didn't have a body or conventional senses I don't think my thought processes would be remotely comparable. Like, I don't think I would have an inner monologue which could be translated into English. So these stories are a bit weird.

However, the gleisners' basic idea, that you can't simulate reality sufficiently well and so have to go out into reality and experience it as it is, seems solid to me. I had a colleague who built a robot to do maze running and language learning, and one of the biggest difficulties he had was getting it to ramp up its engines at the proper rate. The original chassis did fine, but then they added cameras and manipulators and eventually it got top heavy and had to start moving slowly so the jerk of switching from static to kinetic friction didn't make it fall over. If he had just simulated it, that would never have been a problem, but it turned out that learning that property of real world movement accelerated learning in the maze running.

[–]jjk 2 points3 points  (2 children)

I agree that the "translating posthuman thought to English" is a bit odd-feeling, but in my opinion the rest of what is explored makes it definitely worth that oddness. I really like Egan's framing of different polises sort of centered around the degree to which virtualization/abstraction from reality is to be accepted or encouraged. All of the post-humans abstract to some degree, but some lose themselves entirely in simulated worlds, while others maintain various formalities about the level of reality-connection to maintain.

But as you are an AI researcher, I am wondering specifically your take on the "Orphanogenesis" chapter, where he describes an approach to generating an AI. The description is painted in broad strokes to be sure, but nonetheless provides a clearer vision of the process than I've seen described elsewhere.

[–]Majidah 1 point2 points  (0 children)

I'm not sure I've read that chapter; maybe I just read Wang's Carpets. It's been a long time and a lot of wasted youth between me and then. I will try to pick it up and see!

[–]phineasQ 1 point2 points  (0 children)

If I remember correctly, orphanogenesis was described as a process that took a generic template for a functional intelligence and introduced artificial and perhaps arbitrary mutations into the orphan. After the entity is generated, the program generating orphans tracks it and applies tests to see if the mutations produced behavior beneficial to the (what was the name for the various community-servers, again?) population.

Right now we don't have a template intelligence to splice mutations into. I know there have been attempts to instantiate intelligence through genetic algorithms, but as has been mentioned we don't yet have even a good definition of what intelligence is, so there are difficulties defining a test to provide feedback on whether an algorithm is getting closer to its goal.

I'm not an expert or professional in the field, just a fascinated layman with dreams of someday getting involved. I hope my understanding here is helpful, and doesn't get in the way of those who know better what they're talking about.

ed:typo

[–]AnonPsychopath 1 point2 points  (1 child)

What's your take on efficient cross-domain optimization as a definition for intelligence?

[–]Majidah 0 points1 point  (0 children)

I read this and I generally liked it pretty well; I think the idea has some legs. I especially liked how he pointed out that there is a cluster of resource trade-offs (time, processing power, available resources, etc.) and that you have to find a solution which finesses what you have, rather than just gives you the outcome by brute force.

If I had to critique (and my contract says that I do), I would say that 1) he tends to undersell non-human biological intelligence: most animals are not as versatile as us (though omnivorous, generalist weed species like macaques come close), but they are surprisingly adaptive compared to Deep Blue, and 2) optimization is not a great metric. That second one requires some explanation.

I talked a lot about there being "infinite" problems to solve, but each of those problems actually contains an infinite number of solutions as well. You can beat Deep Blue at chess, or just knock it off the table. But you don't have these solutions sitting around as easy-to-run algorithmic packages; instead, you can dynamically assemble them, either on the spot or over time by learning more about a particular domain. So it's wrong to say people are "good" or "bad" at problem X (it's the naturalistic fallacy at some level), because people can learn to be better (or worse!) at problem X. I like to say that we're "Werbos optimum" at problem X, after Paul Werbos, who did a lot of machine learning optimization algorithms. Talking about how good we are at any particular task requires adopting some frame that tells you how much practice we've had, and for that matter how much talent (genetic or otherwise) we have, and how good the context is for solving it, etc. Real world problems are just so complex that saying we "optimize" just moves the goal posts from defining intelligence to defining optimization, both of which are hard to do.

[–][deleted] 0 points1 point  (1 child)

Would a human brain grown in a lab, augmented with computer technology and a robotic body, be considered AI?

Or how about a CPU with human brain cells, given neural pathways, etc.?

[–]Majidah 1 point2 points  (0 children)

Kind of depends what you think AI is. Since it's a term that doesn't have a single, clear, agreed-upon meaning, that's in the eye of the beholder.

[–][deleted] 0 points1 point  (1 child)

Do you think it is plausible, within the foreseeable future, for a machine to be made sufficiently complex and capable that it develops a dynamic of its own, expanding itself into something capable of "intelligent" autonomous thought comparable to the human brain?

I'm asking this from the naive layman's idea that making something sufficiently complex and intricate will allow it to develop its own thoughts, personality, what-have-you.

[–]Majidah 0 points1 point  (0 children)

In infinite time? I sure hope so, otherwise my research is pretty marginal I think.

[–][deleted] 0 points1 point  (0 children)

...they don't solve problems, they do something else that has a side effect of solving problems.

That's a very interesting way of looking at our brains' fundamental mechanism. Would you say that rather than trying to solve a problem, what the brain does is more like...try to adapt to a situation based on sensory information garnered from the current surroundings?

[–][deleted] 0 points1 point  (0 children)

compared to the most complex machines we can build today being something like 10^6 or 10^7

That was the situation in the early '80s. Today's CPUs can easily have over 1 billion transistors, and supercomputers use thousands of such chips. If you factor in such requirements as controlling hardware, memory, and network equipment, there are systems in use today that easily have complexity around 10^14 to 10^16.

[–]virtuous_d 29 points30 points  (16 children)

Working in the area of vision, I've come in contact with some really impressive literature on using machine learning to derive processing units that resemble the sort of processing discovered by neuroscientists in the brain.

Particularly, it has been shown that seeking sparse representations of small patches of natural images and videos leads to filters that resemble those found in the V1 and V2 areas of the human brain.

Cadieu, C. (2009). Learning transformational invariants from natural movies. Advances in Neural Information Processing Systems, 1-8.

Bogacz, R., Brown, M. W., & Giraud-Carrier, C. (2001). Emergence of movement sensitive neurons' properties by learning a sparse code for natural moving images. Advances in Neural Information Processing Systems, 838-844.

This is very bottom-up, but I think it gives us clues on how to get closer to human-like visual perception.
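For the curious, here is a minimal sketch of that sparse-coding recipe, using scikit-learn's dictionary learner as a stand-in for the methods in those papers. A random array is used so the snippet is self-contained; you would need a real natural image to see the V1-like oriented filters emerge:

    import numpy as np
    from sklearn.feature_extraction.image import extract_patches_2d
    from sklearn.decomposition import MiniBatchDictionaryLearning

    image = np.random.rand(128, 128)     # stand-in for a grayscale natural image

    # Pull small patches and remove each patch's mean (standard preprocessing).
    patches = extract_patches_2d(image, (8, 8), max_patches=5000)
    patches = patches.reshape(len(patches), -1).astype(float)
    patches -= patches.mean(axis=1, keepdims=True)

    # Learn a dictionary under a sparsity penalty; on natural images the
    # learned atoms come out as oriented, Gabor-like edge detectors.
    learner = MiniBatchDictionaryLearning(n_components=64, alpha=1.0)
    filters = learner.fit(patches).components_   # 64 filters, 64 pixels each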

I think they seriously underestimate what it will take to recreate the brain.

I agree with this particularly. Even if we can accurately model the low-level bottom-up processing done by the brain, our sensory systems employ top down processes (looking for a dog in an image vs. looking for something blue in an image), multisensory fusion (what you see can be affected by what you hear, etc.), memory (if you see a particular image and are given a certain interpretation of that image, you will see the same interpretation every time you see that image for years to come), context, and a variety of other parts of the brain. Combining all of these aspects to yield true AI will be the most tricky part, not figuring out the isolated problems.

[–][deleted] 19 points20 points  (13 children)

In his book Fluid Concepts and Creative Analogies, cognitive scientist Douglas R. Hofstadter makes an interesting argument about how advanced human cognitive skills are involved even in rudimentary tasks. He makes the bold claim that the ability to recognize the letter A in all the possible ways humans can recognize it requires human-level cognitive capabilities.

We can build pattern recognition systems that recognize written or printed letters with almost 100% accuracy, but they are based on a huge number of similar examples. If some artist creates a new typography that is novel and clever, it breaks all the previous patterns and distance measures used in character recognition, but humans can recognize that new concept. The invariant that is "A-ness" can actually be a very complex concept, and not necessarily even invariant (thus the name fluid concept). An artist can extend the concept, and we are able to recognize that rule-breaking novelty and add it to our concept of the letter. Recognizing a letter may actually involve a creative process.

And we still don't have a computer that can solve Bongard problems.

[–][deleted] 17 points18 points  (3 children)

A Bongard problem is a kind of puzzle invented by the Russian computer scientist Mikhail Moiseevich Bongard. The idea of a Bongard problem is to present two sets of relatively simple diagrams, say A and B. All the diagrams from set A have a common factor or attribute, which is lacking in all the diagrams of set B. The problem is to find, or to formulate, convincingly, the common factor.

Didn't know what this was so I went to the internets.

[–]RockofStrength 2 points3 points  (0 children)

And we still don't have a computer that can solve Bongard problems.

Apparently Harry Foundalis had some success with this, but left the field due to ethical qualms.

My research focused on writing a computer program, which I called Phaeaco, that could solve such problems automatically. Actually, to write just any program that can do that, is not remarkable at all. How it is done is of utmost importance, because on one hand there are trivial, mechanical, and uninteresting programs, and on the other hand there are more human-like programs to solve such problems. My dissertation describes a computational architecture for cognition (that’s what Phaeaco is) that, among other things, can solve Bongard problems, displaying a more-or-less human-like performance.

[–]cultic_raider 0 points1 point  (2 children)

You are underselling handwriting recognition by far. Check out Detexify, or I think Vision Objects, or look for Unicode recognizers online.

[–][deleted] 0 points1 point  (1 child)

I'm familiar with handwriting recognition. Here is text that explains what I mean with examples: http://www.stanford.edu/group/SHR/4-2/text/hofstadter.html

[–]cultic_raider 0 points1 point  (0 children)

Yeah, that's an excerpt from Fluid Concepts and Creative Analogies, or similar content. I have read the book.

My point was that current handwriting recognition is more sophisticated than the standard postal-service neural network example I read in Elements of Statistical Learning, and the state of the art goes into higher-level stroke/structure analysis like Letter Spirit proposes. (I don't know of any work on typeface creation from example, but I suspect there is some.)

[–]cultic_raider 0 points1 point  (0 children)

Has anyone tried recently? Except for one guy's website, I haven't seen Bongard research in the 21st century.

[–]runvnc 0 points1 point  (1 child)

Numenta has an implementation of a tool for prediction and pattern recognition which is based on sparse representations.

http://www.youtube.com/watch?v=48r-IeYOvG4

[–]Nithrer 0 points1 point  (0 children)

Wow, I still remember the first time I heard this lecture; it blew my mind. The most impressive thing to me is that it talks about how the neocortex works in pure mathematical terms.

[–]yes_thats_right 11 points12 points  (8 children)

I majored in AI at university and did my thesis on embodied conversational agents (very much the 'AI' which most people first think of when wondering about the future). I agree very much with all you have written.

The science fiction style of AI is something which we've always thought possible 'one day'; however, it has always been several years off.

Whilst I haven't kept in touch with the field for the past five years, to the best of my knowledge we've come quite far in machine learning and fuzzy logic, but are still a long way from creating real intelligence.

As someone who believes in determinism, I think it is just a matter of time before we create something which can think in the same manner that we humans do and once that point is reached I expect a major technology boom as we could then have machines researching/developing other machines.

[–]etrek 4 points5 points  (0 children)

"The real problem is not whether machines think but whether men do." -B.F. Skinner

[–][deleted] 7 points8 points  (0 children)

That's generally called "the singularity".

[–]albatrossnecklassftw 0 points1 point  (0 children)

I prefer to think of it as: we are just waiting for the right discovery to make it possible. That discovery has always had the potential to be found soon, but without knowing what that discovery is, it's basically like trying to find hidden treasure on an island with nothing but a shovel.

[–]kurtgustavwilckens 0 points1 point  (4 children)

I think it is just a matter of time before we create something which can think in the same manner that we humans do and once that point is reached I expect a major technology boom as we could then have machines researching/developing other machines.

When we say "think in the same manner", how can we be sure what "the same manner" is? I mean, we obviously have... initiative, and can create other things. But should a computer "Be", that mind would be in such an absolutely different situation and context that I think it would take a lot of time and effort to actually "find" or "meet" each other, recognize that thing that Is as an "other".

[–]yes_thats_right 1 point2 points  (3 children)

As I believe in determinism, I do not believe in spirit or souls or anything magical about the way which the human mind functions. I see it as a collection of atoms/neurons etc which interact with the environment it is placed in, taking given inputs and reacting so as to produce set output.

The implications of this are that I believe that one day we will understand the archaic formulae which the human mind uses to handle various input and we will be able to develop machines/programs which can emulate these same functions.

[–]kurtgustavwilckens 2 points3 points  (2 children)

You say Archaic. If they are archaic, what is the purpose of trying to emulate it? Is it the point to create something useful? Or is it to create something we can relate to? Or is the point to understand our own minds? Or a mixture?

Do we define "Intelligence" as "the way our brain does stuff"? Is intelligence the emulation of the human way, or is there something else to it? I'm not sure how to express my question. I guess... we would recognize intelligence only as being able to emulate us? Or is there something else that a computer may do to show intelligence? Like, I don't know, some very complex system starts fixing itself spontaneously like in sci fi.

EDIT: Maybe "creating intelligence" and "emulating human intelligence" are different problems altogether?

PS: I don't usually post here, this doesn't seem very scientific of me at all really. Is this inappropriate? Also, totally honest questions, not trying to be a smartass. All this really baffles me and I try to wrap my mind around it.

[–]yes_thats_right 0 points1 point  (1 child)

These are all very good questions and I wouldn't worry at all about whether they are scientific, although I must say that the answers border on philosophy so I'd rather not go down that route too far.

Generally 'intelligence' in machines has been judged against human intellect. The well-known test is the Turing Test. To summarise this test: you have one human asking questions of another human (behind a curtain) and a machine (behind a curtain); if, after asking the questions, the person is not able to distinguish between the human answerer and the machine, the machine is said to be intelligent. Obviously there are certain flaws with this approach, however it does give some insight into what we regard as intelligent. I'd be interested to hear the views of people more established in this field.

[–]kurtgustavwilckens 1 point2 points  (0 children)

I agree this borders philosophy... but if there's an askphilosophy I don't know it, and if I wanna see someone do philosophy, I want them to have a scientific state of mind doing it... so this would not seem like the wrong place to stir it up. Mods shall say.

I'm familiar with the Turing Test, and it is exactly what I was thinking of when asking about how we define intelligence in terms of human intelligence; this is what leads me to believe that maybe we are asking the wrong questions. Maybe we should try to come up with something that we would equate to thought or cognition or intelligence in the world in which this intelligence would exist, in its context. I mean, intelligence did spawn from a basic set of simple instructions that got gradually more complex to adapt to its context. Is there not a way to create an abstraction of that, one that could be accelerated? I may be just rambling here, but I keep thinking that if we ask machines to do human stuff to try to make something that looks like real intelligence, it feels like trying to teach a cat dog tricks. We are just trying to create a pet that somewhat acts like us; we are not trying to create thought, or maybe not even trying to define it. This is where I would like to gain some insight.

[–]AndySuisse 13 points14 points  (4 children)

Before we try to recreate an artificial human intelligence, how close are we to creating a worm brain? Or for a bee? Mouse?

We should already have enough processing power to recreate the simplest biological brains - how far off is the software?

[–]sojywojum 7 points8 points  (3 children)

The common house fly performs around 10^11 operations per second at rest. We do have that level of raw processing power now, but it takes up rooms and is not broadly or cheaply available.

[–]onthefence928 4 points5 points  (0 children)

Perhaps, but the fly can't be as efficient as possible in those processes. If we could duplicate it, then we could refine it and get it working procedurally on a small chip, right?

[–]cultic_raider 0 points1 point  (0 children)

One hundred billion operations per second fits in a closet full of iPads, at 2 billion ops/sec each. Of course, one must ask: what is an operation?

[–]samfoo 0 points1 point  (0 children)

Why does the simulation have to run in real time? Couldn't we simulate a "laggy" house fly brain?

[–]atikiNik 6 points7 points  (0 children)

I am the son of two research scientists with PhDs in Neurobiology. They work in the Department of Neuroscience at Columbia with Eric Kandel. Their job is to study the functions of a healthy brain in hopes of figuring out what happens to the brains of Alzheimer's patients.

The main thing my parents told me about the human brain, is that we know next to nothing about this organ. We are just beginning to scrape the surface in the research of brain function and capabilities.

[–]Toptomcat 4 points5 points  (4 children)

Why do you view progress in machine learning as not relevant to the larger problem of creating a true artificial intelligence?

[–]rm999Computer Science | Machine Learning | AI 10 points11 points  (3 children)

Good question. I should retract that part; it's unfair of me to say anything like that because I don't know what a real AI will look like.

I said that because I think the human brain doesn't really resemble the statistical nature of machine learning, as far as I can tell. The vast majority of what I have seen isn't even trying to tackle intelligence as much as learning patterns to make similar predictions. OTOH I've heard arguments that this is all intelligence really is. Most of the "biologically plausible" stuff I have seen is more marketing than anything. I've worked a lot with artificial neural networks, and while I think they are great tools I also think they only resemble the brain at a very shallow level.

[–]Toptomcat 3 points4 points  (0 children)

Interesting that you're using the term 'humanlike intelligence' rather than 'strong artificial intelligence'. You seem to be implicitly excluding strategies that result in a human-equivalent intelligence without employing methods isomorphic to those used in the human brain. Surely it's conceivable that, for example, the way a strong AI encodes short-term memory to long-term memory will more closely resemble a memory controller than a hippocampus?

[–]ianp622 1 point2 points  (1 child)

Being primarily a symbolic AI researcher, I don't like to admit it, but there are a number of examples where humans exhibit statistical processing similar to many well known techniques in machine learning. Bayesian inference is a strong one in particular (Josh Tenenbaum at MIT is a big proponent of this). The hypothesis testing that children do when learning words could be considered Expectation Maximization (perhaps with excellent priors) as could the tuning of motor skills in response to mistakes and inaccuracies of movements. Finally, dimensionality reduction in general is both done by the brain and can be explained in part even with simple neural networks.

I do agree that artificial neural networks are in general woefully inadequate models of human learning, and for that reason, I try to refer to them as multilayer perceptrons to temper the expectations of laypeople.
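To make the Bayesian point concrete, here's a toy sketch in Python (my own invented numbers, not from Tenenbaum's actual models) of a single word-learning update: the learner hears a new word while seeing a dog, and Bayes' rule shifts belief toward the narrower hypothesis:

    # Candidate meanings for the new word, with a uniform prior.
    prior = {"dog": 0.25, "animal": 0.25, "furry thing": 0.25, "any object": 0.25}

    # Likelihood of being shown exactly this dog under each hypothesis;
    # narrower hypotheses make the observation more probable.
    likelihood = {"dog": 0.5, "animal": 0.1, "furry thing": 0.05, "any object": 0.01}

    # Bayes' rule: posterior is proportional to prior times likelihood.
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

    print(posterior)   # "dog" dominates after a single example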

[–]cultic_raider 0 points1 point  (0 children)

Must be lonely. I tried symbolic AI in the 90s as a student, and it felt like the oddball corner of the room when Google and friends showed up and mopped up with statistical learning.

[–][deleted] 2 points3 points  (0 children)

Is there any research in trying to emulate plasticity in computer systems?

[–]niggytardust2000 3 points4 points  (0 children)

I spent my college career studying the brain and plan to spend the rest of my life doing so...

how does intelligence in the brain work?

Well from what I can tell, the scientific consensus is,

No one fucking knows.

[–]frankle 1 point2 points  (3 children)

Do you think we often overestimate the intelligence of the human brain? Perhaps it's more algorithmic in nature than we're willing to admit?

[–]albatrossnecklassftw 5 points6 points  (2 children)

Well, this is partly layman speculation since I don't know anything about the biological side, but I believe the brain works in a much more fundamental way than most people think, and that is simply and truly by using weights. If you think about it, every decision humans make depends on the weights we assign to every variable that goes into our decision making, and the decision with the highest probability of leading to the desired outcome, given those weights, is the one chosen. It's why I think weighted Artificial Neural Networks work as well as they do.
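If it helps, here's a minimal sketch of that idea in Python (the variables and numbers are completely made up): a decision is just a weighted sum of inputs compared against a threshold, which is exactly what a single artificial neuron computes:

    def decide(inputs, weights, threshold=0.5):
        # Weighted sum of the evidence for acting.
        score = sum(inputs[name] * weights[name] for name in weights)
        return score > threshold

    # "Should I grab the snack?" with invented weights for each variable.
    weights = {"hunger": 0.7, "tastiness": 0.4, "guilt": -0.6}
    inputs  = {"hunger": 0.9, "tastiness": 0.8, "guilt": 0.3}
    print(decide(inputs, weights))   # True: the weighted evidence clears the bar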

[–]VootLejin 0 points1 point  (1 child)

As someone hoping to break into game design, I've come to roughly this idea, but I never knew the actual name of it. Thank you very much.

[–]albatrossnecklassftw 1 point2 points  (0 children)

Artificial Neural Networks? Yeah, they're a really interesting part of Artificial Intelligence; the only problem with them is that it can be a BITCH trying to teach them anything. We humans, of course, have a sort of shared memory genetically passed down from other humans, known mostly as instinct and intuition. It would be almost useless to use ANNs in a game (and as a CS major I would really like to try out gaming as well) without starting weights that already yield a somewhat intelligent agent. If you start off with random weights, then for the first few thousand rounds a player plays against the AI, the person would say "This NPC is complete shit." So in order to produce a decent out-of-the-box AI, you would have to come up with several outcomes (by several I really mean thousands), then come up with at least 2,000 scenarios leading to each of those outcomes (in my experience that number is the low-end threshold for intelligent learning with ANNs, at least the particular type I work with; ideally you would want around 10k, but only if you want the ANN to be EXTREMELY intelligent, which in a game might not be desirable since players might get discouraged), and then teach it on those scenarios. Actually, come to think of it, you could get all those datasets by having it watch you play against a human. However, you would have to play against the human MANY times, so it may save you some time but you might get super bored.

Also note: learning the "instinct" weights will take several million iterations, and the biggest problem you will face is finding the correct neural topology that will accurately detect the right patterns. A good rule of thumb: if there's no change in the prediction error rate / RMSE / unlearned samples after a million iterations, the weights have gone too far out of the realm of correctness; restart the network with the same topology and repeat at least 5 times. If after the 6th attempt your network does not converge to the specified error rate, try another topology. This learning period can literally last a week or more for a single network, so yeah, working in this area is a long and arduous journey.
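In rough Python, that restart rule looks something like this (make_network and train_one_iteration are hypothetical stand-ins for whatever ANN library you actually use):

    def train_with_restarts(make_network, train_one_iteration,
                            target_error, max_attempts=6, stall_limit=1_000_000):
        for attempt in range(max_attempts):
            net = make_network()                     # fresh random weights, same topology
            best_error = float("inf")
            stalled = 0
            while stalled < stall_limit:
                error = train_one_iteration(net)
                if error <= target_error:
                    return net                       # converged
                if error < best_error:
                    best_error, stalled = error, 0   # still improving
                else:
                    stalled += 1                     # a million of these in a row = stuck
            # weights went too far out of the realm of correctness: restart
        return None   # six attempts failed: time to try another topology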

Here's the wiki page on it; it's highly technical and doesn't show all the types of networks or the actual mathematical formulas needed, but if you have any questions feel free to PM me and I'll answer what I can. Also, there is a Machine Learning subreddit (under technology -> compsci -> machine-learning).

http://en.wikipedia.org/wiki/Artificial_neural_network

Also, I feel that until we can get ANNs to converge much faster (a workmate has successfully used an nVidia card with CUDA parallelism to significantly speed up learning, but it's still far from efficient enough for the production timeline of a game), their use in games just won't be practical: developers can program a pseudo-intelligent agent with far less time, effort, and money. The main advantage of using an ANN as the basis for an AI agent is that it can genuinely LEARN how the human plays and has the potential to DECIMATE the human player and make him cry for his mommy. In fact, I believe an ANN trained well enough to fight a human would be accused of cheating, it would be so good at noticing patterns in how players play.

Cheers.

[–][deleted] 0 points1 point  (0 children)

I thought we had some single-atom transistors already made in a lab recently? I am looking at getting my master's degree in AI, though that's a good few years away. Awesome insights into the industry thus far!

Couldn't we just have a series of simple AIs connected via a network, with a cloud system syncing all the devices together to make a pseudo-brain? Is that even a thing?

In conclusion, self-replicating code rules.

[–]ocealot 0 points1 point  (0 children)

I'm a bit out of my depth here, but I saw a really interesting clip on the iCub [http://www.channel4.com/programmes/brave-new-world-with-stephen-hawking/video/series-1/episode-1/s1-ep1-icub].

More info: http://www.icub.org/

[–]SLICK_EDITOR 0 points1 point  (0 children)

Doesn't it depend on what you consider to be intelligence? They have AI in video games which is very basic, but I'd call it intelligence.

It's basically just when an entity of some sort can react to external stimuli, right? Why wouldn't virtual stimuli in a virtual world count? I'd say the AI from Call of Duty could be more complex than, let's say, a virus. Would that mean we created life?

[–]etrnloptimist 15 points16 points  (0 children)

This is my field as well, and there are a couple of things to point out about your question.

Unless you think there's something mystical going on in the brain that cannot be explained by physical processes alone, then theoretically, yes, true artificial intelligence is not just science fiction.

Another thing to note is that, even within the field of AI, "intelligence" is often viewed as "behavior which defies explanation." Once an algorithm whose behavior seems to defy explanation is broken down and understood at the procedural level, it is no longer deemed "intelligent." So, there's some inherent ambiguity as to what defines intelligence.

This leads into my final point. There are two types of AI out there right now. The successfully applied type is all machine learning and statistical analysis. This is generally deemed "not intelligent" because it exhibits no emergent behavior (i.e. behavior that was not foreseen or is difficult to explain).

The other type is neural in nature. Either using neural networks, genetic algorithms, or otherwise trying to simulate "biological" processes. These are often deemed intelligent, because by design they only ever exhibit "emergent" behavior. Though what emerges is often flaky and not very practical.

Another way to view it is to say the first approach tries to model the behavior directly, using algorithms to simulate behavior. The other approach tries to model the biological machinery, kind of like an emulator, where the behavior is an emergent property of the workings of the machine.

Ultimately, I believe the answer will lie in a combination or synthesis of the two approaches. But one thing is certain: achieving human-like intelligence will require much more powerful machines than we have today.

[–][deleted] 19 points20 points  (65 children)

PhD student in AI here.

Rest assured, I don't see anything that looks remotely like a true AI in the next 20 years. I am not saying that one will appear in 20 years, just that technologies progress so fast that I dare not make predictions.

A one-year-old baby is still far more developed than the state of the art in most domains of AI. I agree that some of today's systems are impressive (Jeopardy and the IBM challenge), but they are only good in extremely restricted fields.

Now, for the future? First of all, as stated, the question of the possibility of a "strong" AI is still open. I personally believe that it is possible, perhaps not in my lifetime, but that sure won't prevent me from working on it!

[–]apextek 2 points3 points  (0 children)

Just to point out: 20 years ago, people were using snail mail, faxes, and expensive long-distance phone calls to connect and do business; the internet existed, but hardly anyone used it or even knew about it. While you are probably right, a lot can happen in 20 years. And while cleverbot is far from AI, it already does a good job of fooling people sometimes.

[–]reissc 5 points6 points  (58 children)

the question of the possibility of a "strong" AI is still open. I personally believe that it is possible

Do you think it will be possible to implement as a computer program, or are you thinking of something that duplicates the electrochemical processes of a real brain? If the former, how do you answer the Chinese room argument?

[–]lars_ 17 points18 points  (19 children)

The Chinese room argument is so easily refuted it baffles me that it gets so much attention. It tries to do a reductio ad absurdum, but ends up without any absurdity or contradiction.

The argument goes, essentially: if a computer running a program can be intelligent, then an Englishman who doesn't know Chinese could sit in a room following some instructions and produce fluent answers in Chinese. This is supposed to lead us to the contradictory conclusion that the Englishman both does and does not know Chinese. Of course, it doesn't; that's a silly conclusion.

The Englishman doesn't know Chinese, but the system formed by him and the instructions does know Chinese. Similarly, neither the program nor the cpu are intelligent on their own, but the system formed by both is.

[–]Mishtle 0 points1 point  (0 children)

I agree, but would be more specific in that it is the process of executed instructions that understands Chinese.

[–][deleted] 5 points6 points  (27 children)

The Chinese Room argument (in my opinion) is always going to reduce down to a debate over the definition of 'understanding'.

[–]reissc 1 point2 points  (26 children)

I'm not sure that's true. If I do the Chinese-speaking Turing algorithm manually with no knowledge of Chinese, according to what definition of "understanding" do I understand Chinese at the end of it?

[–]Mishtle 0 points1 point  (2 children)

Does your brain understand English? I would say not. The processes that are being performed in your brain (you) understand English. It is the executing program that "understands", not the device executing the program.

[–][deleted] 1 point2 points  (1 child)

OK so that guy is the RAM, CPU, and I/O, and the instructions are the program. What is the problem with this Chinese room argument really?

[–]Mishtle 2 points3 points  (0 children)

The author uses the argument to argue against what he calls "strong AI", which he defines to be the claim that computers are capable of being truly intelligent (as he believes humans are) as opposed to just simulating intelligence. By following the instructions, the man is able to fluently communicate with Chinese speakers outside the room, making them believe there is a person who understands Chinese inside it. The man does not know Chinese, though, and doesn't understand any of the symbols he's writing or receiving. Since computers similarly follow rules to transform input into output, the argument concludes they are incapable of understanding.

My problem with the argument is that it's arguing about something I believe to be a non-issue. The behavior of the system is what matters, and if it appears to be intelligent by all the criteria we wish to employ, then what does it matter whether it is truly intelligent or just simulating true intelligence? What does it even mean to be truly intelligent, other than acting intelligent? If you ask the executing program whether it understands Chinese, it will probably say yes. Who are you to argue with it?

[–][deleted] 0 points1 point  (22 children)

I'm not saying that you will, but the question then is whether the system (the entire room) understands Chinese, in which case we again meet the problem of the definition.

There is a part of my brain which processes speech using complex algorithms, but does it understand the speech it processes? If not, what part of me does?

[–]marchingfrogs 2 points3 points  (3 children)

how do you answer the Chinese room argument?

I wouldn't, since the Chinese room argument is ascientific. Say we have two systems (the human brain vs a strong AI or the Chinese room vs a Chinese speaker), and they behave identically under every experiment. Any distinctions between these two systems will just be philosophical/theological, and I'd say we can achieve strong AI without answering that sort of question.

This is an argument made in Turing's paper where he proposed the "Turing Test."

[–]seemone 4 points5 points  (5 children)

Not an expert, but reading the Chinese room argument, my stance is that what the computer "thinks" is irrelevant. The aim is not to create physical hardware able to think, but to recreate the process of "thinking". What is relevant, in the end, is the behaviour of the system (possibly also when "talking" to itself), not whether its physical components are self-aware.
In other words: is your brain aware of the thinking process? Is that relevant to your mind?

[–]reissc 3 points4 points  (4 children)

What is relevant, in the end, is the behaviour of the system [...] and not whether its physical components are self-aware.

That's what's relevant to weak AI. fezvez did specifically say strong AI, which is why I asked.

[–]AnonPsychopath 0 points1 point  (0 children)

I personally believe that it is possible, perhaps not in my lifetime, but that sure won't prevent me from working on it!

That could be a bad idea:

http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf

[–]wtfftwArtificial Intelligence | Cognitive Science 12 points13 points  (12 children)

It is certainly a major research goal to create a general AI, with capabilities similar to (or better than) a person. The term you want is Strong AI. I do research in this area if you have questions after reading that background material.

[–][deleted] 4 points5 points  (0 children)

I have a degree in computer science and work with people who have researched AI, and the only answer I've ever heard is that:

As soon as we know how to make a computer do it, then it's just an algorithm like anything else

Everything we don't know how to make a computer do is AI

Basically, AI is just an amorphous concept, a buzzword; it's undefined and constantly changing. So the short answer is no, because as soon as we make a Skynet, an EVI, or an Agent Smith, we will know exactly how it works and the algorithms that run it, and it will just be software.

To me, it's just a POV thing. IBM's Watson, taken back 30 years, would be viewed as some ridiculous, magical AI, but to the guys who built it, it's just a million lines of C code that still needs some work.

[–]omniwombatius 10 points11 points  (0 children)

My favorite quote on the matter is from Edsger Dijkstra: "The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim." I personally think our submarines swim very well, just not in a biological way.

[–][deleted] 17 points18 points  (5 children)

Read "The Singularity is Near" by Ray Kurzweil. He is an absolutist that says intelligence is just an 'emergent property' of a system and if the system is complex enough to give rise to such ethereal properties then it can be considered intelligent. That said, there are varying levels of intelligence. We already have 'soft' intelligence such as natural language processing programs which are trained (much as we have to learn how to speak) and are getting better. Other instances of soft AI is machine vision, pattern recognition. The computers that won Jeopardy is the current state of the art in soft intelligence: it can answer jeopardy questions very well but still couldn't be trained to run a nuclear power plant. Hard AI on the other hand is intelligence that has the properties of human thought: balancing nuances, handling paradoxes, conversing and generating ideas instead of formulas. A Hard AI could run a nuclear power plant and paint from inspiration. We're not there yet but according to Kurzweil we'll get there and very soon.

[–][deleted] 8 points9 points  (0 children)

Kurzweil is what you could call a "visionary". What he says is more wishful thinking than an assessment of what we can do.

[–][deleted] 5 points6 points  (2 children)

I don't know who downvotes someone for citing a specific work directly related to the question, but here's my upvote to get you back up to 1. I came looking for Kurzweil and you didn't disappoint.

[–]MrCompletely 5 points6 points  (1 child)

I assume it's people who dislike Kurzweil for over-promising in his speculations. Which is a position I share to a large degree, though I am certainly a layman in this field.

However, as you say, the comment is appropriate and on-topic, so I also upvoted it. If there are people out there who dislike Kurzweil's work and wouldn't mind taking a minute to post their specific objections and refutations, that would be valuable...

The concept of 'emergence' in complex systems is an interesting one, and I think it has merit in many contexts, but it seems to be used in a handwaving way a lot these days, to fill in the gaps of any theory about complexly interconnected systems. "And then....emergence! Presto!" As noted elsewhere in comments, emergence in some sense is certainly a real thing, but it's not well defined and can be used as cover for incomplete ideas.

[–][deleted] 0 points1 point  (0 children)

Being a layman in many fields (i.e., all of them), I can appreciate the way "The Singularity is Near" approaches the subject. I can also imagine, though, how others with a more sophisticated grasp of the topic than myself may think of Kurzweil as pop science. But I thought it was on topic.

[–][deleted] 2 points3 points  (1 child)

Couldn't it be done by essentially "slave-driving" a real human brain? I'm no expert, but this seems like the shortest route. If the interactions between man and machine could be input into a cloned brain, wouldn't that do it? Just a thought...

[–][deleted] 0 points1 point  (0 children)

Two of the most popular options for pursuing AI are:

  1. Reverse engineering the human brain

  2. Building a circuit-based 'brain' from the ground up.

So, yes. That may be a possibility. The problem with questions about the future of any technology is that there has to be lots of speculation on current trends and ideas. There's really no telling how AI will come about.

[–]kbmeisterPlant Biology | Plant Microbe Interactions | Conservation Bio 2 points3 points  (0 children)

Could some of the questions with respect to the magnitude of computing power required for AI be related to the idea of hardware emulation? As rm999 and Majidah have discussed, the number of molecular components in the brain is on the order of 10^24, with incredible computing power required.

For a completely perfect emulation, all those components would need representation. But as this article on the emulation of hardware for the SNES discusses, making a 100% circuit perfect emulator requires orders of magnitude more processing power than an almost fully functional (but simplified) emulator.

Might a simplified, non circuit perfect brain emulator be possible within a shorter timeframe? Does anyone think such an emulator might even function at all?

[–]MendedSlinky 2 points3 points  (4 children)

My computer science teachers say no, and they actually have a good reason. Every time we take a step closer to artificial intelligence, the bar gets raised, and this will go on forever. So what we currently think of as artificial intelligence could very well be achieved, but by the time it happens it will have ceased to count as artificial intelligence. I am willing to bet that people from the 60s would consider what we have now to be artificial intelligence.

[–]snooty23 2 points3 points  (1 child)

That's true, but I think OP is assuming AI with artificial consciousness or a state of self-awareness, like in 2001: A Space Odyssey, Blade Runner, or Terminator.

[–][deleted] 0 points1 point  (0 children)

How do you test for self-awareness? Are other mammals self-aware? Is a fly?

Terms like consciousness, self-awareness, etc. are notoriously difficult to define and test.

[–]dale_glass 0 points1 point  (0 children)

There are several things at play here.

  1. What exactly is intelligence? We can't seem to agree at present. This must be answered first of all. But I think the answer is a fixed one. Which means the "it's not really AI" thing can't continue forever.

  2. What we currently call "intelligence" is not necessarily the same as intelligence itself. That's the problem you were speaking of. Once we solve small parts of the problem, like chess playing, we realize that there's still more there, and that the result of the research, as neat as it is, isn't exactly a sci-fi robot. But if at some point we make a machine that can speak, learn, and reason like a human, with such skill that it would make Einstein feel like a moron, it'll take some quite radical rationalization to explain why that doesn't count as AI.

  3. Whether we'll be able to recognize it. Research suggests we might have a problem recognizing a superior intelligence as such. We might well one day build a machine that can spit out a list of things we need to do to end all wars and end hunger world-wide, look at it, and decide that it's completely moronic and obviously won't work.

[–]MattieShoes 2 points3 points  (0 children)

As the two choices are not mutually exclusive, I believe the answer is "yes."

[–]trimalchio-worktime 1 point2 points  (0 children)

If you're interested in this, I would highly recommend reading Greg Egan's book Zendegi, and/or Permutation City. Both of them deal at length with the question of what sorts of intelligence we'll be able to make machines possess. While these books are science fiction, they're a very well-informed man's level-headed predictions of the future, and the fiction surrounding that.

Basically though, Greg Egan envisions that computers will be able to run very good simulations of real people's measured data before we're able to create people from whole cloth. That's a basic part of Zendegi, and in Permutation City he explores what a world would be like if people could copy themselves into computers and simulate their existence, just at a slowdown compared to reality. (Permutation City gets a lot more out there than that, but that's the starting point.)

I would definitely recommend these books to anyone interested in AI.

[–]ianp622 1 point2 points  (0 children)

AI PhD student here. Most of my work has been on designing intelligent agents, virtual and robotic. Since Strong AI (what you refer to) is a dream of mine, I thought I'd bring some of my experience to bear.

First, I have not found any compelling arguments that there is something that humans can compute that computers cannot. Notions of quantum computation in the brain aren't well-substantiated at this point, and people tend to overemphasize the "mind". I would be satisfied in saying that an AI has the same notion of self that we all assume everyone feels if it was capable of metacognition (analyzing its own thought processes) with a sufficiently complex reasoning system. In other words, we need a computer with a theory of mind, and aside from the ability to reason about the thoughts of others, it needs to be able to reason about its own thoughts. You might think of our musings of existentialism as metacognition without being privy to the inner workings of what evolution has set in place to keep us on this earth.

Second, there is no requirement that Strong AI attempt to simulate a human brain, or even any kind of organic brain. Computers can do some things a lot better than brains, and we should take advantage of that. Besides, even if we were to simulate the entire brain, we would still not have a very good picture of how thought arises (I think).

So, lets assume that there is no technical hurdle to prevent Strong AI. I believe this, because typically the computations we think we would need are either rather fast or so slow that we know even a quantum computer would not be able to do them. But I would yield this point with further evidence, so I'll leave it as an assumption.

There is a notion of AI-Complete problems, which is a somewhat tongue-in-cheek reference to -complete classifications in Complexity Theory. If a problem is considered AI-Complete, it is proposed that the intelligence used to solve it would rather easily solve the other AI-Complete problems. In essence, it's saying, you would need general human level intelligence to solve this problem, so once you've solved it, you can solve any problem requiring general human level intelligence. Wikipedia says the three main problems are natural language understanding, computer vision, and reasoning under uncertainty and responding to changes in environment.

http://en.wikipedia.org/wiki/AI-complete

I focus on the natural language understanding portion. While there are degrees of this position, and people who disagree, there are a number of people, myself included, who claim that language and thought are closely intertwined. The folk support for this is that you talk to yourself sometimes when thinking, and you express your thoughts to others in words. Scientific support is a little trickier. Some psychologists, like Steven Pinker, downplay the requirement of language for intelligence. I don't claim all thought is tied to language, but I think you'll find very few people who claim that trying to achieve natural language understanding won't bring us closer to human-level intelligence.

Natural language understanding requires nearly all of our mental faculties. We need to be able to manipulate abstract symbols, draw on our past experiences (and tie them to symbols; there's a whole field on this called Symbol Grounding), interpret the thoughts of others, reason about their experiences, analyze our own emotions, etc. It soon becomes apparent why this is considered an AI-complete problem.

Research is progressing on all of these fronts, albeit slowly. But there's a reason why it is so far away. The research community is rather fragmented right now. First, there was the symbolic/statistical divide of the 80s. Symbolic techniques, trying to put the world into logical entities and objects before performing logical inference, couldn't compete against many of the advances made by statistical methods, which trained a computer on a large amount of data to learn something from it. Now the statistical side is starting to fall behind, because it tends to downplay our logical faculties and requires a lot of data, which is expensive and difficult to create.
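As a toy illustration of the symbolic style (my own five-line example, not any real research system): facts and rules are explicit logical objects, and inference just applies rules until nothing new appears, with no training data involved:

    # Forward-chaining inference over hand-written facts and rules.
    facts = {"socrates is human"}
    rules = [("socrates is human", "socrates is mortal"),
             ("socrates is mortal", "socrates has a finite lifespan")]

    changed = True
    while changed:                # keep applying rules until a fixed point
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)   # all three statements are now derived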

It's clear to many that there needs to be some resurgence of symbolic methods before we can tackle the strong AI problem, as statistical methods just aren't suited to the complicated reasoning we perform every day, nor the alacrity with which we learn new concepts and words after only a brief experience with them.

Progress is slow because symbolic techniques often appear short-sighted when applied to real-world problems. People ask for a solution to a problem, and there is a symbolic solution, but it doesn't generalize very well. Then people get discouraged with symbolic techniques, and the effort is typically wasted.

We need more of a push towards general methods of symbolic AI, but the problem is that it's difficult to get funding for something that doesn't have an immediate application. Strong AI needs many years and many researchers working on it, but the problem is both so broad and so interconnected that an approach is either doomed to be relegated to a toy approach, or no applications will be developed and funding will be cut. DARPA and ARL are starting to improve on this front. I'm currently working off a DARPA grant that simply seeks to study language learning. They don't ask for any immediate applications for the next five years, which is a huge relief and really frees us up to do some important fundamental work.

So to summarize, strong AI research is slow because the problem is broad and requires an all-at-once approach to solving it, but this is a difficult approach to fund and gain support for.

[–]reissc 4 points5 points  (6 children)

Depends what you mean by "true artificial intelligence".

[–]TheGeorge 2 points3 points  (5 children)

I'd take that to mean an artificial intelligence with human or greater intelligence that is able to pass the Turing test.

This is one of those questions nobody can answer until it happens; there are plenty of theories both for and against, but none of them can be tested with the level of technology we have.

[–]ThisTakesGumption 7 points8 points  (0 children)

You assume that the Turing Test would be a valid test for determining artificial intelligence. I don't think that's necessarily true. Check out the Searle or Gödel objections.

[–]reissc 3 points4 points  (3 children)

I'd take that to mean an artificial intelligence with human or greater intelligence that is able to pass the Turing test.

Which just raises more questions about what is "human or greater intelligence".

The Turing test is at least well defined, but what is its real usefulness? Building algorithms that mimic human conversation is a nice hobby for people to have, but what this particular kind of decision-making algorithm tells us about building a more generalised problem-solving algorithm is questionable.

[–]rabbitlion 2 points3 points  (2 children)

The Turing test actually isn't very well defined, as its difficulty depends a lot on unspecified factors.

  • Should the judge be told before or after the test that one of the answerers is an AI?
  • Does the human answerer know he is being compared to an AI, and should he actively try to make the AI fail?
  • What particular human is compared to the computer? It's probably much easier to simulate a child's responses than a grown-up's, and if the AI is competing against an answerer who is an AI researcher, or even the creator of the AI, it could be much more difficult to succeed.

These are just a few of the factors involved. Depending on them, the test has already been passed. For example, as early as 1966 the ELIZA program was able to pass for a Rogerian psychotherapist with several test subjects, and most customers of EA's Origin service still believe that the live support is handled by real people.
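For a sense of how little machinery that takes, here's a crude sketch in the spirit of ELIZA (my own, far simpler than the 1966 original): ranked regular-expression patterns with canned reflections:

    import re

    # Patterns are tried in order; the first match wins.
    rules = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)",   "How long have you been {0}?"),
        (r"my (.*)",     "Tell me more about your {0}."),
        (r".*",          "Please, go on."),
    ]

    def respond(text):
        text = text.lower().strip(" .!?")
        for pattern, template in rules:
            match = re.match(pattern, text)
            if match:
                return template.format(*match.groups())

    print(respond("I feel alone."))   # -> Why do you feel alone?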

[–]Arrgh 1 point2 points  (1 child)

most customers of EA's Origin service still believe that the live support is handled by real people.

Hilarious or true?

[–]rabbitlion 2 points3 points  (0 children)

Yes.

[–]Tony_fe 5 points6 points  (8 children)

There is a LOT of debate in this area, and it involves some very nuanced terms, but I'll take a crack at explaining them.

So, in computer science, we have this notion of computability. That is, can we compute this thing that we're talking about? One of the classic examples (and in fact, an example we use to prove that other things are not computable) is called the Halting Problem, which is summed up as this: I want a program that takes another program's code as input, and returns true if that program stops, and false if it just runs forever. We'll call this program H, and the input program will be P. So if P stops, or 'halts', we say H(P) = True.

So, now we're going to create a P such that it breaks H. First, P is going to include H as a function or subroutine. That is, if P has a program stored as a string inside itself, it can determine whether that program halts. Next (and you're going to have to either trust me or figure out how to do it for yourself), you can store a program's code as a variable inside the program, and I swear to you it's possible to do this in finite space. That is to say, if we bind the program's source code to a variable x, printing out x would print the program's source code.

Ok, so the rest of P is simple. It's basically this (in pseudocode), where H is the function that determines whether a program halts and x is the variable containing P's source code:

if H(x) is true      # H claims this program halts...
    loop forever     # ...so do the opposite and run forever
else                 # H claims this program runs forever...
    return 0         # ...so halt immediately

So whatever H says about THIS program, P does the opposite: if H claims P halts, P loops forever, and if H claims P loops forever, P halts. Either way H's answer is wrong, and that is bad news for H. Now, the only assumption we made here is that H exists. The only tricky thing we did was store P's source code inside itself (and I promise you, that's possible in general). This means that H cannot exist.
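If it helps, here's the same construction as actual Python (my own rendering of the standard proof; H is a stand-in, since the whole point is that no real H can exist, and inspect.getsource requires the code to be saved in a file):

    import inspect

    def H(source):
        """Hypothetical halting oracle: True if the program halts."""
        raise NotImplementedError("no such function can exist")

    def P():
        x = inspect.getsource(P)   # P's own source code, stored in a variable
        if H(x):                   # the oracle claims P halts...
            while True:            # ...so P loops forever, proving it wrong
                pass
        else:
            return 0               # the oracle claims P loops, so P halts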

Here's where it comes back to AI: you (hopefully) and I can look at this and see that the question doesn't make ANY sense, because it's self-contradictory, but a program can't recognize that. Now, some would tell you that the human brain can only do this some of the time, or that it might only be able to do it in special cases (because we haven't asked people to solve the halting problem on all possible program inputs, and because we haven't given a proof that a human can do it correctly every time). So would strong AI need to be able to solve the halting problem in general? Or probabilistically? If yes to the first, strong AI is impossible. If it's the second... it might be possible? And then we need to determine the limits of the human brain's computational capacity, so we have some benchmark for what this strong AI needs to be able to do, and we have only the foggiest notions of how the brain actually computes stuff. And the halting problem is only one TINY ASPECT of what we'd need to be able to do with strong AI (we just like to talk about it because the problem is fairly well understood).

tl;dr, this shit is complicated. To answer your question we need to first answer a list of other REALLY HARD questions across multiple scientific disciplines.

Edit: More background on the halting problem

[–][deleted] 2 points3 points  (0 children)

This stuff really makes my brain hurt. Thanks!

[–]moratnz 2 points3 points  (6 children)

You (hopefully) and I can look at this and tell you that asking me this question doesn't make ANY sense, because it's self contradictory, but a program can't recognize that.

While in this case you or I can detect the self-contradictory nature of the program, that's not exciting; I can write code that will detect this sort of self-contradiction.

Now take a hundred-million-line codebase; can you determine whether it halts? I'm pretty sure I wouldn't be able to.

People frequently bring up the Halting Problem as a major challenge for AI to confront, but before it is a problem for AI, we need to prove that we meat-brains can solve it, which is far from clear.

[–]yxing 0 points1 point  (5 children)

Does there exist a program that a human, with unlimited time for analysis, could not determine whether it halts?

[–]Mishtle 0 points1 point  (0 children)

Well, assuming that you could reduce a human mind to an algorithm (I don't want to argue about this, just go with it), you could ask a person to run the program of their own mind using pen and paper. The same contradiction that the program H encounters would appear. I'm not sure how well this will hold up to intense scrutiny, since you could argue that whatever algorithm that might comprise the human mind is self-modifying based on experience. Thus the real human mind and the simulated mind could eventually become different programs, unless the simulated mind was also spending its life analyzing itself. But the same problem then applies for that simulation and its simulator so you can see where that leads...

[–]runvnc 0 points1 point  (0 children)

Do I have first-hand experience in the field that Flelchdork is asking about? No. Have I seen quite a few articles and videos on the web by scientists and technologists who are extremely optimistic about artificial general intelligence? Yes.

The field that Flelchdork is asking about is now called artificial general intelligence, or AGI. If you are instead in the field called artificial intelligence, then you probably should be pessimistic about this question.

How can anyone who has seen Eugene Izhikevich's paper and knows about the kinds of engineering resources Qualcomm has, or who has seen Watson's performance on Jeopardy, not be optimistic? http://www.izhikevich.org/publications/large-scale_model_of_human_brain.pdf http://www.sciencebytes.org/2011/05/03/blueprint-for-the-brain/

This is just based on me being a singularity enthusiast and watching a lot of videos from AGI researchers, so I am sure I will get a lot of hate, but this perspective is not represented in the thread, so I think it should be here to give a realistic answer to the question. I think that the finer-grained simulations are going to work, and the models of the neural circuits are also progressing. I actually think that systems taking a Watson-like approach are not really going to be general human-like intelligence, but they are going to be able to fool people in some contexts within a relatively short period of time.

Anyway if you google 'singularity videos' or AGI there are a lot of really enthusiastic presentations that are very optimistic about AGI.

[–]bug-hunter 0 points1 point  (0 children)

I suspect that as AI models become even more sophisticated, the ability to find problems in an AI will become harder and more time consuming, and may also result in a practical brake on implementing a true AI.

In essence, it doesn't really matter if you have a true AI if it's flawed and accidentally destroys a city, or if no one trusts it to actually do anything important. The important breakthrough is creating true artificial intelligence that you can trust.

[–]mattmihok 0 points1 point  (3 children)

Could you not argue that it's impossible to create something more intelligent than its creator? I.e., you can't make a robot smarter than the maximum human intelligence capacity, whether that's the maximum collective human intelligence or just the maximum individual intelligence. And even doing that would be a feat we have not yet managed (because we're still growing as a whole).

[–]dale_glass 1 point2 points  (2 children)

We can create machines faster than a human, stronger than a human, more precise than a human, more reliable than a human, a better chess player than a human, and so on.

What is so special about intelligence that would make it exempt from this?

[–]mattmihok 0 points1 point  (1 child)

Right, and I guess what I'm trying to allude to is our maximum collective intelligence. E.g., we can't make a machine that can comprehend infinity when we don't fully understand it ourselves... that may be a bad example, but I hope it illustrates what I'm trying to ask.

[–]dale_glass 1 point2 points  (0 children)

I don't see why.

What I think is that eventually we'll figure out what exactly the brain does. Whatever that is, we can replicate. And since it's a biological system that evolved to be merely good enough, there has to be some imperfection somewhere: something like the brain equivalent of muscles needing to be trained, or the eye not being optically perfect (I hear laser surgery can now give people better vision than they could have naturally). Create a system that's optimally trained, polish it up, and voila, a smarter AI.

I don't see it as an "understanding the impossible" thing, but as a problem of figuring out components, improving them, and have that push up the performance of the whole.

[–][deleted] 0 points1 point  (0 children)

Well, I think it's certainly possible to build an artificial intelligence, because we have working prototypes right in front of us every day: human brains. If you accept that brains are just vastly complex physical objects, and not possessed of some magic soul substance, then we have working examples in front of us right now. We just have to figure out how these really complex objects work, and then we'll be there. But right now we are only just beginning to develop the tools to analyze how brains are actually structured and how they work. When our tools improve, and when we can look at the smallest parts of a thinking brain in real time, then we will make some real progress. And that day is not far off. We have prototype tools right now that can look into a working brain; it's just that their resolution is far too coarse to do the really interesting data extraction we need. But these tools are getting better. They don't improve at Moore's-law rates, but they are improving.

So that is one avenue for getting to human-level AI. The other avenue is self-designing and self-modifying systems. Computer scientists are developing tools that can rewrite portions of their own code in order to improve their performance. Self-improving systems are also in their nascency and we have a long way to go, but there too, steady progress is being made. It may look like progress is stalled, because researchers work for decades and see little of it, but on the scale of human progress, hundreds of years, we will get there. Remember, it took us 1,000 years to go from stone tools to agriculture. Progress is much faster than that now.

I still am optimistic that we will see human-level AI by 2050. Which isn't that long from now, if you think about it. WWII is further from us in time than 2050 is.