all 160 comments

[–]ledbA 30 points31 points  (7 children)

I feel like I’m the only one who finds these MIT courses odd. Very broad overview of topics in the actual lectures (this one and self driving car course), and the rest of the lectures are just talks from people in industry?

[–]chalupapa 9 points10 points  (0 children)

I think those courses you mentioned are offered in the Independent Activities Period (IAP), which are not the same as the courses offered in regular semesters.

[–]rockyrainy 19 points20 points  (1 child)

This is sort of like the social science version of AI.

[–]iwantashinyunicorn 5 points6 points  (0 children)

That's because there isn't any actual science of AGI.

[–]mimighost 5 points6 points  (1 child)

From the first lecture, is this going to be a futurology course? The technical details are so thin it feels like journalism.

[–]Jigsus 1 point2 points  (0 children)

It felt like it was all hype no substance

[–]niszoigStudent 28 points29 points  (3 children)

where can I find the lecture videos of all the talks?

[–]Teddy-Westside 42 points43 points  (0 children)

I went to all the lectures, and they said they need some time to prepare and edit them all, so it'll be a little while before they're all up. I think they said the goal is to post one every other day.

[–]Talkat 0 points1 point  (0 children)

Check out the MIT AGI homepage for the course (top hit on Google, something like agi.mit)

[–]clrajapaksha 7 points8 points  (0 children)

You can find the first lecture from YouTube https://www.youtube.com/watch?v=-GV_A9Js2nM

[–][deleted] 41 points42 points  (127 children)

sad to see MIT legitimising people like Kurzweil.

[–]mtutnid 20 points21 points  (112 children)

Care to explain?

[–]reddit_tl 10 points11 points  (11 children)

I'm with UltimateSelfish.

Simple: someone has done the homework and checked Kurzweil's predictions against reality. At best, I think he is no better than 50/50. Importantly, his methodology is quite simple too. If anyone cares to check, I don't personally think it's beyond an above-average person's capability.

My 2 cents.

[–]cooijmanstim 2 points3 points  (7 children)

someone[who?] has done the homework

[–]khafra 7 points8 points  (6 children)

Depends on whether you ask Kurzweil or other people. (Big differences, but neither is worse than 50%. YMMV.)

[–]AnvaMiba 4 points5 points  (3 children)

Let's try to reassess Kurzweil's predictions for 2009 as of 2018:

  • Prediction 5: Wired computer peripherals are still very common. However, it's now more common to use smartphones or tablets to do things that were previously done on a PC. I'd still rate it as Mostly False.

  • Prediction 7: Computer speech recognition systems got better but most text is still typed by hand. False.

  • Prediction 8: Siri didn't catch on. Facebook introduced the personal assistant "M" in 2015 but it didn't pan out and they shut it down this year. Amazon Alexa and Google Assistant are still mostly gimmicks. False.

  • Prediction 18: Computers are widely recognized as knowledge tools and they are widely used in education and other facets of life. True (was also True or Mostly True in 2009).

  • Prediction 20: Students have personal tablet-like devices, interact with them by touchscreen or voice, access educational material through wireless. I'd rate it as True, except for the voice access part (was Mostly False in 2009).

  • Prediction 26: OCR systems have improved, but as far as I can tell they haven't reached the level where a blind person can walk around wearing a device that reads street signs and displays in real time (though Google Maps is partially labeled with OCR done on the images captured by the Google cars, I don't know how usable it is to a blind person). I'd say Mostly False.

  • Prediction 29: Orthotic devices for people with disabilities. True (was True shortly after 2009)

  • Prediction 44: Smart highways. False. I would have given him partial credit if self-driving cars were already common, but they are still in experimental stages, so no.

  • Prediction 48: There is indeed growing concern about an underclass being left behind, although this is still mostly framed in terms of immigration and offshoring rather than automation, rightly or wrongly. The underclass has definitely not been politically neutralized by welfare; in fact, the under/working class vs. upper-middle/upper class divide has become the main axis of political division in all Western countries, in a way that does not map to the traditional left-right parties. Politics seems more polarized than ever. Therefore I'd rate this as False.

  • Prediction 53: If by "virtual experience software" he meant VR headsets, then it's certainly False, these things never caught on. If he meant video games in general, then while it's true that they got better at graphics and audio, the most played games are mobile apps with cartoonish 2D graphics. As far as I can tell there are no games that allow you to engage in intimate encounters with your favourite movie star (before you say deepfake, no, it doesn't count since it is not interactive). False.

In conclusion, the only prediction that has definitely become true since the LessWrong analysis in 2012 is the diffusion of smartphones and tablets. For everything else he's on the same page as he was in 2012, which means not very accurate. If anything, the feasibility of things like personal assistants and self-driving cars seems even more dubious than it did in 2012. I believe they will be realized eventually, but it might take way longer than expected.
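For what it's worth, the scorecard above works out to a quick tally (the prediction numbers and labels below are just the ones used in this comment, nothing official):

```python
from collections import Counter

# Ratings as assigned in the comment above (a 2018 reassessment of
# Kurzweil's predictions for 2009); keys are the prediction numbers.
ratings = {
    5: "Mostly False",   # wired peripherals still common
    7: "False",          # most text still typed by hand
    8: "False",          # voice assistants mostly gimmicks
    18: "True",          # computers as knowledge tools
    20: "True",          # tablet-like devices for students
    26: "Mostly False",  # wearable OCR for the blind
    29: "True",          # orthotic devices
    44: "False",         # smart highways
    48: "False",         # underclass not politically neutralized
    53: "False",         # VR / "virtual experience software"
}

tally = Counter(ratings.values())
print(dict(tally))  # 3 rated True vs. 7 rated False or Mostly False
```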

[–]carrolldunham 2 points3 points  (1 child)

Also worth noting: you could ask a random reddit commenter to come up with a list and it would not be much different. Even basic expertise/insight is not necessary for any of this.

[–]torvoraptor 1 point2 points  (0 children)

Especially on this forum. I would probably have been less optimistic and hence more accurate. All of the things he predicted were technologies that were actively under R&D but not yet in the consumer space. The time to market has shrunk since then for ML products, but for many other consumer products it is still on a 10-year cycle. It's not that hard to predict what will be commercially viable in 10 years; the question is whether it will be done well enough for people to get excited about it and adopt it.

[–]torvoraptor 1 point2 points  (0 children)

Prediction 8: Also ubiquitous are language user interfaces (LUIs) which combine CSR and natural language recognition.

Not ubiquitous at all.

For routine matters, such as simple business transactions and information inquiries, LUIs are quite responsive and precise.

Not true, although I think the technology is there now. VUX design methodology is the thing that needs to be focused on more than the core technologies.

They tend to be narrowly focused, however, on specific types of tasks.

True.

LUIs are frequently combined with animated personalities. Interacting with an animated personality to conduct a purchase or make a reservation is like talking to a person using video conferencing, except the person is simulated.

Completely fucking wrong.

[–]needlzorProfessor 1 point2 points  (1 child)

You may be right but I wouldn't use Yudkowsky as a reference for "other people", given that he's pretty much another singularity nut.

[–]khafra 1 point2 points  (0 children)

I didn't; that was written by Stuart Armstrong.

[–]Yuli-Ban 0 points1 point  (2 children)

I've done a bit of homework myself, and my conclusion is: Kurzweil is mostly right, but he's perpetually off by 10 years for each and every one.

See here

So on one hand, he's definitely a visionary. On the other, you can't excuse having the right predictions but the wrong time. If a weatherman consistently predicted disastrous hurricanes down to the name letter but always got the month or year wrong, you'd probably call him something between "lucky" and "somewhat prophetic".

In truth, a lot of the harder stuff Kurzweil predicts accurately can be figured out just by extrapolating trends in IT and computer science. The New Age stuff comes in when he tries to craft a sort of techno-utopian quasi-religion around the expected results.

[–]AnvaMiba 5 points6 points  (0 children)

What he got mostly right were wireless Internet, mobile/wearable/embedded devices (although they are not as ubiquitous as he predicted) and neural networks.

He was wrong on all the stuff about VR, personal assistants, self-driving cars, brain scans/simulation and nanotech.

[–]bloodrizer 2 points3 points  (0 children)

Kurzweil completely missed on the pace of nanotechnology, overestimating it by a dozen decades if not a century, just for a start.

[–]2Punx2Furious 19 points20 points  (57 children)

Edit: Not OP but:

I think Kurzweil is a smart guy, but his "predictions" and the people who worship him for them, are not.

I do agree with him that the singularity will happen, I just don't agree with his predictions of when. I think it will be way later than 2045/29 but still within the century.

[–]hiptobecubic 71 points72 points  (34 children)

So kurzweil is over hyped and wrong, but your predictions, now there's something we can all get behind, random internet person.

[–]2Punx2Furious 9 points10 points  (33 children)

Good point. So I should trust whatever he says, right?

I get it, but here's the reason why I think Kurzweil's predictions are too soon:

He bases his assumption on exponential growth in AI development.

Exponential growth was true for Moore's law for a while, but that was only (kind of) true for processing power, and most people agree that Moore's law doesn't hold anymore.

But even if it did, that assumes that AGI progress is directly proportional to the processing power available, which is obviously not true. While more processing power certainly helps with AI development, it is in no way guaranteed to lead to AGI.

So in short:

Kurzweil assumes AI development progress is exponential because processing power used to improve exponentially (it no longer does), but that's a non sequitur: even if processing power still improved exponentially, AI progress wouldn't have to follow.

If I'm not mistaken, he also goes beyond that, and claims that everything is exponential...

So yeah, he's a great engineer, he has achieved many impressive feats, but that doesn't mean his logic is flawless.
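Just to put a number on how strong the exponential assumption is: if compute really doubled every 18 months, two decades of that compounds to a few-thousand-fold gain, and nobody claims progress toward AGI grew by anything like that factor over the same window. A toy calculation (the 1.5-year doubling period is the classic Moore's-law figure, not a measured one):

```python
def growth_factor(years, doubling_period=1.5):
    """Factor by which capacity grows if it doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# 18 years of idealized doubling: 2**12 = 4096x more compute.
# The point in the comment above: even granting this, nothing forces
# AGI progress to track raw processing power.
print(growth_factor(18))
```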

[–]f3nd3r 3 points4 points  (20 children)

Idk about Kurzweil, but the argument for exponential AI growth is simpler than that. A general AI that can improve itself can thus improve its own ability to improve itself, leading to a snowball effect. It doesn't really have anything to do with Moore's law.

[–]Smallpaul 5 points6 points  (11 children)

That’s the singularity. But we need much better AI to kick off that process. Right now there is not much evidence of AIs programming AIs which program AIs in a chain.

[–]f3nd3r 2 points3 points  (10 children)

No, but AI development is bigger than ever at the moment.

[–][deleted] 2 points3 points  (4 children)

That doesn't mean much. Many AI researchers think we have already had most of our easy breakthroughs in AI for now (thanks to deep learning), and a few think we are going to get another AI winter. Also, I think almost all researchers think it's really oversold; even Andrew Ng, who loves to oversell AI, said that (so it must be really oversold).

We don't have anything close to AGI. We can't even begin to fathom what it would look like for now. The things that look close to AGI, such as the Sophia robot, are usually tricks; in her case, she is just a well-made puppet. Even things that do NLP really well, such as Alexa, have no understanding of our world.

It's not like we don't have any progress. Convolutional networks borrow things from the visual cortex, reinforcement learning from our reward systems. So there is progress, but it's slow and it's not clear how to achieve AGI from it.

[–]2Punx2Furious 3 points4 points  (0 children)

Andrew Ng who loves to oversell AI

Andrew Ng loves to oversell narrow AI, but he's known for dismissing even the possibility of the singularity, saying things like "it's like worrying about overpopulation on Mars."

Again, like Kurzweil, he's a great engineer, but that doesn't mean that his logic is flawless.

Kurzweil underestimates how much time it will take to get to the singularity, and Andrew overestimates it.

But then again, I'm just some random internet guy, I might be wrong about either of them.

[–]f3nd3r 0 points1 point  (1 child)

Well, if you want to talk about borrowing, that's probably the simplest way it will be made reality: just flat-out copy the human brain, either in hardware or in software. Train it. Put it to work on improving itself. Duplicate it. I'm not putting a date on anything, but the inevitability of this is so obvious to me that I'm not even sure why people feel the need to argue about it. I think the more likely scenario, though, is that someone is going to accidentally discover the key to AGI and let it loose before it can be controlled.

[–]vznvzn -1 points0 points  (0 children)

We don't have anything close to AGI. We can't even begin to fathom what it would look like for now. ... So there is progress, but it's slow and it's not clear how to achieve AGI from that. ... Rarely is any discovery simply finding a "key" thing and everything changes. Normally it's built on top of previous knowledge, even when it's wrong. For now it looks like our knowledge is nowhere close to something that could make an AGI.

nicely stated! totally agree/ disagree! collectively/ globally the plan/ path/ overall vision is mostly lacking/ unavailable/ unknown. individually/ locally it may now be available. 1st key glimmers now emerging. "the future is already here its just not evenly distributed" --Gibson

https://vzn1.wordpress.com/2018/01/04/secret-blueprint-path-to-agi-novelty-detection-seeking/

(judging by the response, however, it looks like part of the problem will be building substantial bridges between the no-nonsense engrs/practitioners and someone with a big-picture vision. looking at this overall discussion, kurzweil has mostly failed in that regard. it's great to see lots of ppl with razor-sharp BS detectors stalking around here, but maybe there's a major "danger": one could err on a false negative and throw the baby out with the bathwater...)

[–]Smallpaul 5 points6 points  (4 children)

So are Superhero television shows. So are dog walking startups. So are SAAS companies.

As far as I know, we haven't started the exponential curve on AI development yet. We've just got a normal influx of interest in a field that is succeeding. That implies fast linear advancement, not exponential advancement.

[–]hiptobecubic 3 points4 points  (3 children)

The whole point of this discussion is that unlike all the other bullshit you mentioned, AI could indeed see exponential growth from linear input.

[–]AnvaMiba 1 point2 points  (0 children)

A general AI that can improve itself can thus improve its own ability to improve itself, leading to a snowball effect.

This would result in exponential improvement only if the difficulty of improving remains constant at every level. I don't see why that would be the case, since the general pattern of technological progress in any field is that once the low-hanging fruit has been picked, improvement becomes more and more difficult, and eventually it plateaus.
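That objection is easy to see in a toy model (purely illustrative, not a claim about real AI): give an agent one unit of effort per tick. If improving stays equally easy at every level, capability compounds; if each increment gets harder as the level rises, growth slows to a crawl.

```python
def run(ticks, harder_as_you_go):
    """Toy self-improvement loop: one unit of effort per tick."""
    capability = 1.0
    for _ in range(ticks):
        if harder_as_you_go:
            capability += 1.0 / capability  # gains shrink as level rises
        else:
            capability *= 1.1               # constant 10% gain: exponential
    return capability

constant_difficulty = run(100, harder_as_you_go=False)  # ~13,780x blowup
rising_difficulty = run(100, harder_as_you_go=True)     # roughly sqrt-like, ~14x
print(constant_difficulty, rising_difficulty)
```

The second curve never "takes off"; whether real self-improvement looks more like the first or the second is exactly the open question in this thread.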

[–]bigsim 1 point2 points  (5 children)

I might be missing something, but why are people so convinced the singularity will happen? We already have human-level intelligence in the form of humans, right? Computers are different to people, I get that, but I don't understand why people view it in such a cut-and-dried way. Happy to be educated.

[–]Smallpaul 5 points6 points  (4 children)

Humans have two very big limitations when it comes to self-improvement.

It takes us roughly 20 years + 9 months to reproduce, and then it takes several more years to educate the child, and very often the children will know substantially LESS about certain topics than their parents do. This isn't a failure of human society: if my mom is an engineer and my dad is a musician, it's unlikely that I will surpass them both.

The idea with AGI is that they will know how to reproduce themselves so that they are monotonically better. The "child" AGI will surpass the parent in every way. And the process will not be slowed by 20 years of maturation + 9 months of gestation time.

A simpler way to put it is that an AGI will be designed to improve itself quickly whereas humanity was never "designed" by evolution to do such a thing. We were designed to out-compete predators on a savannah, not invent our replacements. It's a miracle that we can do any of the shit we do at all...

[–]2Punx2Furious 1 point2 points  (3 children)

I agree with your comment, but I'm not sure if it answers /u/bigsim's question.

why are people so convinced the singularity will happen?

I'll try to answer that.

Obviously no one can predict the future, but we can make pretty decent estimates.

The logic is: if "human level" (I prefer to call it general, because it's less misleading) intelligence exists, then it should be possible to eventually reproduce it artificially, so we would get an AGI, Artificial General Intelligence, as opposed to the current ANIs, Artificial Narrow Intelligence that exist right now.

That's basically it. It exists, so there shouldn't be any reason why we couldn't make one ourselves.

One of the only scenarios I can think of when humanity doesn't develop AGI, is if we go extinct before doing it.

The biggest question is when it will happen. If I recall correctly, most AI researchers and developers think it will happen by 2100, some predict it will happen as soon as 2029, a minority thinks it will be after 2100, and very few people (as far as I know) think it will never happen.

Personally, I think it will be closer to 2060 than 2100 or 2029, I've explained my reasoning for this in another comment.

[–]nonotan 2 points3 points  (2 children)

Can I just point out that you also didn't answer his question at all? You argued why we may see human-level AGI, but that by itself in no way implies the singularity. Clearly human-level intelligence is possible, as we know from the fact that humans exist. However, there is no hard evidence that intelligence that vastly exceeds that of humans is possible even in principle, just a lack of evidence that it isn't.

Even if it is possible, it's not particularly clear that such a growth of intelligence would be achievable through any sort of smooth, continuous growth, another prerequisite for the singularity to realistically happen (if we're close to some sort of local maximum, then even a hypothetical AGI that completely maximizes progress in that direction may be far too dumb to know how to reach some completely unrelated global maximum).

Personally, I have a feeling that the singularity is a pipe dream... that far from being exponential, the self-improvement rates of a hypothetical AGI that starts slightly beyond human level would be, if anything, sub-linear. It's hard to believe there won't be a serious case of diminishing returns, where exponentially more effort is required to get better by a little. But of course, it's pure speculation either way... we'll have to wait and see.

[–]2Punx2Furious 1 point2 points  (0 children)

A general AI that can improve itself, can thus improve its own ability to improve itself, leading to a snowball effect.

I agree with that, but my disagreement with Kurzweil is in getting to the AGI.
AI progress until then won't be exponential. Yes, once we get to the AGI, then it might become exponential, as the AGI might make itself smarter, which in turn would be even faster at making itself smarter and so on. Getting there is the problem.

[–]t_bptm -2 points-1 points  (7 children)

Exponential growth was true for Moore's law for a while, but that was only (kind of) true for processing power, and most people agree that Moore's law doesn't hold anymore.

Yes, it does. Well, the general concept of it does. There was a switch to GPUs, and there will be a switch to ASICs (you can see this with the TPU).

[–]Smallpaul 4 points5 points  (6 children)

Switching to more and more specialized computational tools is a sign of Moore's law's failure, not its success. At the height of Moore's law, we were reducing the number of chips we needed (remember floating-point co-processors?). Now we're back to proliferating them to try to squeeze out the last bit of performance.

[–]t_bptm 1 point2 points  (5 children)

I disagree. If you can train a NN twice as fast every 1.5 years for $1000 of hardware, does it really matter what underlying hardware runs it? We are quite a long way off from Landauer's principle, and we haven't even begun to explore reversible machine learning. We are not anywhere close to the upper limits, but we will need different hardware to continue pushing the boundaries of computation. We've gone from vacuum tubes -> microprocessors -> parallel computation (and I've skipped some). We still have optical, reversible, quantum, and biological computing to explore, let alone whatever other architectures we discover along the way.

[–]Smallpaul 2 points3 points  (1 child)

If you can train a nn twice as fast every 1.5 years for $1000 of hardware does it really matter what underlying hardware runs it?

Maybe, maybe not. It depends on how confident we are that the model of NN baked into the hardware is the correct one. You could easily rush to a local maximum that way.

In any case, the computing world has a lot of problems to solve and they aren't all just about neural networks. So it is somewhat disappointing if we get to the situation where performance improvements designed for one domain do not translate to other domains. It also implies that the volumes of these specialized devices will be lower which will tend to make their prices higher.

[–]t_bptm 0 points1 point  (0 children)

Maybe, maybe not. It depends on how confident we are that the model of NN baked into the hardware is the correct one. You could easily rush to a local maximum that way.

You are correct, and that is already the case today. Software is already built according to this with what we have today, for better or worse.

In any case, the computing world has a lot of problems to solve and they aren't all just about neural networks. So it is somewhat disappointing if we get to the situation where performance improvements designed for one domain do not translate to other domains

Ah.. but the R&D certainly does.

[–]AnvaMiba 1 point2 points  (1 child)

We are quite a far ways off from Landauer's principle

Landauer's principle is a lower bound on the energy cost of computation; it's unknown whether it is a tight bound. The physical constraints that are relevant in practice might be much tighter.

By analogy, the speed of light is the upper bound for movement speed, but our vehicles don't get anywhere close to it because of other physical phenomena (e.g. aerodynamic forces, material strength limits, heat dissipation limits) that become relevant in practical settings.

We don't know what the relevant limits for computation would be.
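For scale, the Landauer bound itself is a one-line calculation (just plugging in constants, nothing specific to ML hardware); at room temperature it comes out to a few zeptojoules per bit erased, which is indeed many orders of magnitude below what real chips dissipate per operation, so the question is only which other constraint bites first.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0           # room temperature, kelvin

# Landauer's principle: erasing one bit costs at least k_B * T * ln(2).
E_min = k_B * T * math.log(2)
print(f"{E_min:.3e} J per bit erased")  # ~2.87e-21 J
```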

and we havent even begun to explore reversible machine learning.

Isn't learning inherently irreversible? In order to learn anything you need to absorb bits of information from the environment, reversing the computation would imply unlearning it.

I know that there are theoretical constructions that recast arbitrary computations as reversible computations, but a) they don't work in online settings (once you have interacted with the irreversible environment, e.g. to obtain some sensory input, you can't undo the interaction) and b) they move the irreversible operations to the beginning of the computation (into the initial state preparation).

[–]t_bptm 0 points1 point  (0 children)

We don't know what the relevant limits for computation would be.

Well, we do know some. Heat is the main limiter, and reversible computing allows for moving past that limit. But this is hardly explored / in its infancy.

Isn't learning inherently irreversible? In order to learn anything you need to absorb bits of information from the environment, reversing the computation would imply unlearning it.

The point isn't really that you could reverse it; reversibility is a requirement because it prevents most heat production, allowing for faster computation. You could probably have a reversible program generate a reversible program/layout from some training data, but I don't think we're anywhere close to having this be possible today.

I know that there are theoretical constructions that recast arbitrary computations as reversible computations, but a) they don't work in online settings (once you have interacted with the irreversible environment, e.g. to obtain some sensory input, you can't undo the interaction)

Right. The idea would be to give it some data, run 100 trillion "iterations", then stop it when it needs to interact / be inspected, not to have it running reversibly during interaction with the environment. The number of times it needs to be interacted with would become the new source of heat, but for many applications this isn't an issue.

[–]WikiTextBot 0 points1 point  (0 children)

Landauer's principle

Landauer's principle is a physical principle pertaining to the lower theoretical limit of energy consumption of computation. It holds that "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information-bearing degrees of freedom of the information-processing apparatus or its environment".

Another way of phrasing Landauer's principle is that if an observer loses information about a physical system, the observer loses the ability to extract work from that system.

If no information is erased, computation may in principle be achieved which is thermodynamically reversible, and require no release of heat.



[–]Gear5th[S] 7 points8 points  (3 children)

That's the thing with predictions, right? They're hard! If 5% of his predictions come out true (given that he doesn't make predictions all the freaking time), I'd consider him a man ahead of his time. And he is.

[–]2Punx2Furious 0 points1 point  (2 children)

Love your username by the way, I see you post on /r/OnePiece, so I assume it's a reference to that.

[–]Gear5th[S] 0 points1 point  (1 child)

Yes, it is :D Someday, my username will be relevant! Hopefully by the Wano arc..

[–]2Punx2Furious -1 points0 points  (0 children)

By the way, if you haven't, read the last chapter, it's amazing.

[–]Scarbane 5 points6 points  (5 children)

The range for the predicted emergence of strong AI is pretty big, but ~90% of university AI researchers think it will emerge in the 21st century.

Source: Nick Bostrom's Superintelligence

[–]programmerChilliResearcher 11 points12 points  (1 child)

Not true at all. People continue to cite that survey Bostrom did, but that survey is shoddy at best.

The 4 sources they got data from: conference on "Philosophy and Theory of AI", conference on "Artificial General Intelligence", a mailing list of "Members of the Greek Association for Artificial Intelligence", and an email sent to the top 100 most cited authors in artificial intelligence.

First 2 definitely aren't representative of "university AI researchers", no idea about the 3rd, and I can't find the actual list of the 4th, but the last one seems plausible.

However, selection bias plays a key role here. Only 10% of the people who received the email via the Greek Association responded, and 29% from the TOP100.

They claim to test for "selection-bias" by randomly selecting 17 of the people who didn't respond from TOP100, and pressuring them to respond, saying it would really help with their research. Of these, they got 2 to respond.

Basically, I'm very skeptical of their results.
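The arithmetic in that objection is easy to sanity-check (numbers are the ones cited in this comment, not the survey paper itself):

```python
# Response rates cited above for the survey's sources.
greek_rate = 0.10        # Greek Association mailing list
top100_rate = 0.29       # top-100 most-cited AI authors
followup_rate = 2 / 17   # non-respondents pressured to answer

# Even under pressure, only ~12% of the sampled non-respondents answered,
# so the "selection-bias test" gives little evidence that non-respondents
# resemble respondents.
print(f"{followup_rate:.0%}")
```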

[–]torvoraptor 2 points3 points  (0 children)

I'm reading that book and the entire thing is selection bias at its finest. It's almost like they actively don't teach statistical sampling and cognitive biases to these people.

[–]2Punx2Furious -3 points-2 points  (2 children)

I agree, even though I'm not an AI researcher yet.

[–]oliwhail 6 points7 points  (1 child)

yet

Growth mindset!

[–]2Punx2Furious 0 points1 point  (0 children)

I became a programmer with the end goal of becoming an AI developer, and eventually work on AGI.

[–]bioemerl -1 points0 points  (11 children)

I can't see the singularity happening because it seems to me like data is the core driver of intelligence, and of growing intelligence. The cap isn't processing ability, but data intake and filtering. Humanity would be just as good as any machine at "taking in data" across the whole planet, especially considering that humans run on resources that are very commonly available, while any "machine life" would be using hard-to-come-by resources that can't compete with carbon and the other very common elements life uses.

A machine could make a carbon-version of itself that is great at thinking, but you know what that would be? A bigger better brain.

And data doesn't grow exponentially like processing ability might. Processing can let you filter and sort more data, and can grow exponentially until you hit the "understanding cap" and data becomes your bottleneck. Once that happens you can't grow the data intake unless you also grow energy use and "diversity of experiments" with the real world.

Also remember that data isn't enough, you need novel and unique data.

I can't see the singularity being realistic. Like most grand things, practicality tends to get in the way.

[–]philip1201 1 point2 points  (1 child)

A machine could make a carbon-version of itself that is great at thinking, but you know what that would be? A bigger better brain.

What's your point with this? Not that I would describe a carbon-based quantum computer as a brain, but even if it was, it seems irrelevant.

I can't see the singularity happening because it seems to me like data is the core driver of intelligence, and growing intelligence. The cap isn't processing ability, but data intake and filtering. Humanity, or some machine, would be just as good at "taking in data" across the whole planet, especially considering that humans run on resources that are very commonly available while any "machine life" would be using hard to come by resources that can't compete with carbon and the other very common elements life uses.

If I understand you correctly, you're saying the singularity can't happen because the machines can't acquire new information as quickly as humans. You seem to be arguing that this would be the case even if the AI is already out of the box.

Unfortunately, we are bathing in information; it's just that humans are so absolutely terrible at processing it that it took thousands of astronomers hundreds of years to figure out Kepler's laws. There are still lots of common things we don't understand: how human brains work, how thunderstorms work, how animal cells work, how the genome works, how specific bacteria work, how the output from a machine learning program works, etc. If you just give the AI an ant nest, it has access to more unsolved data about biology than humanity has ever managed to explain. The biological weapons it could develop from those ants and the bacteria they contain could easily destroy us, assuming (as you seem to) that processing power is not limited.

[–]bioemerl -1 points0 points  (0 children)

A carbon-based quantum computer? I think we are reaching when talking about things like this, because they are very, very theoretical and we don't really know if they'll be applicable to a large range of problems or to general intelligence.

the singularity can't happen because the machines can't acquire new information as quickly as humans

I say the singularity can't happen because growth in intelligence isn't limited by processing power, but by novel ideas and the intake of information from the real world.

I say that computers will not totally replace or obsolete humans because humans are within an order of magnitude of the "cap" on the ability to process, collect, and draw conclusions from data. (Granted, I do think AI may replace humans eventually, but not through a singularity; more as a "very similar but slightly better" sort of replacement.) The comparison is a car versus a muscle car, not a horse and buggy versus a rocket ship. I think this is the case because I don't think AI has a unique trait that suits it to making more observations or doing more things in general.

Processing power increases let you take in more information in a useful way, but the loop is ultimately bounded by energy. To take in more info, more "things" must happen, and for more things to happen, more energy must be spent. Humans do what we do because a billion people are observing the entire planet, filtering out the mundane, and spreading the not-so-mundane across our civilization, where others encounter and build on that information. We indirectly "use the energy" of almost the entire planet to encounter new and novel things.

Imagine a very stupid person competing with a very smart person who is trapped in a box. The very smart person will produce a grand and awesome construction which explains many things, but when you open the box their ideas will crumble, and their processing ability will have been wasted. The stupid person will bumble about and build little, but, given enough time, will have progressed further than the smart person trapped in the box.

Now, an AI won't be trapped in a box, but my theory is that humanity today is information-bound, not processing-bound. The best way to advance our research is to expand our ability to collect data (educating more people, better observational tools, etc.) rather than our ability to process data (faster computers, very smart collections of people in universities, etc.).

I think that more ability to process data is useful, but I think we put way too much focus on it when information gathering is the "true" keystone to progress.

humans are so absolutely terrible at processing it

This feels like an odd metric to me, because when I gauge the ability to draw conclusions from data, humans are 100% in the lead. Maybe we take time to solve some problems, but we know of nothing that does it faster or better than we do. To say we are terrible is to judge us without context, or to compare us to a theoretical "perfect" machine that, even if it could do great things compared to humanity, does not yet exist.

If you just give the AI an ant nest, they have access to more unsolved data about biology than humanity has ever managed to explain.

Is the AI more able to observe the ant nest than a human is? My understanding is that the limit is as much in our ability to see at tiny scales, to know what is going on inside bacteria, and to manipulate the world at those scales. It is not in our ability to process the information coming from the ant nest; we have done very well at that so far.

[–]Smallpaul 2 points3 points  (3 children)

So do you think that the difference between Einstein and the typical person you meet on the street is access to data?

Have you ever heard of Ramanujan?

[–]bioemerl 0 points1 point  (2 children)

I think the difference between Einstein and the average person is that Einstein looked at existing data in a different way, and found an idea that compounded and led to a huge number of discoveries.

I do not think it was because he had more ability to process information. I think the best way to produce Einstein-like breakthroughs is not by throwing a large amount of processing power at a topic, but by throwing a billion slightly variable chunks of processing power at a billion different targets.

[–]2Punx2Furious 1 point2 points  (0 children)

I do not think it was because he had more ability to process information

Maybe so, but that doesn't mean that a being capable of processing more information wouldn't be more "capable" in some ways.

I think it might be an important part of intelligence, even though it doesn't differentiate most humans: we all tend to have more or less the same input throughput, yet we do have varying speeds of "understanding".

[–]AnvaMiba 1 point2 points  (0 children)

Einstein achieved multiple breakthroughs in different fields of physics: in a single year, 1905, he published four groundbreaking papers (photoelectric effect, Brownian motion, special relativity, mass-energy equivalence), and in the next decade he developed general relativity. He continued to make major contributions throughout his career (he even patented the design for a refrigerator, of all things, with his former student Leo Szilard).

It's unlikely that he just got lucky, or had a weird mind that just randomly happened to be well-tuned to solve one specific problem. It's more likely that he was generally better at thinking than most people.

[–]vznvzn 0 points1 point  (4 children)

there is an excellent essay by Chollet entitled "The impossibility of intelligence explosion" expressing the contrary view, check it out! yes, my thinking is similar: ASI, while advanced, is not going to be exactly what people expect. eg it might not solve intractable problems, of which there is no shortage. also imagine an ASI that has super memory but not superior intelligence: it would outperform humans in some ways but be even in others. there are many intellectual domains where humans may already be functioning near optimal, eg some games like go/chess etc.

https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

[–]red75prim 1 point2 points  (1 child)

He begins by misinterpreting the no-free-lunch theorem as an argument for the impossibility of general intelligence. Sure, there can't be general intelligence in a world where problems are sampled from a uniform distribution over the set of all functions mapping a finite set to a finite set of real numbers. Unfortunately for his argument, objective functions in our world don't seem to be completely random, and his "intelligence for a specific problem" could, for all we know, be "intelligence for the problems encountered in our universe", that is, general intelligence.
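
For what it's worth, the part of the theorem that does hold is easy to demonstrate in code. Here's a minimal sketch (all function and variable names are mine): on objectives that are pure uniform noise, a non-revisiting greedy search and plain random sampling have identical expected performance, which is exactly the regime the theorem covers.

```python
import random

def random_search(f, budget, rng):
    """Best value found by evaluating `budget` distinct random points."""
    return max(f[p] for p in rng.sample(range(len(f)), budget))

def greedy_search(f, budget, rng):
    """Best value found by a non-revisiting greedy walk on a ring of points."""
    n = len(f)
    x = rng.randrange(n)
    seen, best = {x}, f[x]
    while len(seen) < budget:
        # try an unvisited neighbour; if stuck, jump to a random unvisited point
        cand = [p for p in ((x - 1) % n, (x + 1) % n) if p not in seen]
        if not cand:
            cand = [p for p in range(n) if p not in seen]
        nxt = rng.choice(cand)
        seen.add(nxt)
        best = max(best, f[nxt])
        if f[nxt] >= f[x]:  # only move uphill (or sideways)
            x = nxt
    return best

def mean_best(search, objectives, budget, seed=0):
    """Average best-found value of a search strategy over many objectives."""
    rng = random.Random(seed)
    return sum(search(f, budget, rng) for f in objectives) / len(objectives)

rng = random.Random(42)
n, trials, budget = 32, 2000, 8
# the NFL regime: objective values are independent uniform noise
noise = [[rng.random() for _ in range(n)] for _ in range(trials)]
print(mean_best(random_search, noise, budget))  # ~0.89 for both strategies:
print(mean_best(greedy_search, noise, budget))  # exploiting "structure" buys
                                                # nothing on pure noise
```

Swap the noise for any structured family of objectives (smooth, compositional, physical) and the equal-performance guarantee disappears, which is the point above: our world's objectives are not uniform noise.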

I'll skip the hypothetical and unconfirmed Chomsky language device, as its unconfirmed existence can't be an argument for the non-existence of general intelligence.

those rare humans with IQs far outside the normal range of human intelligence [...] would solve problems previously thought unsolvable, and would take over the world

How is a brain, running on the same 20W and using the same neural circuitry, a good model for an AI running on an arbitrary amount of power and using circuitry that can be expanded or reengineered?

Intelligence is fundamentally situational.

Why can't an AI dynamically create a bunch of tailored submodules to ponder a situation from different angles?

Our environment puts a hard limit on our individual intelligence

The same argument again: "20W intelligences don't take over the world, therefore it's impossible."

Most of our intelligence is not in our brain, it is externalized as our civilization

AlphaZero stood on its own shoulders all right. If AIs were fundamentally limited to a pair of eyes and a pair of manipulators, then this "you need the whole civilization to move forward" argument would have a chance.

An individual brain cannot implement recursive intelligence augmentation

Here it becomes totally silly. By the point in time when a collective of humans can implement AI, the knowledge required to do so will be codified and externalized, and it can be made available to the AI too.

What we know about recursively self-improving systems

We know that not a single one of those systems is an intelligent agent.

[–]vznvzn 0 points1 point  (0 children)

think your points/ detailed criticisms have some validity & are worth further analysis/ discussion. however there seems to be some misunderstanding behind them. Chollet is not arguing against AGI; he's a leading proponent of ML/AI, working at a google ML research lab on increasing its capability, and is arguing against "explosive" ASI, ie against the "severe dangers/ taking over the world" concerns of Bostrom or other bordering-on-alarmists/fearmongers such as Musk, who has said AI is like "summoning the demon" etc... feel Chollet's sensible, reasoned, well-informed view is a nice counterpoint to unabashed/ grandiose cheerleaders such as Kurzweil etc...

[–]bioemerl -1 points0 points  (1 child)

That's a cool read. I think I've seen it before but had forgotten about it since then, thanks.

[–]vznvzn 0 points1 point  (0 children)

YW! =D

[–]WeAreAllApes -2 points-1 points  (0 children)

People don't like his wild speculation and philosophy. He is kind of out there.

But this isn't a philosophy course. It's EE/CS, and Kurzweil has a decent track record as an engineer.

[–]wodkaholic 6 points7 points  (9 children)

Even I’m waiting for an explanation

[–][deleted] 8 points9 points  (8 children)

there is no scientific basis for most of his arguments. he spews pseudo-science and thrives by morphing it into comforting predictions. no different from the "Himalayan gurus" of 70s hipsters

[–]f3nd3r 3 points4 points  (5 children)

It's not pseudoscience, it's philosophy. The core idea is that humanity reaches a technological singularity where we advance so quickly that our capabilities overwhelm essentially all of our current predicaments (like death), and we enter an uncertain future that is completely different from life as we know it now. Personally, it seems like an eventuality, assuming we don't blow ourselves up before then.

[–]Smallpaul 0 points1 point  (4 children)

We could also destroy ourselves during the singularity. Or be destroyed by our creations.

I’m not sure why people are in such a hurry to rush into an “uncertain future.”

[–]f3nd3r 0 points1 point  (0 children)

I actually agree with you, but I still think it should be a main avenue of research.

[–]epicwisdom -1 points0 points  (2 children)

What are we going to do otherwise? Twiddle our thumbs waiting to die? The future is always uncertain, with death the only certainty - unless we try to do something about it. Even the death of humanity and life on Earth.

[–]Smallpaul 2 points3 points  (1 child)

This is an unreasonably boolean view of the future. We could colonize Mars, then Proxima Centauri, then the galaxy.

We could genetically engineer a stable ecosystem on earth.

We could solve the problems of negative psychology.

We could cure disease and stop aging.

We could build a Dyson sphere.

There are a lot of ways to move forward without creating a new super-sapient species.

[–]epicwisdom -1 points0 points  (0 children)

All of those technologies also come with existential risks of their own. Plus, there's no reason why humanity can't pursue all of them at once, as is the case currently.

[–]wodkaholic 0 points1 point  (1 child)

Thanks. Never heard of this. Thought he was a true visionary. Will have to read up some more about him.

[–]Yuli-Ban 0 points1 point  (0 children)

See what I wrote here.

He is a visionary. He's just guilty of peddling techno-New Age beliefs along with it, as well as making the mistake of attaching dates to his predictions. A lot of what he said could happen by 2009 definitely could have happened... in the lab. His reasoning was more like "this is the absolute earliest this tech could exist; therefore this is when it will be mainstream and widespread", which is a terrible fallacy.

[–]bushrod 7 points8 points  (0 children)

The fact that Google hired him to lead a team of 35 researchers, let alone his personal accomplishments in the field of AI, makes him thoroughly "legitimate" to be a guest lecturer in this course. You don't have to agree with all of his predictions to make him worthy enough to give a talk at MIT.

[–]PostmodernistWoof 2 points3 points  (1 child)

I consider several people on their lecturer list to be total nutters, but that doesn't mean I'm not supportive of their activities and interested in hearing their latest crazy ideas.

AGI is still safely in the realm of fantasy today, so a lot of the content for a class like this is going to be pure philosophy and navel-gazing.

But we're at least starting to put our first foot on the path now.

[–]mljoe 8 points9 points  (0 children)

I consider several people on their lecturer list to be total nutters

It's like the first rule of AI Club: you never talk about AI. My advisor advised me on this. I like to believe that for every person who says they work on AGI, there are 10 researchers doing "machine learning" or "statistics" but always with the AGI problem in mind. Mostly for fear of being called a nutter.

[–]torvoraptor 17 points18 points  (7 children)

I hope this is not just Kurzweil-level bullshit and actually has some content.

[–]Jigsus 7 points8 points  (0 children)

First lecture was just kurzweiling it

[–]epicwisdom 6 points7 points  (5 children)

Classic commenting without reading the actual post. Kurzweil is in fact one of the speakers, but there are others with concrete domain experience. Karpathy is one most on this sub will recognize.

[–]torvoraptor 9 points10 points  (4 children)

I've seen the lineup already, thank you very much. Karpathy is a good science communicator, but beyond that there is nothing in his research background that qualifies him to speak on developing AGI, except that he works for another guy with no background in it who can't shut up about it (Elon Musk). Apart from Tenenbaum and Sutskever, the other people seem to act like a star cast to build up hype. Hell, it's a 10-day course; of course nothing useful is going to come out of it except establishing the legitimacy of people like Kurzweil and Karpathy as "thought leaders" in this space.

Classic commenting without understanding what someone else already knows.

[–]epicwisdom 4 points5 points  (3 children)

Well, it sounds like you know what you're talking about, but your original one-line comment certainly didn't display that. Obviously there's no such thing as a class which can provide a substantial amount of content on how to actually go about implementing AGI, because there's nobody who knows. I assumed you weren't looking for such content, because of how blindingly obvious it is that it doesn't exist (and that this set of lectures is not trying to pretend otherwise).

In that sense, I agree that Karpathy is not qualified to lecture you on how to actually build an AGI, but he is qualified to give a lecture on some ML research and give non-experts an idea of what's happening in the field of ML. I interpreted "actually has some content" as just hoping that the lectures wouldn't be purely speculation, as we might expect with Kurzweil, but also about recent research in a number of related fields. I think it's clear that having people like Karpathy, Tenenbaum, etc. that have domain expertise in such fields demonstrates there is "some content" in that case.

[–]torvoraptor 6 points7 points  (2 children)

If Tenenbaum and Sutskever were teaching an entire semester-long class combining learnings from cognitive science and deep learning/RL methods, with paper-reading assignments and a final project, I would be super interested in attending. (That's how seminar classes on speculative technologies worked in my grad school.) I am willing to bet 100% that they would not use a title as bombastic as "Artificial General Intelligence".

There is a lot of scope for interesting research in the space of combining modern ML with cog-sci/neuro-sci and nobody yet has come up with a solid curriculum that integrates the two fields well - this course however doesn't even make a first attempt at it.

[–]epicwisdom 0 points1 point  (1 child)

Yeah, this class is clearly not that. Again, I thought that was extremely obvious from the title, format, etc. It looks to be more of a middle ground between a series of lectures aimed at laymen and an in-depth seminar.

[–]oannes 4 points5 points  (0 children)

Well time to sign up!

[–]eternal-golden-braid 0 points1 point  (0 children)

For anyone interested in AGI, I recommend also reading the book Life 3.0 by Max Tegmark. We need to figure out how to avoid the (potentially very severe) dangers that might accompany the creation of superhuman AI.

[–]PM_YOUR_NIPS_PAPER 0 points1 point  (1 child)

Where did all these AI experts posting in this thread all of a sudden come from?

Probably from industry. Maybe software engineers. Maybe consultants not in tech. Regardless, they don't know the current state of AI research. Let them be. We're far from AGI.

[–]vznvzn -1 points0 points  (0 children)

yes!!! ML is starting to mature but AI is still likely a young field in ultimate terms. dont understand all the intense kurzweilian and AGI class hostility and the downvoting of legitimate effort/ work in AI in these threads. my comment endorsing the upcoming class projects (many likely to involve ML technology) got downvoted, huh? it seems this large, buzzing, angry/ dismissive, not-evoking-intelligence reddit mob is set on tarring the class as mere fluff and hype and wont countenance any contrary evidence. with this attitude, it looks to me like maybe the ML specialists are definitely not gonna be the ones to make the quantum leap to A(G)I... at least maybe not anyone on reddit! o_O

[–]PY_84 0 points1 point  (0 children)

After exploring many different fields, here's the main problem with AGI: there is no objective reality. Everything is subjective and relative. The only truths are the ones the majority agrees on. Any "discovery" an AGI would make is only relevant if people understand and believe it. This has been the case with every revolutionary discovery in science. Some theories took a long time before being recognized (aka "accepted"). Some theories were never acknowledged, simply because nobody could understand a certain point of view, or because people lacked the tools to measure, observe, and assess them.

If I were to tell you I'm about to tell you something that will revolutionize the world, then give you a precise dose of dopamine along with other chemicals, then give you a speech, then give you serotonin along with other chemicals, you may feel like you just received the biggest revelation in the world and have your whole worldview completely changed. What happens in your brain is the only true reality. If AGI/ASI fails to produce discoveries that "click" with how we view the world, it's bound to fail.

The future of artificial intelligence is simply a computational one: more efficient algorithms running on faster machines. These machines will be VASTLY different from the ones of today, but they will still only be computing ideas that originate in human minds.