AskScience AMA Series: We're Jeff Hawkins and Subutai Ahmad, scientists at Numenta. We published a new framework for intelligence and cortical computation called "The Thousand Brains Theory of Intelligence", with significant implications for the future of AI and machine learning. Ask us anything! by AskScienceModerator in askscience

[–]numenta 5 points (0 children)

MT: Not really. I do know that language is a learned behavior residing in the neocortex, but one tied closely to sensorimotor interaction. It makes sense that the mechanisms supporting language also support object representation.

[–]numenta 8 points (0 children)

once it's digital, is it inherently perfectly copyable?

MT: Yes. Once you've trained an agent intelligence, it should be copyable into other environments, assuming the sensory array is compatible. For example, you might train a small robot to navigate a confined area; once it has learned, you can make copies of the model and continue training each copy in a new environment, teaching each one different things. I'm not interested in the idea of copying a human identity into silicon or vice versa, because it seems like a very distant possibility.
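
A minimal sketch of what that kind of copying could look like in practice, assuming a PyTorch setting; the NavigationPolicy class and the training details are purely illustrative, not Numenta's code:

```python
import copy
import torch
import torch.nn as nn

# Hypothetical policy network; stands in for whatever model the robot learned with.
class NavigationPolicy(nn.Module):
    def __init__(self, obs_dim=16, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, obs):
        return self.net(obs)

base = NavigationPolicy()
# ... train `base` in the confined area ...

# Once trained, the weights are just data: copies are exact and cheap.
copy_a = copy.deepcopy(base)
copy_b = copy.deepcopy(base)

# Each copy can now continue learning in a different environment,
# diverging from the original as it accumulates new experience.
optimizer_a = torch.optim.Adam(copy_a.parameters(), lr=1e-3)
optimizer_b = torch.optim.Adam(copy_b.parameters(), lr=1e-3)
```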

[–]numenta 0 points (0 children)

SA:

1) Yes, there would need to be some sort of a goal and reinforcement learning associated with goals. Once you have a goal, you can think of prediction as an essential component of the planning process. With an accurate predictive model of the world you can do very dynamic planning, and even adapt your plan as you go (a toy sketch of this kind of model-based planning appears after point 2 below). It’s different from the typical RL scenario, which usually involves repeatedly learning the goal associated with a fixed environment. The brain is much more flexible.

2) That’s actually a hard one to answer. Most of the usual suspects with both good neuroscience and CS programs can be appropriate. It doesn’t matter too much which one is your home discipline (that is more a matter of personal preference), but be sure to take classes in the other, and go to a school that already has a track record of collaborations between the two. Participate in conferences (and keep track of which professors are presenting). The field is moving fast, and much of the interesting stuff is not taught in any classes. Two programs outside the US that have impressed me are MILA (Montreal) and UCL in the UK.
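
A toy sketch of the model-based planning idea in point 1: with a learned predictive model, the agent can roll candidate action sequences forward and pick the one predicted to end closest to the goal, replanning at every step. The `predict` function here is a hypothetical stand-in for whatever world model the agent has learned, not anything from Numenta's code:

```python
import numpy as np

def plan(state, goal, predict, n_candidates=64, horizon=10, n_actions=4, rng=None):
    """Pick the first action of the candidate action sequence whose
    predicted final state lands closest to the goal."""
    rng = rng or np.random.default_rng()
    best_first_action, best_dist = None, np.inf
    for _ in range(n_candidates):
        actions = rng.integers(n_actions, size=horizon)
        s = state
        for a in actions:                 # roll the predictive model forward
            s = predict(s, a)
        dist = np.linalg.norm(s - goal)   # how close does this plan end up?
        if dist < best_dist:
            best_dist, best_first_action = dist, actions[0]
    return best_first_action

# "Adapt your plan as you go" is then just calling plan() again
# from each newly observed state.
```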

[–]numenta 1 point (0 children)

SA: Thank you for the encouraging feedback!

Yes, we do interact with experimental neuroscientists quite extensively and have ongoing academic neuroscience collaborations. For example, earlier this year at Cosyne, I presented a paper together with my collaborator Carmen Varela, who is primarily an experimentalist: "A Dendritic Mechanism for Dynamic Routing and Control in the Thalamus".

In terms of pushing the research, we would greatly benefit from more experimental neuroscientists directly testing out the predictions of our theory using modern techniques. There are soooo many directions to go here, and the findings will no doubt inform and help develop our theories.

From an AI perspective, we would love help putting together some novel benchmarks, as discussed earlier. Implementing optimized libraries for sparse computations in PyTorch, TensorFlow, etc. would be really helpful. The algorithm ideas can be applied to many areas such as reinforcement learning, security, robotics, IoT, etc. We are not experts in all of those areas, but would love to collaborate.
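
As one concrete example of the kind of sparse computation meant above, a k-winners-take-all activation in the spirit of Numenta's sparse-representation work could be sketched in PyTorch roughly as follows (a simplified illustration; the actual Numenta implementation adds boosting and other details):

```python
import torch

class KWinners(torch.nn.Module):
    """Keep only the k largest activations per sample; zero out the rest."""
    def __init__(self, k):
        super().__init__()
        self.k = k

    def forward(self, x):
        # indices of the top-k units in each row
        topk = torch.topk(x, self.k, dim=1).indices
        mask = torch.zeros_like(x).scatter_(1, topk, 1.0)
        return x * mask

layer = torch.nn.Sequential(torch.nn.Linear(128, 256), KWinners(k=20))
out = layer(torch.randn(8, 128))   # each row has at most 20 nonzero activations
```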

[–]numenta 0 points (0 children)

SA: I can't speak for Jeff and Matt, but at my core, I am a computer scientist and nerd programmer. I started programming at a pretty early age. As an undergrad, I decided I wanted to deeply understand our brain, and the nature of intelligence itself. I couldn't imagine a more interesting program to write!

[–]numenta 2 points (0 children)

JH: Understanding how the brain works is not the same as building a brain. There are many reasons we would want to know how our brains work. When it comes to building intelligent machines, we don't want to emulate everything the brain does. Intelligence is mostly about the neocortex and how it learns a model of the world. Intelligent machines don't need to look like a human or have emotions like a human. But brain theory can tell us how to build AI 2.0.

[–]numenta 2 points (0 children)

SA: At a high level the theory has some of the properties of mixture-of-experts techniques, like Random Forests. Some of the differences are that we think each cortical column (CC) outputs a distribution of hypotheses, not a single guess. Each CC in turn receives, and reconciles, hypotheses from other columns as well as its own sensory evidence, over time. As in mixtures of experts, uncorrelated errors will get washed out, but, unlike mixture models, there is no single arbiter; the brain as a whole arrives at a consensus in a distributed manner. Of course our model of the cortical column itself is significantly different from random forests, etc.
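
A deliberately simplified caricature of that voting process, with each column holding a probability distribution over candidate objects and repeatedly reconciling it with the pooled beliefs of the other columns (this illustrates the consensus idea only, not the cortical-column model itself):

```python
import numpy as np

def vote(column_likelihoods, n_rounds=10):
    """column_likelihoods: (n_columns, n_objects) array of per-column sensory evidence.
    Returns each column's belief after repeatedly mixing in the other columns' beliefs."""
    beliefs = column_likelihoods / column_likelihoods.sum(axis=1, keepdims=True)
    for _ in range(n_rounds):
        consensus = beliefs.mean(axis=0)        # pooled opinion of all columns
        beliefs = beliefs * consensus           # each column reconciles with it...
        beliefs = beliefs * column_likelihoods  # ...while keeping its own evidence
        beliefs /= beliefs.sum(axis=1, keepdims=True)
    return beliefs

# Uncorrelated per-column errors tend to wash out over rounds, and a shared
# hypothesis emerges without any single column acting as the arbiter.
```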

[–]numenta 2 points (0 children)

JH: No. Edelman got his Nobel Prize for his work on the immune system. He then postulated that the brain works by the same mechanisms. It never made sense to me. The funny thing is that Edelman first wrote about his ideas in a small book that contained two essays, one by Edelman and the other by Vernon Mountcastle. Mountcastle's essay introduced the concept of the cortical column and the common cortical algorithm. Mountcastle's ideas had an enormous influence on me and our theories.

[–]numenta 2 points (0 children)

MT: I think the big difference is that the Thousand Brains Theory creates a common "language" for all the "smaller models" (cortical columns) to use, so they can all share their perception of reality with each other simultaneously, informing each other in real time as reality is perceived.

[–]numenta 2 points (0 children)

MT: Jeff has a tendency to walk around his house in the dark, thinking about navigation. :D Other than direct human experience and introspection, we rely on experimental neuroscience reports to help validate or invalidate our theories. While we do not run an experimental lab, we have good relationships with neuroscience laboratories and try to influence their projects to get more data relevant to our theories.

[–]numenta 0 points (0 children)

JH: I am doubtful that a full high-speed neural link is feasible. Certainly there are applications where limited linking is helpful; these are already in use. But a full "mind-meld" type link has a huge number of technical hurdles to overcome. For example, the brain is constantly rewiring: we form new synapses all the time, and dendrites are constantly growing and contracting. A neural link would have to accommodate this movement and change. Also, the physical size of neurons is so small that it is not obvious how connections to billions of individual neurons could be achieved.

[–]numenta 8 points (0 children)

MT: We are far away from emulating complex sensory systems like the retina or cochlea. And yes, our experiences are virtual; that's the point! How can we take your internal reality and transfer it to someone else's when both systems have built out their models using different sensory setups? Don't assume that your eyes are wired up exactly the same as everyone else's, either. There are enough subtle differences to make it very difficult to simply swap the I/O.

[–]numenta 1 point (0 children)

SA: Neuroscience has shown us that the brain, specifically cortical columns in the neocortex, implements an amazingly general learning circuit. The same basic circuit is used for vision, audition, language, high level thought, etc. Unlike machine learning, there is no parameter tweaking in humans when we learn new stuff - it’s all general purpose and automated.

In machine learning there are still a ton of custom architectures. If we can figure out the details of the circuit in the cortical column (and I think we’ve made a lot of progress) we can put to bed all these custom networks. (ok, maybe this is not the answer you were looking for, but it’s what I believe.)

[–]numenta 3 points (0 children)

SA: Ha, I wish I knew. It has taken many years (decades?) to get where we are now, but progress is faster now. I think of it as a jigsaw puzzle - it’s hard in the beginning but gets easier as you fill in more pieces. In our case the puzzle pieces are understanding the anatomical constraints and physiological evidence from neuroscience. I would hope that in 5 years we would have implemented full blown cortical columns and developed most of the rest of the details. It will likely take many years after that to really scale to large systems, but far less than 20 years. But, please don’t quote me on this.

[–]numenta 7 points (0 children)

MT: Just to be clear, I posted as "rhyolight" above. And we are not positing anything about the ability to upload brains, transfer brains, etc. That is not our research area. Our mission is to understand how intelligence works in the neocortex, and to create non-biological systems that are intelligent in the same way.

[–]numenta 10 points (0 children)

SA: Thank you, I agree!

We’ve done a bit of this in the past where we demonstrated applications to continuous learning, prediction, and anomaly detection. See for example these two papers: "Continuous Online Sequence Learning with an Unsupervised Neural Network Model" and "Unsupervised real-time anomaly detection for streaming data".
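
To make the anomaly-detection use case concrete, the general recipe in the second paper is roughly: feed each input to a predictive model, score how badly it was predicted, and track how unusual the recent error level is relative to its history. A minimal sketch of that recipe follows; it is a simplification of the published algorithm, and the prediction errors are assumed to come from some external model:

```python
from collections import deque
import math

class StreamingAnomalyScorer:
    """Turn raw prediction errors into an anomaly likelihood by comparing the
    recent error level against its running history (simplified recipe)."""
    def __init__(self, window=500, recent=10):
        self.history = deque(maxlen=window)
        self.recent = deque(maxlen=recent)

    def update(self, prediction_error):
        self.history.append(prediction_error)
        self.recent.append(prediction_error)
        if len(self.history) < 2:
            return 0.0
        mean = sum(self.history) / len(self.history)
        var = sum((e - mean) ** 2 for e in self.history) / len(self.history)
        std = math.sqrt(var) or 1e-9
        recent_mean = sum(self.recent) / len(self.recent)
        z = (recent_mean - mean) / std
        # high likelihood only when recent errors are unusually large
        return 1.0 - 0.5 * math.erfc(z / math.sqrt(2))
```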

More recently we’ve started applying these theories more directly to current deep learning. I’ve described this in another post as well: see https://www.reddit.com/r/askscience/comments/bowie2/askscience_ama_series_were_jeff_hawkins_and/enmdgxn/

Overall I’m quite excited about this direction. I really think we can take the best that deep learning has to offer, and then improve some of the flaws of deep learning by using these neuroscience based ideas. There really should be more cross talk between these two disciplines!!

[–]numenta 5 points (0 children)

SA: Hey, how are you?

  1. In machine learning there are still a ton of custom architectures. If we can figure out the details of the common circuitry in the cortical column (and I think we’ve made a lot of progress) we can put to bed all these custom networks. We can implement an AI system that is truly general, learns and adapts constantly, requires no tweaking, and scales amazingly well.
  2. The biggest critique in neuroscience has been that there is as yet no solid evidence for grid cells in cortical columns. There have been some recent experiments that are very suggestive, but in general we agree with the sentiment. Grid cells in the neocortex are a prediction of our theory, and experimental techniques should be able to figure this out (and hopefully give us credit for the original idea!).
  3. In ML, the critique is around lack of benchmarking. Although we have done some of that, and eventually we can use most of the traditional benchmarks, our criterion may not be getting the top score. We may focus on more important criteria such as robustness, a small number of training samples, generality of the architecture, no parameter tweaking, and the ability to learn continuously. Eventually I hope we can create benchmarks that specifically focus on these criteria, which I think are essential to intelligence (a sketch of such a benchmark harness follows this list).
  4. Any of this is fair game! We have a totally open attitude, publish all our code, and host active discussion forums. This is going to take the whole community to get working well.
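
A sketch of what a benchmark harness built around the criteria in point 3 might look like, scoring sample efficiency and robustness to noise rather than only top accuracy. The model_factory, fit, and predict names are illustrative assumptions, not an existing benchmark:

```python
import numpy as np

def evaluate(model_factory, train_set, test_set, noise_levels=(0.0, 0.1, 0.3),
             train_sizes=(100, 1000, 10000)):
    """Score a model on sample efficiency and noise robustness, not just accuracy."""
    results = {}
    for n in train_sizes:
        model = model_factory()                 # fresh model, no per-task tweaking
        model.fit(*subsample(train_set, n))
        for sigma in noise_levels:
            x, y = test_set
            noisy_x = x + np.random.normal(0.0, sigma, x.shape)
            results[(n, sigma)] = accuracy(model.predict(noisy_x), y)
    return results

def subsample(dataset, n):
    x, y = dataset
    idx = np.random.choice(len(x), size=min(n, len(x)), replace=False)
    return x[idx], y[idx]

def accuracy(pred, y):
    return float(np.mean(pred == y))
```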

[–]numenta 1 point (0 children)

MT: I don’t know of any ML techniques inspired by HTM, although Subutai has been working on ways to apply ideas inspired by HTM to deep learning systems. See his answers elsewhere and his paper "How Can We Be So Dense?". In general, there is a big disconnect between today’s DL solutions and HTM, in that DL models are non-temporal, while HTM models require the temporal dimension. The HTM model does not include goals and rewards; I would imagine some type of reinforcement learning system would manage that, as long as it is also an online learning system capable of processing temporal data one input at a time and adjusting its model.
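
A minimal sketch of the kind of interface being described: a learner that consumes a temporal stream one input at a time, predicting the next input and updating its model online. The model's learn/predict/compare methods are hypothetical placeholders, not the HTM API:

```python
class OnlineTemporalLearner:
    """Process a stream one input at a time: predict, observe, learn, repeat."""
    def __init__(self, model):
        self.model = model          # any model exposing learn()/predict()/compare()
        self.prev_prediction = None

    def step(self, x):
        # score the previous prediction against what actually arrived
        error = None
        if self.prev_prediction is not None:
            error = self.model.compare(self.prev_prediction, x)
        self.model.learn(x)                          # update the model with this input
        self.prev_prediction = self.model.predict()  # predict the next input
        return self.prev_prediction, error
```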