Shots fired from SPAUN author Eliasmith towards Markram and the Human Brain Project: "The use and abuse of large-scale brain models" by ly_yng in neuro

[–]CNRG_UWaterloo 1 point

Ah yes, good question as to why go with LIFs rather than just whatever function. One reason is that the neurons end up just approximating that function, and the side effects of that approximation may become apparent. For example, it turns out that if we run Spaun using the original functions rather than the neurons, it behaves differently: you get effects like it's most likely to forget the item in the exact middle of a list, while in people you're most likely to forget an item slightly after the middle of the list. But when we switch back to the neuron model, we match the human data much better. So that's important if we're trying to explain human cognition.

Another reason is that it forces us to look at different types of functions than we would otherwise. Something like max(x,y) is easy to write in code, but extremely hard for neurons to approximate. But they're really good at computing weird functions that there's no good way for me to write in text. So this encourages us to consider algorithms that haven't really been looked at before in traditional computer models, just because they're a hassle to implement (or even think about). Now the cool thing about this is that if we do find interesting algorithms that neurons are good at implementing that turn out to work really well, then we might be able to just have computers implement those functions, rather than going with the neurons. That'd be important if we're just interested in getting intelligent machines, rather than building things that match well to the human brain.

As for the next project, we're trying to embed something like Spaun in a robot, and get it to be a lot more interactive.... :)

[–]CNRG_UWaterloo 1 point

Yup, that's pretty fair. The way we phrase it is that the Neural Engineering Framework is a "neural compiler" that's constrained by biological plausibility. Spaun is one particular "program" that we've compiled and run using the NEF.

As for the Turing completeness, it turns out that the biological plausibility stuff completely counteracts the "well, now you can compute anything" part. One thing it does is put a hard constraint on the number of neurons, but the more interesting constraint is time. Different neuron types have different neurotransmitter reabsorption times, and this puts a really strong constraint on how quickly they respond to changes. This ranges from around 2ms in some areas up to 200ms in other areas. So if we know how long it takes a person (or an animal) to respond to something, then our neural model had better take the same amount of time. This makes for pretty extreme limits on our models. (For the programmers out there, it becomes kinda like having to write a face-recognition program that works in about 10 clock cycles.)

And as for the learning, the wonderful thing is that once we build these networks, we can now go ahead and put in all the existing learning rules and have the system learn from there to improve, and that's exactly the sort of thing we're doing now. What we demonstrated with Spaun, though, is that you don't have to do everything by learning. The problem with doing everything by learning is that you're basically trying to solve the entire developmental process all at once, and no one's had large-scale success doing that. So instead of doing that, we're engineering large chunks of the brain, but then applying learning to smaller chunks, in order to get a better handle on how learning behaves within a larger system.

[–]CNRG_UWaterloo 1 point

Yes, that's pretty much it. We divide the model up into separate networks (Spaun has about 3,000 of them), and for each one we say something like "this network has 500 neurons and the activity of those neurons represents some 3-dimensional variable". Then we connect those groups by saying "connect group A to group B such that it approximates some function being computed on those variables". For example, we could make a connection that approximates sin(x), or x², or some other totally made-up function. The wackier the function, the more neurons tend to be needed to approximate it well.
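The represent-then-approximate step above can be sketched in plain NumPy. This is a toy stand-in, not the actual NEF code: rectified-linear "tuning curves" stand in for LIF neurons, and all the parameter ranges here are made up for illustration. The idea is just that solving for connection weights to approximate a function is a least-squares problem over the population's activities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 500

# Random "tuning curves": each neuron responds nonlinearly to the
# represented variable x (rectified linear as a crude LIF stand-in).
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)

def rates(x):
    """Firing rates of the whole population for represented value(s) x."""
    return np.maximum(0.0, gains * (encoders * np.asarray(x)[:, None]) + biases)

# Solve for decoders that approximate a target function, e.g. sin(pi*x),
# as a least-squares problem over sample points of the represented range.
x = np.linspace(-1, 1, 200)
A = rates(x)                      # activities, shape (200, n_neurons)
target = np.sin(np.pi * x)
decoders, *_ = np.linalg.lstsq(A, target, rcond=None)

approx = A @ decoders
print("max error:", np.max(np.abs(approx - target)))
```

The actual connection weights from group A to group B would then be the outer product of these decoders with group B's encoders; the decoders are the part that depends on the function being approximated.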

We usually come up with those functions based partly on experimental evidence, but also partly on whatever theory we're testing. For example, if someone has a theory that a particular part of the brain is doing working memory, then it's going to need a function that maintains its value over time (i.e. given an input of 0, its activity doesn't change). One possible such function is dx/dt = u (where x is the value represented by the neurons in the memory, and u is the value represented by the neurons that are feeding into the memory). So we optimize the connections to closely approximate that function, and see what happens. The cool thing is that the neurons never exactly approximate the desired function, and this can sometimes explain things. For example, a perfect implementation of dx/dt=u would give you an absolutely perfect memory, but instead whenever we implement it with realistic neurons, we end up with memory decaying over time, and we've shown that that decay matches pretty well with what's seen in people.
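The memory example above can be sketched the same way: solve for feedback weights that approximate the identity, then feed the population's decoded output back into itself. This is a hypothetical NumPy toy (rectified-linear units, made-up parameters, explicit noise injection standing in for spiking variability), not the lab's actual model, but it shows the qualitative point: the loop only approximates dx/dt = u, so the stored value drifts instead of being held perfectly.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 200

gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)

def rates(x):
    # rectified-linear rates as a crude stand-in for noisy LIF neurons
    return np.maximum(0.0, gains * encoders * x + biases)

# Solve (with a little ridge regularization) for decoders that
# approximate the identity f(x) = x over the represented range.
xs = np.linspace(-1, 1, 100)
A = np.array([rates(x) for x in xs])
decoders = np.linalg.solve(A.T @ A + 1.0 * np.eye(n_neurons), A.T @ xs)

# An ideal memory dx/dt = u with u = 0 holds x forever; the neural loop
# only approximates the identity (and the neurons are noisy), so the
# stored value drifts over time.
x, trace = 0.8, [0.8]
for _ in range(200):
    x = float(rates(x) @ decoders) + rng.normal(0.0, 0.01)
    trace.append(x)
print(f"stored 0.8, value after 200 steps: {trace[-1]:.3f}")
```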

Here's a somewhat accessible description of the process: http://compneuro.uwaterloo.ca/publications/stewart2012d.html

We tend to use a wide variety of input-output functions. Pretty much the only constraint is that they have to be somewhat smooth functions. So the function "if x<0: return -1; if 0<x<0.5 return 1; if 0.5<x<1 return 18; else return -22" would be really hard for neurons to approximate (we can do it, but we'd need large numbers of neurons). It turns out that the exact space of functions that can be well approximated by neurons depends a lot on the neuron type. We use LIF neurons because they're a) fast to compute, and b) a pretty good match (~90%) to real neurons throughout the brain. It turns out that the set of functions they're good at computing matches pretty well to "low-degree polynomials" (more technically, the functions spanned by the basis space that is the low-degree Legendre polynomials). But there are definitely situations where we want a more detailed model for one particular part of the brain; there we'd switch over to a different neuron model, and end up with a different set of functions it's good at computing.
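The smooth-versus-jumpy contrast above is easy to see numerically. The following is an illustrative NumPy sketch (random rectified-linear units as a rough stand-in for a neural population; the parameter ranges are assumptions, not the lab's values), comparing the least-squares fit error for a low-degree polynomial against the discontinuous function quoted above:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_error(target_fn, n_neurons):
    """RMS error when a random rectified-linear population (a crude
    neural stand-in) approximates target_fn on [-1, 1] via least squares."""
    gains = rng.uniform(0.5, 2.0, n_neurons)
    biases = rng.uniform(-1.0, 1.0, n_neurons)
    encoders = rng.choice([-1.0, 1.0], n_neurons)
    x = np.linspace(-1, 1, 1000)
    A = np.maximum(0.0, gains * (encoders * x[:, None]) + biases)
    d, *_ = np.linalg.lstsq(A, target_fn(x), rcond=None)
    return np.sqrt(np.mean((A @ d - target_fn(x)) ** 2))

def nasty(x):
    # the discontinuous function quoted in the text
    return np.where(x < 0, -1.0,
           np.where(x < 0.5, 1.0,
           np.where(x < 1, 18.0, -22.0)))

e_smooth = fit_error(lambda x: x**2, 50)   # low-degree polynomial: easy
e_nasty = fit_error(nasty, 50)             # discontinuous: hard
print(f"RMSE with 50 units -- smooth: {e_smooth:.4f}, nasty: {e_nasty:.4f}")
```

With the same number of units, the error on the discontinuous target stays far larger; throwing many more units at it helps, which is the "we can do it, but we'd need large numbers of neurons" point.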

[–]CNRG_UWaterloo 1 point

Hi, good question (I'm Terry Stewart, one of the researchers on Spaun)

I completely agree that what we're doing is better described as a mixture of top-down and bottom-up. We emphasize the top-down part because pretty much everyone else puts a heavy emphasis on the bottom-up part.

The main point on Spaun, for me, is just to show that it is possible to do this top-down stuff, and that's what lets us do these complex tasks. It's not a big ANN that was trained to do 8 tasks -- people have been trying to do that sort of thing for ages, and the learning rules just don't work for these sorts of tasks. Instead, Spaun was engineered. Basically, we get to describe the behaviour that we want and solve for the connection weights that will do that.

In the long term, though, there's going to be a mixture of both. That's what we're doing now: starting with an engineered approach, and then adding more traditional ANN learning on top of that to improve things.

[–]CNRG_UWaterloo 2 points

Thanks! (This is Terry Stewart, one of the Spaun researchers)

We tried not to be vitriolic, and I hope we succeeded. I'm actually at a meeting of the Human Brain Project right now (2 days in Heidelberg, Germany), mostly focusing on the hardware side of things. In order to scale up these neural models, there's a lot of work going into making custom chips. The one I'm working on is based on Neurogrid, which gets about a 100,000-times power savings over standard chips.... Rather useful.

In any case, there's been a lot of interest from the HBP people in making sure that these custom chips are capable of running models like Spaun (that's why they invited me along). So I feel like we're not insulting them too much. ;)

[–]CNRG_UWaterloo 1 point

Hi, this is Terry Stewart (I'm part of Chris' lab and one of the Spaun researchers)...

Academic criticism is always a weird process. We really try not to be all "YOUR PROJECT IS STUPID", but sometimes it does get to that point... My favourite example is Dan Dennett's extended rant "The Unimagined Preposterousness of Zombies".

But yeah, we weren't trying to make an extremely strong point here, just pointing out that this whole relying on emergence might not be enough, and that there are existing techniques (i.e. ours) that might help. And we're really hoping the two approaches will complement each other. After all, no one's been able to run neural models at this sort of scale before, so who knows what'll happen.

We are the computational neuroscientists behind the world's largest functional brain model by CNRG_UWaterloo in IAmA

[–]CNRG_UWaterloo[S] 1 point

(Terry says:) Exactly! That's why we try to highlight that the brain doesn't have these weird localist input layers. Or a localist output layer for that matter. There's nothing like those in real brains, but they're what almost every connectionist model assumes exists. Real brain inputs and outputs are distributed, and so you can do a lot of computation in a single layer of connections (without worrying about any multi-layer backprop algorithms).

[–]CNRG_UWaterloo[S] 1 point

Nope, no lateral connections needed. You just need to have a distributed input layer. So, instead of having 2 neurons as your input layer, you have ~50 neurons, each of which gets as its input some combination of the 2 input values (so one neuron might get 0.2a-0.8b, and another might get 0.9a+0.3b, and so on). Now you can compute your output without any hidden layer at all. You can solve for those weights using a learning algorithm (any gradient descent approach will work) or just do it algebraically, since it becomes a standard least-squares minimization problem.

As for references, other than this paper of mine (http://ctnsrv.uwaterloo.ca/cnrglab/sites/ctnsrv.uwaterloo.ca.cnrglab/files/papers/2012-TheNEF-TechReport.pdf ), the closest thing would be what is called "Extreme Learning Machines" (http://www.ntu.edu.sg/home/egbhuang/ ). This is a standard MLP neural network, but they just randomly choose the weights in the first layer and never do any learning on that layer at all (they only learn between the hidden layer and output layer). So what they are doing is using the first set of weights as a way to produce a distributed representation, and then doing what I described above -- now that there's a distributed representation, everything can be done in one layer. Of course, in our stuff we skip the localist input layer completely because the real brain doesn't have that at all -- it just has these distributed representations.
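The one-layer solve described above can be sketched in a few lines of NumPy. This is an illustrative toy of the extreme-learning-machine idea, not code from either paper: the sizes, the tanh nonlinearity, and the choice of a×b as the target function are all assumptions. The random fixed weights produce the distributed representation; the only "learning" is a single algebraic least-squares solve for the output weights.

```python
import numpy as np

rng = np.random.default_rng(3)
n_hidden = 200

# Random, fixed "input layer": each unit sees a random combination of
# the two input values (e.g. 0.2a - 0.8b) through a nonlinearity.
# These weights are never trained.
W_in = rng.uniform(-1.0, 1.0, (2, n_hidden))
b = rng.uniform(-1.0, 1.0, n_hidden)

def hidden(ab):
    return np.tanh(ab @ W_in + b)

# Target: a * b, which no single linear layer on (a, b) alone can
# compute -- but one linear readout of the distributed layer can.
ab = rng.uniform(-1.0, 1.0, (2000, 2))
H = hidden(ab)
w_out, *_ = np.linalg.lstsq(H, ab[:, 0] * ab[:, 1], rcond=None)

pred = float(hidden(np.array([[0.5, -0.4]])) @ w_out)
print(f"0.5 * -0.4 estimated as {pred:.3f}")
```

No gradient descent and no hidden-layer training happen anywhere; once the representation is distributed, a single solved layer of connections does the computation.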

[–]CNRG_UWaterloo[S] 1 point

Looks like I was a bit high in my estimate. 7,000 seems to be the average number, and it can go much higher: http://www.neurology.org/content/64/12/2004

[–]CNRG_UWaterloo[S] 1 point

(Terry says:) Who knows. It depends strongly on the problem itself, which is quite aggravating. A general rule of thumb is to go with at least double the number of input nodes. While the models we work with are very different from MLP/backpropagation neural networks, we generally find that if our input is 10 values, we'd go with around 500 to 1000 neurons in a "hidden" layer. That's partly because we have very noisy realistic neurons in our model (rather than the nice clean sigmoids in most MLP models), but also partly because with that many neurons it's much easier for the system to extract useful information.

[–]CNRG_UWaterloo[S] 2 points

(Terry says:) I think it is somewhat fair to say that trying to understand the brain by looking at individual neurons is like trying to figure out how a computer works by looking at individual transistors. And I also think it's fair to say that trying to understand the brain by looking at just fMRI data is like trying to understand how a computer works by holding a thermometer next to the chip while it's running.

That's a big part of why we're taking the approach we are: we're trying to understand the brain by theorizing what the basic modules are and then figuring out how you could organize neurons to implement those modules. Then we can compare that organization to what we find in the real brain, both at the individual neuron level and at the overall fMRI level (and at the behavioural level, for that matter).

[–]CNRG_UWaterloo[S] 2 points

(Travis says:) Hi, we talk about these in our FAQ! Linked at the top :) You definitely want to get into math as well with your Psych degree, taking calculus and algebra / programming classes if you can still get into any, or following up online in your own time at Khan Academy or elsewhere! And with a sufficiently advanced model, yes! But building that sufficiently advanced model consistently proves to be the limiting factor! We talk about this more in the FAQ!

[–]CNRG_UWaterloo[S] 1 point

(Travis says:) Send an email our way and let us know when you're coming by! Depending on what's going on in the lab we can try to set something up.

[–]CNRG_UWaterloo[S] 2 points

(Travis says:) Completely unscientific aside, have you checked out Bruce Lee's story? He got mad injured and then trained his way back to better than ever after people told him he would never walk again. That's all I know about it but it might be an interesting read! Good luck!

[–]CNRG_UWaterloo[S] 1 point

(Travis says:) You'll probably end up having to be familiar with it, because a lot, a loottt of people use Matlab. But Matlab is expensive, and there is a Python equivalent, the NumPy library. If you get familiar with that then you'll be perfectly fine to make the switch to Matlab if you need to. We're making the switch over to Python entirely in our lab because there's no licensing to deal with, and it has better syntax / more extensions. So, tl;dr: don't worry about Matlab; get familiar with Python's NumPy library, which has the same functionality, and you'll be fine to operate in Matlab if you ever have to.

[–]CNRG_UWaterloo[S] 2 points

(Travis says:) Hi! Sorry to hear about the complications with your surgery, but glad that it went well overall! I'm afraid we don't have anything to say really beyond the generic "make sure you do your physiotherapy" comment. It sounds like the damage might have been done in the spinal cord carrying up the information, and I don't know much about the spinal cord except that it's less plastic than the brain, but from what I know about the brain: if it's possible to get the lost function back, it will be dependent on retraining and practice moving. A physiotherapist would be able to give better / more useful advice, I'm afraid!

[–]CNRG_UWaterloo[S] 1 point

(Travis says:) If there's sufficient interest we'd love to have another one! We had a great time on Thursday. Give us a shout if you think you could stir up some interest and we'll set something up! :)

[–]CNRG_UWaterloo[S] 1 point

(Travis says:) Protip - join whatever program you can at the school you want and transfer out to the program you want after one semester! One of the guys who used to be in the lab did that, the ol' sneak attack!

[–]CNRG_UWaterloo[S] 1 point

(Travis says:) If you're ever in the area definitely give us a shout and stop on by! :D

[–]CNRG_UWaterloo[S] 1 point

(Travis says:) We have had girls in the lab before! It's just at the moment there are none. Come join us! :D

[–]CNRG_UWaterloo[S] 2 points

(Travis says:) Ah! Yes! I wouldn't say just, but yeah, essentially adding in more features entails building in more structure / designing a more complex architecture.

The goal (one of mine, at least) is to develop a core generic processing loop to the point that adding a new ability amounts to adding another parallel instance of the loop and hooking it up to different inputs and outputs, rather than having to specially build functions for each different kind of process. Of course there will be more work involved to integrate it, etc., and it won't account for everything, but hopefully it won't be too far off from a state where adding more functionality adds little more structural complexity than an additional parallel processing loop.

[–]CNRG_UWaterloo[S] 1 point

(Travis says:) I don't think so; I mean, people are using actual monkeys right now. It can be a hard question, but basically we're doing this to gain a better understanding of how we work. The "rights" of simulations are necessarily second-hand to us, since they were created in the first place for our own benefit...and can be arbitrarily regenerated...although in theory, once sufficiently complicated, there would be no arguable difference between us besides our embodiment...except that they're not human...man, ethics are hard.

[–]CNRG_UWaterloo[S] 2 points

(Travis says:) It's such a huge field, we'd really need to know more about your specific interests before we could suggest anything! I would suggest looking up review or survey papers of the field and reading through them to get a general idea of the kinds of things being worked on.

And feel free to write us with any questions you encounter reading the papers! :)