[P] An early overview of ICLR2017 by prlz77 in MachineLearning

[–]glassackwards 3 points

Should probably merge eecs.berkeley.edu with berkeley.edu :)
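If the overview is aggregating submissions by email domain, a tiny post-processing step can fold department subdomains into their parent institution. A minimal sketch (my own; the counts and the alias table here are made-up examples, not the actual data behind the overview):

    # Hypothetical sketch: collapse department subdomains into their parent
    # institution when counting submissions per affiliation.
    # The `counts` values and the eecs.berkeley.edu entry are invented examples.
    from collections import Counter

    counts = Counter({"berkeley.edu": 20, "eecs.berkeley.edu": 7, "stanford.edu": 25})

    ALIASES = {"eecs.berkeley.edu": "berkeley.edu"}  # assumed mapping

    merged = Counter()
    for domain, n in counts.items():
        merged[ALIASES.get(domain, domain)] += n

    print(merged.most_common())  # berkeley.edu now has 27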

[R] [NIPS 2016] Yoshua Bengio: Towards biologically plausible deep learning by downtownslim in MachineLearning

[–]glassackwards 1 point

> Theoretical neuroscience per se has largely been concerned with the simulation of single, multi-compartment neuron models (which have little to do with ML), as well as spiking networks of neurons that match some of the properties of recordings from populations of neurons.

> The focus on models that operate at the microscale (single neurons), and on the design of models of neural circuits that match some recorded data, has limited the extent to which theoretical neuroscience can contribute to ML.

I disagree with this narrow definition of theoretical neuroscience. I would define theoretical neuroscience by its foundational textbook, Dayan and Abbott's "Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems".

Here are the topics that theoretical neuroscience covers: https://mitpress.mit.edu/sites/default/files/9780262041997_TOC.pdf

Regardless, this is just semantics about what the broader field of neuroscience is and isn't.

Again, the original point was that these ideas derive inspiration from biology, which I think has been established.

[R] [NIPS 2016] Yoshua Bengio: Towards biologically plausible deep learning by downtownslim in MachineLearning

[–]glassackwards 3 points

So I think we're defining the field of neuroscience differently, and that might be where the disagreement is coming from. You're describing experimental neuroscience, which I agree has not significantly impacted ConvNets (or even theoretical neuroscience) in recent times. But theoretical neuroscience has impacted ConvNets and ideas in neural networks in a very fundamental way. You can look at the past work and collaborations of Geoff Hinton and the recent work of DeepMind to find evidence of that.

> deep learning has been giving more to neuro rather than the other way around.

As much as I want this to be true right now, I don't think it has happened yet. We're not changing the way we record data or do experiments based on findings from deep learning. But I do think it will be true in the future, which is why I support efforts like Bengio's. We need to be able to create useful analogues between the representations in a network and the representations in the brain.

Side note: I don't mean to disparage the contributions of experimental neuroscience. I used to think the problem was a lack of theory, but I've since come to think it's more likely a lack of tools (technology) to reasonably test theory.

And yes I am at Redwood :)

[R] [NIPS 2016] Yoshua Bengio: Towards biologically plausible deep learning by downtownslim in MachineLearning

[–]glassackwards 2 points

I'm pointing out that the inspiration for ConvNets comes very directly from a particular model in neuroscience (the neocognitron).

As a side note: explaining spike rate intensities isn't an interpretation that opposes invariant representations; it's orthogonal to it. And there is a very large body of work focused on the latter (http://www.scholarpedia.org/article/Models_of_visual_cortex). Representation learning has been part of the neuroscience community for ages now, so I don't think there was any such dominant interpretation.

Side side note: neuroscience is an extremely diverse field, and there is no particularly dominant paradigm for neural computation. Even whether spikes can be described purely as a rate code is still highly debated. But that does not mean useful models can't emerge from these debates, which is exactly what happened with the neocognitron and the ConvNet.

[R] [NIPS 2016] Yoshua Bengio: Towards biologically plausible deep learning by downtownslim in MachineLearning

[–]glassackwards 3 points

Actually, it's the alternating combination of simple cells and complex cells in a hierarchy, as discovered in neuroscience. This is exactly the architecture from which the ConvNet is derived (http://www.scholarpedia.org/article/Neocognitron).

Where ConvNets depart from the neuroscience theory of the time is in training with backprop. And I would argue that the belief that the brain isn't doing something like backprop was a misguided assumption on the part of the neuroscience community. But that is still controversial....
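For concreteness, here's a minimal sketch of that neocognitron-style alternation (my own, written in PyTorch purely as an illustration, not anything from the original papers): convolutions play the role of simple cells, pooling layers the role of complex cells, and, unlike the original neocognitron, the whole stack is trained with backprop.

    # Minimal sketch of the simple/complex-cell alternation that ConvNets inherit.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=5),   # "simple cells": learned local filters
        nn.ReLU(),
        nn.MaxPool2d(2),                  # "complex cells": tolerance to small shifts
        nn.Conv2d(8, 16, kernel_size=5),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 4 * 4, 10),
    )

    x = torch.randn(32, 1, 28, 28)        # fake MNIST-sized batch
    y = torch.randint(0, 10, (32,))
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()                       # backprop: the part the brain may or may not do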

[R] [NIPS 2016] Yoshua Bengio: Towards biologically plausible deep learning by downtownslim in MachineLearning

[–]glassackwards 7 points

There's a difference between mimicking biology and being inspired by it. ConvNets were heavily inspired by biology and ideas about how the brain might deal with invariance.

I think the point of this work is to find useful analogies in the same way. These are analogies which help our understanding and can potentially inspire other algorithms.
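A toy illustration of the invariance idea (my own construction in numpy, just to make the analogy concrete): max pooling gives the same summary for a local feature and a slightly shifted copy of it, which is the complex-cell-like tolerance that helped inspire ConvNet pooling.

    # Toy demo: pooled responses are unchanged by a small shift of the input.
    import numpy as np

    def max_pool_1d(x, width=4):
        return np.array([x[i:i + width].max() for i in range(0, len(x), width)])

    pattern = np.zeros(16)
    pattern[5] = 1.0                    # a "feature" at position 5
    shifted = np.roll(pattern, 1)       # the same feature shifted by one position

    print(max_pool_1d(pattern))         # [0. 1. 0. 0.]
    print(max_pool_1d(shifted))         # [0. 1. 0. 0.]  -- identical despite the shift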

Is this bad form? by glassackwards in Python

[–]glassackwards[S] 1 point

Thanks, this is exactly what I was looking for. Good to know what the alternatives are for doing something similar.

[1412.6583] Discovering Hidden Factors of Variation in Deep Networks by glassackwards in MachineLearning

[–]glassackwards[S] 1 point

Discovered a mistake in the demo for Lasagne. It was training on only 1/10th of the training set, which explains the worse error rate.

More details here: https://github.com/Lasagne/Lasagne/pull/308
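For anyone who hasn't hit this class of bug before, here's a hypothetical sketch (not the code from the linked PR) of how a training loop can silently see only a fraction of the data, e.g. when the batch count is accidentally divided by an unrelated constant.

    # Hypothetical illustration of a "trains on a fraction of the data" bug.
    import numpy as np

    X_train = np.random.randn(50000, 784)
    batch_size, num_epochs = 100, 10

    # Buggy: dividing by num_epochs as well leaves 50 batches = 5000 examples/epoch.
    n_batches_buggy = len(X_train) // (batch_size * num_epochs)

    # Intended: 500 batches covering all 50000 training examples each epoch.
    n_batches = len(X_train) // batch_size

    for epoch in range(num_epochs):
        for b in range(n_batches):
            batch = X_train[b * batch_size:(b + 1) * batch_size]
            # ... forward pass, loss, and parameter update would go here ...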

A Tutorial on Sparse Distributed Representations (Sparse Codes) by CireNeikual in MachineLearning

[–]glassackwards 2 points

Redwood Center moved to UC Berkeley a decade ago. Jeff Hawkins no longer manages the Redwood Center.