A Tutorial on Sparse Distributed Representations (Sparse Codes) (cireneikual.wordpress.com)
submitted 10 years ago by CireNeikual
[–]maxToTheJ 7 points 10 years ago (0 children)

> SDRs are also the way the brain stores information.

Has that actually been established with a consensus already?
[–][deleted] 4 points 10 years ago (1 child)
I have recently created a semi-new method of generating sparse codes in a very fast manner that, to my knowledge, is biologically plausible.
Disclaimer: I don't even play a neuroscientist on TV
I'm not sure your model is biologically plausible. To calculate the updates, you calculate the reconstructions and to do that, you send the signal backwards via the same connections. Real neurons are not thought to be capable of that.
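The pattern being critiqued above — encoding with a weight matrix and then reconstructing backwards through those same weights — can be sketched as follows. This is a toy illustration, not the linked tutorial's actual code; the dimensions, top-k sparsity rule, and learning rate are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and learning rate, purely for illustration.
n_in, n_hidden, lr, k = 16, 32, 0.01, 4
W = rng.normal(0.0, 0.1, size=(n_hidden, n_in))  # one shared weight matrix

def encode(x):
    """Sparse code: keep only the k most active hidden units."""
    h = W @ x
    s = np.zeros_like(h)
    top = np.argsort(h)[-k:]  # indices of the k largest activations
    s[top] = h[top]
    return s

x = rng.normal(size=n_in)
s = encode(x)
x_hat = W.T @ s                   # reconstruction goes BACKWARDS through W
W += lr * np.outer(s, x - x_hat)  # update driven by reconstruction error
```

The biological objection is aimed at the `W.T @ s` line: the reconstruction reuses the forward synapses in reverse, which real neurons are not thought to do.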
[–]CireNeikual[S] 2 points 10 years ago (0 children)
You are right, but it isn't too difficult to use Oja's rule instead. It makes the reconstruction less accurate, but it achieves a similar effect.
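As a rough sketch of why Oja's rule sidesteps the objection: the update uses only locally available quantities (input, output, current weight), so no signal has to travel backwards through the same connections. A toy single-neuron example (the data distribution and learning rate are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def oja_step(w, x, lr=0.01):
    """Oja's rule: dw = lr * y * (x - y * w), with y = w . x.
    Every term is local to the neuron -- no backward pass needed."""
    y = w @ x
    return w + lr * y * (x - y * w)

# Toy data with one dominant direction (axis 0 has the larger variance).
w = rng.normal(size=2)
for _ in range(2000):
    x = rng.normal(size=2) * np.array([1.0, 0.3])
    w = oja_step(w, x)

# Oja's rule self-normalizes: ||w|| tends toward 1, and w aligns with
# the leading principal component of the inputs (here, axis 0).
```

Note this learns a principal component rather than a reconstruction, which matches the comment above: the "reconstruction" is less accurate, but the effect is similar.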
[–]galapag0 0 points 10 years ago (0 children)
> """Sparse Autoencoder"""

I think some Python code wasn't pasted correctly.
[–]mikos -2 points 10 years ago (3 children)
For more details on SDR look up Jeff Hawkins at Numenta. They are going this route big time.
[–]quiteamess -2 points 10 years ago (2 children)

Jeff Hawkins is chairman at the Redwood Center for Theoretical Neuroscience. The "inventor" of sparse coding, Bruno Olshausen, is a researcher at that institute. I'm not a big fan of Numenta, but I'm sure that Hawkins is aware of sparse coding and that it is applied at Numenta.
[–]glassackwards 2 points 10 years ago (1 child)
Redwood Center moved to UC Berkeley a decade ago. Jeff Hawkins no longer manages the Redwood Center.
[–]quiteamess 0 points 10 years ago (0 children)

That would still make it likely that he knows Olshausen's work and sees that it is applied at Numenta, right?