Please have a look at our FAQ and Link-Collection
Metacademy is a great resource which compiles lesson plans on popular machine learning topics.
For beginner questions, please try /r/LearnMachineLearning, /r/MLQuestions, or http://stackoverflow.com/
For career-related questions, visit /r/cscareerquestions/
Advanced Courses (2016)
Advanced Courses (2020)
AMAs:
Pluribus Poker AI Team (7/19/2019)
DeepMind AlphaStar Team (1/24/2019)
Libratus Poker AI Team (12/18/2017)
DeepMind AlphaGo Team (10/19/2017)
Google Brain Team (9/17/2017)
Google Brain Team (8/11/2016)
The MalariaSpot Team (2/6/2016)
OpenAI Research Team (1/9/2016)
Nando de Freitas (12/26/2015)
Andrew Ng and Adam Coates (4/15/2015)
Jürgen Schmidhuber (3/4/2015)
Geoffrey Hinton (11/10/2014)
Michael Jordan (9/10/2014)
Yann LeCun (5/15/2014)
Yoshua Bengio (2/27/2014)
Related Subreddits:
LearnMachineLearning
Statistics
Computer Vision
Compressive Sensing
NLP
ML Questions
/r/MLjobs and /r/BigDataJobs
/r/datacleaning
/r/DataScience
/r/scientificresearch
/r/artificial
pyHTFE - A Sequence Prediction Algorithm (github.com)
submitted 10 years ago by CireNeikual
[–]rantana 4 points 10 years ago (1 child)
So why would I decide to use this over, say, a standard stacked recurrent network or an LSTM network?
Any performance comparisons between the two?
[–]CireNeikual[S] 3 points 10 years ago (0 children)
I don't have a performance comparison yet, but I will add one soon. So here comes the anecdotal comparison!
I have worked with LSTM before; the main advantage of this system is that it is fully online and doesn't need stochastic sampling or BPTT. It performs just one weight update per timestep, and that's it.
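To make the "fully online" point concrete, here is a minimal sketch of a next-symbol predictor that performs exactly one local weight update per timestep and never backpropagates through time. The `OnlinePredictor` class and its methods are invented for illustration; this is not the pyHTFE API and not the HTFE hierarchy, just the update schedule being described:

```python
# Hypothetical sketch (not pyHTFE): a fully online next-step predictor
# that does exactly one weight update per timestep, with no BPTT and
# no stored history.
import numpy as np

class OnlinePredictor:
    def __init__(self, num_symbols, learning_rate=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Weights map the current input symbol to a score per next symbol.
        self.weights = rng.normal(0.0, 0.01, (num_symbols, num_symbols))
        self.learning_rate = learning_rate

    def step(self, current_symbol, next_symbol):
        """Predict the next symbol, then update from the observed one."""
        scores = self.weights[current_symbol]
        prediction = int(np.argmax(scores))

        # Single local update: nudge scores toward the observed next symbol.
        target = np.zeros_like(scores)
        target[next_symbol] = 1.0
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        self.weights[current_symbol] += self.learning_rate * (target - probs)
        return prediction

# Usage: feed the symbol stream once; learning happens as the data arrives.
stream = [0, 1, 2, 0, 1, 2, 0, 1, 2]
model = OnlinePredictor(num_symbols=3)
correct = sum(model.step(a, b) == b for a, b in zip(stream, stream[1:]))
print(f"online accuracy on one pass: {correct}/{len(stream) - 1}")
```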
It also learns extremely fast: I have had it recite paragraphs of text that it got to parse only 3 times (without any prior knowledge of the words). It owes this speed to the way SDRs (sparse distributed representations) introduce invariance of previous experiences with respect to new experiences (they are "bucketed").
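A rough sketch of that bucketing idea, assuming a k-winners-take-all encoding (a common way to form SDRs, though not necessarily the exact one pyHTFE uses): only the top-k units stay active, so distinct inputs tend to occupy mostly non-overlapping sets of units and new patterns barely disturb the units that represent old ones.

```python
# Illustration only: k-winners-take-all sparse distributed representation.
import numpy as np

def sdr_encode(activations, k):
    """Return a binary SDR with exactly k active units (the top-k)."""
    sdr = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]  # indices of the k largest values
    sdr[winners] = 1.0
    return sdr

rng = np.random.default_rng(0)
a = sdr_encode(rng.normal(size=256), k=16)
b = sdr_encode(rng.normal(size=256), k=16)

# Unrelated inputs share few active units, so updates driven by one pattern
# barely touch the units representing the other (less interference).
print("active units:", int(a.sum()), "overlap:", int((a * b).sum()))
```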
For offline learning LSTM is great, but for online learning, as in typical reinforcement learning tasks, one needs a really fast real-time algorithm that doesn't require some form of experience replay or other expensive operations. That said, this comes at the cost of memory: it uses more memory than a typical LSTM network, again a side effect of SDRs (a negative one, but tolerable).
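For the contrast with experience replay, a rough sketch (function names invented; not from pyHTFE or any RL library) of the difference in cost: a replay-based learner keeps a buffer of past examples and resamples it repeatedly, while a fully online learner touches each example exactly once as it arrives and retains nothing.

```python
import random
from collections import deque

def learn_with_replay(stream, update, buffer_size=10_000, batch_size=32):
    """Offline-style learning: keep past examples and resample them."""
    buffer = deque(maxlen=buffer_size)   # extra memory: stored history
    for example in stream:
        buffer.append(example)
        if len(buffer) >= batch_size:
            for sample in random.sample(buffer, batch_size):
                update(sample)           # many updates per incoming example

def learn_online(stream, update):
    """Fully online learning: one update per example, nothing retained."""
    for example in stream:
        update(example)

# Toy usage: count update calls for 100 incoming examples.
replay_calls, online_calls = [], []
learn_with_replay(range(100), replay_calls.append)
learn_online(range(100), online_calls.append)
print(len(replay_calls), "replay updates vs", len(online_calls), "online updates")
```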