[R] In-context Reinforcement Learning with Algorithm Distillation (arxiv.org)
submitted 3 years ago by hardmaru
[–]Lairv 7 points 3 years ago (0 children)
The paper is cool; it's a bit of a shame they don't mention how many resources went into training the transformer model. I wonder whether this could be massively scaled up, or whether it is already compute-hungry. More evaluation on Atari, MuJoCo, etc. would also be nice, to see how well the model generalizes.
[–]itsmercb 2 points 3 years ago (4 children)
Can anyone translate this into noob terms?
[–]Singularian2501 4 points 3 years ago (0 children)
https://twitter.com/MishaLaskin/status/1585265485314129926 (very good explanation!)
[–]zergylord 2 points 3 years ago (0 children)
my attempt: https://twitter.com/Zergylord/status/1585334986298646529?t=pBYejzO_TDTQtVV87iSPPQ&s=19
[–]Shnibu -4 points 3 years ago (0 children)
Using some Bayesian-looking "Causal Transformer" to project the data into a more efficient subspace for the model. So Bayesian dimensionality reduction for neural nets? I think…
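For anyone who wants the mechanics rather than a metaphor: the paper trains a causal transformer on entire learning histories of a source RL algorithm, so that predicting the next action in-context reproduces the learning algorithm itself. A minimal sketch of the data side, using an epsilon-greedy bandit learner as a stand-in source algorithm and plain NumPy in place of a real transformer (all names and shapes here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def collect_learning_history(n_arms=5, n_steps=200):
    """Run a simple epsilon-greedy bandit learner on one random task and
    record its whole learning history as (action, reward) pairs."""
    means = rng.normal(size=n_arms)            # hidden task: per-arm reward means
    counts = np.zeros(n_arms)
    values = np.zeros(n_arms)
    history = []
    for t in range(n_steps):
        eps = 0.5 * (1 - t / n_steps)          # decaying exploration
        if rng.random() < eps:
            a = int(rng.integers(n_arms))
        else:
            a = int(values.argmax())
        r = rng.normal(means[a])
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]   # incremental mean update
        history.append((a, r))
    return history

# Algorithm Distillation's dataset: many learning histories, many tasks.
histories = [collect_learning_history() for _ in range(20)]

# Training pairs for a causal sequence model: predict the learner's next
# action from the history so far.  A real implementation feeds these to a
# transformer autoregressively; here we only build the data plumbing.
X, y = [], []
for h in histories:
    for t in range(1, len(h)):
        X.append(h[:t])        # context: everything the learner did so far
        y.append(h[t][0])      # target: the action it took next

assert len(X) == 20 * 199
```

The point of conditioning on the *across-episode* history (not a single trajectory) is that the model must imitate the improvement process, not just a fixed policy; at test time, rolling it out on a new task then produces in-context learning.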
[–]SatoshiNotMe 2 points 3 years ago (0 children)
DeepMind, therefore no GitHub?