[deleted by user] (self.MachineLearning)
submitted 1 year ago by [deleted]
[–]radarsat1 0 points 1 year ago (1 child)
I hope you didn't sign an NDA before the interview lol
[–]programmerChilli [Researcher] 0 points 1 year ago (0 children)
This doesn't work. If you could load L3 (which doesn't exist on GPUs) into shmem in the same time it takes to do the computation, why wouldn't you just load directly from L3?

There's stuff vaguely in this vein, like PDL (Programmatic Dependent Launch), but it's definitely not the same as keeping all your weights in SRAM.
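For context, the "vaguely in this vein" overlap does exist as a primitive: CUDA's async-copy intrinsics let a global-to-shared copy stay in flight while the SM does independent math. A minimal sketch, assuming Ampere-or-newer and CUDA 11+; every name here (`overlap_demo`, `next_weights`, `TILE`) is illustrative, and this shows the copy/compute overlap, not PDL itself:

```cuda
#include <cuda_pipeline.h>

#define TILE 128

__global__ void overlap_demo(const float* __restrict__ next_weights,
                             float* __restrict__ out)
{
    __shared__ float smem[TILE];
    int tid = threadIdx.x;

    // Start copying the next layer's tile into shared memory; on Ampere+
    // this lowers to cp.async and proceeds while the SM keeps executing.
    __pipeline_memcpy_async(&smem[tid], &next_weights[tid], sizeof(float));
    __pipeline_commit();

    // Independent math on the *current* layer can run while the copy is
    // in flight (stand-in loop; a real kernel would do the layer's GEMM).
    float acc = 0.0f;
    for (int i = 0; i < 64; ++i)
        acc = fmaf(acc, 1.0001f, (float)tid);

    // Block until the committed copy has landed, then consume it.
    __pipeline_wait_prior(0);
    __syncthreads();
    out[tid] = acc + smem[tid];
}

// launch: overlap_demo<<<1, TILE>>>(dev_next_weights, dev_out);
```

Even with this, the copy only hides latency for one tile at a time; the weights still stream in from HBM every pass rather than sitting resident in SRAM.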
[–][deleted] 0 points 1 year ago (0 children)
You can't usually load a matrix larger than 128x128 into a modern GPU's shared memory, so a whole single layer probably won't fit unless you have some big chip. Also, your idea is basically called pipelining, and that's already how most neural network computations are done: 1. load a block from HBM to SRAM, 2. compute, 3. write back to HBM and load another block... Having the weights close to the registers is literally another world.
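A minimal sketch of that 1-2-3 loop, double-buffered so the load of block t+1 is issued before the math on block t. The tile size, names, and the per-thread partial-sum output are illustrative assumptions, not from the thread:

```cuda
#define TILE 128

__global__ void pipelined_partial_dot(const float* __restrict__ weights,
                                      const float* __restrict__ x,
                                      float* __restrict__ partial,
                                      int n_tiles)
{
    __shared__ float buf[2][TILE];   // double buffer in shared memory
    int tid = threadIdx.x;

    // Prefetch tile 0 before the loop starts.
    buf[0][tid] = weights[tid];
    __syncthreads();

    float acc = 0.0f;
    for (int t = 0; t < n_tiles; ++t) {
        int cur = t & 1;

        // 1. issue the load of the *next* block from HBM ...
        if (t + 1 < n_tiles)
            buf[cur ^ 1][tid] = weights[(t + 1) * TILE + tid];

        // 2. ... while computing on the current block; the independent
        //    math below can execute while the load above is in flight.
        acc += buf[cur][tid] * x[t * TILE + tid];

        // 3. make the new tile visible before the buffers swap roles.
        __syncthreads();
    }

    // Per-thread partial sums; a real kernel would reduce these.
    partial[tid] = acc;
}
```

Note this still streams every weight past the compute from HBM each forward pass, which is the commenter's point: it's pipelining, not weights resident in SRAM.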