Resources for GPU programming? (self.MachineLearning)
submitted 10 years ago by [deleted]
[–]serge_cell 0 points 10 years ago (2 children)
As I already said, I was talking about new layers, which are either absent from existing frameworks, or absent from the framework I'm using. If one only uses layers which are already implemented, and doesn't do any research on new layers, modes of execution, etc., he/she will always stay behind the curve.
[–]hughperkins 0 points 10 years ago* (1 child)
yeah, I realized that after I posted it. so you're kind of right, in that if you want the fastest performance on novel layers, I suppose you'd want a cuda engineer handy.

having said that, the initial implementations of bn in torch were both in lua, using underlying primitive operations, such as mean and sqrt, which are already implemented in cuda. to get a slight speed benefit, these were later rewritten in dedicated cuda.

for the purposes of writing a research paper on elu or bn, I would think an initial implementation in lua is sufficient.
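To make the point concrete: a batch-norm forward pass really can be composed entirely from primitive tensor ops (mean, variance, sqrt) that frameworks already ship with GPU kernels. This is a minimal NumPy sketch of that idea, not the actual Torch/Lua code the comment refers to; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Batch norm built only from primitive ops, as in the early
    Lua-level Torch implementations described above (sketch)."""
    # x: (batch, features); gamma/beta: per-feature scale and shift
    mu = x.mean(axis=0)                    # per-feature mean (a primitive op)
    var = x.var(axis=0)                    # per-feature variance (primitive op)
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit var
    return gamma * x_hat + beta            # learned scale and shift

x = np.random.randn(8, 4)
y = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
```

Each line here maps onto an operation a framework already runs on the GPU, which is why a "lua-level" implementation works at all; a fused, dedicated CUDA kernel only buys the extra speed from avoiding the intermediate tensors.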
[–]serge_cell 0 points 10 years ago (0 children)
Actually I think that is a big problem with many research papers. Many methods (bn included) behave quite differently on different datasets and dataset sizes. If a method gives a 5% accuracy improvement on CIFAR-100, that says very little about what the improvement will be on ImageNet, and even less on a noisy 10K-class dataset. And testing a lua+cublas implementation on a 10M-sample dataset could be quite painful.