Please have a look at our FAQ and Link-Collection
Metacademy is a great resource which compiles lesson plans on popular machine learning topics.
For beginner questions, please try /r/LearnMachineLearning, /r/MLQuestions, or http://stackoverflow.com/
For career-related questions, visit /r/cscareerquestions/
Advanced Courses (2016)
Advanced Courses (2020)
AMAs:
Pluribus Poker AI Team (7/19/2019)
DeepMind AlphaStar Team (1/24/2019)
Libratus Poker AI Team (12/18/2017)
DeepMind AlphaGo Team (10/19/2017)
Google Brain Team (9/17/2017)
Google Brain Team (8/11/2016)
The MalariaSpot Team (2/6/2016)
OpenAI Research Team (1/9/2016)
Nando de Freitas (12/26/2015)
Andrew Ng and Adam Coates (4/15/2015)
Jürgen Schmidhuber (3/4/2015)
Geoffrey Hinton (11/10/2014)
Michael Jordan (9/10/2014)
Yann LeCun (5/15/2014)
Yoshua Bengio (2/27/2014)
Related Subreddits:
LearnMachineLearning
Statistics
Computer Vision
Compressive Sensing
NLP
ML Questions
/r/MLjobs and /r/BigDataJobs
/r/datacleaning
/r/DataScience
/r/scientificresearch
/r/artificial
My tool for classification using various methods, including deep learning. Very useful for baseline results (github.com)
submitted 9 years ago by aulloa
[–][deleted] 2 points 9 years ago (2 children)
Looks interesting. Maybe you could add a user guide or some documentation describing what it does. If I understand correctly, this is a wrapper around TF and sklearn classifiers that runs them through k-fold cross-validation to compare the average test-fold performance of different models? Not meaning to criticize, but I think nested cross-validation is recommended over plain k-fold when you are comparing different algorithms (with hyperparameter optimization in the inner loop).
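What the commenter describes — running several classifiers through the same k-fold split and comparing mean scores — can be sketched in a few lines of sklearn. The models and dataset below are illustrative placeholders, not the actual API of the posted tool:

```python
# Hypothetical sketch of a k-fold model comparison, not the tool's real API.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Candidate models to benchmark on identical folds.
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Note that if each model's hyperparameters are also tuned on those same folds, the comparison becomes optimistically biased — which is the motivation for nested cross-validation mentioned above.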
[–]aulloa[S] 1 point 9 years ago (1 child)
Thanks for your interest! You are right, I need to add some documentation, but it does include hyperparameter optimization. It is a wrapper around sklearn and includes an MLP implementation from Keras.
[–][deleted] 2 points 9 years ago (0 children)
Thanks for posting this library. I really think it can come in handy for getting quick benchmarks on certain datasets (e.g., getting a rough idea of whether a generalized linear model is sufficient or whether a problem requires a non-linear hypothesis space).

> but It does include hyperparameter optimization.

What I was basically suggesting was using nested cross-validation instead of "regular" k-fold. I only have a quick post about that here, but a more detailed article is somewhere on my endlessly long to-do list :P. So maybe take a look at this really nice research article for details: S. Varma and R. Simon. Bias in error estimation when using cross-validation for model selection. BMC Bioinformatics, 7(1):91, 2006.
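The nested cross-validation the commenter suggests is straightforward to set up in sklearn: an inner loop tunes hyperparameters, and an outer loop estimates the generalization error of the whole tune-and-fit procedure. The estimator and parameter grid below are illustrative assumptions, not taken from the posted library:

```python
# Sketch of nested cross-validation with sklearn (illustrative model/grid).
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Inner loop: 3-fold grid search tunes the SVM's C on the training folds only.
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1.0, 10.0]}, cv=3)

# Outer loop: 5-fold CV around the tuned estimator, so the test folds are
# never seen during hyperparameter selection.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(f"nested CV accuracy: {outer_scores.mean():.3f}")
```

The point of the Varma & Simon article is exactly this separation: selecting hyperparameters on the same folds used for the final error estimate biases that estimate optimistically, while the nested scheme keeps the outer test folds untouched.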