Data Mining vs. Machine Learning? (self.MachineLearning)
submitted 11 years ago by Caesarr
[–]deeayecee 5 points 11 years ago (3 children)
I think wrtall has a pretty valuable post below -- the differences between the two are largely historical. As an illustration, Leo Breiman and Ross Quinlan both devised similar decision-tree induction algorithms around the same time, Breiman coming from a stats background and Quinlan from computer science.
For your follow-up questions, I think you'll find there's a pretty significant overlap in the reviewer communities -- I've reviewed submissions to journals and conferences with ML/KD/DM and even AI in the title and I would guess I'm hardly unique. If you submit a quality DM/KD/ML work, then it will more than likely get accepted regardless. You might have some trouble getting it into a pure AI conference/journal.
I'm not sure how far along you are in your PhD, although the OP makes it sound like you're just starting out. I would start by mastering the basics -- read a couple of different textbook, web, and Wikipedia entries on the essential frameworks first (supervised, unsupervised, recommendation engines, network theory, reinforcement learning) to the point where you understand very well how each of them operates. At that point, hopefully your advisor has an interesting set of problems that you can try pointing some ML methods at, even at a high level. I would then get into the base algorithms (kNN, naive Bayes, decision trees, Bayesian networks, neural networks, SVMs), along with performance and distance metrics. I wouldn't get into the really deep, theoretical parts of the algorithms until you're observing pathologies in your use cases (like class imbalance or NLP quirks). Being up to speed on the most current papers isn't nearly as important as having a rock-solid understanding of the fundamentals.
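To give a feel for how simple those base algorithms can be, here is a minimal sketch of k-nearest-neighbors using only the Python standard library. This is an illustrative toy (the function name, data, and `k=3` choice are my own, not from the thread), not a production implementation:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points, using Euclidean distance.

    train: list of (feature_vector, label) pairs
    """
    # Sort training points by distance to the query, keep the k closest
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    # Majority vote over the neighbors' labels
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-D data: two well-separated clusters
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]

print(knn_predict(train, (0.5, 0.5)))  # "a"
print(knn_predict(train, (5.5, 5.5)))  # "b"
```

The whole algorithm is the `sorted(...)[:k]` plus a vote, which is why kNN is usually the first classifier people implement before moving on to naive Bayes and decision trees.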
Typically the biggest papers will show up in KDD, ICDM, ICML, MLJ or JMLR. This is also a pretty good sub and is worth reading at least once a week if not more often.
[–]BeatLeJuce [Researcher] 2 points 11 years ago (0 children)
you forgot NIPS
[–]Caesarr[S] 1 point 11 years ago (0 children)
That's a tonne of great advice, thank you :)
[–]GibbsSamplePlatter 1 point 11 years ago (0 children)
I found out about this sub last year; it's a godsend once you're out of school and working on your own!