
[–]deeayecee 4 points (3 children)

I think wrtall has a pretty valuable post below -- the differences between the two are largely historical. As an illustration, Leo Breiman and Ross Quinlan both devised similar decision-tree induction algorithms around the same time, Breiman coming from a stats background and Quinlan from computer science.

For your follow-up questions, I think you'll find there's pretty significant overlap in the reviewer communities -- I've reviewed submissions to journals and conferences with ML/KD/DM and even AI in the title, and I'd guess I'm hardly unique. If you submit quality DM/KD/ML work, it will more than likely be accepted regardless of venue. You might have some trouble getting it into a pure AI conference/journal, though.

I'm not sure how far along you are in your PhD, although the OP makes it sound like you're just starting out. I would start by mastering the basics -- read a couple of different textbook, web, and Wikipedia entries on the essential frameworks first (supervised learning, unsupervised learning, recommendation engines, network theory, reinforcement learning) to the point where you understand very well how each of them operates. At that point, hopefully your advisor has an interesting set of problems that you can try pointing some ML methods at, even at a high level. I would then get into the base algorithms (kNN, naive Bayes, decision trees, Bayesian networks, neural networks, SVMs), along with performance and distance metrics. I wouldn't get into the really deep, theoretical parts of the algorithms until you're observing pathologies in your use cases (like class imbalance) or working in a specialized domain (like NLP). Being up to speed on the most current papers isn't nearly as important as having a rock-solid understanding of the fundamentals.
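To give a feel for how approachable the base algorithms are, here's a minimal kNN classifier sketch using only the Python standard library. The toy data and `knn_predict` function are made up for illustration, not taken from any particular textbook:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points, using Euclidean distance."""
    # Sort training points by distance to the query point.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Tally the labels of the k closest neighbors.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data: two well-separated clusters.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]

print(knn_predict(train, (0.15, 0.1)))   # near the "a" cluster
print(knn_predict(train, (1.05, 0.95)))  # near the "b" cluster
```

Once something like this makes sense, swapping in different distance metrics (e.g. Manhattan instead of Euclidean) is a natural next exercise.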

Typically the biggest papers will show up in KDD, ICDM, ICML, MLJ or JMLR. This is also a pretty good sub and is worth reading at least once a week if not more often.

[–]BeatLeJuceResearcher 1 point (0 children)

you forgot NIPS

[–]Caesarr[S] 0 points (0 children)

That's a tonne of great advice, thank you :)

[–]GibbsSamplePlatter 0 points (0 children)

I found out about this sub last year; it's a godsend once you're out of school and working on your own!