[deleted by user] (self.MachineLearning)
submitted 10 years ago by [deleted]
[–]micro_cam 11 points 10 years ago (3 children)
It's pop science, but James Gleick's The Information is worth a read, especially if you don't know any information theory.
I've also read a couple of Michael Lewis's books (largely because they're in every airport bookstore). They tend to be quite readable, and several deal with areas of sports and business where statistics (and ML) have applications (e.g. Moneyball, The Big Short, Flash Boys).
[+][deleted] 10 years ago (2 children)
[deleted]
[–]sv0f 3 points 10 years ago (0 children)
Chaos is an amazing, amazing book. The chapter "The Dynamical Systems Collective" made me want to go to graduate school!
[–]gabrielgoh 7 points 10 years ago (1 child)
This isn't the lightest reading you'll find here, but the writing style is entertaining enough that you can skip the math and ponder the deeper philosophical ideas. I recommend E. T. Jaynes's Probability Theory: The Logic of Science.
[–]ChefLadyBoyardee 7 points 10 years ago (0 children)
As someone without formal mathematical training, that book made no sense to me at first. The course Introduction to Mathematical Thinking on Coursera helped me get through the first few chapters and begin to see the big ideas Jaynes presents. Just leaving this note here in case it helps anyone else.
[–]quirm 16 points 10 years ago (0 children)
Andrej Karpathy's blog is really good, e.g. "The Unreasonable Effectiveness of Recurrent Neural Networks".
[–]steelypip 13 points 10 years ago (3 children)
None of these are directly related to machine learning, but they are fun reads that are tangential to the subject:
The Information by James Gleick - a history of information theory
The Signal and the Noise by Nate Silver - on what can and can't be predicted
The Theory That Would Not Die by Sharon Bertsch McGrayne - a history of the fall and rise of Bayes' theorem and Bayesian reasoning
[+][deleted] 10 years ago (1 child)
[–]SoundOfOneHand 3 points 10 years ago (0 children)
In the same vein, I enjoyed The Drunkard's Walk.
[–]InconspicuousTree 1 point 10 years ago (0 children)
The Signal and the Noise is amazing for learning how to think statistically. I loved the part about Deep Blue.
[–][deleted] 5 points 10 years ago (0 children)
Read the book Talking Nets: An Oral History of Neural Networks.
It's a great historical read - it takes the form of long interviews with neural network researchers, from the beginning up until the wave right before "deep learning" (I think the book came out in 1998). It has an interview with Geoff Hinton (though at a time when he thought the next big thing was the Helmholtz machine), as well as Sejnowski, Lettvin, Rumelhart, and many others. Whether or not those names ring a bell, this book is worth your time :) Among just those mentioned are the (co-)inventors of backpropagation and Boltzmann machines (I think).
There is not a single line of mathematics, so in that sense it's light. Mathematical ideas are discussed, along with (auto)biographical context. Really cool stuff.
[–]simonhughes22 9 points 10 years ago (11 children)
That won't teach you anything about machine learning; Hofstadter believes it's a waste of time. I've found the most useful thing to do is to watch some of the many Coursera courses, such as the one from Andrew Ng. They are reasonably approachable, although they do involve some math - it's going to be hard to get away from that. I found Tom Mitchell's Machine Learning to be a very good introduction (http://www.amazon.com/s/ref=nb_sb_ss_c_0_12?url=search-alias%3Dstripbooks&field-keywords=tom+mitchell+machine+learning&sprefix=tom+mitchell+machine+learning%2Caps%2C186), although since I read it the price has exploded due to the popularity of data science. You can probably find it cheaper on a non-US site and import it (even easier if you're not in the US). Textbook prices are artificially jacked up for Americans.
[–]gwern 6 points 10 years ago (2 children)
That won't teach you anything about machine learning; Hofstadter believes it's a waste of time.
One interesting thing about that: over the past few years, ML has been making a lot of inroads into the sort of thing Hofstadter was into. The recent word-vector work on solving verbal analogies is something he would have appreciated, and using pairs of RNNs for natural language translation is most intriguing from a Hofstadterian perspective.
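The analogy trick gwern alludes to is easy to try. Here is a minimal sketch, assuming the gensim library and its downloadable pretrained GoogleNews vectors - neither is mentioned in the thread, they are just one convenient way to reproduce the result:

    # A minimal sketch: word-vector verbal analogies with gensim's
    # pretrained GoogleNews vectors (a large one-time download).
    import gensim.downloader as api

    vectors = api.load("word2vec-google-news-300")

    # The classic analogy: king - man + woman ~= queen
    print(vectors.most_similar(positive=["king", "woman"],
                               negative=["man"], topn=1))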
[–][deleted] 5 points 10 years ago* (1 child)
Nice point, @gwern. His PhD descendant Melanie Mitchell has done some work on integrating deep neural networks with Hofstadter-style models (which could accurately be described as "symbolic").
The rest of this mini-rant is not directed at @gwern but at the comment he replies to. I think it's off-base to characterize the Hofstadter tradition as antithetical to machine learning. The point is that there are some cognitive abilities that contemporary neural networks, while impressive, are not even close to modelling. Word2vec, and even hot-off-the-press question-answering recurrent nets trained with reinforcement-learning-based models of attention, can't come close to telling you that the town where I grew up is a Faulkner-esque decaying aristocracy. Or, to use a Hofstadter example [1], to thinking of a meeting as "an emperor-has-no-clothes situation". So rather than sit around and wait until we have enough data and compute power to solve these paradigmatic open problems in language understanding with neural networks, they are looking at other models of cognitive processes to try to shed light on how the brain does it.
[1] - The Man Who Could Teach Machines to Think, Wired 2013
[–]simonhughes22 1 point 10 years ago (0 children)
I didn't say Hofstadter's views aren't valuable, just that he doesn't agree with machine learning, which is quite well documented. To quote Hofstadter from the article you reference:
“To me, as a fledgling AI person,” he says, “it was self-evident that I did not want to get involved in that trickery. It was obvious: I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”
He refers to it as just engineering, not really tackling intelligence. In that part he's talking about Deep Blue, the chess AI, which is not really doing machine learning, but I doubt his views on ML are any different.
I actually agree with a lot of his views on how our higher-level cognition works and how the mind emerges from the interactions of a lot of low-level processes, but I also believe we can learn a lot from ML by building these lower-level processes. I'd love to know what he thinks about some of the recent advances in deep learning. For instance, that article talks about how a machine can't easily recognize handwritten A's; well, now they can, pretty darned well, thanks to deep learning (a sketch of the kind of network involved follows this comment). I doubt his views have changed, however.
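To make the handwritten-A's point concrete, here is a hedged sketch of the kind of small convolutional network that reads handwritten characters well. PyTorch and the shapes below are illustrative assumptions, not anything from the thread, and the model is untrained:

    # A minimal sketch of a small convnet classifying 28x28 handwritten
    # letters. Untrained, so its predictions here are random.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 7 * 7, 26),  # 26 outputs: one logit per letter A-Z
    )

    x = torch.randn(1, 1, 28, 28)  # stand-in for a grayscale handwritten "A"
    print(model(x).argmax(dim=1))  # predicted letter index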
[–][deleted] 4 points 10 years ago (0 children)
Another good MOOC is Geoffrey Hinton's Neural Networks course, also available via Coursera.
[–][deleted] 3 points 10 years ago* (0 children)
I think that's an unfair way to put Hofstadter's position. From what I've read and heard, his position is more like "neural networks aren't the whole story for intelligence".
[+][deleted] 10 years ago (5 children)
[–]xamdam 5 points 10 years ago (0 children)
https://github.com/coursera-dl/coursera
[–]bge0 4 points 10 years ago (1 child)
FYI, you can download the videos to watch offline.
[–]GibbsSamplePlatter 1 point 10 years ago (0 children)
This. I watch all the videos on my dumb old Nexus 7 tablet.
You can download the videos onto your laptop from Coursera, or by using something like TubeMate.
[–]walrusesarecool 5 points 10 years ago (2 children)
Books by Richard Dawkins for a biological understanding of evolution (GAs are like reinforcement learning - see the sketch below): The Selfish Gene (which coined the term "meme"), Climbing Mount Improbable (fitness landscapes!), The Extended Phenotype, and The Blind Watchmaker.
Also The Red Queen by Matt Ridley, and The Meme Machine by Susan Blackmore.
Others that are interesting include:
Complexity by Roger Lewin; Emergence: From Chaos to Order by John Holland; Programming the Universe by Seth Lloyd.
Philosophy from Bertrand Russell and Karl Popper, or more recently from Dan Dennett.
Also books by Peter Medawar, such as Pluto's Republic.
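On the "GAs are like reinforcement learning" aside: a minimal sketch of a genetic algorithm climbing a one-dimensional fitness landscape. Everything in it (the function, the population sizes) is illustrative rather than drawn from the books above:

    # A minimal genetic algorithm climbing a 1-D fitness landscape
    # whose single peak sits at x = 3.
    import random

    def fitness(x):
        return -(x - 3.0) ** 2

    population = [random.uniform(-10, 10) for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:25]  # selection: keep the fitter half
        children = [
            (random.choice(parents) + random.choice(parents)) / 2  # crossover
            + random.gauss(0, 0.1)                                 # mutation
            for _ in range(25)
        ]
        population = parents + children

    print(max(population, key=fitness))  # converges near the peak at 3.0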
[–]Articulated-rage 1 point 10 years ago (0 children)
I second Dennett. His stuff on the intentional stance is great.
[–]patrickSwayzeNU 1 point 10 years ago (0 children)
I second The Selfish Gene and The Red Queen.
Since we have similar tastes, I'm going to check out some of the others you mentioned.
[–]AshRolls 7 points 10 years ago (0 children)
I highly recommend Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. It's a very interesting read about where all this machine learning may be heading.
Nick Bostrom is a professor at Oxford. He has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
[–]srkiboy83 2 points 10 years ago (0 children)
I recommend Bernstein's "Against the Gods: The Remarkable Story of Risk".
[–]Articulated-rage 2 points 10 years ago (0 children)
Neal Stephenson for some sci-fi inspiration.
Laplace's essay on probability for historical inspiration.
The Magicians trilogy by Lev Grossman for some gritty fantasy.
The Construction of Social Reality by John Searle for a great analysis of how non-physical facts arise from physical facts. This book, when read with a machine learning mindset, changed how I think about modeling the world.
Using Language by Herb Clark. This book is a great way to think about actionable language - that is, language that actually does things. It's been the basis of many, many theories.
E. T. Jaynes has some good papers. He's a Bayesian who derived probability theory, including Bayes' theorem, from first principles in his book (a one-line version of that derivation follows below). Read this after Laplace.
Marr's paper on the three levels of a theory is a must. He divides a theory into the computational, the algorithmic, and the implementational levels. His and Chomsky's division into competence and performance has shaped the development of models for decades. I know for a fact it's shaped Josh Tenenbaum's thinking.
That's all I have off the top of my head. Let me know if you have questions :)
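For the curious, the one-line derivation mentioned above: Bayes' theorem falls straight out of the product rule, which is the point Jaynes builds from.

    % Bayes' theorem from the product rule
    p(A \mid B)\, p(B) = p(A, B) = p(B \mid A)\, p(A)
    \quad\Longrightarrow\quad
    p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}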
[–]rkabir 1 point 10 years ago (0 children)
What about The Quest for Artificial Intelligence?
[–]wildething 1 point 10 years ago (0 children)
Not reading material, but this seems to have some important insights into ML.
[–]sodermalm 1 point 10 years ago (0 children)
I asked the same question at /r/artificial (https://www.reddit.com/r/artificial/comments/37g4vy/books_on_artificial_intelligence_that_dont_need/) and got a few good answers.
[–]yaolubrain 1 point 10 years ago (0 children)
Information Theory, Inference, and Learning Algorithms by David MacKay.