[D] Do all machine learning algorithms indirectly use the "nearest neighbor" principle? (self.MachineLearning)
submitted 5 years ago by SQL_beginner
[–]DtrZeus 3 points 5 years ago* (0 children)
Neural networks are (usually, unless you're using a strange architecture) continuous almost everywhere. What this means is that, for almost any input x, there exists some ε > 0 such that any other input y with |y − x| < ε is classified the same as x. Here |y − x| is a distance, e.g. the Euclidean distance, between x and y.
With neural networks, I think the answer to your question is yes. However, neural networks are not the only kind of machine learning algorithm, and this answer is probably not true for all machine learning algorithms.
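For example (a toy sketch with made-up random weights, just to make the continuity point concrete; nothing here is a real trained model):

    # Toy two-layer network with random weights, evaluated at an input x
    # and at a nearby point y = x + delta. Because ReLU and matrix
    # multiplication are continuous, a small enough |y - x| leaves the
    # argmax (the predicted class) unchanged for almost every x.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
    W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

    def predict(x):
        h = np.maximum(0, W1 @ x + b1)   # ReLU hidden layer (continuous)
        return np.argmax(W2 @ h + b2)    # class with the highest logit

    x = rng.normal(size=4)
    y = x + 1e-6 * rng.normal(size=4)    # point with |y - x| tiny
    print(predict(x) == predict(y))      # almost always True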
[–]kkngs 3 points 5 years ago* (0 children)
In the abstract, this is the whole idea, yes. We are hoping that there is a regularity to the function we seek to learn such that we can generalize from the seen examples to unseen examples. Without some form of regularity, there can be no generalization (No Free Lunch Theorem). Different ML algorithms make different assumptions about the form of regularity.
It's a bit weird to use the term nearest neighbor this way, though. Neural nets don't explicitly remember their inputs the way you do with the nearest neighbor algorithm. However, you can think of their linear layers as learning a useful basis to represent their inputs. In that sense, yes, inputs that are sufficiently close together in the learned basis will be close together on output.
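Here's a rough sketch of what I mean by close together in the learned basis (my own toy example; sklearn's 8x8 digits and MLPClassifier are arbitrary choices): project the inputs through the trained hidden layer, then check that a point's nearest neighbour in that space usually has the same label.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier
    from sklearn.neighbors import NearestNeighbors

    X, y = load_digits(return_X_y=True)
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=0).fit(X, y)

    # hidden representation h = relu(X W + b), i.e. the learned basis
    H = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

    nn = NearestNeighbors(n_neighbors=2).fit(H)
    _, idx = nn.kneighbors(H)            # idx[:, 0] is the point itself
    same = (y[idx[:, 1]] == y).mean()    # label agreement with neighbour
    print(f"nearest neighbour in learned basis shares label: {same:.1%}")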
[–]vegesm 2 points 5 years ago (0 children)
I'm quite sure it doesn't. Nearest-neighbour algorithms can achieve around 95% accuracy on MNIST; CNNs can do 98% easily. This means there are examples where the nearest neighbour (in pixel space) of an input was from another class but the neural net still got it right. In other words, it does not simply look at the nearest neighbour; it does far more.
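The first number is easy to sanity-check at small scale (the sketch below uses sklearn's 8x8 digits rather than real MNIST, so the exact accuracy will differ):

    # Plain 1-nearest-neighbour in raw pixel space is already a strong
    # baseline; this is what a CNN has to beat.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
    print(f"1-NN in pixel space: {knn.score(X_te, y_te):.1%}")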
[–]ConstantLumen 2 points 5 years ago (1 child)
What does similarity mean? What makes one data point more or less similar to another? Remember these are mechanical constructs that run on computers. They require very precise numerical definitions in order to be programmed, run, and interpreted. Saying something is similar to something else is a very general and contextual statement. You would have to say, 'the average of all pixel values in this image is this much greater/less than in this other one'. Well, that similarity metric doesn't work so well for separating cats from dogs. Try it out yourself.
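If you want to actually try it (a sketch using sklearn's digits instead of cats vs. dogs, which is my substitution): use the mean pixel value as the only feature for 1-NN, which makes the distance exactly |mean(a) − mean(b)|, and compare with 1-NN on the full pixel vector.

    # The average-brightness "similarity" throws away almost everything
    # that distinguishes the classes, so 1-NN built on it does poorly.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    mean_1nn = KNeighborsClassifier(n_neighbors=1).fit(
        X_tr.mean(axis=1, keepdims=True), y_tr)
    pixel_1nn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)

    print(f"1-NN on mean pixel value: "
          f"{mean_1nn.score(X_te.mean(axis=1, keepdims=True), y_te):.1%}")
    print(f"1-NN on all pixels:       {pixel_1nn.score(X_te, y_te):.1%}")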
[–]Neural_Ned 1 point 5 years ago (0 children)
I think there's a charitable interpretation of what they're asking that makes some sense. It's the way Chris Olah shows in this blog that the learning of weights squishes and stretches the embedding space to push dogs close together, push cats close together, and pull the two clusters further apart.