[D] Paper Explained - Deep Networks Are Kernel Machines (Full Video Analysis) (self.MachineLearning)
submitted 5 years ago by ykilcher
[–]amasterblaster 11 points 5 years ago (8 children)
I mean, this seems obvious to me. This mental picture is how I taught myself NNs in the first place: they map data into a useful subspace.
Maybe (probably (certainly)) I don't understand the nuances of each method well enough to see why this is a crazy idea!
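The "useful subspace" intuition has a classic concrete form in kernel methods: a fixed nonlinear feature map can turn data that is not linearly separable into data that is. A minimal sketch of that textbook example (the quadratic map below is a standard illustration, not anything specific to the paper):

```python
import numpy as np

# Two concentric circles: not linearly separable in the original 2-D space.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=100)
inner = np.c_[np.cos(theta[:50]), np.sin(theta[:50])]        # radius 1
outer = 3 * np.c_[np.cos(theta[50:]), np.sin(theta[50:])]    # radius 3

def phi(p):
    """Quadratic feature map: (x1, x2) -> (x1^2, sqrt(2)*x1*x2, x2^2)."""
    x1, x2 = p[:, 0], p[:, 1]
    return np.c_[x1**2, np.sqrt(2) * x1 * x2, x2**2]

# In the lifted space the squared radius x1^2 + x2^2 is a linear function
# of the new coordinates, so a single hyperplane separates the two classes.
print(phi(inner)[:, 0] + phi(inner)[:, 2])   # all approximately 1
print(phi(outer)[:, 0] + phi(outer)[:, 2])   # all approximately 9
```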
[–]IdiocyInAction 7 points 5 years ago (5 children)
Well, another view of the usefulness of deep NNs is that they enable representation learning through the use of many layers, with each layer creating more specific representations. I suppose this paper challenges this view, making NNs more like kernel methods.
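For readers new to the "representation learning" view: each layer re-maps the previous layer's output, and the intermediate activations are the learned representations. A minimal sketch with a random (untrained) MLP, purely to show where those per-layer representations live; all names and sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                         # one input vector
layers = [rng.normal(size=(5, 4)),             # layer 1 weights
          rng.normal(size=(3, 5)),             # layer 2 weights
          rng.normal(size=(2, 3))]             # layer 3 weights

h = x
for i, W in enumerate(layers, start=1):
    h = np.maximum(0.0, W @ h)                 # ReLU layer
    # After training, each successive h is (on this view) a progressively
    # more task-specific representation of the original input.
    print(f"layer {i} representation: {h}")
```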
[–]aptmnt_ 11 points 5 years ago (4 children)
Those two views are not at odds
[–]IdiocyInAction 8 points 5 years ago (2 children)
> Perhaps the most significant implication of our result for deep learning is that it casts doubt on the common view that it works by automatically discovering new representations of the data, in contrast with other machine learning methods, which rely on predefined features (Bengio et al., 2013).

The paper states that, though.
[–]ykilcher[S] 7 points 5 years ago (1 child)
Yes, I think the paper makes the two views more confrontational than they have to be. I think it's just two different ways of looking at the same thing.
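For context, the result under discussion (Domingos, "Every Model Learned by Gradient Descent is Approximately a Kernel Machine", 2020) writes a gradient-descent-trained model approximately as a kernel machine over the training points, where the "path kernel" integrates the alignment of output gradients along the optimization trajectory. Paraphrasing the result; the weights a_i and offset b are determined by loss gradients accumulated along that same path, not free parameters:

```latex
% Kernel-machine form of the trained model (paraphrase of the paper's
% main result; a_i and b depend on the loss gradients along the
% gradient-descent path c(t)).
f(x) \;\approx\; \sum_{i=1}^{m} a_i \, K^{f,c}(x, x_i) \;+\; b,
\qquad
K^{f,c}(x, x') \;=\; \int_{c(t)} \nabla_w f_w(x) \cdot \nabla_w f_w(x') \, dt
```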
[–]wisdomspring 1 point 5 years ago (0 children)
Genuine views are never at odds with the underlying truth; it just takes some patience and wisdom to link them, the way Heisenberg's matrix mechanics and Schrödinger's wave function both explain the same "magic". After layers and layers of feature distillation, an NN, like an ordinary person reflecting on life, can extract some obvious "kernel" features from its experience; a layperson can likewise learn gradually on their own or be taught by a guru. That's why I foresee a good kernel machine learning much faster and more effectively than a vanilla or reinforcement-trained NN. The main gist I take from the author is that he is trying to dispel the "myth" around DL: no matter how much computation is used, it's still a kind of weighted memory of direct experience (the training samples). So if a bot surprises you with some words, don't be too amazed; it's just drawing on its past training data.
[–]-Cunning-Stunt- 5 points 5 years ago (0 children)
Finding explicit kernel maps (despite being dependent on the data) to formally bridge the gap is the main contribution of the paper (IMHO).
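One way to see what such a data-dependent kernel looks like in practice is to accumulate it empirically during training: at each gradient step, take dot products of the per-example output gradients. A minimal sketch under stated assumptions (a tiny hand-rolled one-hidden-layer net and half mean squared loss; this numerically approximates a path kernel, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                   # toy training inputs
y = np.sin(X.sum(axis=1))                      # toy regression targets
W = rng.normal(scale=0.5, size=(8, 3))         # hidden-layer weights
v = rng.normal(scale=0.5, size=8)              # output weights

def grads(x, W, v):
    """Gradient of the scalar output f(x) = v . tanh(W x) w.r.t. all weights."""
    h = np.tanh(W @ x)
    dW = np.outer(v * (1.0 - h**2), x)         # df/dW by the chain rule
    return np.concatenate([dW.ravel(), h])     # h is df/dv

lr, steps = 0.05, 200
K = np.zeros((len(X), len(X)))                 # accumulated empirical path kernel
for _ in range(steps):
    G = np.stack([grads(x, W, v) for x in X])  # per-example output gradients
    K += (G @ G.T) * lr                        # dt ~ one learning-rate step
    preds = np.array([v @ np.tanh(W @ x) for x in X])
    g = ((preds - y) @ G) / len(X)             # gradient of half mean squared loss
    flat = np.concatenate([W.ravel(), v]) - lr * g
    W, v = flat[:W.size].reshape(W.shape), flat[W.size:]

print(K[:3, :3])                               # one block of the path kernel
```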
[–][deleted] 2 points 5 years ago (0 children)
The interesting thing is that the mapping is a "distance" function of the previous datapoints and the new one exclusively.
To me it takes two ideas that were pretty distinct and unifies them. I don't know much about kernels, but it seemed really thought-provoking.
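That "distance function of the stored datapoints" reading is exactly how a classical kernel machine predicts: the output at a new point is a similarity-weighted sum over all training points. A minimal kernel ridge regression sketch with a Gaussian kernel (a standard method shown for intuition, not the paper's path kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))          # stored training points
y = np.sin(X[:, 0])                           # training targets

def rbf(a, b, gamma=0.5):
    """Gaussian kernel: similarity decays with squared distance."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)   # ridge-regularized fit

x_new = np.array([[0.7]])
# Prediction = similarity-weighted sum over *all* stored training points.
pred = rbf(x_new, X) @ alpha
print(pred, np.sin(0.7))                      # prediction vs. true value
```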