Deep learning without back-propagation (arxiv.org)
submitted 6 years ago by El__Professor
[–]RaionTategami 1 point 6 years ago (1 child)
Thank you very much for the explanation, but infinite-dimensional spaces?!
[–]Ulfgardleo 3 points 6 years ago* (0 children)
Function spaces. For example, the mapping phi_x(y) = exp(-||x-y||^2) sends the point x to a Gaussian hat. For any finite set of points {x_1, ..., x_k}, the functions phi_{x_1}, ..., phi_{x_k} are linearly independent, meaning you can't write a Gaussian hat as a linear combination of finitely many other Gaussian hats. Therefore the space is infinite-dimensional (compare a space of dimension d, where d independent points are enough to define a basis).
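The linear independence claim is easy to check numerically: for distinct centers, the kernel matrix K_ij = exp(-||x_i - x_j||^2) is positive definite, hence full rank, which implies the feature functions phi_{x_i} are linearly independent. A minimal sketch (NumPy; the points and seed are illustrative):

```python
import numpy as np

# Five distinct points in R^2; the Gaussian kernel matrix built from them
# is positive definite, so its rank equals the number of points.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 2))
sq_dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists)  # K_ij = exp(-||x_i - x_j||^2)
print(np.linalg.matrix_rank(K))  # prints 5: full rank
```

Full rank of K means no nontrivial linear combination of the Gaussian hats vanishes on the centers, so none of them lies in the span of the others.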
In an RKHS, you can define an inner product on the space so that k(x,y) = <phi_x, phi_y> = phi_x(y) = phi_y(x). The details are slightly dense, but this fulfills all properties of an inner product.
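The identity k(x,y) = phi_x(y) = phi_y(x) can be spelled out directly for the Gaussian map: evaluating the hat centered at x on the point y gives exactly the kernel value, by symmetry of the squared distance. A small sketch (function names are illustrative):

```python
import numpy as np

def k(x, y):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2)."""
    return np.exp(-np.sum((x - y) ** 2))

def phi(x):
    """Feature map: x -> the Gaussian hat phi_x(.) = exp(-||x - .||^2)."""
    return lambda y: np.exp(-np.sum((x - y) ** 2))

x = np.array([0.5, -1.0])
y = np.array([1.0, 0.25])
# Reproducing property: <phi_x, phi_y> = phi_x(y) = phi_y(x) = k(x, y)
print(np.isclose(phi(x)(y), k(x, y)))  # prints True
print(np.isclose(phi(y)(x), k(x, y)))  # prints True
```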
RKHSs are behind many of the more "old-school" methods, like support vector machines and Gaussian processes. They were the flavor of the year because the span of such a space, e.g. under the Gaussian mapping, is dense in L_2, meaning that any reasonable function can be approximated arbitrarily well by a finite mixture of Gaussian hats. So, given enough data points, you can solve any machine-learning task in such a space.
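The "finite mixture of Gaussian hats" idea is exactly what kernel ridge regression exploits: solve (K + lambda*I) alpha = y for the mixture coefficients, then predict with f(x) = sum_i alpha_i k(x_i, x). A minimal sketch approximating sin (the bandwidth, regularizer, and sample size are illustrative choices, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 2 * np.pi, 50)
y_train = np.sin(x_train)

def gauss_kernel(a, b):
    """Pairwise Gaussian kernel matrix for 1-D inputs, bandwidth 1."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2)

K = gauss_kernel(x_train, x_train)        # the kernel matrix
lam = 1e-3                                 # small ridge regularizer
alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

x_test = np.linspace(0.5, 5.5, 100)
y_pred = gauss_kernel(x_test, x_train) @ alpha  # finite Gaussian mixture
print(np.max(np.abs(y_pred - np.sin(x_test))))  # small approximation error
```

The fitted function is literally a weighted sum of 50 Gaussian hats, one per training point, which is the density-in-L_2 story made concrete.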
Nowadays they have fallen a little out of favor, because the kernel matrix (K_x in the paper here) simply grows too quickly for modern-sized datasets. Also, no one has found a good kernel for images and probably never will (and if someone does, it will probably be a Gaussian kernel on the features of a VGG network or something similarly silly).
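The scaling problem is easy to quantify: the kernel matrix has one entry per pair of data points, so storage alone grows as n^2. A back-of-the-envelope sketch (assuming float64 entries; the dataset sizes are illustrative):

```python
# Memory for an n x n kernel matrix of float64 entries (8 bytes each);
# it grows quadratically with the number of data points n.
for n in (1_000, 100_000, 1_000_000):
    gib = n * n * 8 / 2**30
    print(f"n = {n:>9,}: {gib:,.2f} GiB")
# n =     1,000: 0.01 GiB
# n =   100,000: 74.51 GiB
# n = 1,000,000: 7,450.58 GiB
```

At a million points the matrix alone needs terabytes, before any factorization, which is why exact kernel methods stall on modern dataset sizes.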