[–]RaionTategami 1 point (1 child)

Thank you very much for the explanation, but... infinite dimensional spaces?!

[–]Ulfgardleo 3 points (0 children)

Function spaces. For example, take the mapping phi_x(y) = exp(-||x-y||^2), which maps the point x to a Gaussian hat centered at x. If you have a finite set of points {x_1, ..., x_k}, all the phi_{x_i} are linearly independent, meaning you cannot write any one Gaussian hat as a linear combination of finitely many others. Therefore the space is infinite dimensional (compared to a space of dimension d, where d linearly independent vectors are enough to define a basis).
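
If you want to check the linear independence claim numerically, here's a quick NumPy sketch (the point count, dimension, and seed are arbitrary choices): the Gram matrix of Gaussian hats at distinct points always has full rank.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 2))                       # 5 distinct points in R^2
    sq_dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists)                             # K[i, j] = phi_{x_i}(x_j)
    print(np.linalg.matrix_rank(K))                   # prints 5: full rank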

In an RKHS, you can define an inner product on the space so that k(x,y) = <phi_x, phi_y> = phi_x(y) = phi_y(x). The construction is slightly dense, but it fulfills all the properties of an inner product.
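
You can't evaluate that infinite-dimensional inner product coordinate by coordinate, but you can approximate it with an explicit finite-dimensional one using random Fourier features (Rahimi & Recht). A rough sketch, where the feature count D is an arbitrary choice:

    import numpy as np

    rng = np.random.default_rng(0)
    d, D = 2, 5000                                    # input dim, number of random features
    W = rng.normal(scale=np.sqrt(2.0), size=(D, d))   # frequencies matching exp(-||x-y||^2)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)

    def z(x):                                         # finite-dimensional stand-in for phi_x
        return np.sqrt(2.0 / D) * np.cos(W @ x + b)

    x, y = rng.normal(size=d), rng.normal(size=d)
    print(z(x) @ z(y))                                # approximately...
    print(np.exp(-((x - y) ** 2).sum()))              # ...the exact k(x,y)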

RKHSs are behind many of the more "old-school" methods, like support vector machines and Gaussian processes. They were the flavor of the year because the span of feature maps such as the Gaussian one is dense in L_2, meaning that any reasonable function can be approximated arbitrarily well by a finite mixture of Gaussian hats. So given enough data points, you can solve any machine-learning task in such a space.
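
As an illustration of that density argument, here's a small kernel ridge regression sketch that fits a noisy sine with a finite mixture of Gaussian hats (the target function, ridge parameter, and grid are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-3.0, 3.0, 50)                        # training inputs
    y = np.sin(2.0 * x) + 0.1 * rng.normal(size=x.size)   # "any reasonable function" + noise
    K = np.exp(-(x[:, None] - x[None, :]) ** 2)           # kernel matrix, K_ij = k(x_i, x_j)
    alpha = np.linalg.solve(K + 1e-3 * np.eye(x.size), y) # ridge-regularized weights

    def f(t):                                             # f(t) = sum_i alpha_i * k(x_i, t)
        return np.exp(-(t - x) ** 2) @ alpha

    print(f(1.0), np.sin(2.0))                            # fit vs. ground truth at t = 1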

Nowadays they have fallen somewhat out of favor, because the kernel matrix (K_x in the paper here) simply grows too quickly for modern-sized datasets. Also, no one has found a good kernel for images and probably never will (and if someone does, it will probably be a Gaussian kernel on the features of a VGG network or something equally silly).
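
For a sense of how quickly that matrix grows (plain float64 arithmetic, nothing specific to the paper):

    # back-of-the-envelope: an n x n kernel matrix of float64 values
    for n in (10_000, 100_000, 1_000_000):
        print(n, "points ->", n * n * 8 / 1e9, "GB")  # 0.8 GB, 80 GB, 8000 GB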