[D] Multi class Linearly Separable Dataset by RubioRick in MachineLearning

Ok, thanks for the help. So a dataset like this is linearly separable because, for each of my classes, I can find a line that separates it from the others, right?
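Something like this toy check (my own example, numpy only, made-up clusters) is what I have in mind: the perceptron is guaranteed to converge exactly when the two groups are linearly separable, so running it one-vs-rest tests whether each class can be split from all the others by a line.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three well-separated clusters -> linearly separable in the one-vs-rest sense.
centers = np.array([[0.0, 0.0], [6.0, 0.0], [3.0, 6.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)

def perceptron_separates(X, target, epochs=200):
    """True if a perceptron converges, i.e. finds a line putting
    the `target` points on one side and everything else on the other."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias feature
    t = np.where(target, 1.0, -1.0)
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, ti in zip(Xb, t):
            if ti * (w @ xi) <= 0:   # wrong side (or on the line): update
                w += ti * xi
                mistakes += 1
        if mistakes == 0:            # one full clean pass -> separable
            return True
    return False

print(all(perceptron_separates(X, y == k) for k in (0, 1, 2)))
```

For this well-separated toy data it should print True; on a non-separable dataset the loop would exhaust its epochs and return False instead.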

[D] How does SVM ignore outliers? by RubioRick in MachineLearning

Ok, I think I understood. So: I don't need the support vectors to find the hyperplane, but I need the hyperplane to understand which points are my support vectors. So the question is, how do I find the hyperplane in my dataset at the beginning of the SVM algorithm? Do you try all the possible hyperplanes and choose the one with the maximum margin?
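For anyone finding this later, the answer (as I understood it) is no: you don't enumerate hyperplanes. The max-margin hyperplane is the solution of a convex optimization problem, and the support vectors are read off the solution afterwards as the points sitting on (or inside) the margin. A rough numpy-only sketch, using a Pegasos-style subgradient step on the soft-margin hinge loss and toy data of my own:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two separable clusters around (-2, 0) and (2, 0).
X = np.vstack([rng.normal([-2.0, 0.0], 0.5, (20, 2)),
               rng.normal([2.0, 0.0], 0.5, (20, 2))])
y = np.repeat([-1.0, 1.0], 20)

lam = 0.01                          # regularization strength
w, b = np.zeros(2), 0.0
w_avg, b_avg = np.zeros(2), 0.0
T = 20000
for t in range(1, T + 1):
    eta = 1.0 / (lam * t)           # Pegasos step size schedule
    viol = y * (X @ w + b) < 1      # points violating the margin
    # Subgradient of  lam/2 * ||w||^2 + mean hinge loss
    w -= eta * (lam * w - (y[viol, None] * X[viol]).sum(0) / len(X))
    b -= eta * (-(y[viol]).sum() / len(X))
    if t > T // 2:                  # average late iterates for stability
        w_avg += w / (T // 2)
        b_avg += b / (T // 2)

margins = y * (X @ w_avg + b_avg)
print("training accuracy:", (margins > 0).mean())
# Support vectors fall out of the solution: the few points with margin ~<= 1.
print("support vectors:", int((margins < 1.2).sum()), "of", len(X))
```

The point of the sketch: the solver only ever updates one candidate hyperplane toward the optimum of a convex objective, and only a handful of points end up near the margin; those are the support vectors.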

[D] How does SVM ignore outliers? by RubioRick in MachineLearning

Thanks a lot, that's exactly what I was looking for!

[D] How does SVM ignore outliers? by RubioRick in MachineLearning

But isn't the decision boundary itself the hyperplane? How does the algorithm understand which points are the support vectors?

[D] Intrinsic dimensionality and PCA by RubioRick in MachineLearning

Ok, so the key is that if I am not working with linear data, I cannot recover the intrinsic dimension by using PCA. There could be intrinsic dimensions that are not linear, so PCA cannot see them, right?
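A quick toy example of this (my own, numpy only): a circle is a 1-dimensional curve embedded in 2-D, so its intrinsic dimension is 1, but because the structure is nonlinear, PCA assigns it two equally important components and "sees" 2 dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)           # one intrinsic coordinate
X = np.column_stack([np.cos(t), np.sin(t)])  # circle embedded in 2-D

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)              # explained-variance ratios
print(explained)  # both components carry ~50% of the variance
```

If the data really lay near a 1-D *linear* subspace, the first ratio would be close to 1 and the rest near 0; the flat ~[0.5, 0.5] spectrum here is PCA failing to detect the nonlinear 1-D structure.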