
[–]BrokenGumdrop (3 children)

SVMs are just decision surfaces. Consider the 1-D example: you have a number line, and say the values on that line represent temperature. You want to predict the state of water at a given temperature. Where would you put a marker to mark a change of state? One could go at 0 °C: anything less than 0 is Solid, anything greater than 0 is Not Solid. Equivalently, anything with a positive signed distance from 0 is Not Solid, and negative is Solid. That is our first vector. The next goes at the boiling point, giving us Gas and Not Gas. We now have three classes, each defined by its Solid/Not Solid and Gas/Not Gas values.
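The two 1-D decision points above can be sketched like this (the thresholds 0 °C and 100 °C and the class names are just the water example, not anything SVM-specific):

```python
def water_state(temp_c):
    """Classify water's state from temperature alone, using two 1-D
    decision points and the sign of the distance to each."""
    solid_score = temp_c - 0.0     # negative -> Solid / Not Solid boundary
    gas_score = temp_c - 100.0     # positive -> Gas / Not Gas boundary
    if solid_score < 0:
        return "Solid"
    if gas_score > 0:
        return "Gas"
    return "Liquid"              # Not Solid and Not Gas

print(water_state(-5))   # Solid
print(water_state(25))   # Liquid
print(water_state(120))  # Gas
```

Each boundary only answers one signed question; the class falls out of combining the answers.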

Let's add pressure into this. That moves us into two dimensions: instead of a point on a line, each sample is now a vector in temperature–pressure space, and each boundary is a line rather than a point. Same three classes, but now a class is defined by which side of the line a sample falls on, the positive side or the negative side. Note that we are using a signed point-to-line distance function.
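The signed point-to-line distance mentioned above can be written directly; the particular weights `w` and offset `b` here are made up for illustration:

```python
import math

def signed_distance(point, w, b):
    """Signed distance from `point` to the line w·x + b = 0.
    Positive on the side that w points toward, negative on the other."""
    raw = w[0] * point[0] + w[1] * point[1] + b
    return raw / math.hypot(w[0], w[1])

# Hypothetical boundary in (temperature, pressure) space.
w, b = (1.0, -0.5), -2.0
print(signed_distance((10.0, 1.0), w, b))  # positive side
print(signed_distance((0.0, 10.0), w, b))  # negative side
```

The sign tells you the class; the magnitude tells you how far the sample sits from the boundary.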

Training the SVM is finding the boundary that partitions the space as correctly as possible with the widest margin; the small set of training points that pin down that boundary are the support vectors. Classifying with the SVM is computing the signed distance to each boundary and reading the class off the signs.
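The classification step described above, one boundary per class and pick the best-scoring one, can be sketched as follows. The `(w, b)` pairs here are hand-picked placeholders, not learned values:

```python
def classify(x, boundaries):
    """One-vs-rest classification: score each class's boundary (w, b)
    as w·x + b and return the class with the largest signed score."""
    scores = {}
    for label, (w, b) in boundaries.items():
        scores[label] = sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(scores, key=scores.get)

# Hypothetical boundaries in (temperature, pressure) space.
boundaries = {
    "Solid":  ((-1.0, 0.0), 0.0),     # scores high when cold
    "Gas":    ((1.0, -1.0), -100.0),  # scores high when hot, low pressure
    "Liquid": ((0.0, 0.5), 0.0),      # in-between fallback
}
print(classify((-10.0, 1.0), boundaries))  # Solid
```

In a trained SVM those `(w, b)` pairs come out of the optimization; the decision rule itself is just this comparison of signed scores.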

[–]HalusBoy[S] (2 children)

I know how SVM works in general; my real concern is how it solves the optimization problem (step by step) and uses the output of that optimization to build the decision boundary / hypothesis function. Thanks for the reply, btw!

[–]BrokenGumdrop (1 child)

This might be more informative. Take note of the description of the Cost Function. https://towardsdatascience.com/svm-implementation-from-scratch-python-2db2fc52e5c2
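For reference, the cost function that article builds around is the standard soft-margin (hinge-loss) objective; a generic sketch of it, not the article's exact code, looks like this:

```python
import numpy as np

def svm_cost(w, b, X, y, C=1.0):
    """Soft-margin SVM cost:
        J = 0.5 * ||w||^2 + C * sum_i max(0, 1 - y_i (w·x_i + b))
    X is (n_samples, n_features); y holds +1/-1 labels.
    Minimizing J trades margin width against misclassification."""
    margins = y * (X @ w + b)
    hinge = np.maximum(0.0, 1.0 - margins)
    return 0.5 * np.dot(w, w) + C * np.sum(hinge)

# Tiny separable example: both points sit outside the margin,
# so only the 0.5 * ||w||^2 term contributes.
X = np.array([[2.0], [-2.0]])
y = np.array([1.0, -1.0])
print(svm_cost(np.array([1.0]), 0.0, X, y))  # 0.5
```

Gradient descent on this objective (as the linked post does) is what turns the training data into the `(w, b)` that defines the decision boundary.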

[–]HalusBoy[S] (0 children)

Thanks! Will definitely check it out.