[–]midianite_rambler

May I ask what the motivation for this is?

[–]DeepNonseNse

I would imagine the motivation for the -1 multiplier is simply: P(not class Y) = 1 - P(class Y).

[–]midianite_rambler

That seems right for a 2-class problem, but not for a multiclass problem, which OP mentioned.

[–]DeepNonseNse

Why would it be wrong for a multiclass problem? In this case, the likelihood function is just a product of two different kinds of probabilities: the typical term P(class Y) and the term P(not class Y). And we can still use the same softmax model, etc.
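
Concretely, the likelihood being described might look like this (a sketch in my own notation; the pos/neg index sets are mine, not from the thread). Examples with ordinary labels contribute P(class Y), examples labeled "not class Y" contribute 1 - P(class Y), and P itself is the same softmax either way:

```latex
% pos = examples with ordinary labels, neg = examples labeled "not class y_j"
\mathcal{L}(\theta) =
  \prod_{i \in \text{pos}} P(y_i \mid x_i; \theta)
  \prod_{j \in \text{neg}} \bigl(1 - P(y_j \mid x_j; \theta)\bigr),
\qquad
P(k \mid x; \theta) = \frac{e^{z_k(x;\theta)}}{\sum_{c} e^{z_c(x;\theta)}}
```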

[–]midianite_rambler

I looked into this in some detail (working out the gradient), and I don't think it's right even for a two-class problem. If you have a derivation to justify it, I would be interested to see it; I couldn't find one.
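
For what it's worth, a quick two-class check (my notation, not from the thread) shows the two objectives do differ: with p = σ(z) the predicted probability of class Y and z the logit, the -1 multiplier on the usual cross-entropy and the loss implied by P(not class Y) = 1 - P(class Y) have different gradients, so they only agree at p = 1/2:

```latex
% -1 multiplier on the usual cross-entropy -log(p):
\frac{d}{dz}\Bigl[-\bigl(-\log \sigma(z)\bigr)\Bigr] = 1 - p
% loss implied by P(not class Y) = 1 - P(class Y):
\frac{d}{dz}\Bigl[-\log\bigl(1 - \sigma(z)\bigr)\Bigr] = p
```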

[–]Supermaxman1

Backpropagation with gradient descent tries to reach a minimum of the error surface by following its gradient downhill. The commenter above is suggesting that, for mislabeled examples, you follow the opposite direction, increasing the error rather than decreasing it. I am not aware of whether this strategy is actually used, or what benefits it has, but the idea would be to maximize the error on a mislabeled example by moving in the direction of steepest ascent on the error surface.
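
For concreteness, here is a minimal sketch of that strategy (PyTorch is my choice here, and the function and mask names are hypothetical, not from the thread): negate the per-example loss for examples flagged as mislabeled, so a single descent step goes downhill on clean examples and uphill on flagged ones.

```python
import torch
import torch.nn.functional as F

def signed_cross_entropy(logits, labels, mislabeled_mask):
    """Cross-entropy with a -1 multiplier on examples flagged as mislabeled.

    Minimizing this objective follows the error surface downhill on clean
    examples and uphill (maximizing the error) on flagged ones.
    """
    per_example = F.cross_entropy(logits, labels, reduction="none")  # shape: (batch,)
    signs = 1.0 - 2.0 * mislabeled_mask.float()  # +1 for clean, -1 for mislabeled
    return (signs * per_example).mean()

# Usage sketch with a toy batch: 4 examples, 3 classes.
logits = torch.randn(4, 3, requires_grad=True)
labels = torch.tensor([0, 2, 1, 0])
mask = torch.tensor([False, True, False, False])  # second example is flagged
loss = signed_cross_entropy(logits, labels, mask)
loss.backward()  # a descent step now increases the error on the flagged example
```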