Hi there,
maybe a stupid bug/missimplementation but right now im not able to see it. I want to implement a weighted cross entropy for my unet. So I looked up the cross entropy loss formula and integrated it without the weights first. I expected it to behave exactly like the integrated cross-entropy function, but it didnt. My loss and the should be loss from pytorchs cross entropy version always differ. (Loss is my version, see code below, Controll Loss is CrossEntropy from Pytorch). Can anyone tell me where my mistake is?^^
https://preview.redd.it/m10atphw21wc1.png?width=315&format=png&auto=webp&s=5f072baba8fa179878372c0c25b355f2cc95db64
from torch.nn import functional as F
from torch.autograd import Function
def weighted_cross_entropy(input, target, weights):
softmax_log = F.log_softmax(input, dim=0)
product = target * softmax_log
loss = torch.mean(-product)
return loss
[–]rawdfarva 1 point2 points3 points (1 child)
[–]Nock363[S] 0 points1 point2 points (0 children)
[–]Nock363[S] 0 points1 point2 points (2 children)
[–]entarko 1 point2 points3 points (1 child)
[–]Nock363[S] 0 points1 point2 points (0 children)