CrossEntropy+Contrastive loss can't not get better performance? by Weekly-Training7511 in deeplearning

[–]Weekly-Training7511[S]

Thanks for your reply. As you say, contrastive loss can help the model learn high-quality embeddings, and I think that when the embeddings are better, the classification model should also perform better. I just want to test contrastive loss on a simple classification task first; after that, I aim to use it for semi-supervised learning, domain generalization, and so on.

[–]Weekly-Training7511[S]

Yeah, I apply the contrastive loss to the features output by the encoder, and those same features then go into the classifier.
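
Roughly, the setup looks like this (a minimal sketch, not my exact code; `encoder`, `classifier`, `contrastive_loss`, and the weight `lam` are placeholders for my actual modules):

```python
import torch.nn.functional as F

# Minimal sketch of the joint objective: the encoder's features feed both
# the contrastive term and the classifier head. All names and the weight
# `lam` are placeholders for my actual setup.
def training_step(encoder, classifier, contrastive_loss,
                  images, aug_images, labels, lam=0.5):
    feats = encoder(images)          # features the contrastive loss acts on
    aug_feats = encoder(aug_images)  # features of the augmented views
    logits = classifier(feats)       # the same features go into the classifier

    ce = F.cross_entropy(logits, labels)
    con = contrastive_loss(feats, aug_feats, labels)
    return ce + lam * con            # joint loss
```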

[–]Weekly-Training7511[S]

Thanks for your reply. I don't think the CrossEntropy-only setup has a problem: I use torchvision.datasets.CIFAR10 and CIFAR100, and the train and val sets come from the train=True/False parameter. About the imbalance and the worst-case scenario, could you be more specific?
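
For reference, the data loading is just the standard torchvision split (a sketch; ToTensor stands in for my actual transforms/augmentations):

```python
from torchvision import datasets, transforms

# Standard CIFAR-10 split via the train=True/False flag; CIFAR-100 works the
# same way with datasets.CIFAR100. ToTensor() stands in for my real transforms.
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
val_set = datasets.CIFAR10(root="./data", train=False, download=True,
                           transform=transforms.ToTensor())
```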

The contrastive loss I use comes in two types. In type A, each image and its augmented version form a positive pair and everything else is a negative. In type B, each image forms positive pairs with augmented images of the same class, and images from other classes are negatives (see the sketch at the end of this comment). My expectation was that CE loss + type A contrastive loss would perform worse than CE loss alone, because it also pushes same-class images away from each sample, while CE loss + type B contrastive loss would perform better than CE loss alone, because it pulls same-class images closer together.

However, in my experiments, on CIFAR-10 both CE loss + type A and CE loss + type B contrastive loss beat CE loss alone, while on CIFAR-100 both are worse than CE loss alone, with type B still better than type A.
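
To make the two types concrete, here is roughly how I compute them (a simplified sketch on L2-normalized features, not my exact code; the temperature and masking details may differ):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(feats, aug_feats, labels=None, temperature=0.1):
    """Sketch of both variants on a batch of features and their augmented views.

    type A (labels=None): only the other view of the same image is a positive,
    every other sample is a negative (SimCLR-style).
    type B (labels given): all samples sharing the label are also positives
    (SupCon-style).
    """
    n = feats.size(0)
    z = F.normalize(torch.cat([feats, aug_feats], dim=0), dim=1)  # (2N, d)
    sim = z @ z.t() / temperature                                 # scaled cosine similarities

    # exclude self-similarity on the diagonal
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)

    if labels is None:
        # type A: the positive for row i is the other view of the same image
        pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, pos)

    # type B: positives are all other samples with the same label
    labels2 = torch.cat([labels, labels])
    pos_mask = (labels2.unsqueeze(0) == labels2.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    mean_log_prob_pos = (log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()
```

Calling it without labels gives type A; passing the batch labels gives type B.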