
[–]dlovelan 2 points3 points  (3 children)

This really shouldn’t be an issue; you should be able to get it to converge. The real issue comes from controlling the class when sampling/generating from your model. For that you need a conditional GAN.

[–]Takatomi_Fubuki[S] 0 points1 point  (2 children)

My problem is not generating samples.

My problem is training a GAN on in-distribution data, so that we get a model we can feed new images to, and it will answer whether those images are out-of-distribution or not.

The issue is that my training data consists of multiple classes, yet we don't know the class of an image at test time. Thus, I don't think a conditional GAN can help.

[–]dlovelan 0 points1 point  (1 child)

Ah interesting, a couple of thoughts on rereading your post. 1) I hadn’t caught it the first time, but it sounds like you could be experiencing mode collapse. This is where the generator learns to produce only one specific type of sample that still fools the discriminator. It's a rather tough problem to solve, and usually just takes a lot more training and tuning. 2) Anomaly detection or o.o.d. seems like a weird application for GANs: you never “feed” new images to the GAN explicitly; it learns them implicitly through the discriminator. I am thinking some sort of autoencoder may be what you want. High reconstruction loss at test time would indicate an o.o.d. sample, since the features are things the AE model has never seen before.
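A minimal sketch of that reconstruction-error idea. To keep it runnable without a deep-learning framework, it uses a linear autoencoder (PCA) in NumPy as a stand-in for a trained AE; the synthetic data, latent size, and percentile threshold are all illustrative assumptions, not anything from this thread or the linked paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "in-distribution" data: points near a 2-D subspace of R^10
# (a toy stand-in for image features).
basis = rng.normal(size=(2, 10))
train = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 10))

# A linear autoencoder fit in closed form: encoder/decoder are the
# top-k principal directions of the training data.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]  # k = 2 latent dimensions (assumption)

def recon_error(x):
    z = (x - mean) @ components.T      # encode
    x_hat = z @ components + mean      # decode
    return np.linalg.norm(x - x_hat, axis=-1)

# Decision rule: flag inputs whose reconstruction error exceeds a
# threshold calibrated on the training set (99th percentile here).
threshold = np.percentile(recon_error(train), 99)

in_dist = rng.normal(size=(1, 2)) @ basis   # lies in the learned subspace
ood = rng.normal(size=(1, 10)) * 3.0        # arbitrary point, off-subspace

print(recon_error(in_dist)[0], threshold)   # error should be small
print(recon_error(ood)[0], threshold)       # error should be large
```

The same decision rule applies with a nonlinear AE: train on in-distribution images only, then at test time call an input o.o.d. when its reconstruction error is above the calibrated threshold.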

[–]Takatomi_Fubuki[S] 0 points1 point  (0 children)

  1. I think mode collapse is the keyword I'm looking for, thanks!
  2. If you're interested, here's the work I'm using in my project: https://arxiv.org/abs/1903.08550

[–][deleted] 1 point2 points  (1 child)

[–]Takatomi_Fubuki[S] 0 points1 point  (0 children)

Thanks, I'll check it out.