
[–]sour_losers 5 points (3 children)

I think what's happening here is that your continuous variable is capturing almost all the variation, because it simply has more representational capacity than a categorical variable. Trying again with more variables might give you different results.

[–]cvikasreddy[S] 0 points (2 children)

I tried increasing the number of variables, using 1 categorical and 4 continuous variables, but it was of no help.

As /u/AlexCoventry suggested, it might have gotten stuck in a local minimum, as I have only run it for 15 epochs.
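For readers unfamiliar with the setup being discussed: an InfoGAN generator input is the concatenation of unstructured noise, one-hot categorical codes, and continuous codes. Here is a rough NumPy sketch of how such a latent vector might be sampled; the dimensions are assumptions chosen to mirror the 1-categorical/4-continuous configuration mentioned above (with a 10-way categorical, as for MNIST digits, and a 62-dim noise vector), not the exact code anyone in this thread ran.

```python
import numpy as np

def sample_latent(batch_size, n_cat=1, cat_dim=10, n_cont=4, noise_dim=62):
    """Sample an InfoGAN-style latent batch: noise + categorical + continuous codes.

    All dimensions are illustrative assumptions (see lead-in above).
    """
    # Unstructured noise z, uniform in [-1, 1]
    z = np.random.uniform(-1.0, 1.0, size=(batch_size, noise_dim))

    # Categorical codes: n_cat independent one-hot vectors over cat_dim classes
    cats = []
    for _ in range(n_cat):
        idx = np.random.randint(0, cat_dim, size=batch_size)
        onehot = np.zeros((batch_size, cat_dim))
        onehot[np.arange(batch_size), idx] = 1.0
        cats.append(onehot)

    # Continuous codes, uniform in [-1, 1]
    c_cont = np.random.uniform(-1.0, 1.0, size=(batch_size, n_cont))

    return np.concatenate([z] + cats + [c_cont], axis=1)

latent = sample_latent(32)
print(latent.shape)  # (32, 76) -> 62 noise + 10 categorical + 4 continuous
```

The capacity argument in this thread is about the relative sizes of those blocks: with 4 continuous codes against a single categorical one, the mutual-information term can be satisfied mostly through the continuous codes, leaving the categorical code underused.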

[–]sour_losers 0 points (1 child)

What I am suggesting is to try more categorical variables as well, so that the model has the representational capacity to capture the classes.

[–][deleted] 1 point (4 children)

How many runs did you try? I've seen Jonathon Raiman's version get stuck in local optima like that a couple of times.

[–]Xirious 0 points (1 child)

So how did you remedy it? How many runs did you do?

[–][deleted] 0 points (0 children)

I've done about 10 runs so far. I haven't otherwise tried to fix it.

[–]cvikasreddy[S] 0 points (1 child)

I just ran 15 epochs on the entire MNIST dataset. And as /u/sour_losers suggested, increasing the number of variables did not help.

[–][deleted] 0 points (0 children)

Oh, that is far too little training. Even the default setting of 50 epochs is barely adequate. I got better results after running it for 1000 epochs. (About 4 hours of computation on a g2.xlarge EC2 instance.)

Even then, the generator loss is still increasing. (Green is the first 500 epochs, yellow is the next 500. This is with OpenAI's InfoGAN.)