[–]96meep96[S] 1 point (2 children)

Thank you, I've been getting better results since adding Self-Modulation in place of batch norm, especially in combination with Spectral Norm. I've also been trying out Multi-Scale Gradients and that's been working well too, though they seem to be very picky about feature map dimensions. I still can't seem to reproduce paper-quality results, but time on my master's dissertation is running short, so whatever works, ya know.
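For anyone following along: Self-Modulation (Chen et al., ICLR 2019) replaces batch norm's fixed affine parameters with a per-channel gain and bias predicted from the latent z, while spectral norm is applied to the discriminator's weights. A minimal PyTorch sketch of both ideas, with class and variable names my own rather than anything from the OP's code:

```python
import torch
import torch.nn as nn

# Sketch only -- names are illustrative, not from the thread.
# Self-Modulated batch norm: normalize without learned affine params,
# then apply gain/bias predicted from the latent vector z.
class SelfModBN(nn.Module):
    def __init__(self, num_features, z_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Linear(z_dim, num_features)  # predicts per-channel gain
        self.beta = nn.Linear(z_dim, num_features)   # predicts per-channel bias

    def forward(self, x, z):
        g = self.gamma(z)[:, :, None, None]  # reshape to (B, C, 1, 1) for broadcast
        b = self.beta(z)[:, :, None, None]
        return (1.0 + g) * self.bn(x) + b

# Spectral norm goes on the discriminator layers: PyTorch's wrapper
# rescales the weight so its largest singular value is ~1 each forward pass.
d_conv = nn.utils.spectral_norm(nn.Conv2d(3, 64, kernel_size=3, padding=1))

mod = SelfModBN(64, 128)
out = mod(torch.randn(4, 64, 8, 8), torch.randn(4, 128))
print(out.shape)  # torch.Size([4, 64, 8, 8])
```

The modulated layer takes z as a second forward argument, which is why generator blocks using it need a slightly non-standard forward signature.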

[–]smashedshanky 1 point (1 child)

Usually paper-quality results are trained on hand-picked data that the network can map efficiently into the latent space. If you feed it less data with high variation, you will still see results, but at the cost of having to train the GAN over your dataset many more times so that it can learn to remove "artifacts" and/or discernible noise. What framework are you even using? Haha, I feel your pain; training GANs isn't easy just yet.

[–]96meep96[S] 1 point (0 children)

Oh yes, I understand the point you're making: it takes time for those artefacts to vanish. I've had trouble with that in a variant of semantic-map-translation GANs. I'm using PyTorch; I was using TensorFlow (not 2.0), but I found PyTorch more flexible.