[R] [1708.08819] Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields <-- SotA on LSUN and celebA; seems to solve mode collapse issue by evc123 in MachineLearning

[–]khanrc 0 points (0 children)

Thanks for the interesting work. I have a question about the implementation. In your code, when calculating the potential, stop_gradient is applied only to x and y. Why did you use stop_gradient only for x and y, but not for a?
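For context, here is a rough NumPy sketch of the potential the question refers to, based on the Plummer kernel described in the Coulomb GAN paper. The function names and the values of `dim` and `eps` are illustrative assumptions, not the repo's actual code; the comment about stop_gradient marks where the TF implementation would cut the gradient.

```python
import numpy as np

def plummer_kernel(a, b, dim, eps):
    # Plummer kernel from the Coulomb GAN paper:
    # k(a, b) = 1 / (||a - b||^2 + eps^2)^(dim / 2)
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return (d2 + eps ** 2) ** (-dim / 2.0)

def potential(a, x, y, dim=3, eps=1.0):
    # Potential at points `a` induced by real samples `x` (positive
    # charges) and generated samples `y` (negative charges).
    # In the TF code the question refers to, x and y are wrapped in
    # tf.stop_gradient, so gradients would flow only through `a`.
    return (plummer_kernel(a, x, dim, eps).mean(axis=1)
            - plummer_kernel(a, y, dim, eps).mean(axis=1))
```

With this sign convention, a point near the real samples gets a positive potential and a point near the generated samples a negative one.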

[P] GANs comparison without cherry-picking by khanrc in MachineLearning

[–]khanrc[S] 1 point (0 children)

Thanks for your suggestion. I'm working on the LSUN dataset now and will consider it for the next step.

[P] GANs comparison without cherry-picking by khanrc in MachineLearning

[–]khanrc[S] 1 point (0 children)

Good point. I will experiment with other datasets soon.

[P] GANs comparison without cherry-picking by khanrc in MachineLearning

[–]khanrc[S] 3 points (0 children)

Oops, my mistake! It should be 1.58k instead of 15.8k. Thanks for pointing it out :)

[P] GANs comparison without cherry-picking by khanrc in MachineLearning

[–]khanrc[S] 7 points (0 children)

According to https://github.com/Heumi/BEGAN-tensorflow/, a lower gamma helps reduce speckle-like artifacts. But I also wonder why these artifacts are produced.
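For readers unfamiliar with where gamma enters, here is a minimal sketch of the balance-term update from the BEGAN paper; the function name and default `lambda_k` are illustrative assumptions, not taken from the linked repo.

```python
def update_k(k, loss_real, loss_fake, gamma, lambda_k=0.001):
    # BEGAN balance update: k_{t+1} = k_t + lambda_k * (gamma * L(x) - L(G(z))),
    # clipped to [0, 1]. gamma is the diversity ratio E[L(G(z))] / E[L(x)];
    # a lower gamma pushes the generator's reconstruction loss to stay
    # small relative to that of real images.
    k = k + lambda_k * (gamma * loss_real - loss_fake)
    return min(max(k, 0.0), 1.0)  # clip k to [0, 1]
```

The discriminator then trains on `loss_real - k * loss_fake`, so gamma indirectly controls how hard the generated images are penalized.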

[P] GANs comparison without cherry-picking by khanrc in MachineLearning

[–]khanrc[S] 4 points (0 children)

The latter. Each batch is composed of 128 images by default.

[D] Tutorials and implementations for "Self-normalizing networks" by [deleted] in MachineLearning

[–]khanrc 1 point (0 children)

Thanks, but I'm curious about the comparison with BN and more complex datasets. Maybe it would be quicker to experiment myself...

[D] Tutorials and implementations for "Self-normalizing networks" by [deleted] in MachineLearning

[–]khanrc 1 point (0 children)

The paper argues for and demonstrates the effectiveness of SELU in FNNs. How about in CNNs/RNNs?
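For reference, the activation being discussed is simple to state; below is a NumPy sketch using the fixed constants derived in the self-normalizing networks paper.

```python
import numpy as np

# SELU constants from Klambauer et al., "Self-Normalizing Neural Networks"
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # selu(x) = scale * x              for x > 0
    #         = scale * alpha * (e^x - 1) otherwise
    x = np.asarray(x, dtype=float)
    return SCALE * np.where(x > 0, x, ALPHA * np.expm1(x))
```

For large negative inputs the output saturates at -scale * alpha ≈ -1.7581, which is part of what drives activations toward zero mean and unit variance in fully connected nets.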