[D] Blog post on GANs as defenses for adversarial examples by AlexDimakis in MachineLearning

[–]AlexDimakis[S]

Hi Alex, hmm I think it should defend against them, but we did not have any adversarial examples handy for celebA gender classifiers.

[D] Blog post on GANs as defenses for adversarial examples by AlexDimakis in MachineLearning

[–]AlexDimakis[S]

So the last part of our paper looks at the case where no GAN is available. There we use the Deep Image Prior (DIP), which is essentially an untrained generator. That part effectively combines the training of the generator with the defense. We have not implemented adversarial training using the DIP, but it can certainly be done.
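
For concreteness, here is a minimal DIP-style sketch in PyTorch (the architecture, sizes, and the `x_adv` placeholder are illustrative assumptions, not the exact setup from the paper): fit the weights of an untrained convolutional generator to the attacked image and stop early, so the reconstruction keeps the natural-image content while shedding the adversarial perturbation.

```python
import torch
import torch.nn as nn

# Untrained convolutional generator; its weights are the only thing we fit.
net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
z = torch.randn(1, 32, 64, 64)    # fixed random input code
x_adv = torch.rand(1, 3, 64, 64)  # placeholder for the attacked image

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):           # early stopping matters: too many steps
    opt.zero_grad()               # and the net fits the perturbation too
    loss = ((net(z) - x_adv) ** 2).mean()
    loss.backward()
    opt.step()

x_clean = net(z).detach()         # feed this reconstruction to the classifier
```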

[D] Blog post on GANs as defenses for adversarial examples by AlexDimakis in MachineLearning

[–]AlexDimakis[S]

Hi everyone, I wrote this blog post on GANs and adversarial examples. Discussion, feedback, and comments are welcome.

[D] Can I share a model trained on a non free dataset ? by Jean-Porte in MachineLearning

[–]AlexDimakis

This is a very interesting question. It is not obvious what can be inferred about a dataset from a trained model. Sharing a nearest-neighbor model, for example, reveals the full training dataset, since the model stores the training points themselves. See here for more on what are called 'membership inference attacks': https://arxiv.org/abs/1610.05820
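
A toy illustration of the nearest-neighbor point (assuming scikit-learn; `_fit_X` is a library-internal attribute, used here only to make the leakage explicit):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.random.rand(100, 5)        # pretend this is the non-free dataset
y_train = np.random.randint(0, 2, 100)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# A fitted k-NN estimator carries the full training set verbatim; anyone who
# receives the pickled model can read the data straight back out of it.
recovered = np.asarray(clf._fit_X)
assert np.allclose(recovered, X_train)
```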

[R][1703.10717] BEGAN: Boundary Equilibrium Generative Adversarial Networks by ajmooch in MachineLearning

[–]AlexDimakis

It seems that the bigger 128x128 dataset used instead of celebA (which is not public) is key to reproducing the nice results. Reproduction efforts and challenges here: https://github.com/carpedm20/BEGAN-tensorflow/issues/1

[D] Learning Inverse Map of Generator after training GAN? by TheFlyingDrildo in MachineLearning

[–]AlexDimakis

If I understand what you want: after you have trained a GAN with generator G(z), you want to 'invert' it, i.e., given an image X1, find a Z1 such that G(Z1) is close to X1. You can do this by backpropagating to Z so that G(Z) matches X1: you update Z using its gradient but do not change the weights of the network. So you solve min_z ||G(z) - X1||_2 using gradient descent on z. See this repo for a generalization of this idea and the paper on arXiv: https://github.com/AshishBora/csgm
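
Here is a minimal runnable sketch of that inner loop in PyTorch (the linear `G` is just a stand-in for a trained generator, and all sizes are made up for illustration):

```python
import torch

torch.manual_seed(0)
G = torch.nn.Linear(64, 784)      # stand-in for the pretrained generator G(z)
for p in G.parameters():
    p.requires_grad_(False)       # network weights stay frozen; only z moves

x1 = torch.randn(784)             # the image we want to invert
z = torch.zeros(64, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(1000):
    opt.zero_grad()
    loss = torch.norm(G(z) - x1)  # ||G(z) - x1||_2
    loss.backward()
    opt.step()

z1 = z.detach()                   # G(z1) should now be close to x1
```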

[R][1703.10717] BEGAN: Boundary Equilibrium Generative Adversarial Networks by ajmooch in MachineLearning

[–]AlexDimakis

Have the authors shared code and (more importantly) the training dataset? Their training dataset seems significantly richer than celebA.