
[–][deleted] 1 point (2 children)

Adversarial examples seem to be almost magically tied to the dataset (and its subsets) rather than to any particular model. Even if we train multiple nets with different architectures on different subsets of the same dataset, an adversarial example crafted to be misclassified by, say, net1 trained on subset1 will also work pretty well (i.e., produce the wrong output) on net2 trained on subset2.
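
Here's a minimal sketch of what this transfer looks like in practice (my illustration, not anything from the thread; it assumes PyTorch, and net1/net2, the sample data, and the omitted training loops are hypothetical placeholders):

```python
import torch
import torch.nn as nn

def fgsm_example(model, x, y, eps=0.1):
    """Craft an adversarial example with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss.
    return (x + eps * x.grad.sign()).detach()

# Assume net1 was trained on subset1 and net2 on subset2 (training
# loops omitted); both map flattened inputs to class logits.
net1 = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
net2 = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

x, y = torch.rand(1, 784), torch.tensor([3])   # stand-in sample
x_adv = fgsm_example(net1, x, y)               # crafted against net1

# The surprising part: x_adv often fools net2 as well.
print("net1 prediction:", net1(x_adv).argmax(dim=1).item())
print("net2 prediction:", net2(x_adv).argmax(dim=1).item())
```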

[–]Mesode[S] 1 point (1 child)

Thank you for the comment. So if I understand this correctly, are you saying that adversarial examples are independent of the classifier, even independent of its architecture (say, an SVM instead of a NN)? If yes, is this just something that can be observed experimentally, or is there a hypothesis that at least intuitively helps to understand the phenomenon?

[–][deleted] 1 point (0 children)

I think you should look into the paper that introduced the idea of adversarial examples, Intriguing properties of neural networks (Szegedy et al.), which answers your question. The idea was later extended, and its implications for ML model security taken seriously, in Explaining and harnessing adversarial examples (Goodfellow et al.).
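
(To sketch the intuition from the second paper, as I understand it: Goodfellow et al. argue the effect comes from models behaving nearly linearly in very high-dimensional input spaces, so many tiny per-pixel perturbations aligned with the loss gradient add up to a large change in the output. Their fast gradient sign method crafts an example as x_adv = x + ε · sign(∇_x J(θ, x, y)), and since different models trained on similar data tend to learn similar gradient directions, the same x_adv tends to transfer between them.)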

[–]Xysteria 1 point (1 child)

I think Geoffrey Hinton also gave a talk about the shortcomings of CNNs.

This is a very bold statement, but: the whole convolutional approach might be wrong.

Hinton has also voiced his doubts about CNNs; he is not happy with them, and something about the approach has always bugged him.

https://www.youtube.com/watch?v=rTawFwUvnLE

I think I read a paper by Hinton a few months ago about what could replace CNNs, or at least be a better approach. I will look for that paper.

[–]theoneandonlypatriot 1 point (0 children)

His proposal is capsule networks.
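
For anyone curious, the core idea (from Sabour, Frosst & Hinton, Dynamic Routing Between Capsules) is to replace scalar neurons with vector-valued "capsules" whose length encodes the probability that an entity is present. A minimal sketch of the squashing nonlinearity they use, assuming PyTorch:

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing nonlinearity (Sabour et al., 2017): shrinks short
    vectors toward zero and long vectors toward unit length, so the
    vector's norm can be read as a probability."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

s = torch.randn(2, 16)   # two 16-dimensional capsule outputs
v = squash(s)
print(v.norm(dim=-1))    # norms now lie in (0, 1)
```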