PUBLICATIONS
Robustness of Rotation-Equivariant Networks to Adversarial Perturbations
Deep neural networks have been shown to be vulnerable to adversarial examples: very small perturbations of the input that have a dramatic impact on the predictions. A wealth of adversarial attacks and distance metrics for quantifying the similarity between natural and adversarial images have been proposed, recently enlarging the scope of adversarial examples to geometric transformations beyond pixel-wise attacks. In this context, we investigate the robustness to adversarial attacks of new convolutional neural network architectures that provide equivariance to rotations. We find that rotation-equivariant networks are significantly less vulnerable to geometry-based attacks than regular networks on the MNIST, CIFAR-10, and ImageNet datasets.
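To illustrate the kind of geometry-based attack studied here, the sketch below grid-searches rotation angles for one that flips a classifier's prediction. This is a minimal illustration under assumed inputs (a PyTorch `model`, a `(C, H, W)` image tensor, and its true `label`), not the paper's exact attack; the function name and angle grid are hypothetical.

```python
import torch
import torchvision.transforms.functional as TF


def rotation_attack(model, image, label, angles=None):
    """Search a grid of rotation angles for one that changes the prediction.

    Illustrative geometry-based attack: `model` is any PyTorch classifier,
    `image` a (C, H, W) tensor, `label` the true class index. The +/-30
    degree grid is an arbitrary choice for this sketch.
    """
    if angles is None:
        angles = torch.linspace(-30.0, 30.0, steps=61)  # degrees
    model.eval()
    with torch.no_grad():
        for angle in angles.tolist():
            # Rotate the image and query the model on the rotated input.
            rotated = TF.rotate(image.unsqueeze(0), angle)
            pred = model(rotated).argmax(dim=1).item()
            if pred != label:
                return angle, rotated  # found a fooling rotation
    return None, None  # no rotation on this grid changes the prediction
```

A rotation-equivariant network would, by construction, respond to such rotations in a predictable way, which is one intuition for the robustness gap reported above.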