28 Sep 2020 • Jacob M. Springer, Bryn Marie Reinstadler, Una-May O'Reilly
Neural networks are well known to be vulnerable to imperceptible perturbations of their inputs, called adversarial examples, that cause misclassification.
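To make the idea concrete, here is a minimal sketch of how an adversarial example can be constructed with the fast gradient sign method (FGSM), using a toy logistic-regression classifier in NumPy. This is an illustration of the general phenomenon only, not the setup or method of the paper; the model, weights, and function names are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM attack on a logistic-regression classifier.

    Perturbs the input x by eps in the direction of the sign of the
    gradient of the cross-entropy loss with respect to x.
    (Toy example; not the paper's model.)
    """
    logit = w @ x + b
    p = 1.0 / (1.0 + np.exp(-logit))     # predicted probability of class 1
    grad_x = (p - y) * w                 # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

# Tiny demo: a correctly classified point is flipped by a small perturbation.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.8, 0.2])                 # clean input, true label y = 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.2)
```

Here the clean input is classified correctly (its logit is positive), while the adversarial input, which differs from it by at most 0.2 per coordinate, is pushed across the decision boundary.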