
Exploring the Back Alleys: Analysing the Robustness of Alternative Neural Network Architectures against Adversarial Attacks

We investigate to what extent alternative variants of Artificial Neural Networks (ANNs) are susceptible to adversarial attacks. We analyse the adversarial robustness of conventional ANNs, stochastic ANNs, and Spiking Neural Networks (SNNs) in the raw image space, across three different datasets. Our experiments reveal that stochastic ANN variants are almost as susceptible as conventional ANNs when faced with simple iterative gradient-based attacks in the white-box setting. However, we observe that in black-box settings, stochastic ANNs are more robust than conventional ANNs when faced with boundary, transferability, and surrogate attacks. Consequently, we propose improved attacks and defence mechanisms for stochastic ANNs in black-box settings. When performing surrogate-based black-box attacks, one can employ stochastic models as surrogates to achieve higher attack success against both stochastic and deterministic targets. Against stochastic targets, this success can be further improved with our proposed Variance Mimicking (VM) surrogate training method. Finally, adopting a defender's perspective, we investigate the plausibility of employing stochastic switching of model mixtures as a viable hardening mechanism, and observe that such a scheme provides partial hardening.
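The "simple iterative gradient-based attacks" referenced in the abstract are standard white-box methods; the paper does not specify its exact configuration here, so the following is only a minimal PGD-style sketch under an L-infinity budget. The parameter names and default values (`eps`, `alpha`, `steps`, `grad_samples`) are illustrative assumptions; setting `grad_samples > 1` averages gradients over repeated forward passes, which is one common way to attack a stochastic model.

```python
import torch
import torch.nn.functional as F

def iterative_gradient_attack(model, x, y, eps=0.03, alpha=0.007,
                              steps=10, grad_samples=1):
    """PGD-style iterative attack under an L-infinity perturbation budget.

    For stochastic models, gradients can be averaged over `grad_samples`
    forward passes to reduce noise in the attack direction (an assumption
    for illustration, not necessarily the paper's configuration).
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x_adv)
        for _ in range(grad_samples):
            loss = F.cross_entropy(model(x_adv), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
        grad /= grad_samples
        with torch.no_grad():
            # Ascend the loss, project back to the eps-ball around x,
            # and clip to the valid pixel range.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv

# Example usage (model, images, labels assumed to be defined):
# adv_images = iterative_gradient_attack(model, images, labels, grad_samples=5)
```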
