Adversarial training applied to Convolutional Neural Network for photometric redshift predictions

24 Feb 2020 · Campagne Jean-Eric

The use of Convolutional Neural Networks (CNN) to estimate galaxy photometric redshift probability distributions from images in different wavelength bands has developed in recent years thanks to the rapid growth of the Machine Learning (ML) ecosystem. Authors have set up CNN architectures and studied their performance and some sources of systematics, using standard training and testing methods to assess the generalisation power of their models. So far so good, but one piece was missing: is the generalisation power of these models well measured? The present article shows clearly that very small image perturbations can completely fool a model, opening the Pandora's box of adversarial attacks. Among the different techniques and scenarios, we have chosen the one-step Fast Gradient Sign Method and its iterative extension, Projected Gradient Descent, as the adversarial sample generation toolkit. Unlikely as it may seem, these adversarial samples, which can fool more than a single model, reveal a weakness of both the model and the classical training procedure. A revisited training algorithm is presented and applied, in which a fraction of adversarial samples is injected during the training phase. Numerical experiments have been conducted with a specific CNN model for illustration, although our study could be applied to other models, not only CNNs, and in other contexts, not only redshift measurements, as it deals with the complexity of the decision boundary surface.
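To make the approach concrete, below is a minimal PyTorch sketch of the two attacks named in the abstract (the one-step Fast Gradient Sign Method of Goodfellow et al. and its Projected Gradient Descent iterative extension of Madry et al.) together with a training loop that injects a fraction of adversarial samples into each batch. This is an illustration of the generic technique, not the paper's actual code: the cross-entropy loss (assuming redshifts are predicted as a distribution over bins), the `eps`, `alpha`, `steps`, and `frac` values, and the function names are all hypothetical choices.

```python
# Sketch of FGSM / PGD adversarial generation and adversarial training.
# Model, loader, loss, and all hyperparameter values are illustrative
# assumptions, not the configuration used in the paper.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x L(model(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()

def pgd_attack(model, x, y, eps, alpha=None, steps=10):
    """Iterative PGD: repeated FGSM-like steps of size alpha, each
    projected back into the L-infinity ball of radius eps around x."""
    alpha = alpha if alpha is not None else eps / 4
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Projection step: clip the cumulative perturbation to the eps-ball.
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, eps=0.01, frac=0.5):
    """One epoch of the revisited training scheme: a fraction `frac`
    of each batch is replaced by its PGD-perturbed counterpart."""
    model.train()
    for x, y in loader:
        n_adv = int(frac * x.size(0))
        if n_adv > 0:
            x = x.clone()
            x[:n_adv] = pgd_attack(model, x[:n_adv], y[:n_adv], eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
```

Because the perturbation size `eps` is small relative to the pixel dynamic range, the perturbed images are visually indistinguishable from the originals, which is precisely why such attacks expose a weakness that standard train/test validation does not measure.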


Categories


Instrumentation and Methods for Astrophysics
Image and Video Processing