Adversarial training applied to Convolutional Neural Network for photometric redshift predictions (2002.10154v1)

Published 24 Feb 2020 in astro-ph.IM and eess.IV

Abstract: The use of Convolutional Neural Networks (CNN) to estimate the galaxy photometric redshift probability distribution by analysing images in different wavelength bands has developed in recent years thanks to the rapid growth of the Machine Learning (ML) ecosystem. Authors have set up CNN architectures and studied their performance and some sources of systematics using standard training and testing methods to ensure the generalisation power of their models. So far so good, but one piece was missing: is the model's generalisation power well measured? The present article shows clearly that very small image perturbations can fool the model completely, opening the Pandora's box of \textit{adversarial} attacks. Among the different techniques and scenarios, we have chosen to use the one-step Fast Gradient Sign Method and its iterative extension, Projected Gradient Descent, as the adversarial generator toolkit. However, as unlikely as it may seem, these adversarial samples, which fool more than a single model, reveal a weakness of both the model and the classical training. A revisited algorithm is shown and applied, injecting a fraction of adversarial samples during the training phase. Numerical experiments have been conducted using a specific CNN model for illustration, although our study could be applied to other models (not only CNNs) and in other contexts (not only redshift measurements), as it deals with the complexity of the decision boundary surface.
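The two attack generators named in the abstract can be illustrated with a toy differentiable model. The logistic model, weights, and step sizes below are illustrative assumptions only, not the paper's CNN or its settings; the point is the mechanics of the one-step sign perturbation and its projected iterative extension.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    """Binary cross-entropy of a logistic model on a single input x."""
    p = sigmoid(np.dot(w, x) + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method: one step of size eps along sign(dL/dx)."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w              # closed-form input gradient for logistic + BCE
    return x + eps * np.sign(grad_x)

def pgd_perturb(x, y, w, b, eps, alpha, steps):
    """Projected Gradient Descent: iterated FGSM steps of size alpha,
    projected back onto the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(np.dot(w, x_adv) + b)
        grad_x = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # projection step
    return x_adv

# Toy example: a clean input with label y = 1 (all values are made up)
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([1.0, -0.5, 0.3])
y = 1.0

x_fgsm = fgsm_perturb(x, y, w, b, eps=0.3)
x_pgd = pgd_perturb(x, y, w, b, eps=0.3, alpha=0.1, steps=5)
# Both perturbed inputs incur a strictly higher loss than the clean one,
# while staying within an eps-sized box around x.
```

The adversarial-training variant described in the abstract would then mix a fraction of such perturbed samples into each training batch; the sketch above only covers the sample-generation side.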

Citations (4)