Deblending galaxies with Variational Autoencoders: a joint multi-band, multi-instrument approach (2005.12039v2)
Abstract: Blending of galaxies is a major contributor to the systematic error budget of weak lensing studies, affecting photometric and shape measurements, particularly for deep, ground-based photometric galaxy surveys such as the Rubin Observatory Legacy Survey of Space and Time (LSST). Existing deblenders mostly rely on analytic modelling of galaxy profiles and suffer from a lack of flexible yet accurate models. We propose to use generative models based on deep neural networks, namely variational autoencoders (VAEs), to learn probabilistic models directly from data. We train a VAE on images of centred, isolated galaxies, which we reuse, as a prior, in a second VAE-like neural network in charge of deblending galaxies. We train our networks on simulated images including the six LSST bandpass filters and the visible and near-infrared bands of the Euclid satellite, as our method naturally generalises to multiple bands and can incorporate data from multiple instruments. In most cases we obtain median reconstruction errors between $\pm 0.01$ and $\pm 0.05$ on ellipticities and $r$-band magnitude respectively, and an ellipticity multiplicative bias of 1.6% for blended objects in the optimal configuration. We also study the impact of decentring and show the method to be robust to it. The method requires only the approximate centre of each target galaxy, with no assumptions about the number of surrounding objects, pointing to an iterative detection/deblending procedure we leave for future work. Finally, we discuss future challenges of training on real data and obtain encouraging results when applying transfer learning. Our code is publicly available on GitHub (https://github.com/LSSTDESC/DeblenderVAE).
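The VAE training described above optimises the evidence lower bound (ELBO): a reconstruction term plus a KL regulariser that keeps the latent posterior close to a standard-normal prior. Below is a minimal NumPy sketch of that objective for a diagonal-Gaussian posterior; the function names are illustrative and are not taken from the paper's codebase.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL divergence between N(mu, diag(exp(log_var))) and N(0, I),
    summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def negative_elbo(x, x_recon, mu, log_var):
    """Negative ELBO: pixel-wise squared-error reconstruction term
    (Gaussian likelihood up to a constant) plus the KL regulariser."""
    recon = np.sum((x - x_recon) ** 2)
    return recon + kl_to_standard_normal(mu, log_var)
```

With a perfect reconstruction and a posterior equal to the prior (`mu = 0`, `log_var = 0`), both terms vanish and the loss is zero; in the paper's two-stage setup, a loss of this form first trains the prior VAE on isolated galaxies, and its decoder is then frozen inside the deblender network.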