
Deblending galaxies with Variational Autoencoders: a joint multi-band, multi-instrument approach (2005.12039v2)

Published 25 May 2020 in astro-ph.IM and astro-ph.CO

Abstract: Blending of galaxies contributes significantly to the systematic error budget of weak lensing studies, affecting photometric and shape measurements, particularly for ground-based, deep, photometric galaxy surveys such as the Rubin Observatory Legacy Survey of Space and Time (LSST). Existing deblenders mostly rely on analytic modelling of galaxy profiles and suffer from a lack of flexible yet accurate models. We propose to use generative models based on deep neural networks, namely variational autoencoders (VAE), to learn probabilistic models directly from data. We train a VAE on images of centred, isolated galaxies, which we reuse as a prior in a second VAE-like neural network in charge of deblending galaxies. We train our networks on simulated images including six LSST bandpass filters and the visible and near-infrared bands of the Euclid satellite, as our method naturally generalises to multiple bands and can incorporate data from multiple instruments. We obtain median reconstruction errors on ellipticities and $r$-band magnitude of $\pm{0.01}$ and $\pm{0.05}$ respectively in most cases, and an ellipticity multiplicative bias of 1.6% for blended objects in the optimal configuration. We also study the impact of decentring and show the method to be robust. This method only requires the approximate centre of each target galaxy, but no assumptions about the number of surrounding objects, pointing to an iterative detection/deblending procedure we leave for future work. Finally, we discuss future challenges about training on real data and obtain encouraging results when applying transfer learning. Our code is publicly available on GitHub (https://github.com/LSSTDESC/DeblenderVAE).
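The two-stage architecture described in the abstract (a VAE trained on isolated galaxies whose decoder is reused as a prior by a second, deblending network) can be sketched schematically. The following is a minimal NumPy illustration of the core VAE mechanics only, with toy linear layers standing in for the paper's convolutional networks; all dimensions, weight shapes, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): flattened multi-band stamp
# of 64x64 pixels in 8 bands (6 LSST + 2 Euclid) -> 32-d latent space.
D_IN, D_LATENT = 64 * 64 * 8, 32

# Toy linear "encoder"/"decoder" weights stand in for trained conv networks.
W_enc = rng.normal(0, 0.01, (D_IN, 2 * D_LATENT))
W_dec = rng.normal(0, 0.01, (D_LATENT, D_IN))

def encode(x):
    # Encoder outputs the mean and log-variance of the latent posterior.
    h = x @ W_enc
    return h[:, :D_LATENT], h[:, D_LATENT:]

def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick: z = mu + sigma * eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    # Stage 1 (conceptually): this decoder, trained on isolated galaxies,
    # acts as a learned prior over galaxy images.
    return z @ W_dec

# Stage 2 (conceptually): a new encoder maps a *blended* scene into the
# latent space of the fixed decoder, so decoding yields the deblended
# central galaxy rather than the full blend.
x_blend = rng.normal(0, 1, (1, D_IN))  # toy stand-in for a blended stamp
mu, logvar = encode(x_blend)
z = reparameterize(mu, logvar)
x_deblended = decode(z)
print(x_deblended.shape)
```

In training, the reconstruction loss would compare `x_deblended` against the isolated target galaxy, while a KL term regularises the latent posterior toward the prior; those losses are omitted here for brevity.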
