STCTM: a forward modeling and retrieval framework for stellar contamination and stellar spectra (2508.19297v1)

Published 25 Aug 2025 in astro-ph.IM, astro-ph.EP, and astro-ph.SR

Abstract: Transmission spectroscopy is a key avenue for the near-term study of small-planet atmospheres and the most promising method when it comes to searching for atmospheres on temperate rocky worlds, which are often too cold for planetary emission to be detectable. At the same time, the small planets that are most amenable to such atmospheric probes orbit small and cool M dwarf stars. As the field becomes increasingly ambitious in the search for signs of even thin atmospheres on small exoplanets, the transit light source effect (TLSE), caused by unocculted stellar surface heterogeneities, is becoming a limiting factor: it is imperative to develop robust inference methods to disentangle planetary and stellar contributions to the observed spectra. Here, I present STCTM, the STellar ConTamination Modeling framework, a flexible Bayesian retrieval framework to model the impact of the TLSE on any exoplanet transmission spectrum, and infer the range of stellar surface parameters that are compatible with the observations in the absence of any planetary contribution. With the "exotune" sub-module, users can also perform retrievals directly on out-of-transit stellar spectra in order to place data-driven priors on the extent to which the TLSE can impact any planet's transmission spectrum. The input data formats, stellar models, and fitted parameters are easily tunable using human-readable files and the code is fully parallelized to enable fast inferences. [shortened for arxiv; see full summary in the PDF]
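
The abstract refers to the transit light source effect without writing it out. The sketch below illustrates the standard single-heterogeneity contamination factor from Rackham et al. (2018), which is the kind of forward model a TLSE retrieval framework is built on. It is not taken from the STCTM codebase: the function name, placeholder spectra, and 5% spot coverage are illustrative assumptions only.

```python
import numpy as np

def tlse_contamination_factor(flux_phot, flux_het, f_het):
    """Standard single-heterogeneity TLSE contamination factor (Rackham et al. 2018):

        D_obs(lambda) = D_true(lambda) * epsilon(lambda)
        epsilon(lambda) = 1 / (1 - f_het * (1 - F_het(lambda) / F_phot(lambda)))

    flux_phot : emergent spectrum of the immaculate photosphere
    flux_het  : emergent spectrum of the unocculted heterogeneity (spot or facula),
                on the same wavelength grid
    f_het     : fractional disk coverage of the heterogeneity (0 <= f_het < 1)
    """
    return 1.0 / (1.0 - f_het * (1.0 - flux_het / flux_phot))

# Illustrative example: a cool spot covering 5% of the disk biases the observed
# transit depth upward wherever the spot is fainter than the photosphere.
wavelength = np.linspace(0.6, 5.0, 500)        # microns (illustrative grid)
flux_phot = np.ones_like(wavelength)           # placeholder photosphere spectrum
flux_spot = 0.7 + 0.05 * wavelength / 5.0      # placeholder cooler-spot spectrum
true_depth_ppm = 3000.0                        # flat "airless" transit depth

epsilon = tlse_contamination_factor(flux_phot, flux_spot, f_het=0.05)
observed_depth_ppm = true_depth_ppm * epsilon  # apparent spectral features of purely stellar origin
```

In a retrieval setting, the heterogeneity spectra come from a stellar model grid and the covering fractions and component temperatures are the free parameters sampled by the Bayesian framework; this sketch only shows the forward direction of that calculation.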
