
Assessing the performance of LTE and NLTE synthetic stellar spectra in a machine learning framework (1911.02602v2)

Published 6 Nov 2019 in astro-ph.IM and astro-ph.SR

Abstract: In the current era of stellar spectroscopic surveys, synthetic spectral libraries are the basis for the derivation of stellar parameters and chemical abundances. In this paper, we compare the stellar parameters determined using five popular synthetic spectral grids (INTRIGOSS, FERRE, AMBRE, PHOENIX, and MPIA/1DNLTE) with our convolutional neural network (CNN, $\texttt{StarNet}$). The stellar parameters are determined for six physical properties (effective temperature, surface gravity, metallicity, [$\alpha$/Fe], radial velocity, and rotational velocity) given the spectral resolution, signal-to-noise, and wavelength range of optical FLAMES-UVES spectra from the Gaia-ESO Survey. Both CNN modelling and epistemic uncertainties are incorporated through training an ensemble of networks. $\texttt{StarNet}$ training was also adapted to mitigate differences between the synthetic grids and observed spectra by augmenting with realistic observational signatures (i.e. resolution matching, wavelength sampling, Gaussian noise, zeroing flux values, rotational and radial velocities, continuum removal, and masking telluric regions). Using the FLAMES-UVES spectra for FGK type dwarfs and giants as a test set, we quantify the accuracy and precision of the stellar label predictions from $\texttt{StarNet}$. We find excellent results over a wide range of parameters when $\texttt{StarNet}$ is trained on the MPIA/1DNLTE synthetic grid, and acceptable results over smaller parameter ranges when trained on the 1DLTE grids. These tests also show that our CNN pipeline is highly adaptable to multiple simulation grids.
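The abstract describes two practical ingredients of the pipeline: augmenting synthetic training spectra with observational signatures (Gaussian noise, zeroed flux values, radial/rotational velocity shifts, telluric masking) and training an ensemble of networks to capture modelling and epistemic uncertainties. The snippet below is a minimal NumPy sketch of these two ideas under stated assumptions; it is not the paper's actual StarNet code, and the function names, parameter ranges, and Keras-style `predict` call are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_spectrum(flux, wave, snr_range=(20, 200), max_zero_frac=0.05,
                     rv_range=(-200.0, 200.0), telluric_mask=None):
    """Apply a few observational-style augmentations to a continuum-normalized
    synthetic spectrum (illustrative only; not the paper's exact pipeline)."""
    flux = np.asarray(flux, dtype=float).copy()

    # Random radial-velocity Doppler shift, resampled back onto the rest-frame grid.
    c_kms = 299792.458
    rv = rng.uniform(*rv_range)  # km/s
    flux = np.interp(wave, wave * (1.0 + rv / c_kms), flux)

    # Gaussian noise for a randomly drawn signal-to-noise ratio.
    snr = rng.uniform(*snr_range)
    flux += rng.normal(0.0, 1.0 / snr, size=flux.shape)

    # Zero out a random fraction of pixels to mimic bad pixels and chip gaps.
    n_zero = int(rng.uniform(0.0, max_zero_frac) * flux.size)
    flux[rng.choice(flux.size, size=n_zero, replace=False)] = 0.0

    # Mask telluric regions (boolean array over pixels), if provided.
    if telluric_mask is not None:
        flux[telluric_mask] = 0.0

    return flux


def ensemble_predict(models, spectrum):
    """Average label predictions across an ensemble of trained networks;
    the member-to-member spread gives a simple epistemic-uncertainty estimate."""
    preds = np.stack([m.predict(spectrum[None, :, None], verbose=0)[0]
                      for m in models])
    return preds.mean(axis=0), preds.std(axis=0)
```

In this sketch, each ensemble member would be trained on independently augmented copies of the synthetic grid, so the spread of `ensemble_predict` reflects both the noise injected during training and the variability between networks.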
