Finetuning Stellar Spectra Foundation Models with LoRA (2507.20972v1)
Abstract: Foundation models are beginning to impact stellar spectroscopy, where spectra encode rich physical information in a structured, language-like form. A key challenge is adapting these models across heterogeneous surveys that differ in resolution and wavelength coverage. We apply Low-Rank Adaptation (LoRA) to fine-tune SpecCLIP, a model contrastively pre-trained on LAMOST and Gaia XP spectra, for downstream tasks on DESI Early Data Release (EDR) spectra. We show that LoRA enables few-shot learning on DESI, with performance varying with the choice of fine-tuned module and benefiting from the Gaia XP knowledge embedded in the pre-trained model. Our results demonstrate that LoRA provides a lightweight and effective strategy for extending spectral foundation models to new instruments and survey domains.
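For readers unfamiliar with the adaptation method the abstract names, LoRA (Hu et al. 2021) freezes the pre-trained weights W and learns only a low-rank correction BA to selected weight matrices. The sketch below is a minimal, generic PyTorch illustration of that idea, not the paper's actual code: the toy encoder, the input dimension, and the `rank`/`alpha` hyperparameters are illustrative assumptions, and SpecCLIP's real architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A of shape (r, in) and B of shape (out, r).
    """
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        # B starts at zero, so the adapted model initially matches the base model
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Hypothetical stand-in for a pre-trained spectra encoder (dimensions are arbitrary,
# not taken from SpecCLIP); we wrap one projection with a LoRA adapter.
encoder = nn.Sequential(nn.Linear(7781, 512), nn.GELU(), nn.Linear(512, 512))
encoder[0] = LoRALinear(encoder[0], rank=8, alpha=16.0)

# Only the LoRA parameters are optimized -- this is what keeps fine-tuning lightweight.
trainable = [p for p in encoder.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

spectra = torch.randn(32, 7781)          # a toy batch standing in for DESI spectra
embeddings = encoder(spectra)
loss = embeddings.pow(2).mean()          # placeholder for a real downstream objective
loss.backward()
optimizer.step()
```

Because only the small A and B matrices receive gradients, the number of trainable parameters is a tiny fraction of the full model, which is what makes the few-shot regime described in the abstract practical.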