Impact of encoder-only versus decoder-only LLM representations on stock return forecasting
Determine how the text representations produced by encoder-only large language models (e.g., DeBERTa) and by decoder-only large language models (e.g., Mistral, Llama) differ in their effect on stock return forecasting performance when each model is fine-tuned to predict forward returns from concatenated financial news sequences.
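To make the comparison concrete, the sketch below illustrates one way the two architectures yield different sequence representations for the same forecasting head: a bidirectional encoder is typically mean- or CLS-pooled, while a causal decoder is usually represented by the hidden state of its final token. This is a minimal illustration, not the paper's implementation; the model checkpoints, pooling choices, and linear regression head are assumptions made for the example.

```python
# Minimal sketch: pooling an encoder-only vs. a decoder-only LLM representation
# before a simple regression head predicts a forward return from news text.
# Checkpoint names, pooling strategies, and the head are illustrative only.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class ReturnForecaster(nn.Module):
    def __init__(self, backbone_name: str, decoder_only: bool):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(backbone_name)
        # Decoder-only tokenizers (e.g., Llama-style) may lack a pad token.
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
        self.backbone = AutoModel.from_pretrained(backbone_name)
        self.decoder_only = decoder_only
        # Regression head mapping the pooled text representation to a scalar return.
        self.head = nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, news_texts: list[str]) -> torch.Tensor:
        batch = self.tokenizer(
            news_texts, padding=True, truncation=True, return_tensors="pt"
        )
        hidden = self.backbone(**batch).last_hidden_state  # (batch, seq, dim)
        if self.decoder_only:
            # Causal model: use the hidden state of the last non-padding token
            # (assumes right-padding, the default for most tokenizers).
            last_idx = batch["attention_mask"].sum(dim=1) - 1
            pooled = hidden[torch.arange(hidden.size(0)), last_idx]
        else:
            # Bidirectional encoder: mean-pool over non-padding tokens.
            mask = batch["attention_mask"].unsqueeze(-1).float()
            pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
        return self.head(pooled).squeeze(-1)  # predicted forward return


# Example with an encoder-only backbone; a decoder-only backbone (e.g., a
# Mistral or Llama checkpoint) would be constructed with decoder_only=True.
model = ReturnForecaster("microsoft/deberta-v3-base", decoder_only=False)
preds = model(["Company A beats earnings estimates. Company B cuts guidance."])
```

In a full study, both variants would be fine-tuned on the same concatenated news sequences and evaluated on the same forward-return targets so that any performance gap can be attributed to the representation rather than to the training setup.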
References
The impact of these different representations on forecasting performance remains an open question.
"Fine-Tuning Large Language Models for Stock Return Prediction Using Newsflow" (Guo et al., arXiv:2407.18103, 25 Jul 2024), Abstract.