Impact of encoder-only versus decoder-only LLM text representations on forecasting performance
Determine how text representations produced by encoder-only large language models (e.g., DeBERTa) versus decoder-only large language models (e.g., Mistral, Llama3) affect the accuracy of stock return forecasting when the models are fine-tuned end-to-end on concatenated financial news sequences for n-step forward return prediction.
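The architectural difference matters mainly in how a sequence representation is pooled: encoder-only models see the full context bidirectionally (so mean-pooling or a [CLS] token is typical), while decoder-only models use causal attention (so only the final token's hidden state summarizes the whole sequence). Below is a minimal NumPy sketch of these two pooling strategies feeding a linear return head; the function names and the linear head are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def encoder_pool(hidden, mask):
    """Mean-pool hidden states over non-pad tokens (encoder-only style,
    e.g. DeBERTa's bidirectional representations).
    hidden: (B, T, H) token hidden states; mask: (B, T) of 0/1."""
    m = mask[..., None].astype(float)
    return (hidden * m).sum(axis=1) / np.maximum(m.sum(axis=1), 1.0)

def decoder_pool(hidden, mask):
    """Take the last non-pad token's hidden state (decoder-only style,
    e.g. Mistral/Llama, where only the final position has attended to
    the full left context)."""
    last = mask.sum(axis=1).astype(int) - 1  # index of last real token
    return hidden[np.arange(hidden.shape[0]), last]

def forecast_returns(rep, W, b):
    """Hypothetical linear head mapping a pooled representation (B, H)
    to n-step forward returns (B, n); a stand-in for whatever prediction
    head is fine-tuned end-to-end with the LLM."""
    return rep @ W + b
```

Under end-to-end fine-tuning, gradients from the forecasting loss flow back through the chosen pooling into the LLM itself, so the two architectures can adapt their representations differently even with the same head.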
References
We propose to compare the encoder-only and decoder-only LLMs, considering they generate text representations in distinct ways. The impact of these different representations on forecasting performance remains an open question.
                — "Fine-Tuning Large Language Models for Stock Return Prediction Using Newsflow" (Guo et al., arXiv:2407.18103, 25 Jul 2024), Abstract