
Asset Pricing in Pre-trained Transformer (2505.01575v2)

Published 2 May 2025 in q-fin.CP, econ.EM, and q-fin.PR

Abstract: This paper proposes a novel Transformer model, the Single-directional representative from Transformer (SERT), for pricing US large-capitalization stocks, and also applies pre-trained Transformer models, a new approach in this setting, to stock pricing and factor investing. These models are compared with standard Transformer models and encoder-only Transformer models across three periods spanning the COVID-19 pandemic, chosen to examine model adaptivity and suitability under extreme market fluctuations: the pre-COVID-19 period (mild uptrend), the COVID-19 period (sharp uptrend with a deep downward shock), and the first post-COVID-19 year (highly volatile sideways movement). In the two periods of extreme market fluctuation, the best SERT model achieves the highest out-of-sample R², 11.2% and 10.91% respectively, followed by the pre-trained Transformer models (10.38% and 9.15%). Performance under a trend-following strategy likewise demonstrates the models' ability to hedge downside risk during market shocks: over the pandemic period, the SERT model attains a Sortino ratio 47% higher than the buy-and-hold benchmark in the equal-weighted portfolio and 28% higher in the value-weighted portfolio. These results indicate that Transformer models can capture patterns in temporally sparse data within asset pricing factor models, especially under considerable volatility. We also find that the softmax signal filter, a configuration common to Transformer models in other contexts, merely eliminates differences between models without improving strategy performance; that increasing the number of attention heads improves model performance only insignificantly; and that the 'layer norm first' method does not boost model performance in our case.
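
The out-of-sample R² and Sortino ratio quoted above follow standard definitions from the empirical asset pricing literature. The LaTeX sketch below states them for reference only; the paper's exact conventions (the benchmark in the R² denominator, the target rate in the Sortino ratio) may differ:

    % Standard definitions, not copied from the paper.
    % Out-of-sample R^2 over realized returns r_{i,t} and predictions \hat{r}_{i,t}:
    R^2_{\mathrm{oos}} = 1 - \frac{\sum_{i,t}\left(r_{i,t} - \hat{r}_{i,t}\right)^2}{\sum_{i,t} r_{i,t}^2}
    % Sortino ratio for strategy returns r_t and target rate \tau,
    % with downside deviation rather than total volatility in the denominator:
    \mathrm{Sortino} = \frac{\mathbb{E}[r_t] - \tau}{\sqrt{\mathbb{E}\left[\min(r_t - \tau,\, 0)^2\right]}}

The abstract also compares encoder-only Transformers, attention-head counts, and the 'layer norm first' (pre-LN) arrangement. The PyTorch sketch below is a minimal illustration of where the two layer-norm placements differ, not the paper's SERT architecture; every name, dimension, and hyperparameter is hypothetical:

    import torch
    import torch.nn as nn

    class EncoderBlock(nn.Module):
        """One encoder-only Transformer block (illustration, not SERT).
        norm_first=True gives the 'layer norm first' (pre-LN) variant;
        False gives the standard post-LN arrangement."""

        def __init__(self, d_model=64, n_heads=4, d_ff=256, norm_first=False):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ff = nn.Sequential(
                nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
            )
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)
            self.norm_first = norm_first

        def forward(self, x):
            if self.norm_first:
                # Pre-LN: normalize the input of each sublayer, then add the residual.
                h = self.norm1(x)
                x = x + self.attn(h, h, h)[0]
                x = x + self.ff(self.norm2(x))
            else:
                # Post-LN: add the residual first, then normalize the sum.
                x = self.norm1(x + self.attn(x, x, x)[0])
                x = self.norm2(x + self.ff(x))
            return x

    # Hypothetical usage: 32 stocks, 12 monthly observations each, embedded
    # into 64 dimensions (shapes chosen purely for illustration).
    x = torch.randn(32, 12, 64)
    out = EncoderBlock(norm_first=True)(x)

Pre-LN is often preferred for training stability in deep stacks; notably, the abstract reports that in this setting it did not improve model performance.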


