Evaluating model performance on high-volatility asset classes beyond large-cap equities

Investigate the predictive and strategy-wise performance of the Single-directional Encoder Representations from Transformer (SERT) and the MLP autoencoder pre-trained Transformer on small-cap equities, emerging markets, and cryptocurrencies characterized by high volatility, and ascertain whether their apparent sensitivity to higher variations yields superior risk-adjusted returns compared to large-cap value-weighted portfolios.

Background

Empirical results indicate that the proposed Transformer models perform especially well during periods of extreme fluctuation and appear to benefit from higher volatility. However, under value-weighted evaluation focused on large-cap stocks, absolute capital gains diminish because the largest portfolio weights are assigned to the less profitable trades offered by large-cap stocks.
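
To make this weighting effect concrete, the sketch below is a hypothetical illustration with invented numbers; it is not code, data, or results from the paper. It assumes that smaller-cap stocks are more volatile and that the strategy's per-period edge scales with volatility, and then compares value-weighted and equal-weighted portfolios. Because value weights concentrate on large caps with the smallest assumed edge, the aggregate gain and Sharpe ratio are diluted relative to equal weighting.

```python
import numpy as np

# Hypothetical illustration (invented numbers, not taken from the paper): how
# value weighting can dilute a strategy's gains when its edge is concentrated
# in the more volatile, smaller-cap names.
rng = np.random.default_rng(0)

n_stocks = 50
caps = rng.lognormal(mean=10, sigma=1.5, size=n_stocks)       # assumed market caps

# Assumed cross-section: smaller caps are more volatile, and the per-period edge
# scales with volatility ("models perform better with highly volatile assets").
# Both relationships are invented for this sketch.
vol = 0.02 + 0.04 / np.sqrt(caps / caps.min())                # per-period return volatility
edge = 0.01 * vol                                             # expected per-period trade gain

vw = caps / caps.sum()                                        # value weights (dominated by large caps)
ew = np.full(n_stocks, 1.0 / n_stocks)                        # equal weights

def stats(w):
    """Expected portfolio edge and annualized Sharpe ratio, assuming independent stocks."""
    mean = w @ edge
    std = np.sqrt(w @ (vol ** 2 * w))                         # diagonal covariance
    return mean, np.sqrt(252) * mean / std

for name, w in [("value-weighted", vw), ("equal-weighted", ew)]:
    mean, sharpe = stats(w)
    print(f"{name:15s} expected edge per period: {mean:.5f}, Sharpe: {sharpe:.2f}")
```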

The authors therefore suggest testing these models on asset classes with inherently higher volatility, such as small caps, emerging markets, and cryptocurrencies, to determine whether their volatility sensitivity translates into better performance.

References

However, some open questions remain for future researchers. Although we study the Transformer models on large-capital stocks, we find that the proposed models are excellent at coping with extreme fluctuations, and higher volatility appears to enhance their performance. However, under the value-weighted portfolio evaluation, the absolute capital gain is greatly reduced because high weights are distributed to the less profitable trades offered by large-capital stocks. This implies that the Transformer models are extra sensitive to higher variations; namely, the models perform better with highly volatile assets. In that sense, it is possibly worth trying the proposed models on small-capital stocks, emerging markets, or cryptocurrencies, which have high volatility.

Asset Pricing in Pre-trained Transformer (2505.01575 - Lai, 2 May 2025) in Section 6, Conclusion