Large Language Models for Financial Aid in Financial Time-series Forecasting (2410.19025v1)

Published 24 Oct 2024 in cs.LG and cs.AI

Abstract: Given the difficulty of financial time-series forecasting, much current research in financial services focuses on big data analytics; one modern approach is "predictive analysis", i.e., forecasting financial trends. However, time-series data in Financial Aid (FA) pose unique challenges: historical datasets are limited and the financial information is high dimensional, which hinders the development of effective predictive models that balance accuracy with efficient runtime and memory usage. We address these tasks with pre-trained foundation models. Using state-of-the-art time-series models, including pre-trained LLMs (GPT-2 as the backbone), transformers, and linear models, we show that they can outperform traditional approaches even with minimal fine-tuning ("few-shot") or no fine-tuning at all ("zero-shot"). Our benchmark study, which pairs financial aid with seven other time-series tasks, demonstrates the potential of LLMs on scarce financial datasets.
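
To make the few-shot recipe concrete, here is a minimal sketch of how a largely frozen pre-trained GPT-2 can serve as a time-series backbone, in the spirit of the "one fits all" line of work this paper draws on. The patch length, stride, forecast horizon, and the choice to train only the layer norms and positional embeddings are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model

class GPT2Forecaster(nn.Module):
    """Sketch: frozen GPT-2 backbone for univariate forecasting.

    Hyperparameters (patch_len, stride, pred_len) are assumptions
    chosen for illustration, not values from the paper.
    """

    def __init__(self, patch_len=16, stride=8, pred_len=96, d_model=768):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        self.gpt2 = GPT2Model.from_pretrained("gpt2")  # 12-layer backbone, hidden size 768
        # Few-shot recipe: freeze the backbone except layer norms ("ln")
        # and positional embeddings ("wpe"), so very few weights train.
        for name, param in self.gpt2.named_parameters():
            param.requires_grad = ("ln" in name) or ("wpe" in name)
        self.embed = nn.Linear(patch_len, d_model)  # patch -> token embedding
        self.head = nn.Linear(d_model, pred_len)    # last token -> forecast

    def forward(self, x):
        # x: (batch, history_len); slice the series into overlapping patches
        patches = x.unfold(dimension=-1, size=self.patch_len, step=self.stride)
        tokens = self.embed(patches)                 # (batch, n_patches, d_model)
        hidden = self.gpt2(inputs_embeds=tokens).last_hidden_state
        return self.head(hidden[:, -1, :])           # (batch, pred_len)

# Usage: forecast 96 steps ahead from a 336-step history.
model = GPT2Forecaster()
history = torch.randn(4, 336)          # batch of 4 toy series
print(model(history).shape)            # torch.Size([4, 96])
```

Because only the normalization and positional parameters are trainable, the number of fitted weights stays tiny, which is what makes this kind of recipe plausible for scarce datasets such as financial-aid records; in the zero-shot setting even those parameters would stay frozen.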
