
FinLLMs: A Framework for Financial Reasoning Dataset Generation with Large Language Models (2401.10744v1)

Published 19 Jan 2024 in cs.AI

Abstract: LLMs usually rely on extensive training datasets. In the financial domain, creating numerical reasoning datasets that include a mix of tables and long text often involves substantial manual annotation expenses. To address the limited data resources and reduce the annotation cost, we introduce FinLLMs, a method for generating financial question-answering data based on common financial formulas using LLMs. First, we compile a list of common financial formulas and construct a graph based on the variables these formulas employ. We then augment the formula set by combining those that share identical variables as new elements. Specifically, we explore formulas obtained by manual annotation and merge those formulas with shared variables by traversing the constructed graph. Finally, utilizing GPT-3.5, we generate financial question-answering data that encompasses both tabular information and long textual content, building on the collected formula set. Our experiments demonstrate that synthetic data generated by FinLLMs effectively enhances the performance of several large-scale numerical reasoning models in the financial domain, outperforming two established benchmark financial question-answering datasets.
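
The abstract's pipeline, collect formulas, link them through shared variables, traverse the graph to compose new formulas, then prompt an LLM, maps naturally onto a small graph routine. The sketch below is a minimal illustration of that idea under assumed representations: the `Formula` class, the seed formulas, and the substitution-based composition rule are hypothetical stand-ins for the paper's manually annotated formula set, not the authors' code.

```python
# Minimal sketch of FinLLMs-style formula augmentation (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Formula:
    name: str
    output: str          # the variable the formula computes
    inputs: frozenset    # the variables it consumes
    expr: str            # human-readable expression, used later for prompting

# Toy stand-in for the paper's manually annotated set of common financial formulas.
SEED_FORMULAS = [
    Formula("gross_profit", "gross_profit",
            frozenset({"revenue", "cogs"}), "revenue - cogs"),
    Formula("gross_margin", "gross_margin",
            frozenset({"gross_profit", "revenue"}), "gross_profit / revenue"),
    Formula("net_margin", "net_margin",
            frozenset({"net_income", "revenue"}), "net_income / revenue"),
]

def build_variable_graph(formulas):
    """Map each variable to the formulas that mention it (a variable-formula graph)."""
    graph = {}
    for f in formulas:
        for var in f.inputs | {f.output}:
            graph.setdefault(var, set()).add(f)
    return graph

def merge_shared_variable_formulas(formulas):
    """Traverse the variable graph: wherever one formula's output is another
    formula's input, substitute it in to create a new, deeper formula."""
    graph = build_variable_graph(formulas)
    merged = []
    for a in formulas:
        for b in graph.get(a.output, set()) - {a}:
            if a.output in b.inputs:
                # Naive textual substitution; real code would parse expressions.
                expr = b.expr.replace(a.output, f"({a.expr})")
                inputs = (b.inputs - {a.output}) | a.inputs
                merged.append(Formula(f"{b.name}_via_{a.name}", b.output,
                                      frozenset(inputs), expr))
    return merged

if __name__ == "__main__":
    augmented = SEED_FORMULAS + merge_shared_variable_formulas(SEED_FORMULAS)
    for f in augmented:
        print(f"{f.output} = {f.expr}  (needs: {sorted(f.inputs)})")
```

In this sketch, composing `gross_margin` with `gross_profit` yields `(revenue - cogs) / revenue`, a new formula over a larger variable set. In the paper's pipeline, each augmented formula would then seed a GPT-3.5 prompt that produces a question, accompanying tabular and textual context, and the numerical answer.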

Authors (7)
  1. Ziqiang Yuan
  2. Kaiyuan Wang
  3. Shoutai Zhu
  4. Ye Yuan
  5. Jingya Zhou
  6. Yanlin Zhu
  7. Wenqi Wei
Citations (4)