Characterizing Multimodal Long-form Summarization: A Case Study on Financial Reports (2404.06162v3)

Published 9 Apr 2024 in cs.CL, cs.AI, and cs.LG

Abstract: As LLMs expand the power of natural language processing to handle long inputs, rigorous and systematic analyses are necessary to understand their abilities and behavior. A salient application is summarization, due to its ubiquity and controversy (e.g., researchers have declared the death of summarization). In this paper, we use financial report summarization as a case study because financial reports are not only long but also use numbers and tables extensively. We propose a computational framework for characterizing multimodal long-form summarization and investigate the behavior of Claude 2.0/2.1, GPT-4/3.5, and Cohere. We find that GPT-3.5 and Cohere fail to perform this summarization task meaningfully. For Claude 2 and GPT-4, we analyze the extractiveness of the summary and identify a position bias in LLMs. This position bias disappears after shuffling the input for Claude, suggesting that Claude recognizes important information regardless of its position. We also conduct a comprehensive investigation of the use of numeric data in LLM-generated summaries and offer a taxonomy of numeric hallucination. We employ prompt engineering to improve GPT-4's use of numbers with limited success. Overall, our analyses highlight the strong capability of Claude 2 in handling long multimodal inputs compared to GPT-4. The generated summaries and evaluation code are available at https://github.com/ChicagoHAI/characterizing-multimodal-long-form-summarization.
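The extractiveness analysis mentioned above can be measured with the extractive-fragment statistics (coverage and density) introduced in the Newsroom dataset paper cited in the references. A minimal sketch, assuming whitespace tokenization and a greedy maximal-span match between source and summary (the paper may use a different tokenizer):

```python
def extractive_fragments(article_tokens, summary_tokens):
    """Greedily match maximal shared token spans between article and summary."""
    fragments = []
    i = 0
    while i < len(summary_tokens):
        best = 0
        for j in range(len(article_tokens)):
            if summary_tokens[i] == article_tokens[j]:
                k = 0
                while (i + k < len(summary_tokens)
                       and j + k < len(article_tokens)
                       and summary_tokens[i + k] == article_tokens[j + k]):
                    k += 1
                best = max(best, k)
        if best > 0:
            fragments.append(summary_tokens[i:i + best])
            i += best  # skip past the matched span
        else:
            i += 1  # summary token not found in article (abstractive)
    return fragments

def coverage_and_density(article, summary):
    """Coverage: fraction of summary tokens inside extractive fragments.
    Density: average squared fragment length per summary token."""
    a, s = article.split(), summary.split()
    frags = extractive_fragments(a, s)
    coverage = sum(len(f) for f in frags) / len(s)
    density = sum(len(f) ** 2 for f in frags) / len(s)
    return coverage, density
```

For example, `coverage_and_density("the cat sat on the mat", "the cat sat happily")` yields coverage 0.75 and density 2.25, since three of four summary tokens come from a single copied span. Higher density indicates longer copied spans, i.e., a more extractive summary.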

Authors (4)
  1. Tianyu Cao
  2. Natraj Raman
  3. Danial Dervovic
  4. Chenhao Tan