
QuOTeS: Query-Oriented Technical Summarization (2306.11832v1)

Published 20 Jun 2023 in cs.IR and cs.CL

Abstract: When writing an academic paper, researchers often spend considerable time reviewing and summarizing papers to extract relevant citations and data to compose the Introduction and Related Work sections. To address this problem, we propose QuOTeS, an interactive system designed to retrieve sentences related to a summary of the research from a collection of potential references and hence assist in the composition of new papers. QuOTeS integrates techniques from Query-Focused Extractive Summarization and High-Recall Information Retrieval to provide Interactive Query-Focused Summarization of scientific documents. To measure the performance of our system, we carried out a comprehensive user study where participants uploaded papers related to their research and evaluated the system in terms of its usability and the quality of the summaries it produces. The results show that QuOTeS provides a positive user experience and consistently produces query-focused summaries that are relevant, concise, and complete. We share the code of our system and the novel Query-Focused Summarization dataset collected during our experiments at https://github.com/jarobyte91/quotes.
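
The abstract describes a pipeline that ranks sentences from candidate reference papers against a short query summary of the user's research. As a rough illustration of that retrieval step only (not the authors' implementation), the sketch below scores candidate sentences against a query paragraph with Sentence-BERT embeddings, one of the techniques cited in the paper; the model name, threshold-free top-k ranking, and function names are assumptions for demonstration.

```python
# Illustrative sketch of query-focused sentence retrieval in the spirit of QuOTeS.
# NOT the authors' code: the model choice and helper names are assumptions.
from sentence_transformers import SentenceTransformer, util


def rank_sentences(query: str, candidate_sentences: list[str], top_k: int = 10):
    """Return the top_k candidate sentences most similar to the query summary."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    query_emb = model.encode(query, convert_to_tensor=True)
    sent_embs = model.encode(candidate_sentences, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, sent_embs)[0]   # cosine similarity per sentence
    ranked = scores.argsort(descending=True)[:top_k].tolist()
    return [(candidate_sentences[i], float(scores[i])) for i in ranked]


if __name__ == "__main__":
    query = "Interactive query-focused summarization of scientific documents."
    sentences = [
        "We propose an extractive summarizer guided by a user query.",
        "The weather dataset contains hourly temperature readings.",
        "High-recall retrieval aims to find nearly all relevant documents.",
    ]
    for sent, score in rank_sentences(query, sentences, top_k=2):
        print(f"{score:.3f}  {sent}")
```

The full system is interactive: this ranking step would be repeated as the user labels retrieved sentences, in the manner of the high-recall / continuous active learning work the paper builds on.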


