An Analysis of Sentential Neighbors in Implicit Discourse Relation Prediction (2405.09735v2)

Published 16 May 2024 in cs.CL

Abstract: Discourse relation classification is especially difficult without explicit context markers (Prasad et al., 2008). Current approaches to implicit relation prediction rely solely on the two targeted neighboring sentences, ignoring the broader context surrounding them (Atwell et al., 2021). In this research, we propose three new methods for incorporating context into the task of sentence relation prediction: (1) Direct Neighbors (DNs), (2) Expanded Window Neighbors (EWNs), and (3) Part-Smart Random Neighbors (PSRNs). Our findings indicate that including context beyond one discourse unit is harmful to discourse relation classification.
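The abstract names three context-selection strategies but does not define them. The sketch below is a plausible reading of how each strategy might assemble the input for a target sentence pair; the exact window sizes, sampling pools, and function names are assumptions for illustration, not taken from the paper.

```python
import random

def direct_neighbors(sents, i):
    """Direct Neighbors (assumed): the target pair (sents[i], sents[i+1])
    plus the single sentence immediately before and after it."""
    lo, hi = max(0, i - 1), min(len(sents), i + 3)
    return sents[lo:hi]

def expanded_window_neighbors(sents, i, window=2):
    """Expanded Window Neighbors (assumed): a symmetric window of
    `window` sentences on each side of the target pair."""
    lo, hi = max(0, i - window), min(len(sents), i + 2 + window)
    return sents[lo:hi]

def part_smart_random_neighbors(sents, i, k=2, seed=0):
    """Part-Smart Random Neighbors (assumed): the target pair plus k
    sentences sampled from elsewhere in the same document part
    (here simplified to the whole document)."""
    rng = random.Random(seed)
    pool = [s for j, s in enumerate(sents) if j not in (i, i + 1)]
    sampled = rng.sample(pool, min(k, len(pool)))
    return sampled + [sents[i], sents[i + 1]]

doc = ["S0.", "S1.", "S2.", "S3.", "S4.", "S5."]
print(direct_neighbors(doc, 2))           # ['S1.', 'S2.', 'S3.', 'S4.']
print(expanded_window_neighbors(doc, 2))  # all six sentences
```

Under this reading, each strategy widens the classifier's input beyond the two discourse units; the paper's finding is that such widening hurts rather than helps.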

References (19)
  1. Where are we in discourse relation recognition? In SIGDIAL Conference.
  2. Hongxiao Bai and Zhao Hai. 2018. Deep enhanced representation for implicit discourse relation recognition. In International Conference on Computational Linguistics.
  3. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In SIGDIAL Workshop.
  4. Zeyu Dai and Ruihong Huang. 2018. Improving implicit discourse relation classification by modeling inter-dependencies of discourse units in a paragraph. In North American Chapter of the Association for Computational Linguistics.
  5. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
  6. Extending implicit discourse relation recognition to the PDTB-3. ArXiv, abs/2010.06294.
  7. On the importance of word and sentence representation learning in implicit discourse relation classification. In International Joint Conference on Artificial Intelligence.
  8. Distributed representations of words and phrases and their compositionality. ArXiv, abs/1310.4546.
  9. OpenAI. 2023. GPT-4 technical report.
  10. Automatic sense prediction for implicit discourse relations in text. In Annual Meeting of the Association for Computational Linguistics.
  11. Easily identifiable discourse relations. In International Conference on Computational Linguistics.
  12. The Penn Discourse Treebank 2.0 annotation manual.
  13. Annotating attribution in the Penn Discourse Treebank.
  14. The Penn Discourse Treebank 2.0. In International Conference on Language Resources and Evaluation.
  15. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.
  16. Wei Shi and Vera Demberg. 2019. Next sentence prediction helps implicit discourse relation classification within and across domains. In Conference on Empirical Methods in Natural Language Processing.
  17. The Penn Discourse Treebank 3.0 annotation manual.
  18. A label dependence-aware sequence generation model for multi-level implicit discourse relation recognition. In AAAI Conference on Artificial Intelligence.
  19. Zheng Zhao and Bonnie Lynn Webber. 2022. Revisiting shallow discourse parsing in the PDTB-3: Handling intra-sentential implicits. ArXiv, abs/2204.00350.
