A Side-by-side Comparison of Transformers for English Implicit Discourse Relation Classification (2307.03378v1)

Published 7 Jul 2023 in cs.CL

Abstract: Though discourse parsing can help multiple NLP fields, no wide language model search has been done on implicit discourse relation classification. This hinders researchers from fully utilizing publicly available models in discourse analysis. This work is a straightforward, fine-tuned discourse performance comparison of seven pre-trained language models. We use PDTB-3, a popular discourse-relation-annotated dataset. Through our model search, we raise SOTA to 0.671 ACC and obtain novel observations. Some are contrary to what has been reported before (Shi and Demberg, 2019b): sentence-level pre-training objectives (NSP, SBO, SOP) generally fail to produce the best-performing model for implicit discourse relation classification. Counterintuitively, similar-sized PLMs with MLM and full attention led to better performance.
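
The comparison described in the abstract amounts to fine-tuning each PLM as a sequence-pair classifier over the two arguments of an implicit discourse relation. Below is a minimal sketch of that setup, assuming a RoBERTa-style MLM/full-attention model, the Hugging Face `transformers` API, and the four top-level PDTB-3 senses as illustrative labels (the paper also considers finer-grained senses); `train_pairs` and its toy contents are hypothetical placeholders, since PDTB-3 is licensed and must be obtained separately.

```python
# Sketch: fine-tune a pre-trained MLM (here roberta-base) for implicit
# discourse relation classification on (Arg1, Arg2) pairs.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative label set: the four top-level PDTB-3 senses.
LABELS = ["Temporal", "Contingency", "Comparison", "Expansion"]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)
)

# Each implicit relation is a pair of discourse arguments plus a sense label.
# Hypothetical toy data; real training would read PDTB-3 annotations.
train_pairs = [
    ("It was raining heavily.", "The game was cancelled.", "Contingency"),
]

def collate(batch):
    # Encode Arg1 and Arg2 jointly as a sentence pair, as is standard
    # for BERT-style sequence-pair classification.
    arg1, arg2, labels = zip(*batch)
    enc = tokenizer(list(arg1), list(arg2), truncation=True,
                    padding=True, return_tensors="pt")
    enc["labels"] = torch.tensor([LABELS.index(l) for l in labels])
    return enc

loader = DataLoader(train_pairs, batch_size=8, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for batch in loader:
    out = model(**batch)   # returns cross-entropy loss over sense labels
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Swapping the checkpoint name is all that changes between the seven compared models, which is what makes the side-by-side comparison straightforward: the classification head and training loop stay fixed while the pre-training objective and attention pattern vary with the backbone.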

Authors (3)
  1. Bruce W. Lee
  2. BongSeok Yang
  3. Jason Hyung-Jong Lee