DialogRE^C+: An Extension of DialogRE to Investigate How Much Coreference Helps Relation Extraction in Dialogs (2308.04498v2)

Published 8 Aug 2023 in cs.CL

Abstract: Dialogue relation extraction (DRE), which identifies the relations between argument pairs in dialogue text, suffers greatly from the frequent occurrence of personal pronouns, i.e., entity and speaker coreference. This work introduces a new benchmark dataset, DialogRE^C+, which brings coreference resolution into the DRE scenario. With the aid of high-quality coreference knowledge, the reasoning over argument relations is expected to be enhanced. In the DialogRE^C+ dataset, we manually annotate a total of 5,068 coreference chains over 36,369 argument mentions based on the existing DialogRE data, where four different coreference chain types, namely speaker, person, location, and organization chains, are explicitly marked. We further develop 4 coreference-enhanced graph-based DRE models, which learn effective coreference representations for improving the DRE task. We also train a coreference resolution model on our annotations and evaluate the effect of automatically extracted coreference chains, demonstrating the practicality of our dataset and its potential for other domains and tasks.
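
To make the abstract's setup more concrete, here is a minimal, hypothetical Python sketch of how annotated coreference chains could be attached to a dialogue instance and turned into extra edges of a mention graph, in the spirit of the coreference-enhanced graph-based models the paper describes. The field names and the example dialogue are assumptions for illustration only, not the released DialogRE^C+ schema or the authors' model code.

```python
# Hypothetical sketch: attaching coreference chains to a dialogue instance
# and wiring them into a mention graph. Field names, the example dialogue,
# and the relation label are illustrative assumptions, not the dataset's
# released format.
from collections import defaultdict

dialogue = {
    "turns": [
        "Speaker 1: Hey Pheebs, is Frank coming tonight?",
        "Speaker 2: Yeah, he said he would pick me up.",
    ],
    # Each chain groups mentions (turn index, surface form) that corefer.
    # The chain types follow the four types named in the abstract.
    "coref_chains": [
        {"type": "speaker", "mentions": [(0, "Pheebs"), (1, "Speaker 2"), (1, "me")]},
        {"type": "person",  "mentions": [(0, "Frank"),  (1, "he")]},
    ],
    "relation": {"subject": "Speaker 2", "object": "Frank", "label": "per:siblings"},
}

def build_mention_graph(example):
    """Build an undirected mention graph whose edges fully connect the
    mentions inside each coreference chain, roughly mirroring how a
    coreference-enhanced graph model could inject coreference knowledge
    into its message passing."""
    graph = defaultdict(set)
    for chain in example["coref_chains"]:
        mentions = chain["mentions"]
        for i in range(len(mentions)):
            for j in range(i + 1, len(mentions)):
                graph[mentions[i]].add(mentions[j])
                graph[mentions[j]].add(mentions[i])
    return graph

if __name__ == "__main__":
    graph = build_mention_graph(dialogue)
    for node, neighbours in sorted(graph.items()):
        print(node, "->", sorted(neighbours))
```

In such a setup, pronoun mentions like "he" become directly reachable from their antecedents (here "Frank"), so a graph-based relation extractor can reason over the argument pair through short coreference paths instead of long dialogue distances.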

Authors (8)
  1. Yiyun Xiong (1 paper)
  2. Mengwei Dai (1 paper)
  3. Fei Li (233 papers)
  4. Hao Fei (105 papers)
  5. Bobo Li (23 papers)
  6. Shengqiong Wu (36 papers)
  7. Donghong Ji (50 papers)
  8. Chong Teng (23 papers)
Citations (2)
