Rationale-Enhanced Language Models are Better Continual Relation Learners (2310.06547v1)

Published 10 Oct 2023 in cs.CL and cs.AI

Abstract: Continual relation extraction (CRE) aims to solve the problem of catastrophic forgetting when learning a sequence of newly emerging relations. Recent CRE studies have found that catastrophic forgetting arises from the model's lack of robustness against future analogous relations. To address the issue, we introduce rationale, i.e., the explanations of relation classification results generated by large language models (LLMs), into the CRE task. Specifically, we design the multi-task rationale tuning strategy to help the model learn current relations robustly. We also conduct contrastive rationale replay to further distinguish analogous relations. Experimental results on two standard benchmarks demonstrate that our method outperforms the state-of-the-art CRE models.
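The abstract names two components: a multi-task objective that trains on relation labels and LLM-generated rationales jointly, and a contrastive replay step that separates analogous relations. A minimal sketch of how such losses might be combined is below; the function names, the weighting factor `alpha`, the margin, and the cosine-based contrastive term are all illustrative assumptions, not the paper's exact formulation.

```python
def multi_task_loss(cls_loss: float, rationale_loss: float,
                    alpha: float = 0.5) -> float:
    """Combine the relation-classification loss with a rationale-generation
    loss; alpha (an assumed hyperparameter) balances the two tasks."""
    return cls_loss + alpha * rationale_loss


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two relation embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


def contrastive_rationale_loss(anchor: list[float], analogous: list[float],
                               margin: float = 0.3) -> float:
    """Penalize replayed pairs of analogous relations whose
    rationale-informed representations are still too similar."""
    return max(0.0, cosine_similarity(anchor, analogous) - margin)


# Toy values standing in for per-batch quantities:
total = multi_task_loss(cls_loss=0.9, rationale_loss=0.4)
sep = contrastive_rationale_loss([1.0, 0.0], [0.0, 1.0])  # orthogonal pair
```

In this toy setup, an orthogonal (already well-separated) pair incurs zero contrastive penalty, while identical embeddings would be pushed apart until their similarity drops below the margin.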

Authors (4)
  1. Weimin Xiong (13 papers)
  2. Yifan Song (48 papers)
  3. Peiyi Wang (48 papers)
  4. Sujian Li (82 papers)
Citations (8)