
Recall, Retrieve and Reason: Towards Better In-Context Relation Extraction (2404.17809v1)

Published 27 Apr 2024 in cs.CL and cs.AI

Abstract: Relation extraction (RE) aims to identify relations between entities mentioned in texts. Although LLMs have demonstrated impressive in-context learning (ICL) abilities in various tasks, they still achieve poor performance compared to most supervised fine-tuned RE methods. Utilizing ICL for RE with LLMs encounters two challenges: (1) retrieving good demonstrations from training examples, and (2) enabling LLMs to exhibit strong ICL abilities in RE. On the one hand, retrieving good demonstrations is a non-trivial process in RE, which easily results in low relevance regarding entities and relations. On the other hand, ICL with an LLM achieves poor performance in RE when RE differs from language modeling in nature or the LLM is not large enough. In this work, we propose a novel recall-retrieve-reason RE framework that synergizes LLMs with retrieval corpora (training examples) to enable relevant retrieval and reliable in-context reasoning. Specifically, we distill consistent ontological knowledge from training datasets to let LLMs generate relevant entity pairs, grounded in the retrieval corpora, as valid queries. These entity pairs are then used to retrieve relevant training examples from the retrieval corpora as demonstrations for LLMs to conduct better ICL via instruction tuning. Extensive experiments on different LLMs and RE datasets demonstrate that our method generates relevant and valid entity pairs and boosts the ICL abilities of LLMs, achieving competitive or new state-of-the-art performance on sentence-level RE compared to previous supervised fine-tuning and ICL-based methods.
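The three stages named in the abstract — recall relevant entity pairs, retrieve matching training examples, then reason in context — can be sketched as a toy pipeline. This is a minimal illustration, not the paper's method: the similarity heuristic, the corpus format, and all function names are assumptions, and in the actual framework the recall step is performed by an LLM distilled with ontological knowledge, while the reason step uses an instruction-tuned LLM rather than a formatted prompt alone.

```python
# Illustrative recall-retrieve-reason sketch (hypothetical helpers, toy data).
# Retrieval corpus: training examples of (sentence, head, tail, relation).
CORPUS = [
    ("Paris is the capital of France.", "Paris", "France", "capital_of"),
    ("Einstein was born in Ulm.", "Einstein", "Ulm", "born_in"),
    ("Berlin is the capital of Germany.", "Berlin", "Germany", "capital_of"),
]

def recall_entity_pairs(query_sentence, corpus):
    """Recall step (stand-in for LLM generation): propose entity pairs
    grounded in the corpus, ranked by crude word overlap with the query."""
    words = set(query_sentence.lower().split())
    scored = []
    for sent, head, tail, _ in corpus:
        overlap = len(words & set(sent.lower().split()))
        scored.append(((head, tail), overlap))
    scored.sort(key=lambda p: -p[1])
    return [pair for pair, _ in scored]

def retrieve_demonstrations(entity_pairs, corpus, k=2):
    """Retrieve step: fetch training examples whose entity pairs match
    the top-k recalled pairs, to serve as ICL demonstrations."""
    wanted = set(entity_pairs[:k])
    return [ex for ex in corpus if (ex[1], ex[2]) in wanted]

def build_icl_prompt(demos, query_sentence, head, tail):
    """Reason step: format demonstrations plus the query into an
    in-context prompt for the (instruction-tuned) LLM."""
    blocks = [
        f"Sentence: {sent}\nEntities: {h}, {t}\nRelation: {rel}"
        for sent, h, t, rel in demos
    ]
    blocks.append(f"Sentence: {query_sentence}\nEntities: {head}, {tail}\nRelation:")
    return "\n\n".join(blocks)

query = "Madrid is the capital of Spain."
pairs = recall_entity_pairs(query, CORPUS)
demos = retrieve_demonstrations(pairs, CORPUS, k=2)
prompt = build_icl_prompt(demos, query, "Madrid", "Spain")
print(prompt)
```

With the toy corpus above, the two "capital_of" examples are recalled and retrieved as demonstrations, so the prompt primes the model toward the correct relation for the Madrid/Spain query.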

Authors (8)
  1. Guozheng Li (19 papers)
  2. Peng Wang (831 papers)
  3. Wenjun Ke (9 papers)
  4. Yikai Guo (9 papers)
  5. Ke Ji (27 papers)
  6. Ziyu Shang (8 papers)
  7. Jiajun Liu (61 papers)
  8. Zijie Xu (9 papers)
Citations (3)