Improving Recall of Large Language Models: A Model Collaboration Approach for Relational Triple Extraction (2404.09593v1)

Published 15 Apr 2024 in cs.CL

Abstract: Relation triple extraction, which outputs a set of triples from long sentences, plays a vital role in knowledge acquisition. LLMs can accurately extract triples from simple sentences via few-shot learning or fine-tuning when given appropriate instructions. However, they often miss triples when extracting from complex sentences. In this paper, we design an evaluation-filtering framework that integrates LLMs with small models for relational triple extraction. The framework includes an evaluation model that extracts related entity pairs with high precision. We propose a simple labeling principle and a deep neural network to build this model, and embed its outputs as prompts into the extraction process of the large model. Extensive experiments demonstrate that the proposed method helps LLMs produce more accurate extraction results, especially from complex sentences containing multiple relational triples. The evaluation model can also be embedded into traditional extraction models to enhance their precision on complex sentences.
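
The abstract describes a two-stage pipeline: a small, high-precision evaluation model first filters candidate entity pairs, and the surviving pairs are then embedded into the LLM's extraction prompt so the LLM is less likely to overlook them. Below is a minimal sketch of that flow. The interfaces `eval_model.is_related` and `llm_extract`, the pair-enumeration step, and the prompt wording are all illustrative assumptions, not the paper's actual architecture or prompts.

```python
from itertools import combinations

def evaluation_filtering_extract(sentence, entities, eval_model, llm_extract):
    """Sketch of an evaluation-filtering extraction step, assuming:
    - eval_model.is_related(sentence, head, tail) -> bool: a small,
      high-precision judge of whether an entity pair is related;
    - llm_extract(prompt) -> list of triples: a few-shot LLM call."""
    # Step 1: enumerate candidate entity pairs from the sentence.
    candidates = list(combinations(entities, 2))

    # Step 2: keep only pairs the evaluation model judges as related
    # (the paper reports this filtering model achieves high precision).
    related_pairs = [
        (h, t) for h, t in candidates if eval_model.is_related(sentence, h, t)
    ]

    # Step 3: embed the filtered pairs into the LLM prompt as hints, so the
    # LLM assigns relations to pairs it might otherwise miss in long,
    # multi-triple sentences.
    prompt = (
        f"Sentence: {sentence}\n"
        f"Candidate related entity pairs: {related_pairs}\n"
        "Output all (head, relation, tail) triples expressed in the sentence."
    )
    return llm_extract(prompt)
```

The design choice the abstract emphasizes is recall: by handing the LLM a precise shortlist of entity pairs instead of the raw sentence alone, the prompt steers it toward triples it tends to drop in complex sentences, and the same filtered pairs can also be fed to traditional extraction models.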

Authors (5)
  1. Zepeng Ding (7 papers)
  2. Wenhao Huang (98 papers)
  3. Jiaqing Liang (62 papers)
  4. Deqing Yang (55 papers)
  5. Yanghua Xiao (151 papers)
Citations (4)