C-ICL: Contrastive In-context Learning for Information Extraction (2402.11254v2)

Published 17 Feb 2024 in cs.CL

Abstract: There has been increasing interest in exploring the capabilities of advanced LLMs in the field of information extraction (IE), specifically focusing on tasks related to named entity recognition (NER) and relation extraction (RE). Although researchers are exploring few-shot information extraction through in-context learning with LLMs, they tend to focus only on correct or positive examples for demonstration, neglecting the potential value of incorporating incorrect or negative examples into the learning process. In this paper, we present c-ICL, a novel few-shot technique that leverages both correct and incorrect sample constructions to create in-context learning demonstrations. This approach enhances the ability of LLMs to extract entities and relations by utilizing prompts that incorporate not only the positive samples but also the reasoning behind them, allowing for the identification and correction of potential inference errors. Specifically, our proposed method taps into the inherent contextual information and valuable signals in hard negative samples and in the nearest positive neighbors of the test input, and then builds the in-context learning demonstrations for LLMs. Our experiments on various datasets indicate that c-ICL outperforms previous few-shot in-context learning methods, delivering substantial performance improvements across a broad range of related tasks. These gains highlight the versatility of our approach across diverse scenarios.
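
To make the idea concrete, below is a minimal sketch of how a contrastive few-shot prompt of this kind might be assembled: retrieve the nearest positive demonstrations to the test input, add a few hard negative (incorrect) extractions with reasoning about the error, and concatenate everything into one prompt. The retrieval metric (simple token overlap), the Demo structure, and all example data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of contrastive in-context prompt construction in the
# spirit of c-ICL. Helper names, the similarity measure, and the example
# data are assumptions for demonstration only.

from dataclasses import dataclass


@dataclass
class Demo:
    sentence: str
    extraction: str  # gold (positive) or erroneous (hard negative) NER/RE output
    rationale: str   # short reasoning shown alongside the sample


def token_overlap(a: str, b: str) -> float:
    """Crude lexical similarity used here as a stand-in for a learned retriever."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))


def build_prompt(test_sentence: str,
                 positives: list,
                 hard_negatives: list,
                 k_pos: int = 2,
                 k_neg: int = 1) -> str:
    """Select nearest positive neighbors and a few hard negatives,
    then assemble a single few-shot prompt for the LLM."""
    pos = sorted(positives,
                 key=lambda d: token_overlap(d.sentence, test_sentence),
                 reverse=True)[:k_pos]
    neg = sorted(hard_negatives,
                 key=lambda d: token_overlap(d.sentence, test_sentence),
                 reverse=True)[:k_neg]

    parts = ["Task: extract (entity, relation, entity) triples from the sentence."]
    for d in pos:
        parts.append(f"Sentence: {d.sentence}\n"
                     f"Correct extraction: {d.extraction}\n"
                     f"Reasoning: {d.rationale}")
    for d in neg:
        parts.append(f"Sentence: {d.sentence}\n"
                     f"Incorrect extraction: {d.extraction}\n"
                     f"Why it is wrong: {d.rationale}")
    parts.append(f"Sentence: {test_sentence}\nExtraction:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    positives = [
        Demo("Marie Curie was born in Warsaw.",
             "(Marie Curie, born_in, Warsaw)",
             "The sentence states a birthplace relation."),
        Demo("Apple acquired Beats in 2014.",
             "(Apple, acquired, Beats)",
             "An acquisition relation between two organizations."),
    ]
    hard_negatives = [
        Demo("Turing worked at Bletchley Park.",
             "(Bletchley Park, worked_at, Turing)",
             "The argument order is reversed; the person should be the subject."),
    ]
    print(build_prompt("Ada Lovelace worked with Charles Babbage in London.",
                       positives, hard_negatives))
```

In this sketch the negative demonstrations are paired with an explanation of the error, mirroring the paper's point that showing the model why an extraction is wrong helps it avoid similar inference errors on the test input.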

Authors (7)
  1. Ying Mo (5 papers)
  2. Jian Yang (503 papers)
  3. Jiahao Liu (72 papers)
  4. Shun Zhang (105 papers)
  5. Jingang Wang (71 papers)
  6. Zhoujun Li (122 papers)
  7. Qifan Wang (129 papers)
Citations (10)