Retrieval-Augmented Code Generation for Universal Information Extraction (2311.02962v1)

Published 6 Nov 2023 in cs.AI, cs.CL, and cs.IR

Abstract: Information Extraction (IE) aims to extract structural knowledge (e.g., entities, relations, events) from natural language texts, which poses challenges to existing methods due to task-specific schemas and complex text expressions. Code, as a typical kind of formalized language, can describe structural knowledge under various schemas in a universal way. Meanwhile, LLMs trained on both code and text have demonstrated powerful capabilities for transforming text into code, which provides a feasible solution to IE tasks. Therefore, in this paper, we propose a universal retrieval-augmented code generation framework based on LLMs, called Code4UIE, for IE tasks. Specifically, Code4UIE adopts Python classes to define task-specific schemas of various structural knowledge in a universal way. By so doing, extracting knowledge under these schemas can be transformed into generating code that instantiates the predefined Python classes with the information in texts. To generate this code more precisely, Code4UIE adopts the in-context learning mechanism to instruct LLMs with examples. To obtain appropriate examples for different tasks, Code4UIE explores several example retrieval strategies that retrieve examples semantically similar to the given texts. Extensive experiments on five representative IE tasks across nine datasets demonstrate the effectiveness of the Code4UIE framework.
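The two ideas in the abstract — schemas as Python classes, and extraction as generation of code that instantiates them, guided by semantically retrieved demonstrations — can be sketched as follows. This is an illustrative toy, not the paper's released code: all class and function names below are hypothetical, and the bag-of-words similarity merely stands in for the semantic retrieval strategies the paper explores.

```python
# Illustrative sketch of the Code4UIE idea; names are hypothetical.
from dataclasses import dataclass

# 1) A task-specific schema defined as Python classes (here, a toy
#    relation-extraction schema for a "work for" relation).
@dataclass
class Entity:
    name: str

@dataclass
class WorkFor:
    employee: Entity
    employer: Entity

# 2) Extraction is reframed as code generation: for the sentence
#    "Steve Jobs co-founded Apple.", the LLM is prompted (with
#    retrieved in-context examples) to emit code that instantiates
#    the predefined schema classes, e.g.:
extraction = WorkFor(
    employee=Entity(name="Steve Jobs"),
    employer=Entity(name="Apple"),
)

# 3) A minimal stand-in for example retrieval: pick the demonstration
#    most similar to the input text, here via Jaccard overlap of a
#    toy bag-of-words "embedding" (the paper uses semantic similarity).
def retrieve_example(text: str, pool: list[str]) -> str:
    def embed(s: str) -> set[str]:
        return set(s.lower().split())

    def sim(a: set[str], b: set[str]) -> float:
        return len(a & b) / max(len(a | b), 1)

    query = embed(text)
    return max(pool, key=lambda ex: sim(query, embed(ex)))

demo_pool = [
    "Tim Cook works for Apple.",
    "The ceremony was held in Paris.",
]
best = retrieve_example("Steve Jobs co-founded Apple.", demo_pool)
print(best)  # the work-for demonstration is the closer match
```

Because the LLM's output is ordinary Python, the extracted structure can be validated simply by executing the generated code against the schema definitions.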

Authors (11)
  1. Yucan Guo (4 papers)
  2. Zixuan Li (63 papers)
  3. Xiaolong Jin (38 papers)
  4. Yantao Liu (13 papers)
  5. Yutao Zeng (18 papers)
  6. Wenxuan Liu (28 papers)
  7. Xiang Li (1003 papers)
  8. Pan Yang (11 papers)
  9. Long Bai (87 papers)
  10. Jiafeng Guo (161 papers)
  11. Xueqi Cheng (274 papers)
Citations (24)