Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty (2309.03433v1)

Published 7 Sep 2023 in cs.CL

Abstract: The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text, typically as (subject, relation, object) triples. Despite the potential of LLMs like ChatGPT as general task solvers, they lag behind state-of-the-art (supervised) methods on OIE tasks for two key reasons. First, because fine-tuning the model is restricted, LLMs struggle to distinguish irrelevant context from relevant relations and to generate structured output. Second, LLMs generate responses autoregressively based on probability, so the predicted relations lack confidence estimates. In this paper, we assess the capabilities of LLMs in improving the OIE task. In particular, we propose several in-context learning strategies to enhance the LLM's instruction-following ability, and a demonstration uncertainty quantification module to increase confidence in the generated relations. Our experiments on three OIE benchmark datasets show that our approach holds its own against established supervised methods, both quantitatively and qualitatively.
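
The abstract names two ingredients: in-context demonstrations that steer the LLM toward structured (subject, relation, object) output, and an uncertainty measure over the generated relations. The sketch below is not the authors' implementation; it is a minimal illustration of both ideas, where the demonstration pairs, the model name, and the confidence rule (geometric mean of token probabilities from the returned log-probs) are all illustrative assumptions.

```python
import math
from openai import OpenAI  # assumes the openai>=1.0 Python SDK and an API key in the environment

client = OpenAI()

# In-context demonstrations: (sentence, gold triples) pairs shown to the model
# so it imitates the (subject; relation; object) output format. These examples
# are invented for illustration, not taken from the paper.
DEMONSTRATIONS = [
    ("Marie Curie won the Nobel Prize in 1903.",
     "(Marie Curie; won; the Nobel Prize)"),
    ("The Amazon River flows through Brazil.",
     "(The Amazon River; flows through; Brazil)"),
]

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot OIE prompt from the demonstrations."""
    parts = ["Extract (subject; relation; object) triples from each sentence."]
    for text, triples in DEMONSTRATIONS:
        parts.append(f"Sentence: {text}\nTriples: {triples}")
    parts.append(f"Sentence: {sentence}\nTriples:")
    return "\n\n".join(parts)

def extract_with_confidence(sentence: str) -> tuple[str, float]:
    """Return the generated triples and a crude sequence-level confidence:
    exp of the mean token log-prob (geometric mean of token probabilities).
    This scoring rule is an assumption, not the paper's uncertainty module."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": build_prompt(sentence)}],
        logprobs=True,
        temperature=0,
    )
    choice = resp.choices[0]
    logprobs = [tok.logprob for tok in choice.logprobs.content]
    confidence = math.exp(sum(logprobs) / max(len(logprobs), 1))
    return choice.message.content.strip(), confidence

triples, conf = extract_with_confidence("Ada Lovelace wrote the first algorithm.")
print(triples, f"confidence={conf:.2f}")
```

A low geometric-mean probability flags triples whose relation tokens the model was unsure about, which is one simple way to attach a confidence score to autoregressive output.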

Authors (11)
  1. Chen Ling (65 papers)
  2. Xujiang Zhao (26 papers)
  3. Xuchao Zhang (44 papers)
  4. Yanchi Liu (41 papers)
  5. Wei Cheng (175 papers)
  6. Haoyu Wang (309 papers)
  7. Zhengzhang Chen (32 papers)
  8. Takao Osaki (4 papers)
  9. Katsushi Matsuda (4 papers)
  10. Haifeng Chen (99 papers)
  11. Liang Zhao (353 papers)
Citations (8)
