Open Information Extraction on Scientific Text: An Evaluation (1802.05574v2)

Published 15 Feb 2018 in cs.CL

Abstract: Open Information Extraction (OIE) is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks, including knowledge base construction, relation extraction, and question answering. While OIE methods are targeted at being domain independent, they have been evaluated primarily on newspaper, encyclopedic, or general web text. In this article, we evaluate the performance of OIE on scientific texts originating from 10 different disciplines. To do so, we use two state-of-the-art OIE systems and apply a crowd-sourcing approach. We find that OIE systems perform significantly worse on scientific text than on encyclopedic text. We also provide an error analysis and suggest areas of work to reduce errors. Our corpus of sentences and judgments is made available.

Authors (4)
  1. Paul Groth (51 papers)
  2. Michael Lauruhn (2 papers)
  3. Antony Scerri (2 papers)
  4. Ron Daniel Jr (4 papers)
Citations (19)