
FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation (1810.10147v2)

Published 24 Oct 2018 in cs.LG, cs.AI, cs.CL, and stat.ML

Abstract: We present a Few-Shot Relation Classification Dataset (FewRel), consisting of 70,000 sentences on 100 relations derived from Wikipedia and annotated by crowdworkers. The relation of each sentence is first recognized by distant supervision methods, and then filtered by crowdworkers. We adapt the most recent state-of-the-art few-shot learning methods for relation classification and conduct a thorough evaluation of these methods. Empirical results show that even the most competitive few-shot learning models struggle on this task, especially as compared with humans. We also show that a range of different reasoning skills are needed to solve our task. These results indicate that few-shot relation classification remains an open problem and still requires further research. Our detailed analysis points out multiple directions for future research. All details and resources about the dataset and baselines are released on http://zhuhao.me/fewrel.

Authors (7)
  1. Xu Han (270 papers)
  2. Hao Zhu (212 papers)
  3. Pengfei Yu (20 papers)
  4. Ziyun Wang (27 papers)
  5. Yuan Yao (292 papers)
  6. Zhiyuan Liu (433 papers)
  7. Maosong Sun (337 papers)
Citations (578)

Summary

FewRel: A Novel Dataset for Few-Shot Relation Classification

FewRel presents a large-scale dataset aimed at advancing research on few-shot relation classification (RC), a critical challenge in NLP. The FewRel dataset comprises 70,000 sentences across 100 relations, extracted from Wikipedia and refined through a combination of distant supervision and human annotation. This paper evaluates recent state-of-the-art few-shot learning methods on the constructed dataset, revealing that these models achieve lower accuracy compared to human performance, thus highlighting the complexity and open nature of the task.

Dataset Construction and Characteristics

The FewRel dataset was designed to address the scarcity of training data for RC models. Candidate instances are first labeled by distant supervision, aligning Wikipedia sentences with facts from Wikidata; human annotators then filter out incorrectly labeled instances. The final dataset is diverse both lexically and semantically, with 700 instances per relation and 124,577 unique tokens overall.
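The distant-supervision step can be illustrated with a minimal sketch. The knowledge-base facts and function names below are hypothetical and only show the idea: a sentence mentioning both entities of a known fact is provisionally labeled with that fact's relation, and crowdworkers later discard sentences where the relation is not actually expressed.

```python
# Toy knowledge base of (head, tail) -> relation facts (illustrative only).
KB_FACTS = {
    ("London", "United Kingdom"): "capital_of",
    ("Mark Twain", "Tom Sawyer"): "author_of",
}

def distant_label(sentence, head, tail):
    """Assign a relation label if both entities appear in the sentence
    and the pair matches a known fact; returns None otherwise."""
    if head in sentence and tail in sentence:
        return KB_FACTS.get((head, tail))
    return None

sent = "London is the capital of the United Kingdom."
print(distant_label(sent, "London", "United Kingdom"))  # capital_of
```

Note the inherent noise: the heuristic would also label a sentence like "London is far from the rest of the United Kingdom" as `capital_of`, which is exactly why FewRel adds the human filtering stage.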

Evaluation Framework

The authors evaluate several models, including traditional neural encoders (CNN, PCNN) with simple training strategies, alongside recent few-shot learning methods such as Meta Networks, Graph Neural Networks (GNN), SNAIL, and Prototypical Networks. All are tested in N-way K-shot settings, such as 5-way 1-shot and 10-way 5-shot tasks.
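An N-way K-shot episode draws N relations, then K labeled support instances and some query instances per relation. The sketch below assumes a simple dict-of-lists dataset layout, which is not FewRel's actual release format:

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=1):
    """Sample one few-shot episode from `dataset`, a mapping of
    relation name -> list of sentence instances (illustrative layout).
    Returns disjoint support and query sets per sampled relation."""
    relations = random.sample(list(dataset), n_way)
    support, query = {}, {}
    for rel in relations:
        picks = random.sample(dataset[rel], k_shot + q_queries)
        support[rel] = picks[:k_shot]
        query[rel] = picks[k_shot:]
    return support, query
```

A model is then scored on how well it classifies each query instance given only the K support examples per relation, averaged over many such episodes.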

The results indicate that while few-shot learning approaches perform better than traditional models, they fall short of human performance, especially in complex sentences requiring advanced reasoning skills such as commonsense and co-reference reasoning.
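Among the evaluated methods, Prototypical Networks use a particularly simple decision rule: each relation's support embeddings are averaged into a prototype, and a query is assigned to the nearest prototype. A minimal sketch of that rule (the encoder producing the embeddings is omitted, and the vectors here are illustrative):

```python
import math

def prototype(vectors):
    """Class prototype: the elementwise mean of the support embeddings."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def classify(query_vec, prototypes):
    """Assign the query to the relation whose prototype is nearest
    in Euclidean distance (Prototypical Networks' decision rule)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda rel: dist(query_vec, prototypes[rel]))

protos = {"birthplace": prototype([[0.0, 0.0], [0.0, 2.0]]),
          "employer": prototype([[4.0, 4.0], [6.0, 4.0]])}
print(classify([0.5, 0.5], protos))  # birthplace
```

The rule's simplicity puts the burden on the sentence encoder, which is where the paper's gap to human performance suggests current representations fall short on reasoning-heavy instances.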

Implications and Future Directions

The FewRel dataset establishes a challenging benchmark for the RC community, particularly for few-shot learning contexts. The performance gap with human capabilities suggests significant room for improvement and the need for innovative models capable of capturing nuanced relationships between entities. The paper suggests that enhancing model reasoning capabilities and incorporating external knowledge sources could be future research areas.

Practically, FewRel's unique structure allows for advancements in applications that require robust understanding of rare or novel relational patterns, a frequent requirement in dynamic information extraction tasks.

The paper propels the few-shot learning domain forward by presenting a rigorous dataset accompanied by a comprehensive evaluation. This work provides inspiration for future exploration into sophisticated NLP models capable of reasoning under limited data conditions, addressing both practical AI applications and theoretical model development.
