
Prompt-Learning for Fine-Grained Entity Typing (2108.10604v1)

Published 24 Aug 2021 in cs.CL and cs.AI

Abstract: As an effective approach to tune pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers. By using cloze-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning to fine-grained entity typing in fully supervised, few-shot, and zero-shot scenarios. We first develop a simple and effective prompt-learning pipeline by constructing entity-oriented verbalizers and templates and conducting masked language modeling. Further, to tackle the zero-shot regime, we propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types. Extensive experiments on three fine-grained entity typing benchmarks (with up to 86 classes) under fully supervised, few-shot, and zero-shot settings show that prompt-learning methods significantly outperform fine-tuning baselines, especially when the training data is insufficient.
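To make the cloze-style approach concrete, here is a minimal sketch of how an entity-oriented template and verbalizer can turn entity typing into masked language modeling, as the abstract describes. The specific checkpoint, template wording, and one-word-per-type verbalizer below are illustrative assumptions, not the authors' exact configuration; it also assumes each label word is a single token in the model's vocabulary.

```python
# A minimal sketch (not the paper's code): zero-shot entity typing by
# scoring verbalizer words at a [MASK] slot in a cloze-style prompt.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

# Entity-oriented verbalizer: each entity type maps to a label word
# (assumed here to be a single token in the vocabulary).
verbalizer = {"person": "person", "location": "location", "organization": "organization"}

sentence = "Steve Jobs founded Apple in 1976."
entity = "Steve Jobs"

# Entity-oriented template: the PLM predicts the type at the [MASK] slot.
prompt = f"{sentence} In this sentence, {entity} is a {tokenizer.mask_token}."

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and read off probabilities of the label words.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
probs = logits[0, mask_pos].softmax(dim=-1).squeeze(0)

scores = {t: probs[tokenizer.convert_tokens_to_ids(w)].item() for t, w in verbalizer.items()}
print(max(scores, key=scores.get), scores)
```

In the supervised and few-shot settings, the same pipeline is trained rather than used zero-shot: the masked-LM loss on the verbalizer words serves as the training objective, so the task format stays aligned with the PLM's pre-training.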

Authors (9)
  1. Ning Ding (122 papers)
  2. Yulin Chen (134 papers)
  3. Xu Han (270 papers)
  4. Guangwei Xu (18 papers)
  5. Pengjun Xie (85 papers)
  6. Hai-Tao Zheng (94 papers)
  7. Zhiyuan Liu (433 papers)
  8. Juanzi Li (144 papers)
  9. Hong-Gee Kim (5 papers)
Citations (139)