
LabelPrompt: Effective Prompt-based Learning for Relation Classification (2302.08068v2)

Published 16 Feb 2023 in cs.CL, cs.AI, cs.IR, and cs.LG

Abstract: Recently, prompt-based learning has gained popularity across many NLP tasks by reformulating them into a cloze-style format to better align pre-trained language models (PLMs) with downstream tasks. However, applying this approach to relation classification poses unique challenges. Specifically, associating the natural language words that fill the masked token with semantic relation labels (e.g., "org:founded_by") is difficult. To address this challenge, this paper presents a novel prompt-based learning method, namely LabelPrompt, for the relation classification task. Motivated by the intuition to "GIVE MODEL CHOICES!", we first define additional tokens to represent relation labels, regard these tokens as the verbaliser with semantic initialisation, and explicitly construct them with a prompt template method. Then, to mitigate inconsistency between predicted relations and given entities, we implement an entity-aware module with contrastive learning. Last, we apply an attention query strategy within the self-attention layer to differentiate prompt tokens from sequence tokens. Together, these strategies enhance the adaptability of prompt-based learning, especially when only a small labelled dataset is available. Comprehensive experiments on benchmark datasets demonstrate the superiority of our method, particularly in the few-shot scenario.
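The core idea of "giving the model choices" can be illustrated with a minimal sketch of the prompt construction. All names here (the label set, the `[REL_i]` token scheme, the template wording) are hypothetical illustrations of the approach described in the abstract, not the paper's actual implementation:

```python
# Hypothetical sketch: cloze-style relation-classification prompt where each
# relation label is represented by its own additional token, and the prompt
# explicitly lists these label tokens as candidates for the [MASK] position.

RELATION_LABELS = ["org:founded_by", "per:employee_of", "no_relation"]

# One new token per relation label. In the paper's method these tokens form
# the verbaliser; their embeddings would be semantically initialised (e.g.
# from the label's constituent words such as "founded by").
label_tokens = {lab: f"[REL_{i}]" for i, lab in enumerate(RELATION_LABELS)}

def build_prompt(sentence: str, head: str, tail: str) -> str:
    """Reformulate relation classification as a cloze task: the PLM is asked
    to fill [MASK] with one of the relation-label tokens listed as choices."""
    choices = " ".join(label_tokens.values())  # "GIVE MODEL CHOICES!"
    return (f"{sentence} The relation between {head} and {tail} "
            f"is [MASK]. Choices: {choices}")

prompt = build_prompt("Steve Jobs co-founded Apple in 1976.",
                      "Apple", "Steve Jobs")
print(prompt)
```

Classification then reduces to scoring which label token the masked-language-model head prefers at the `[MASK]` position, rather than mapping free-form vocabulary words onto structured labels like `org:founded_by`.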

Authors (5)
  1. Wenjie Zhang (138 papers)
  2. Xiaoning Song (14 papers)
  3. Zhenhua Feng (27 papers)
  4. Tianyang Xu (53 papers)
  5. Xiaojun Wu (94 papers)
Citations (3)