
Generative Prompt Tuning for Relation Classification (2210.12435v1)

Published 22 Oct 2022 in cs.CL

Abstract: Using prompts to explore the knowledge contained within pre-trained language models for downstream tasks has become an active research topic. Current prompt tuning methods mostly convert downstream tasks to masked language modeling problems by adding cloze-style phrases and mapping all labels to verbalizations of fixed length, which has proven effective for tasks with simple label spaces. However, when applied to relation classification, which exhibits complex label spaces, vanilla prompt tuning methods may struggle with label verbalizations of arbitrary length due to rigid prompt restrictions. Inspired by the text infilling task used to pre-train generative models, which can flexibly predict missing spans, we propose a novel generative prompt tuning method that reformulates relation classification as an infilling problem. This frees our approach from the limitations of current prompt-based approaches and thus fully exploits the rich semantics of entity and relation types. In addition, we design entity-guided decoding and discriminative relation scoring to generate and align relations effectively and efficiently during inference. Extensive experiments under fully supervised and low-resource settings demonstrate the effectiveness of our approach.
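
As a rough illustration of the infilling reformulation described in the abstract, here is a minimal sketch that scores candidate relation verbalizations as T5-style infills and picks the best one. The checkpoint ("t5-base"), prompt template, and relation verbalizations are illustrative assumptions, not the paper's actual implementation, which additionally uses entity-guided decoding and discriminative relation scoring at inference time.

```python
# Minimal sketch (not the authors' code): relation classification cast as
# text infilling with a T5-style generative model. The checkpoint, template,
# and label verbalizations below are illustrative assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

sentence = "Steve Jobs co-founded Apple in 1976."
head, tail = "Steve Jobs", "Apple"

# Cloze-style infilling prompt: the relation verbalization fills the
# <extra_id_0> sentinel, so labels of arbitrary length are allowed.
prompt = f"{sentence} The relation between {head} and {tail} is <extra_id_0>."
inputs = tokenizer(prompt, return_tensors="pt")

# Hypothetical relation verbalizations; a real label set comes from the task.
relations = ["founder of", "employee of", "place of birth"]

def score(verbalization: str) -> float:
    """Mean per-token log-likelihood of a candidate infill (a simple
    stand-in for the paper's discriminative relation scoring)."""
    target = f"<extra_id_0> {verbalization} <extra_id_1>"
    labels = tokenizer(target, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=inputs.input_ids,
                     attention_mask=inputs.attention_mask,
                     labels=labels).loss  # mean cross-entropy over label tokens
    return -loss.item()

prediction = max(relations, key=score)
print(prediction)
```

Because the infill span is generated rather than read out of a single fixed mask position, verbalizations of different token lengths compete on an equal footing, which is the key flexibility the abstract attributes to the generative formulation.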

Authors (5)
  1. Jiale Han
  2. Shuai Zhao
  3. Bo Cheng
  4. Shengkun Ma
  5. Wei Lu
Citations (19)
