
ContrastNER: Contrastive-based Prompt Tuning for Few-shot NER (2305.17951v1)

Published 29 May 2023 in cs.CL and cs.AI

Abstract: Prompt-based LLMs have produced encouraging results in numerous applications, including Named Entity Recognition (NER), the task of identifying entities in a sentence and assigning their types. However, the strong performance of most available NER approaches depends heavily on the design of discrete prompts and of a verbalizer that maps model-predicted outputs to entity categories, both of which are complicated undertakings. To address these challenges, we present ContrastNER, a prompt-based NER framework that employs both discrete and continuous tokens in prompts and uses a contrastive learning approach to learn the continuous prompts and predict entity types. Experimental results demonstrate that ContrastNER achieves performance competitive with state-of-the-art NER methods in high-resource settings and outperforms state-of-the-art models in low-resource settings, without requiring extensive manual prompt engineering or verbalizer design.
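The abstract's core idea, learning entity types with a contrastive objective instead of a hand-crafted verbalizer, can be illustrated with a generic supervised contrastive loss: token representations that share an entity type are pulled together, and representations of different types are pushed apart. The sketch below is a minimal, pure-Python illustration of that objective; the function name, embedding dimensions, and toy labels are hypothetical and do not reproduce ContrastNER's exact formulation.

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Generic supervised contrastive loss (a sketch, not the paper's exact loss).

    For each anchor i, every other example with the same label is a positive;
    the loss is the mean negative log-probability of each positive under a
    softmax over all non-anchor examples.
    """
    n = len(embeddings)
    sim = [[cosine(embeddings[i], embeddings[j]) / temperature
            for j in range(n)] for i in range(n)]
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors with no same-type partner contribute nothing
        denom = math.log(sum(math.exp(sim[i][j]) for j in range(n) if j != i))
        total += -sum(sim[i][j] - denom for j in positives) / len(positives)
        count += 1
    return total / count

# Toy example: four token embeddings and two (hypothetical) entity types.
random.seed(0)
emb = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
labels = ["PER", "PER", "ORG", "ORG"]
print(supervised_contrastive_loss(emb, labels))
```

In a ContrastNER-style setup, the embeddings would come from the model's representation of a prompt containing both discrete and continuous tokens, so minimizing this loss shapes the representation space directly instead of requiring a verbalizer to map outputs onto entity categories.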

Authors (5)
  1. Amirhossein Layegh (1 paper)
  2. Amir H. Payberah (3 papers)
  3. Ahmet Soylu (7 papers)
  4. Dumitru Roman (6 papers)
  5. Mihhail Matskin (2 papers)
Citations (5)
