Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction (2306.03378v2)

Published 6 Jun 2023 in cs.IR

Abstract: Many works have employed prompt tuning methods to automatically optimize prompt queries and extract the factual knowledge stored in Pretrained LLMs. In this paper, we observe that the optimized prompts, including both discrete and continuous prompts, exhibit undesirable object bias. To address this problem, we propose a novel prompt tuning method called MeCoD, consisting of three modules: Prompt Encoder, Object Equalization, and Biased Object Obstruction. Experimental results show that MeCoD can significantly reduce object bias and, at the same time, improve the accuracy of factual knowledge extraction.
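The abstract does not define how object bias is quantified, but the phenomenon itself — a prompt's predictions collapsing onto a few frequent objects regardless of the subject — can be illustrated with a simple skew measure. The sketch below is a hypothetical metric (not the paper's), using one minus the normalized entropy of the predicted-object distribution: 0 means predictions are spread uniformly, 1 means a single object dominates.

```python
import math
from collections import Counter

def object_bias(predicted_objects):
    """Illustrative object-bias score for a list of objects predicted by a
    prompt across many subjects. Returns 1 - normalized entropy:
    0.0 = perfectly uniform predictions, 1.0 = one object for every query.
    This is a stand-in metric, not the measure used in the MeCoD paper."""
    counts = Counter(predicted_objects)
    total = len(predicted_objects)
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    # A single unique object has zero entropy; treat it as maximal bias.
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return 1.0 - entropy / max_entropy

# An unbiased prompt spreads predictions evenly; a biased one does not.
balanced = ["Paris", "Rome"] * 5          # bias ~ 0.0
skewed = ["Paris"] * 9 + ["Rome"]         # bias ~ 0.53
```

Under a metric like this, a biased prompt that answers "Paris" for nearly every `(subject, capital-of, ?)` query scores close to 1 even when its raw accuracy looks acceptable, which is the failure mode the paper's Object Equalization and Biased Object Obstruction modules target.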

Authors (4)
  1. Yuhang Wang (54 papers)
  2. Dongyuan Lu (6 papers)
  3. Chao Kong (9 papers)
  4. Jitao Sang (71 papers)
Citations (6)