DynaMaR: Dynamic Prompt with Mask Token Representation (2206.02982v1)
Abstract: Recent research has shown that LLMs pretrained with unsupervised approaches can achieve significant performance improvements on many downstream tasks. Typically, when adapting these LLMs to a downstream task such as classification or regression, we employ a fine-tuning paradigm in which the sentence representation from the LLM is fed into a task-specific head and the model is fine-tuned end-to-end. However, with the emergence of models like GPT-3, prompt-based fine-tuning has proven to be a successful approach for few-shot tasks. Inspired by this work, we study discrete prompt techniques in practice. Two issues arise with the standard prompt approach. First, it can overfit to the prompt template. Second, it requires manual effort to formulate the downstream task as an LLM problem. In this paper, we propose an improvement to prompt-based fine-tuning that addresses these two issues. We refer to our approach as DynaMaR -- Dynamic Prompt with Mask Token Representation. Results show that DynaMaR achieves an average improvement of 10% in few-shot settings and 3.7% in data-rich settings over the standard fine-tuning approach on four e-commerce applications.
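To make the contrast with head-based fine-tuning concrete, below is a minimal sketch (not the paper's implementation) of prompt-based fine-tuning that classifies from the `[MASK]` token's hidden representation instead of a CLS-style sentence representation. The model name, prompt template, and single linear head are illustrative assumptions.

```python
# A minimal sketch of prompt-based fine-tuning from the mask token
# representation. Model, template, and head are assumptions for
# illustration, not the paper's actual configuration.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class MaskRepClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        # Task head over the [MASK] token's hidden state (assumption:
        # a single linear layer).
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, texts):
        # Hypothetical discrete prompt template wrapping each input.
        prompts = [f"{t} Overall it was {self.tokenizer.mask_token}."
                   for t in texts]
        batch = self.tokenizer(prompts, return_tensors="pt",
                               padding=True, truncation=True)
        hidden = self.encoder(**batch).last_hidden_state        # (B, T, H)
        # Select the hidden state at each example's [MASK] position.
        mask_pos = batch["input_ids"] == self.tokenizer.mask_token_id
        mask_rep = hidden[mask_pos]                             # (B, H)
        return self.head(mask_rep)                              # (B, num_labels)

# Usage: logits for a task-specific head, trained end-to-end as usual.
logits = MaskRepClassifier()(["great product, fast shipping"])
```

Classifying from the mask representation keeps the input format close to the masked-language-modeling pretraining objective, which is the usual motivation for prompt-based fine-tuning in few-shot settings.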
- Xiaodi Sun
- Sunny Rajagopalan
- Priyanka Nigam
- Weiyi Lu
- Yi Xu
- Belinda Zeng
- Trishul Chilimbi