More Samples or More Prompts? Exploring Effective In-Context Sampling for LLM Few-Shot Prompt Engineering (2311.09782v2)
Abstract: While most existing work on LLM prompting techniques focuses only on selecting a better set of data samples within a single prompt input (In-Context Learning, or ICL), why not design and leverage multiple prompts together to further improve the LLM's performance? In this work, we propose In-Context Sampling (ICS), a low-resource LLM prompting technique that produces confident predictions by optimizing the construction of multiple ICL prompt inputs. Extensive experiments with three open-source LLMs (FlanT5-XL, Mistral-7B, and Mixtral-8x7B) on four NLI datasets (e-SNLI, Multi-NLI, ANLI, and Contract-NLI) and one QA dataset (CommonsenseQA) illustrate that ICS can consistently enhance LLMs' performance. An in-depth evaluation with three data similarity-based ICS strategies suggests that these strategies can further elevate LLMs' performance, pointing to a new and promising direction for future research.
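To make the core idea concrete, below is a minimal sketch of ICS as the abstract describes it: sample several ICL prompt inputs from a demonstration pool, query the model once per prompt, and aggregate the answers. The majority-vote aggregation, the `query_llm` callable, and all function names are illustrative assumptions, not the paper's exact method; the similarity-based strategies the abstract mentions would replace the random sampling step with similarity-driven selection.

```python
# Hypothetical sketch of In-Context Sampling (ICS), assuming
# majority voting over answers from multiple sampled ICL prompts.
import random
from collections import Counter

def build_prompt(demonstrations, query):
    """Concatenate few-shot demonstrations with the test query."""
    shots = "\n\n".join(f"{x}\n{y}" for x, y in demonstrations)
    return f"{shots}\n\n{query}"

def ics_predict(query_llm, pool, query,
                num_prompts=5, shots_per_prompt=4, seed=0):
    """Sample multiple ICL prompts and return the most frequent answer.

    query_llm: callable mapping a prompt string to a model answer
        (a placeholder for any LLM inference call).
    pool: list of (input, label) demonstration candidates.
    """
    rng = random.Random(seed)
    answers = []
    for _ in range(num_prompts):
        # Each iteration constructs one distinct ICL prompt input.
        demos = rng.sample(pool, shots_per_prompt)
        answers.append(query_llm(build_prompt(demos, query)))
    # Agreement across prompts serves as a simple confidence proxy.
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / num_prompts
```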
- Bingsheng Yao (49 papers)
- Guiming Chen (4 papers)
- Ruishi Zou (6 papers)
- Yuxuan Lu (26 papers)
- Jiachen Li (144 papers)
- Shao Zhang (18 papers)
- Sijia Liu (204 papers)
- James Hendler (11 papers)
- Dakuo Wang (87 papers)
- Yisi Sang (13 papers)