Introspective Tips: Large Language Model for In-Context Decision Making (2305.11598v1)
Abstract: The emergence of LLMs has substantially influenced natural language processing, demonstrating exceptional results across various tasks. In this study, we employ "Introspective Tips" to facilitate LLMs in self-optimizing their decision-making. By introspectively examining trajectories, the LLM refines its policy by generating succinct and valuable tips. Our method enhances the agent's performance in both few-shot and zero-shot learning situations by considering three essential scenarios: learning from the agent's past experiences, integrating expert demonstrations, and generalizing across diverse games. Importantly, we accomplish these improvements without fine-tuning the LLM parameters; rather, we adjust the prompt to generalize insights from the three aforementioned situations. Our framework not only supports but also emphasizes the advantage of employing LLMs in in-context decision-making. Experiments involving over 100 games in TextWorld illustrate the superior performance of our approach.
- Liting Chen (6 papers)
- Lu Wang (329 papers)
- Hang Dong (65 papers)
- Yali Du (63 papers)
- Jie Yan (25 papers)
- Fangkai Yang (45 papers)
- Shuang Li (203 papers)
- Pu Zhao (82 papers)
- Si Qin (24 papers)
- Saravan Rajmohan (85 papers)
- Qingwei Lin (81 papers)
- Dongmei Zhang (193 papers)
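The tip-generation loop described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: `query_llm`, `generate_tip`, and `build_prompt` are invented names, and the LLM call is stubbed with a canned response so the sketch is runnable. The key point it shows is that the policy is refined purely through the prompt (by accumulating tips distilled from past trajectories), never by updating model weights.

```python
def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned tip so the
    # sketch runs without external dependencies.
    return "Tip: open containers before searching for the key."

def generate_tip(trajectory: list[str]) -> str:
    # Ask the LLM to introspect on a past trajectory and distill
    # one succinct, reusable tip from it.
    prompt = ("Review this trajectory and give one concise tip:\n"
              + "\n".join(trajectory))
    return query_llm(prompt)

def build_prompt(task: str, tips: list[str]) -> str:
    # Accumulated tips are prepended to the task prompt; the LLM's
    # parameters are never fine-tuned.
    return "\n".join(tips) + f"\nTask: {task}\nAction:"

# One iteration: introspect on a failed trajectory, then act with the tip.
tips: list[str] = []
trajectory = ["go north", "examine chest", "take key -> failed: chest locked"]
tips.append(generate_tip(trajectory))
prompt = build_prompt("find the key", tips)
```

In the paper's three scenarios, the trajectories fed to `generate_tip` would come from the agent's own past episodes, from expert demonstrations, or from other games, while the loop structure stays the same.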