
Introspective Tips: Large Language Model for In-Context Decision Making (2305.11598v1)

Published 19 May 2023 in cs.AI and cs.CL

Abstract: The emergence of LLMs has substantially influenced natural language processing, demonstrating exceptional results across various tasks. In this study, we employ "Introspective Tips" to facilitate LLMs in self-optimizing their decision-making. By introspectively examining trajectories, the LLM refines its policy by generating succinct and valuable tips. Our method enhances the agent's performance in both few-shot and zero-shot learning situations by considering three essential scenarios: learning from the agent's past experiences, integrating expert demonstrations, and generalizing across diverse games. Importantly, we accomplish these improvements without fine-tuning the LLM parameters; rather, we adjust the prompt to generalize insights from the three aforementioned situations. Our framework not only supports but also emphasizes the advantage of employing LLMs for in-context decision-making. Experiments involving over 100 games in TextWorld illustrate the superior performance of our approach.
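The abstract describes a prompt-level loop: the LLM inspects past trajectories, distills short tips, and those tips are injected into the next decision prompt instead of updating model weights. A minimal sketch of that loop is below; `query_llm`, the prompt wording, and the trajectory format are illustrative assumptions, not the authors' actual implementation (here the LLM call is stubbed with a deterministic placeholder).

```python
def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (hypothetical placeholder).
    # It derives one "tip" per failed step in the trajectory text.
    tips = [line.replace("FAILED:", "Tip: avoid") for line in prompt.splitlines()
            if line.startswith("FAILED:")]
    return "\n".join(tips)

def generate_tips(trajectory: list[str]) -> str:
    """Introspection step: ask the LLM to review a past trajectory
    and distill succinct tips for improving the policy."""
    prompt = ("Review the trajectory below and write succinct tips "
              "for improving the policy.\n" + "\n".join(trajectory))
    return query_llm(prompt)

def build_decision_prompt(task: str, tips: str) -> str:
    """Decision step: inject the distilled tips into the prompt,
    leaving the LLM's parameters untouched."""
    return f"Task: {task}\n{tips}\nChoose the next action:"

if __name__ == "__main__":
    trajectory = ["take key",
                  "FAILED: open chest before unlocking it",
                  "unlock chest"]
    tips = generate_tips(trajectory)
    print(build_decision_prompt("open the chest", tips))
```

The same structure covers the paper's three scenarios: the trajectory fed to `generate_tips` can come from the agent's own past episodes, from expert demonstrations, or from runs on different games.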

Authors (12)
  1. Liting Chen (6 papers)
  2. Lu Wang (329 papers)
  3. Hang Dong (65 papers)
  4. Yali Du (63 papers)
  5. Jie Yan (25 papers)
  6. Fangkai Yang (45 papers)
  7. Shuang Li (203 papers)
  8. Pu Zhao (82 papers)
  9. Si Qin (24 papers)
  10. Saravan Rajmohan (85 papers)
  11. Qingwei Lin (81 papers)
  12. Dongmei Zhang (193 papers)
Citations (19)