Improving Language Model Prompting in Support of Semi-autonomous Task Learning (2209.07636v2)

Published 13 Sep 2022 in cs.LG, cs.AI, and cs.CL

Abstract: LLMs offer potential as a source of knowledge for agents that need to acquire new task competencies within a performance environment. We describe efforts toward a novel agent capability that constructs cues (or "prompts") that elicit useful LLM responses for an agent learning a new task. Importantly, responses must not only be "reasonable" (a measure commonly used in research on knowledge extraction from LLMs) but also specific to the agent's task context and in a form that the agent can interpret given its native language capacities. We summarize a series of empirical investigations of prompting strategies and evaluate the responses against the goals of being targeted to the agent's task and actionable by the agent. Our results demonstrate that actionable task knowledge can be obtained from LLMs in support of online agent task learning.
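The pipeline the abstract sketches (build a prompt from the agent's current task context, query the LLM, and keep only responses the agent's native language capacities can interpret) can be illustrated with a minimal Python sketch. This is not the paper's implementation: `query_llm`, the prompt template, and the word-count filter in `actionable_steps` are all placeholder assumptions chosen for illustration.

```python
import re

def query_llm(prompt: str) -> str:
    """Placeholder for the agent's LLM interface (hypothetical; provider-specific)."""
    # Canned response for illustration; a real implementation would call a model API.
    return "1. Pick up the mug.\n2. Open the dishwasher.\n3. Place the mug inside."

def build_prompt(task: str, objects: list[str], location: str) -> str:
    """Instantiate a template with the agent's current task context so the
    response is specific to its situation rather than generic."""
    return (
        f"You are helping a robot learn to {task} in the {location}. "
        f"Visible objects: {', '.join(objects)}. "
        "List the steps, one per line, each a short imperative phrase."
    )

def actionable_steps(response: str) -> list[str]:
    """Keep only lines a simple natural-language parser could plausibly
    interpret: short imperative phrases like 'pick up the mug'."""
    steps = []
    for line in response.splitlines():
        line = re.sub(r"^\s*\d+[.)]\s*", "", line).strip().rstrip(".")
        if line and len(line.split()) <= 8:  # crude interpretability filter
            steps.append(line.lower())
    return steps

prompt = build_prompt("tidy the kitchen", ["mug", "plate", "sponge"], "kitchen")
for step in actionable_steps(query_llm(prompt)):
    print(step)  # pick up the mug / open the dishwasher / place the mug inside
```

The design point the abstract emphasizes shows up in the last stage: a "reasonable" response is not enough; it must survive a filter standing in for the agent's limited parsing capacity before it counts as actionable task knowledge.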

Authors (4)
  1. James R. Kirk (8 papers)
  2. Robert E. Wray (9 papers)
  3. Peter Lindes (4 papers)
  4. John E. Laird (15 papers)
Citations (10)
