
Interactive Task and Concept Learning from Natural Language Instructions and GUI Demonstrations (1909.00031v2)

Published 30 Aug 2019 in cs.HC and cs.AI

Abstract: Natural language programming is a promising approach to enable end users to instruct new tasks for intelligent agents. However, our formative study found that end users would often use unclear, ambiguous or vague concepts when naturally instructing tasks in natural language, especially when specifying conditionals. Existing systems have limited support for letting the user teach agents new concepts or explaining unclear concepts. In this paper, we describe a new multi-modal domain-independent approach that combines natural language programming and programming-by-demonstration to allow users to first naturally describe tasks and associated conditions at a high level, and then collaborate with the agent to recursively resolve any ambiguities or vagueness through conversations and demonstrations. Users can also define new procedures and concepts by demonstrating and referring to contents within GUIs of existing mobile apps. We demonstrate this approach in PUMICE, an end-user programmable agent that implements this approach. A lab study with 10 users showed its usability.

Authors (6)
  1. Toby Jia-Jun Li (57 papers)
  2. Marissa Radensky (9 papers)
  3. Justin Jia (2 papers)
  4. Kirielle Singarajah (1 paper)
  5. Tom M. Mitchell (20 papers)
  6. Brad A. Myers (16 papers)
Citations (90)
