Beyond Prompts: Learning from Human Communication for Enhanced AI Intent Alignment (2405.05678v1)
Abstract: AI intent alignment, ensuring that AI produces outcomes as intended by users, is a critical challenge in human-AI interaction. The emergence of generative AI, including LLMs, has intensified the significance of this problem, as interactions increasingly involve users specifying desired results for AI systems. To support better AI intent alignment, we explore human strategies for intent specification in human-human communication. By studying and comparing human-human and human-LLM communication, we identify key strategies that can be applied to the design of AI systems that more effectively understand and align with user intent. This study advances toward human-centered AI systems by bringing human communication strategies into the design of AI systems.
- Yoonsu Kim
- Kihoon Son
- Seoyoung Kim
- Juho Kim