Beyond Prompts: Learning from Human Communication for Enhanced AI Intent Alignment

Published 9 May 2024 in cs.HC and cs.CL (arXiv:2405.05678v1)

Abstract: AI intent alignment, ensuring that AI produces outcomes as intended by users, is a critical challenge in human-AI interaction. The emergence of generative AI, including LLMs, has heightened the importance of this problem, as interactions increasingly involve users specifying desired results to AI systems. To support better AI intent alignment, we explore human strategies for intent specification in human-human communication. By studying and comparing human-human and human-LLM communication, we identify key strategies that can be applied to the design of AI systems that more effectively understand and align with user intent. This work advances human-centered AI by bringing human communication strategies to bear on the design of AI systems.
