
Exploring the Design Space of Extra-Linguistic Expression for Robots (2306.15828v1)

Published 27 Jun 2023 in cs.RO and cs.HC

Abstract: In this paper, we explore the new design space of extra-linguistic cues inspired by graphical tropes used in graphic novels and animation to enhance the expressiveness of social robots. To achieve this, we identified a set of cues that can be used to generate expressions, including smoke/steam/fog, water droplets, and bubbles. We prototyped devices that can generate these fluid expressions for a robot and conducted design sessions where eight designers explored the use and utility of the cues in conveying the robot's internal states in various design scenarios. Our analysis of the 22 designs, the associated design justifications, and the interviews with designers revealed patterns in how each cue was used, how they were combined with nonverbal cues, and where the participants drew their inspiration from. These findings informed the design of an integrated module called EmoPack, which can be used to augment the expressive capabilities of any robot platform.
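
The paper does not publish code for EmoPack, so the following is only a minimal sketch of how an EmoPack-style module might map a robot's internal states to fluid cues such as steam, droplets, and bubbles. All class names, method signatures, and state-to-cue mappings below are illustrative assumptions, not the authors' implementation; real hardware drivers would replace the logging stand-ins.

```python
# Hypothetical sketch only: the paper does not publish EmoPack's code or API.
# Class names, cue-to-state mappings, and actuator hooks are assumptions
# chosen for illustration, not the authors' implementation.
from abc import ABC, abstractmethod


class CueGenerator(ABC):
    """One fluid-expression actuator (e.g., a fog machine or bubble blower)."""

    @abstractmethod
    def activate(self, intensity: float, duration_s: float) -> None:
        """Run the actuator at a 0.0-1.0 intensity for the given duration."""


class LoggingCue(CueGenerator):
    """Stand-in actuator that only logs; hardware drivers would go here."""

    def __init__(self, name: str) -> None:
        self.name = name

    def activate(self, intensity: float, duration_s: float) -> None:
        print(f"{self.name}: intensity={intensity:.1f} for {duration_s:.1f}s")


class EmoPack:
    """Maps a robot's internal state to one or more extra-linguistic cues."""

    def __init__(self) -> None:
        self.cues = {
            "steam": LoggingCue("steam"),
            "droplets": LoggingCue("droplets"),
            "bubbles": LoggingCue("bubbles"),
        }
        # Example state-to-cue mapping; the mappings designers actually chose
        # in the study are described in the paper, not reproduced here.
        self.expressions = {
            "frustrated": [("steam", 0.9, 2.0)],
            "overheating": [("steam", 1.0, 3.0), ("droplets", 0.6, 3.0)],
            "playful": [("bubbles", 0.5, 4.0)],
        }

    def express(self, state: str) -> None:
        for cue_name, intensity, duration in self.expressions.get(state, []):
            self.cues[cue_name].activate(intensity, duration)


if __name__ == "__main__":
    EmoPack().express("frustrated")
```

Under these assumptions, a host robot would only need to call `express()` with a named internal state, which is consistent with the paper's goal of augmenting the expressive capabilities of any robot platform.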
