Data-driven architecture to encode information in the kinematics of robots and artificial avatars (2403.06557v1)

Published 11 Mar 2024 in eess.SY, cs.LG, cs.RO, and cs.SY

Abstract: We present a data-driven control architecture for modifying the kinematics of robots and artificial avatars to encode specific information such as the presence or not of an emotion in the movements of an avatar or robot driven by a human operator. We validate our approach on an experimental dataset obtained during the reach-to-grasp phase of a pick-and-place task.

Summary

  • The paper introduces a novel control architecture that encodes emotions within robot kinematics during reach-to-grasp movements.
  • It employs feedforward neural networks to map extensive human motion data into adaptive robot movements that reflect targeted emotional states.
  • The approach preserves task functionality by adding correction terms that enforce initial and terminal movement constraints in real time.

Data-driven Control Architecture for Encoding Information in Robot Kinematics

Introduction

Developments in robotics and artificial intelligence have heightened interest in creating robots and avatars capable of conveying complex information, including emotional states, through their movements. A central challenge in social robotics and human-computer interaction is modifying robot kinematics to encode specific information, such as emotions, in a way that can significantly enhance human-robot interaction. This paper introduces a data-driven control architecture designed to modify the kinematics of robots and artificial avatars so that desired information is encoded within their movements. Focusing on the "reach-to-grasp" motion, a fundamental movement in many interaction tasks, the work addresses encoding emotional states such as fear in avatars controlled by human operators in virtual reality (VR).

State of the Art

Emotional states have been incorporated into robotic systems through a range of approaches, from decision-making frameworks modulated by emotional responses to trajectory planning that accounts for emotional expression using Laban Movement Analysis. Despite these advances, a concrete system for dynamically adjusting a robot's or avatar's movements to encode specific emotions in real time, based on a human operator's input, remains underexplored. This paper builds on existing research by proposing a trainable control architecture that modifies movement kinematics using a comprehensive dataset of human motions, with the aim of encoding and decoding emotional states effectively.

Control of Body Movement to Express Emotion

At the core of this research is the premise that human movements carry rich information about the mover's intentions and emotional states. Being able to read this information and encode it into the movements of artificial agents can substantially improve the interpretability and naturalness of human-robot interaction. The paper systematically defines the problem, introduces the necessary preliminaries, and gives a mathematical formalization of encoding emotions in robot kinematics, focusing in particular on the emotion of fear during a reach-to-grasp motion.
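
To make the setup concrete, the formalization can be sketched along the following lines; the notation below is illustrative and is not taken verbatim from the paper.

```latex
% Illustrative formalization (assumed notation, not the paper's own symbols)
\begin{aligned}
& x(t) \in \mathbb{R}^3,\ t \in [0, T]
    && \text{operator's reach-to-grasp trajectory (e.g., wrist position)} \\
& \phi : x(\cdot) \mapsto [0, 1]
    && \text{learned encoding function: likelihood that the emotion is present} \\
\text{find } & \tilde{x}(t)\ \text{such that}\ \phi(\tilde{x}) \approx e^{*},
    && e^{*} \in \{0, 1\}\ \text{the desired emotion label,} \\
& \tilde{x}(0) = x(0), \quad \tilde{x}(T) = x_{\mathrm{goal}},
    && \text{so the movement still starts and ends where the task requires,}
\end{aligned}
```

while keeping the modified trajectory close to the operator's original movement so that the motion remains natural.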

A Data-driven Solution Framework

A key contribution of the paper is a data-driven architecture that leverages a comprehensive dataset of human movements to encode desired emotions into robot movements. The architecture uses feedforward neural networks to approximate an encoding function that classifies movements according to the presence of a specific emotion. By projecting the ongoing human movement onto a dataset partitioned by emotion label, the system selects a reference movement that closely matches the desired emotional state. It then computes a blending coefficient to adjust the robot's or avatar's movement in real time so that the target emotion is encoded.
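
A minimal sketch of how this selection-and-blending step could be implemented is given below. The nearest-neighbour projection, the feature representation (time-normalized wrist trajectories), and the convex blending rule are assumptions for illustration; the paper's actual networks and blending law may differ.

```python
import numpy as np

def select_reference(human_traj, dataset, target_emotion):
    """Pick, among trajectories labelled with the target emotion, the one closest
    to the operator's ongoing movement (a simple nearest-neighbour projection).

    human_traj : array (T, 3), the operator's wrist trajectory, time-normalized.
    dataset    : list of (trajectory, emotion_label) pairs, trajectories also (T, 3).
    """
    candidates = [traj for traj, label in dataset if label == target_emotion]
    distances = [np.linalg.norm(human_traj - traj) for traj in candidates]
    return candidates[int(np.argmin(distances))]

def blend(human_traj, reference_traj, alpha):
    """Convex combination of the operator's movement and the emotion-carrying
    reference; alpha in [0, 1] plays the role of the blending coefficient."""
    return (1.0 - alpha) * human_traj + alpha * reference_traj
```

In practice, the blending coefficient would be chosen (or adapted online) so that the learned encoding function classifies the blended movement as carrying the target emotion while keeping it as close as possible to the operator's motion.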

Enforcing Initial and Terminal Conditions

Another significant aspect of the research is the attention to practical constraints, such as ensuring that the modified kinematics still enable the completion of intended tasks (e.g., reaching an object). The team proposes a solution that not only encodes the desired emotion but also satisfies these physical constraints by introducing a correction term into the altered motion. This approach demonstrates an understanding of the complex balance between achieving realistic emotional encoding and maintaining the functional objectives of robot movements.
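
One simple way to impose such constraints is to add a correction that cancels the residual errors at the start and end of the movement; the linear interpolation below is only an illustrative choice and is not claimed to be the exact correction term used in the paper.

```python
import numpy as np

def enforce_endpoints(blended_traj, start, goal):
    """Add a time-varying correction so the modified trajectory still begins at the
    operator's start point and ends exactly at the grasp target.

    blended_traj : array (T, 3) produced by the blending step.
    start, goal  : arrays (3,), required initial and terminal positions.
    """
    T = len(blended_traj)
    s = np.linspace(0.0, 1.0, T)[:, None]    # normalized time in [0, 1]
    err_start = start - blended_traj[0]      # residual error at the start
    err_goal = goal - blended_traj[-1]       # residual error at the goal
    correction = (1.0 - s) * err_start + s * err_goal
    return blended_traj + correction         # endpoints now match exactly
```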

Validation and Implications

The paper's validation section provides empirical evidence supporting the proposed architecture's efficacy, showcasing successful encoding of fear in reach-to-grasp movements. The researchers meticulously train and evaluate the encoding function, showing strong performance in classifying emotional states from movement kinematics.
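
For intuition, the following sketch shows how an encoding function of this kind might be trained and evaluated as a binary classifier over kinematic features (sampled positions and finite-difference velocities). The feature extraction, network size, and evaluation split are assumptions; the paper's exact training protocol and results are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def kinematic_features(traj):
    """Flatten positions and finite-difference velocities into one feature vector."""
    vel = np.diff(traj, axis=0)
    return np.concatenate([traj.ravel(), vel.ravel()])

# `trajectories` (list of (T, 3) arrays) and `labels` (1 = emotion present, 0 = absent)
# are assumed to come from a motion-capture dataset such as the one used in the paper.
X = np.stack([kinematic_features(t) for t in trajectories])
y = np.asarray(labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```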

The implications of this work extend beyond robotics, potentially revolutionizing how artificial avatars in virtual and augmented reality environments interact with users. By enabling more nuanced and emotionally aware interactions, the technology could foster deeper connections between humans and artificial agents, enhancing the user experience in entertainment, training, and therapeutic applications.

In conclusion, this research contributes significantly to the fields of robotics and human-computer interaction by offering a robust framework for encoding emotional information into robot and avatar movements. As future work explores applying this architecture to other forms of interaction and refining the encoding capabilities, we move closer to realizing robots and avatars capable of truly natural and meaningful communication with humans.
