Data-driven architecture to encode information in the kinematics of robots and artificial avatars (2403.06557v1)
Published 11 Mar 2024 in eess.SY, cs.LG, cs.RO, and cs.SY
Abstract: We present a data-driven control architecture for modifying the kinematics of robots and artificial avatars to encode specific information, such as the presence or absence of an emotion, in the movements of an avatar or robot driven by a human operator. We validate our approach on an experimental dataset obtained during the reach-to-grasp phase of a pick-and-place task.
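To make the idea concrete, below is a minimal, hypothetical sketch of the encode-and-decode loop the abstract describes: a scalar kinematic feature of a reach movement is modulated to carry a binary label (e.g., "emotion present"), and a simple classifier then reads that label back out of the kinematics. The one-dimensional minimum-jerk reach, the peak-velocity modulation rule, and the logistic-regression decoder are all illustrative assumptions, not the architecture proposed in the paper.

```python
# Hypothetical sketch: write a binary label into the velocity profile of a
# reach-to-grasp movement, then decode it with a simple classifier.
# Feature choice, modulation rule, and decoder are assumptions for
# illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def reach_velocity(duration=1.0, peak_scale=1.0, n=100):
    """Velocity profile of a minimum-jerk-like 1D reach, with tunable peak."""
    t = np.linspace(0.0, duration, n)
    s = t / duration
    pos = 10 * s**3 - 15 * s**4 + 6 * s**5   # minimum-jerk position profile
    return np.gradient(pos, t) * peak_scale  # scaled velocity profile

def encode(label):
    """'Write' side: shift peak velocity when the label is 1."""
    base = 1.0 + 0.05 * rng.standard_normal()  # operator-to-operator variability
    return reach_velocity(peak_scale=base + (0.3 if label else 0.0))

# Generate labelled movements and train a kinematic decoder ('read' side).
labels = rng.integers(0, 2, size=200)
X = np.stack([encode(y) for y in labels])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("decoding accuracy:", clf.score(X, labels))
```

In the paper's setting, the modulation would instead be produced by a data-driven controller acting on the operator's movement in real time; the sketch only shows why a small, systematic kinematic shift suffices to make the information decodable.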