
FRAC-Q-Learning: A Reinforcement Learning with Boredom Avoidance Processes for Social Robots

Published 26 Nov 2023 in cs.RO, cs.HC, and cs.LG | (2311.15327v6)

Abstract: Reinforcement learning algorithms have often been applied to social robots. However, most are not optimized for social-robot use and may consequently bore users. We propose a new reinforcement learning method specialized for social robots, FRAC-Q-learning, that can avoid user boredom. The proposed algorithm adds a forgetting process to randomizing and categorizing processes. This study evaluated interest and boredom-hardness scores of FRAC-Q-learning in comparison with traditional Q-learning. FRAC-Q-learning showed a significantly higher trend in interest score and was significantly harder to bore users with than traditional Q-learning. FRAC-Q-learning can therefore contribute to developing a social robot that does not bore users, and it also has potential applications in Web-based communication and educational systems. This paper presents the entire process, detailed implementation, and detailed evaluation method of FRAC-Q-learning for the first time.

Summary

  • The paper introduces FRAC-Q-Learning, a novel algorithm integrating boredom avoidance processes into traditional Q-learning for social robots.
  • It enhances learning through forgetting, randomization, and action categorization, leading to diversified and adaptive interactions.
  • Experiments with the Manami robot show that users experienced more engaging interactions compared to standard Q-learning.

Introduction to FRAC-Q-Learning

Social robots are becoming increasingly common in everyday scenarios, from education and therapy to general companionship. A critical challenge in their development is ensuring these robots maintain the interest of humans and do not cause boredom over time. This paper introduces an innovative approach called FRAC-Q-learning that is designed to keep users engaged when interacting with social robots.

Key Concepts of FRAC-Q-Learning

Reinforcement learning (RL) is the backbone of this methodology. Traditional RL has been successful in various applications, but it often falls short when applied to social robots because it does not cater specifically to the social-interaction context. As an extension of the well-known Q-learning, FRAC-Q-learning incorporates additional processes to adapt its behavior based on user interactions.
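For reference, the tabular Q-learning that FRAC-Q-learning extends can be sketched as follows. This is the standard textbook algorithm, not code from the paper; the constants and function names are illustrative.

```python
import random

# Standard tabular Q-learning, the baseline that FRAC-Q-learning builds on.
# All parameter values below are illustrative, not taken from the paper.
ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.2  # exploration rate

def choose_action(q_table, state, actions):
    """Epsilon-greedy selection: explore at random with probability
    EPSILON, otherwise pick the action with the highest Q-value."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

def update(q_table, state, action, reward, next_state, actions):
    """Temporal-difference update toward reward + discounted best
    next-state value."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in actions)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

In a social-robot setting, the state would encode the interaction context and the actions would be the robot's behaviors; the boredom problem arises because this update steadily converges on a narrow set of high-value actions.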

Implementing FRAC-Q-Learning in Social Robots

The novel FRAC-Q-learning algorithm was implemented in a handmade stuffed social robot called Manami. The algorithm consists of three key enhancements to traditional Q-learning: a forgetting process, randomization, and categorization of actions. These enhancements aim to avoid user boredom through diversified and adaptive interactions.
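One plausible reading of the three enhancements can be sketched as below. The decay rule, the category structure, and every constant here are assumptions made for exposition; the paper's actual formulation may differ.

```python
import random

# Illustrative sketch of the three enhancements named above (forgetting,
# randomization, categorization). All details are assumptions, not the
# paper's actual algorithm.
FORGET_RATE = 0.01  # assumed per-step decay toward the initial value
RANDOM_PROB = 0.3   # assumed probability of a fully random action

def forget(q_table, initial_value=0.0):
    """Forgetting process: decay every Q-value toward its initial value,
    so that rarely chosen behaviors can become competitive again."""
    for key, value in q_table.items():
        q_table[key] = value + FORGET_RATE * (initial_value - value)

def select_action(q_table, state, categories):
    """Randomizing and categorizing processes: first pick an action
    category, then choose within it, randomly or greedily."""
    category = random.choice(list(categories))           # categorization
    actions = categories[category]
    if random.random() < RANDOM_PROB:                    # randomization
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))
```

The intended effect is that forgetting keeps the value table from collapsing onto a few dominant behaviors, while category-level and within-category randomness diversify what the user actually sees.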

Evaluation and Potential Applications

The effectiveness of FRAC-Q-learning was evaluated against traditional Q-learning through experiments involving the social robot Manami. Participants indicated that the interactions with the robot using FRAC-Q-learning were more interesting and less likely to bore them.

Moreover, FRAC-Q-learning holds potential beyond the field of social robotics. The algorithm can be valuable for web-based communication systems, educational platforms, and web advertising, where user engagement is crucial. The ability to adapt to user feedback ensures sustained interest and interaction, which is pivotal in educational and therapeutic contexts.

In conclusion, FRAC-Q-learning is a significant step forward in developing social robots that can sustain human interest, preventing the onset of boredom and disengagement. This enhanced learning approach paves the way for more sophisticated and socially adept robots capable of maintaining long-term relationships with users.
