Teaching Social Behavior through Human Reinforcement for Ad hoc Teamwork - The STAR Framework (1809.07880v3)
Abstract: As AI technology continues to develop, more and more agents will become capable of long-term autonomy alongside people. Thus, a recent line of research has studied the problem of teaching autonomous agents the concept of ethics and human social norms. Most existing work considers the case of an individual agent attempting to learn a predefined set of rules. In reality, however, social norms are not always predefined and are very difficult to represent algorithmically. Moreover, the basic idea behind social norms is ensuring that one's actions do not negatively affect others' utilities, which is inherently a multiagent concept. Thus, here we investigate a way to teach agents, as a team, how to act according to human social norms. In this research, we introduce the STAR framework for teaching an ad hoc team of agents to act in accordance with human social norms. In a hybrid team of agents and people, whenever an agent takes an action considered socially unacceptable, it receives negative feedback from the human teammate(s), who are aware of the team's norms. We view STAR as an important step towards teaching agents to act more consistently with respect to human morality.
- Shani Alkoby (2 papers)
- Avilash Rath (2 papers)
- Peter Stone (184 papers)
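The feedback loop the abstract describes, where an agent folds negative human feedback on norm-violating actions into its learned action values, can be illustrated with a minimal sketch. Note this is not the authors' STAR implementation; all names here (`NormAwareAgent`, `human_teammate_feedback`, the `feedback_weight` parameter, the toy "hallway" scenario) are illustrative assumptions, and the blending of task reward with human feedback is one plausible reading of the mechanism, not the paper's algorithm.

```python
# Hypothetical sketch: an agent in a hybrid team receives negative human
# feedback for socially unacceptable actions and blends it with task reward.
import random
from collections import defaultdict

class NormAwareAgent:
    """Tabular agent that blends task reward with human social feedback."""

    def __init__(self, actions, alpha=0.1, feedback_weight=1.0, epsilon=0.1):
        self.actions = actions
        self.alpha = alpha                      # learning rate
        self.feedback_weight = feedback_weight  # how strongly human feedback counts
        self.epsilon = epsilon                  # exploration rate
        self.q = defaultdict(float)             # (state, action) -> estimated value

    def act(self, state):
        # Epsilon-greedy action selection over learned values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, task_reward, human_feedback):
        # human_feedback is negative when teammates judge the action
        # socially unacceptable, zero otherwise.
        target = task_reward + self.feedback_weight * human_feedback
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Toy usage: a human teammate penalizes the (hypothetical) "shove" action.
def human_teammate_feedback(action):
    return -1.0 if action == "shove" else 0.0

agent = NormAwareAgent(actions=["wait", "pass", "shove"])
for step in range(500):
    state = "hallway"
    action = agent.act(state)
    # Shoving "works" for the task just as well as passing politely...
    task_reward = 1.0 if action in ("pass", "shove") else 0.0
    agent.update(state, action, task_reward, human_teammate_feedback(action))

# ...but with enough human feedback, "pass" dominates "shove".
print(max(agent.actions, key=lambda a: agent.q[("hallway", a)]))
```

The point of the sketch is the abstract's central idea: the norm is never encoded explicitly; the agent's behavior shifts only because human teammates who know the norms supply negative feedback when it is violated.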