Learning Altruistic Behaviours in Reinforcement Learning without External Rewards (2107.09598v4)

Published 20 Jul 2021 in cs.AI, cs.LG, and cs.MA

Abstract: Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents' goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents' goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent's success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them.

Authors (3)
  1. Tim Franzmeyer
  2. Mateusz Malinowski
  3. João F. Henriques
Citations (7)

Summary

Learning altruistic behaviours in reinforcement learning (RL) without external rewards means training agents to assist others in achieving their goals without explicit knowledge of, or rewards derived from, those goals. The paper "Learning Altruistic Behaviours in Reinforcement Learning without External Rewards" (Franzmeyer et al., 2021) proposes that agents can be trained to act altruistically by maximizing the number of states reachable by another agent, thereby increasing that agent's options and aiding its success. This approach depends on neither external supervision nor precise knowledge of the other agent's specific goals; a minimal sketch of the reachability objective follows.
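
The reachability objective can be made concrete with a short sketch. The fragment below enumerates the states another agent can reach within a fixed horizon and uses the count as an intrinsic reward. Note that `env.actions` and `env.step_from` are hypothetical helpers, exact enumeration is only feasible in small deterministic environments with hashable states, and this is a conceptual instance of the idea rather than the paper's implementation.

```python
from collections import deque

def reachable_states(env, start, horizon):
    """Count the distinct states reachable from `start` within `horizon`
    steps, by breadth-first search over the transition function."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if depth == horizon:
            continue
        for action in env.actions:              # assumed: finite action set
            nxt = env.step_from(state, action)  # assumed: deterministic step
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return len(seen)

def altruistic_reward(env, other_agent_state, horizon=5):
    # Intrinsic reward for the altruistic agent: the number of choices
    # (reachable states) it leaves open to the other agent.
    return reachable_states(env, other_agent_state, horizon)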

The idea ties into the broader challenge within RL of developing intrinsic motivation mechanisms, which has been addressed in various ways across the literature:

  1. Intrinsic Motivation and Curiosity: Curiosity-driven learning motivates agents through intrinsic rewards based on prediction errors rather than externally defined rewards, which lets agents explore and learn effectively in the absence of explicit reward signals from the environment. A large-scale study of curiosity-driven learning demonstrates that agents can perform well across many environments using internal reward mechanisms derived from curiosity (Burda et al., 2018); a minimal sketch of such a reward appears after this list.
  2. Generative Intrinsic Goals: AMIGo (Adversarially Motivated Intrinsic Goals) presents another strategy, in which a "teacher" agent generates challenging goals for a "student" agent to achieve, promoting the learning of general skills without relying on external rewards. This generated curriculum of intrinsic goals fosters versatile agents capable of handling varied tasks (Campero et al., 2020).
  3. Interactive Learning and Human Feedback: Human-in-the-loop reinforcement learning, such as the PEBBLE framework, leverages human feedback to train agents efficiently. These methods combine unsupervised pre-training with interactive feedback to limit reward exploitation and to learn complex tasks despite sparse external rewards (Lee et al., 2021).
  4. Reward Modeling and Imitation Learning: Reward modeling learns a reward function from user interactions, aligning RL agents with human intentions even in the absence of explicit goals and mitigating the difficulty of hand-specifying detailed reward functions (Leike et al., 2018). Similarly, learning perceptual reward functions from demonstrations reduces the need for manual reward specification, enabling agents to perform tasks based on visual cues (Sermanet et al., 2016). A sketch of the preference-based reward learning common to this item and the previous one also follows the list.
  5. Task-Agnostic Altruistic Behaviour: The core proposition of the target paper is distinctive in that it formulates altruistic behaviour in a task-agnostic way. By preferring states that maximize another agent's future reachable states, agents can learn to assist others effectively, in some environments even outperforming explicitly cooperative agents (Franzmeyer et al., 2021).
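
As referenced in item 1, a minimal curiosity-style intrinsic reward can be written down directly: a learned forward model predicts the next state, and its prediction error is paid out as the reward. The sketch below uses PyTorch; the architecture and dimensions are illustrative assumptions, not the setup of Burda et al. (2018).

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Learned dynamics model: predicts the next state from (state, action)."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def curiosity_reward(model, state, action, next_state):
    # Intrinsic reward = forward-model prediction error: transitions the
    # model predicts poorly (i.e., novel ones) earn a larger bonus.
    with torch.no_grad():
        predicted = model(state, action)
    return 0.5 * (predicted - next_state).pow(2).sum(dim=-1)
```

The forward model itself is trained on the same prediction error, so the bonus shrinks for transitions the agent has already mastered, steering exploration toward novelty.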
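Items 3 and 4 both rest on learning a reward function from human input. A common recipe, sketched below under assumed dimensions, scores trajectory segments with a small reward network and fits pairwise human preferences with a Bradley-Terry (cross-entropy) loss; this shows the general pattern rather than the exact implementation of PEBBLE or of reward modeling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 8, 2  # illustrative dimensions (assumption)

# Small network scoring individual state-action pairs.
reward_model = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(), nn.Linear(64, 1)
)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=3e-4)

def preference_loss(segment_a, segment_b, human_prefers_a):
    """segment_*: (T, STATE_DIM + ACTION_DIM) tensors of concatenated
    state-action pairs from two trajectory segments. A segment's score
    is the sum of predicted per-step rewards; the Bradley-Terry model
    turns the score difference into a preference probability, trained
    with cross-entropy against the human's label."""
    score_a = reward_model(segment_a).sum()
    score_b = reward_model(segment_b).sum()
    logits = torch.stack([score_a, score_b]).unsqueeze(0)  # shape (1, 2)
    target = torch.tensor([0 if human_prefers_a else 1])   # 0 = prefers A
    return F.cross_entropy(logits, target)
```

Each gradient step on this loss pushes the model to rank the preferred segment higher; the RL agent is then trained against the learned reward in place of a hand-specified one.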

In summary, learning altruistic behaviours in RL without external rewards sits within a broader research continuum exploring intrinsic motivation and interaction-based learning. Techniques such as curiosity-driven learning, intrinsic goal generation, and human-feedback integration all contribute to developing agents capable of acting altruistically and autonomously in varied environments.
