
Be Considerate: Objectives, Side Effects, and Deciding How to Act (2106.02617v1)

Published 4 Jun 2021 in cs.AI and cs.LG

Abstract: Recent work in AI safety has highlighted that in sequential decision making, objectives are often underspecified or incomplete. This gives discretion to the acting agent to realize the stated objective in ways that may result in undesirable outcomes. We contend that to learn to act safely, a reinforcement learning (RL) agent should include contemplation of the impact of its actions on the wellbeing and agency of others in the environment, including other acting agents and reactive processes. We endow RL agents with the ability to contemplate such impact by augmenting their reward based on expectation of future return by others in the environment, providing different criteria for characterizing impact. We further endow these agents with the ability to differentially factor this impact into their decision making, manifesting behavior that ranges from self-centred to selfless, as demonstrated by experiments in gridworld environments.
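
The abstract describes augmenting an RL agent's reward with the expected future return of others in the environment, weighted by how much the agent factors that impact into its decisions. The sketch below illustrates that idea under stated assumptions: a single convex weighting coefficient (here called `caring_weight`) and a simple average over others' estimated returns. These names and this particular weighting scheme are illustrative; the paper's actual impact criteria and reward formulation may differ.

```python
# Minimal sketch of the reward-augmentation idea from the abstract.
# Assumption: the agent's augmented reward is a convex combination of its own
# reward and the mean of others' estimated future returns. `caring_weight`
# and `augmented_reward` are hypothetical names, not the paper's notation.

from typing import Sequence


def augmented_reward(
    own_reward: float,
    other_returns: Sequence[float],
    caring_weight: float,
) -> float:
    """Blend the agent's own reward with others' estimated future returns.

    caring_weight = 0.0 recovers a purely self-centred agent;
    caring_weight = 1.0 yields a fully selfless one.
    """
    if not 0.0 <= caring_weight <= 1.0:
        raise ValueError("caring_weight must lie in [0, 1]")
    # Average the estimated returns of the other agents / processes, if any.
    others_term = sum(other_returns) / len(other_returns) if other_returns else 0.0
    return (1.0 - caring_weight) * own_reward + caring_weight * others_term


if __name__ == "__main__":
    # Example: an agent that weighs its own reward and others' welfare equally.
    r = augmented_reward(own_reward=1.0, other_returns=[0.2, -0.5], caring_weight=0.5)
    print(f"augmented reward: {r:.3f}")  # 0.425
```

Varying `caring_weight` is one simple way to realize the spectrum of behavior the abstract mentions, from self-centred (ignore others entirely) to selfless (optimize only for others' expected returns).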

Authors (4)
  1. Parand Alizadeh Alamdari (2 papers)
  2. Toryn Q. Klassen (11 papers)
  3. Rodrigo Toro Icarte (14 papers)
  4. Sheila A. McIlraith (22 papers)
Citations (3)

Summary

We haven't generated a summary for this paper yet.