
I Need Your Advice... Human Perceptions of Robot Moral Advising Behaviors (2104.06963v1)

Published 14 Apr 2021 in cs.RO, cs.CY, and cs.HC

Abstract: Due to their unique persuasive power, language-capable robots must be able to both act in line with human moral norms and clearly and appropriately communicate those norms. These requirements are complicated by the possibility that humans may ascribe blame differently to humans and robots. In this work, we explore how robots should communicate in moral advising scenarios, in which the norms they are expected to follow (in a moral dilemma scenario) may be different from those their advisees are expected to follow. Our results suggest that, in fact, both humans and robots are judged more positively when they provide the advice that favors the common good over an individual's life. These results raise critical new questions regarding people's moral responses to robots and the design of autonomous moral agents.

Authors (3)
  1. Nichole D. Starr (1 paper)
  2. Bertram Malle (1 paper)
  3. Tom Williams (24 papers)
Citations (1)