Weak Human Preference Supervision For Deep Reinforcement Learning (2007.12904v2)

Published 25 Jul 2020 in cs.AI

Abstract: Reward learning from human preferences can solve complex reinforcement learning (RL) tasks without access to a reward function by defining a single fixed preference between pairs of trajectory segments. However, such preference judgements are not dynamic and still require human input over thousands of iterations. In this study, we propose a weak human preference supervision framework, for which we develop a human preference scaling model that naturally reflects the human perception of the degree of weak choices between trajectories, and we establish a human-demonstration estimator via supervised learning that generates predicted preferences to reduce the number of human inputs. The proposed framework effectively solves complex RL tasks and achieves higher cumulative rewards in simulated robot locomotion (MuJoCo games) than single fixed human preferences. Furthermore, our human-demonstration estimator requires human feedback for less than 0.01% of the agent's interactions with the environment and reduces the cost of human inputs by up to 30% compared with existing approaches. To demonstrate the flexibility of our approach, we released a video (https://youtu.be/jQPe1OILT0M) comparing the behaviours of agents trained on different types of human input. We believe that our naturally inspired human preferences with weakly supervised learning are beneficial for precise reward learning and can be applied to state-of-the-art RL systems, such as human-autonomy teaming systems.
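To make the idea of scaled (rather than single fixed) preferences concrete, here is a minimal sketch of reward learning from weak preference labels, assuming a standard Bradley-Terry-style preference model as used in preference-based RL. This is not the authors' code; the names (`RewardNet`, `preference_loss`) and the network shape are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of learning a reward model
# from scaled "weak" preferences between trajectory segments.
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """Maps a (state, action) pair to a scalar reward estimate."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def preference_loss(reward_net, seg_a, seg_b, mu):
    """Cross-entropy between the predicted preference and a scaled label.

    seg_a, seg_b: (obs, act) tensors of shape (batch, T, obs_dim) / (batch, T, act_dim).
    mu: scaled preference label in [0, 1], e.g. 0.5 = indifferent,
        0.7 = weakly prefers segment A, 1.0 = strongly prefers segment A.
    """
    obs_a, act_a = seg_a
    obs_b, act_b = seg_b
    # Sum predicted per-step rewards over each trajectory segment.
    ret_a = reward_net(obs_a, act_a).sum(dim=1)
    ret_b = reward_net(obs_b, act_b).sum(dim=1)
    # Bradley-Terry probability that segment A is preferred to segment B.
    p_a = torch.sigmoid(ret_a - ret_b)
    # Soft cross-entropy against the scaled (weak) preference label.
    return -(mu * torch.log(p_a + 1e-8)
             + (1 - mu) * torch.log(1 - p_a + 1e-8)).mean()
```

In a setup like this, labels predicted by a supervised estimator trained on human demonstrations could stand in for `mu` on most queries, so that only a small fraction of segment pairs need a real human judgement.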

Authors (3)
  1. Zehong Cao (31 papers)
  2. KaiChiu Wong (1 paper)
  3. Chin-Teng Lin (78 papers)
Citations (5)