
Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment (2208.02932v1)

Published 4 Aug 2022 in cs.AI, cs.HC, and cs.LG

Abstract: Human-centered AI considers human experiences with AI performance. While abundant research has helped AI achieve superhuman performance through either fully automatic or weakly supervised learning, fewer endeavors have explored how AI can be tailored to a human's preferred skill level given fine-grained input. In this work, we guide curriculum reinforcement learning toward a preferred performance level that is neither too hard nor too easy by learning from the human decision process. To achieve this, we developed a portable, interactive platform that enables the user to interact with agents online by manipulating the task difficulty, observing performance, and providing curriculum feedback. Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications that require millions of samples without a server. The results demonstrate the effectiveness of an interactive curriculum for reinforcement learning with a human in the loop, showing that reinforcement learning performance can successfully adjust in sync with the human-desired difficulty level. We believe this research will open new doors for achieving flow and personalized adaptive difficulty.
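
The abstract does not spell out the training loop, but the core idea it describes, a human repeatedly adjusting task difficulty based on observed agent performance, can be illustrated with a minimal sketch. Everything below (the `ToyTask`, `ToyAgent`, and `human_feedback` names, the 60-80% target band, the update sizes) is a hypothetical stand-in for illustration and is not the authors' implementation or platform.

```python
# Minimal, hypothetical sketch of human-in-the-loop curriculum adjustment.
# All names and numbers are illustrative stand-ins, not the paper's system.

import random
from dataclasses import dataclass


@dataclass
class ToyTask:
    """A stand-in task whose difficulty scales the chance of failure."""
    difficulty: float  # 0.0 (trivial) .. 1.0 (very hard)


class ToyAgent:
    """A placeholder learner whose skill improves slowly as it trains."""
    def __init__(self) -> None:
        self.skill = 0.1

    def train(self, task: ToyTask, episodes: int = 100) -> float:
        successes = 0
        for _ in range(episodes):
            # Success is more likely when skill exceeds the task difficulty.
            if random.random() < max(0.05, self.skill - task.difficulty + 0.5):
                successes += 1
                self.skill = min(1.0, self.skill + 0.001)
        return successes / episodes  # observed success rate


def human_feedback(success_rate: float) -> float:
    """Stand-in for the interactive step: a person nudges difficulty so
    performance stays neither too easy nor too hard (here, roughly 60-80%)."""
    if success_rate > 0.8:
        return +0.05   # too easy -> raise difficulty
    if success_rate < 0.6:
        return -0.05   # too hard -> lower difficulty
    return 0.0         # in the preferred band -> keep difficulty


def curriculum_loop(rounds: int = 20) -> None:
    agent, difficulty = ToyAgent(), 0.2
    for r in range(rounds):
        rate = agent.train(ToyTask(difficulty))
        delta = human_feedback(rate)  # in the paper, a human provides this
        difficulty = min(1.0, max(0.0, difficulty + delta))
        print(f"round {r:2d}  success={rate:.2f}  difficulty={difficulty:.2f}")


if __name__ == "__main__":
    curriculum_loop()
```

The closed loop is the point: training results are surfaced to the human, the human's decision changes the curriculum difficulty, and the agent's subsequent training reflects that choice. The paper's platform additionally parallelizes the training step so that the millions of samples needed per decision can be gathered without a server.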

Authors (7)
  1. Yilei Zeng (5 papers)
  2. Jiali Duan (14 papers)
  3. Yang Li (1142 papers)
  4. Emilio Ferrara (197 papers)
  5. Lerrel Pinto (81 papers)
  6. C. -C. Jay Kuo (176 papers)
  7. Stefanos Nikolaidis (65 papers)
Citations (3)
