
Reward Engineering for Object Pick and Place Training (2001.03792v1)

Published 11 Jan 2020 in cs.AI and cs.RO

Abstract: Robotic grasping is a crucial area of research, as it can accelerate the automation of several industries that use robots, ranging from manufacturing to healthcare. Reinforcement learning is the field of study in which an agent learns a policy to execute actions by exploring an environment and exploiting its rewards. Reinforcement learning can thus be used by an agent to learn how to execute a certain task, in our case grasping an object. We used the Pick and Place environment provided by OpenAI's Gym to engineer rewards. Hindsight Experience Replay (HER) has shown promising results on problems with sparse rewards. In the default configuration of the OpenAI baselines and environment, the reward is computed from the distance between the target location and the robot end-effector. By weighting the cost based on the distance of the end-effector from the goal along the x, y, and z axes, an intuitive strategy, we were able to almost halve the learning time compared to the baselines provided by OpenAI. In this project, we were also able to introduce certain user-desired trajectories into the learnt policies (city-block / Manhattan trajectories). This shows that by engineering the rewards we can tune the agent to learn policies in a desired manner, even if that manner is not the most optimal.
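The per-axis weighting and the city-block shaping described above can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical rendering of such reward functions in the style of Gym's goal-based `compute_reward` interface; the weight values and function names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical per-axis weights (x, y, z); the paper's exact values are
# not given in the abstract, so these are placeholders for illustration.
AXIS_WEIGHTS = np.array([1.0, 1.0, 2.0])

def weighted_distance_reward(achieved_goal, desired_goal, weights=AXIS_WEIGHTS):
    """Dense reward: negative axis-weighted Euclidean distance between the
    end-effector position (achieved_goal) and the target (desired_goal).
    The abstract describes weighting each axis differently, versus the
    unweighted distance used in the default configuration."""
    delta = np.asarray(achieved_goal) - np.asarray(desired_goal)
    return -np.sqrt(np.sum(weights * delta ** 2))

def manhattan_reward(achieved_goal, desired_goal):
    """City-block (L1) variant: penalizing the sum of per-axis distances
    favors axis-aligned motion, nudging the learnt policy toward
    Manhattan-style trajectories."""
    delta = np.asarray(achieved_goal) - np.asarray(desired_goal)
    return -np.sum(np.abs(delta))

# Example: end-effector 0.2 m from the goal along z only.
print(weighted_distance_reward([0.1, 0.2, 0.5], [0.1, 0.2, 0.3]))  # ~ -0.283
print(manhattan_reward([0.1, 0.2, 0.5], [0.1, 0.2, 0.3]))          # -0.2
```

In Gym's Fetch environments the achieved and desired goals are 3-vectors of Cartesian positions, so functions of this shape could plausibly be swapped in for the default distance-based reward.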

Authors (3)
  1. Raghav Nagpal (1 paper)
  2. Achyuthan Unni Krishnan (1 paper)
  3. Hanshen Yu (2 papers)
Citations (8)
