Practical Reinforcement Learning For MPC: Learning from sparse objectives in under an hour on a real robot (2003.03200v2)

Published 6 Mar 2020 in cs.RO and math.OC

Abstract: Model Predictive Control (MPC) is a powerful control technique that handles constraints, takes the system's dynamics into account, and optimizes for a given cost function. In practice, however, it often requires an expert to craft and tune this cost function, trading off different state penalties to satisfy simple high-level objectives. In this paper, we use Reinforcement Learning, in particular value learning, to approximate the value function given only high-level objectives, which can be sparse and binary. Building upon previous works, we present improvements that allowed us to successfully deploy the method on a real-world unmanned ground vehicle. Our experiments show that our method can learn the cost function from scratch, without human intervention, while reaching a performance level similar to that of an expert-tuned MPC. We perform a quantitative comparison of these methods with standard MPC approaches, both in simulation and on the real robot.
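
To make the idea concrete, below is a minimal sketch of how a learned value function can stand in for a hand-tuned MPC cost: a sampling-based MPC scores rollouts by sparse rewards accumulated along the horizon plus the learned value at the terminal state, and the value network is trained from temporal-difference targets on the transitions the controller produces. This is an illustrative reconstruction under assumed toy dynamics, not the authors' implementation; the names (ValueNet, mpc_action, dynamics, sparse_reward) and all dimensions are hypothetical placeholders.

```python
# Sketch: value learning as an MPC terminal cost from a sparse binary
# objective. Hypothetical toy example; not the paper's code.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HORIZON, GAMMA = 4, 2, 10, 0.99

def dynamics(state, action):
    # Placeholder linear dynamics; the method assumes a system model for MPC.
    return state + 0.1 * torch.cat([state[..., 2:], action], dim=-1)

def sparse_reward(state):
    # Sparse binary objective: 1 inside a small goal region, 0 elsewhere.
    return (state[..., :2].norm(dim=-1) < 0.1).float()

class ValueNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, s):
        return self.net(s).squeeze(-1)

def mpc_action(value_fn, state, n_samples=256):
    # Random-shooting MPC: sample action sequences, roll them out, and score
    # each by discounted sparse rewards plus the learned value at the horizon.
    seqs = torch.randn(n_samples, HORIZON, ACTION_DIM)
    s = state.expand(n_samples, STATE_DIM)
    returns = torch.zeros(n_samples)
    for t in range(HORIZON):
        s = dynamics(s, seqs[:, t])
        returns += GAMMA ** t * sparse_reward(s)
    returns += GAMMA ** HORIZON * value_fn(s)
    return seqs[returns.argmax(), 0]  # first action of the best sequence

value_fn = ValueNet()
opt = torch.optim.Adam(value_fn.parameters(), lr=1e-3)
state = torch.randn(STATE_DIM)
for step in range(200):
    with torch.no_grad():
        action = mpc_action(value_fn, state)
        next_state = dynamics(state, action)
    # TD(0) target built from the sparse reward alone; no hand-tuned
    # state penalties appear anywhere in the objective.
    target = sparse_reward(next_state) + GAMMA * value_fn(next_state).detach()
    loss = (value_fn(state) - target).pow(2)
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state
```

The design choice this illustrates: the learned value function supplies the long-horizon guidance that a short MPC horizon cannot see on its own, which is what lets a sparse, binary objective replace the expert-tuned trade-offs between state penalties.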

Authors (4)
  1. Napat Karnchanachari (3 papers)
  2. Miguel I. Valls (2 papers)
  3. David Hoeller (15 papers)
  4. Marco Hutter (165 papers)
Citations (36)
