Learning passive policies with virtual energy tanks in robotics (2301.12759v4)

Published 30 Jan 2023 in cs.RO, cs.SY, and eess.SY

Abstract: Within a robotic context, we merge the techniques of passivity-based control (PBC) and reinforcement learning (RL) with the goal of eliminating some of their reciprocal weaknesses, as well as inducing novel promising features in the resulting framework. We frame our contribution in a scenario where PBC is implemented by means of virtual energy tanks, a control technique developed to achieve closed-loop passivity for an arbitrary control input. Although the latter result is widely used, we discuss why its practical application remains rather limited at its current stage, which connects to the frequently debated claim that passivity-based techniques incur a loss of performance. The use of RL allows us to learn a control policy that can be passivized using the energy tank architecture, combining the versatility of learning approaches with the system-theoretic properties that can be inferred thanks to the energy tanks. Simulations demonstrate the validity of the approach and point to promising research directions in energy-aware robotics.
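The core mechanism the abstract refers to can be illustrated with a minimal sketch, not the authors' actual implementation: a virtual energy tank holds an energy budget, an arbitrary (e.g. learned) policy proposes a control input, and the passivation layer scales that input down whenever applying it would drain the tank below a safety threshold, so the closed loop never injects more energy than the tank has stored. All names and numbers below (the 1-DOF plant, the sinusoidal stand-in policy, `E_min`) are illustrative assumptions.

```python
import numpy as np

def tank_passivate(u_des, v, E, E_min=0.01, dt=0.01):
    """Scale the desired control so the tank energy never falls below E_min.

    The power drawn from the tank is P = u * v (force times velocity):
    when P > 0 the controller injects energy into the plant, paid for
    by the tank; when P <= 0 the tank absorbs energy from the plant.
    """
    P = u_des * v
    if P > 0 and E - P * dt < E_min:
        # Only part of the requested energy is available: scale u down
        # so the tank lands exactly at E_min instead of emptying.
        alpha = max(0.0, (E - E_min) / (P * dt))
        u = alpha * u_des
    else:
        u = u_des
    E_next = E - (u * v) * dt  # tank releases (or absorbs) the exchanged energy
    return u, E_next

# Minimal 1-DOF example: a damped mass driven by an arbitrary "policy" force.
m, d, dt = 1.0, 0.5, 0.01      # mass, viscous damping, step size (assumed values)
x, v, E = 0.0, 0.0, 1.0        # plant state and initial tank energy budget
for k in range(1000):
    u_des = 2.0 * np.sin(0.05 * k)     # stand-in for a learned RL policy output
    u, E = tank_passivate(u_des, v, E, dt=dt)
    E += d * v**2 * dt                 # dissipated energy refills the tank
    a = (u - d * v) / m
    v += a * dt
    x += v * dt
```

By construction the tank energy `E` stays at or above `E_min` for the whole rollout, regardless of what the policy requests; this is the sense in which the tank architecture passivizes any arbitrary control input, and also why an overly aggressive policy can be throttled into poor performance when the budget runs low, the limitation the paper addresses by learning the policy itself.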
