Contact-rich SE(3)-Equivariant Robot Manipulation Task Learning via Geometric Impedance Control (2308.14984v2)
Abstract: This paper presents a differential geometric control approach that leverages SE(3) group invariance and equivariance to increase transferability in learning robot manipulation tasks that involve interaction with the environment. Specifically, we employ a control law and a learning representation framework that remain invariant under arbitrary SE(3) transformations of the manipulation task definition. Furthermore, the control law and learning representation framework are shown to be SE(3) equivariant when represented relative to the spatial frame. The proposed approach builds on a recently proposed geometric impedance control (GIC) combined with a learning variable impedance control framework, in which the gain scheduling policy is trained by supervised learning from expert demonstrations. A geometrically consistent error vector (GCEV) is fed to a neural network to obtain a gain scheduling policy that remains invariant under arbitrary translations and rotations. A comparison with a well-known Cartesian-space learning impedance controller, equipped with a Cartesian error vector-based gain scheduling policy, confirms the significantly superior learning transferability of the proposed approach. A hardware implementation on a peg-in-hole task validates the learning transferability and feasibility of the proposed approach.
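To make the invariance property concrete, the sketch below (not the authors' code) computes one common form of a geometrically consistent error vector on SE(3): the position error expressed in the end-effector body frame and the standard SO(3) rotation error of geometric control, and numerically checks that the resulting 6-vector is unchanged when both the current and desired poses are displaced by the same SE(3) transform. The exact GCEV and gain scheduling network used in the paper follow the GIC formulation of Seo et al.; the specific error form, the helper names (`hat`, `vee`, `gcev`, `left_transform`), and the NumPy implementation here are illustrative assumptions.

```python
# Minimal sketch (assumed form, not the authors' implementation): a
# geometrically consistent error vector (GCEV) on SE(3) and a numerical
# check of its invariance under a left SE(3) displacement of the task.
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix (so(3) 'hat' operator)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(W):
    """Inverse of hat: extract the 3-vector from a skew-symmetric matrix."""
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * K @ K

def gcev(g, g_d):
    """Geometrically consistent error vector between pose g and desired g_d.

    Position error is expressed in the body (end-effector) frame and the
    rotation error is the standard SO(3) error of geometric control, so the
    whole 6-vector is invariant to a common left SE(3) transform.
    """
    R, p = g
    R_d, p_d = g_d
    e_p = R.T @ (p - p_d)                   # body-frame position error
    e_R = 0.5 * vee(R_d.T @ R - R.T @ R_d)  # SO(3) rotation error
    return np.concatenate([e_p, e_R])

def left_transform(g_l, g):
    """Apply a spatial-frame (left) SE(3) transform g_l to a pose g."""
    R_l, p_l = g_l
    R, p = g
    return (R_l @ R, R_l @ p + p_l)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g   = (exp_so3(rng.normal(size=3)), rng.normal(size=3))  # current pose
    g_d = (exp_so3(rng.normal(size=3)), rng.normal(size=3))  # desired pose
    g_l = (exp_so3(rng.normal(size=3)), rng.normal(size=3))  # task displacement

    e   = gcev(g, g_d)
    e_T = gcev(left_transform(g_l, g), left_transform(g_l, g_d))
    print("max |e - e_T| =", np.abs(e - e_T).max())  # ~1e-16: invariant
```

Because this error vector is unchanged by any rigid displacement of the task, a gain scheduling policy that takes only the GCEV as input produces the same impedance gains before and after the task is moved, which is the transferability property the abstract claims; a Cartesian error vector, by contrast, changes with the task pose and would force the policy to be retrained.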
- H. Ravichandar et al., “Recent advances in robot learning from demonstration,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 3, pp. 297–330, 2020.
- X. Zhang et al., “Learning variable impedance control via inverse reinforcement learning for force-related tasks,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2225–2232, 2021.
- C. C. Beltran-Hernandez et al., “Variable compliance control for robotic peg-in-hole assembly: A deep-reinforcement-learning approach,” Applied Sciences, vol. 10, no. 19, p. 6923, 2020.
- T. Cohen and M. Welling, “Group equivariant convolutional networks,” in International Conference on Machine Learning. PMLR, 2016, pp. 2990–2999.
- E. J. Bekkers et al., “Roto-translation covariant convolutional networks for medical image analysis,” in Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part I. Springer, 2018, pp. 440–448.
- J. Seo et al., “Geometric impedance control on SE(3) for robotic manipulators,” IFAC World Congress 2023, Yokohama, Japan, 2023.
- A. Zeng et al., “Transporter networks: Rearranging the visual world for robotic manipulation,” in Conference on Robot Learning. PMLR, 2021, pp. 726–747.
- A. Simeonov et al., “Neural descriptor fields: SE(3)-equivariant object representations for manipulation,” in 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022, pp. 6394–6400.
- H. Ryu et al., “Equivariant descriptor fields: SE(3)-equivariant energy-based models for end-to-end visual robotic manipulation learning,” in The Eleventh International Conference on Learning Representations (ICLR), 2023.
- ——, “Diffusion-EDFs: Bi-equivariant denoising generative modeling on SE(3) for visual robotic manipulation,” arXiv preprint arXiv:2309.02685, 2023.
- J. Kim et al., “Robotic manipulation learning with equivariant descriptor fields: Generative modeling, bi-equivariance, steerability, and locality,” in RSS 2023 Workshop on Symmetries in Robot Learning, 2023.
- C. Pan et al., “Tax-pose: Task-specific cross-pose estimation for robot manipulation,” in Conference on Robot Learning. PMLR, 2023, pp. 1783–1792.
- H. Ha and S. Song, “Flingbot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding,” in Conference on Robot Learning. PMLR, 2022, pp. 24–33.
- S. Kim et al., “SE(2)-equivariant pushing dynamics models for tabletop object manipulations,” in Conference on Robot Learning. PMLR, 2023, pp. 427–436.
- E. van der Pol et al., “MDP homomorphic networks: Group symmetries in reinforcement learning,” Advances in Neural Information Processing Systems, vol. 33, pp. 4199–4210, 2020.
- D. Wang et al., “Equivariant Q-learning in spatial action spaces,” in Conference on Robot Learning. PMLR, 2022, pp. 1713–1723.
- D. Wang, R. Walters, and R. Platt, “SO(2)-equivariant reinforcement learning,” arXiv preprint arXiv:2203.04439, 2022.
- T. Inoue et al., “Deep reinforcement learning for high precision assembly tasks,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 819–825.
- S. Kozlovsky, E. Newman, and M. Zacksenhouse, “Reinforcement learning of impedance policies for peg-in-hole tasks: Role of asymmetric matrices,” IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 10898–10905, 2022.
- O. Khatib, “A unified approach for motion and force control of robot manipulators: The operational space formulation,” IEEE Journal on Robotics and Automation, vol. 3, no. 1, pp. 43–53, 1987.
- F. Bullo and R. M. Murray, “Tracking for fully actuated mechanical systems: a geometric framework,” Automatica, vol. 35, no. 1, pp. 17–34, 1999.
- T. Lee et al., “Geometric tracking control of a quadrotor UAV on SE(3),” in 49th IEEE Conference on Decision and Control (CDC). IEEE, 2010, pp. 5420–5425.
- F. Caccavale et al., “Six-dof impedance control based on angle/axis representations,” IEEE Transactions on Robotics and Automation, vol. 15, no. 2, pp. 289–300, 1999.
- Y. Zhu et al., “robosuite: A modular simulation framework and benchmark for robot learning,” arXiv preprint arXiv:2009.12293, 2020.
- H. Ochoa and R. Cortesão, “Impedance control architecture for robotic-assisted mold polishing based on human demonstration,” IEEE Transactions on Industrial Electronics, vol. 69, no. 4, pp. 3822–3830, 2021.
- S. Shaw, B. Abbatematteo, and G. Konidaris, “RMPs for safe impedance control in contact-rich manipulation,” in 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022, pp. 2707–2713.
- E. Todorov, T. Erez, and Y. Tassa, “MuJoCo: A physics engine for model-based control,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 5026–5033.
- “Berkeley RL Kit,” https://github.com/rail-berkeley/rlkit, accessed: 2023-07-01.
- J. Tobin et al., “Domain randomization for transferring deep neural networks from simulation to the real world,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 23–30.
- X. Zhang et al., “Efficient sim-to-real transfer of contact-rich manipulation skills with online admittance residual learning,” arXiv preprint arXiv:2310.10509, 2023.