Fuse Existing Knowledge into Model‑Free Reinforcement Learning for Peg‑in‑Hole Assembly
Develop a principled method for fusing existing knowledge into model-free reinforcement learning for robotic peg-in-hole assembly, so that prior information can be incorporated naturally and effectively during policy learning without relying on explicit contact-state recognition.
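The survey leaves the fusion mechanism itself open. Purely as an illustration of what such a fusion could look like (this is an assumption, not a method proposed in the cited source), the sketch below uses residual policy learning: a hand-crafted insertion controller encodes the prior knowledge, and a model-free policy learns only a corrective term on top of it. The names `PDInsertionPrior`, `ResidualPolicy`, and `residual_scale` are hypothetical.

```python
import numpy as np

class PDInsertionPrior:
    """Hand-crafted prior: drive the peg toward the nominal hole pose with a PD law."""
    def __init__(self, hole_pose, kp=2.0, kd=0.1):
        self.hole_pose = np.asarray(hole_pose, dtype=float)
        self.kp, self.kd = kp, kd

    def action(self, peg_pose, peg_vel):
        # Simple compliant motion command toward the hole (no contact-state recognition).
        return self.kp * (self.hole_pose - np.asarray(peg_pose)) - self.kd * np.asarray(peg_vel)


class ResidualPolicy:
    """Fuses the prior with a learned residual: a = a_prior + scale * a_residual."""
    def __init__(self, prior, residual_net, residual_scale=0.2):
        self.prior = prior
        self.residual_net = residual_net      # any model-free actor (e.g. trained with SAC/PPO)
        self.residual_scale = residual_scale  # keeps the learned correction small early in training

    def action(self, obs, peg_pose, peg_vel):
        a_prior = self.prior.action(peg_pose, peg_vel)
        a_residual = self.residual_net(obs)   # learned from task reward only
        return a_prior + self.residual_scale * a_residual
```

In this pattern the prior controller guarantees reasonable behavior from the first episode, while the model-free component absorbs the contact effects the prior cannot capture; other fusion choices (reward shaping from demonstrations, warm-starting the policy) follow the same idea of injecting knowledge without classifying contact states.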
References
However, for robotic peg-in-hole assembly, it is not clear how to fuse the existing knowledge into a model-free learning process naturally.
— Compare Contact Model-based Control and Contact Model-free Learning: A Survey of Robotic Peg-in-hole Assembly Strategies
(arXiv:1904.05240, Xu et al., 2019), Section 1: Introduction, Subsubsection ‘Learning from environments (LFE)’