Integrating Prior Knowledge into Model-free Reinforcement Learning for Peg-in-hole Assembly

Determine how to fuse prior knowledge about robotic peg-in-hole assembly, such as geometric and physical constraints and expert demonstrations, into model-free reinforcement learning algorithms in a natural and principled manner, with the goal of improving data efficiency and practical applicability.

Background

The survey reviews contact model-based control and contact model-free learning strategies, including model-based and model-free reinforcement learning (RL), for contact-rich peg-in-hole assembly. While model-free RL has shown promise, it often suffers from poor data efficiency and training instability. The paper notes that many works attempt to leverage prior information (e.g., geometric knowledge, controller baselines, expert demonstrations) to improve practicality, yet how to incorporate such knowledge seamlessly into model-free RL for peg-in-hole assembly remains unresolved.
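For illustration, one common mechanism for folding demonstrations into an off-policy model-free learner is to seed its replay buffer with expert transitions and keep sampling them alongside fresh experience, in the spirit of DDPG-from-Demonstrations. The sketch below is a minimal Python illustration under that assumption; the class name, the fixed demo fraction, and the transition format are hypothetical and not taken from the survey.

```python
import random
from collections import deque

class DemoSeededReplayBuffer:
    """Replay buffer that mixes expert demonstrations with agent experience.

    Demonstration transitions are stored permanently and sampled alongside
    the agent's own transitions, so the learner keeps revisiting expert
    behavior even late in training.
    """

    def __init__(self, capacity=100_000, demo_fraction=0.25):
        self.agent_buffer = deque(maxlen=capacity)  # evicts oldest when full
        self.demo_buffer = []                       # demos are never evicted
        self.demo_fraction = demo_fraction

    def add_demo(self, transition):
        """transition: an (obs, action, reward, next_obs, done) tuple."""
        self.demo_buffer.append(transition)

    def add_agent(self, transition):
        self.agent_buffer.append(transition)

    def sample(self, batch_size):
        """Draw a mixed batch: a fixed fraction of demo transitions, the
        rest from agent experience (shrinks gracefully if either is small)."""
        n_demo = min(int(batch_size * self.demo_fraction), len(self.demo_buffer))
        n_agent = min(batch_size - n_demo, len(self.agent_buffer))
        batch = random.sample(self.demo_buffer, n_demo)
        batch += random.sample(list(self.agent_buffer), n_agent)
        return batch
```

The key design choice is keeping demonstrations in a separate, never-evicted buffer: the prior knowledge remains available throughout training instead of being washed out by accumulating on-robot experience.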

This open question targets the design of principled mechanisms, beyond ad hoc heuristics, for injecting prior knowledge into model-free RL so as to accelerate learning and improve robustness in real-world assembly tasks.
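As one concrete example of such a mechanism, residual RL composes a hand-designed base controller, which encodes the geometric prior, with a small learned correction, so the agent only has to learn what the prior gets wrong. The sketch below is a hedged illustration of that idea, not the survey's method; the function names, observation keys, and gains are assumptions.

```python
import numpy as np

def base_controller(obs, hole_pose, kp=1.0):
    """Hand-designed prior: a proportional controller that drives the peg
    tip toward the nominally known hole pose. It encodes geometric prior
    knowledge but cannot resolve contact-rich fine alignment on its own."""
    return kp * (hole_pose - obs["peg_tip_pose"])

def residual_action(obs, hole_pose, learned_policy, scale=0.1):
    """Residual RL: the executed command is the prior controller's output
    plus a bounded learned correction, so exploration stays close to the
    prior's behavior and data efficiency improves."""
    u_prior = base_controller(obs, hole_pose)
    u_residual = scale * learned_policy(obs)  # learned_policy: obs -> action
    return u_prior + u_residual

# Usage with a placeholder (untrained) policy:
obs = {"peg_tip_pose": np.array([0.02, -0.01, 0.10])}
hole_pose = np.array([0.0, 0.0, 0.05])
zero_policy = lambda o: np.zeros(3)  # stands in for a learned network
print(residual_action(obs, hole_pose, zero_policy))
```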

References

The survey states: "However, for robotic peg-in-hole assembly, it is not clear how to fuse the existing knowledge into a model-free learning process naturally."

Xu et al., "Compare Contact Model-based Control and Contact Model-free Learning: A Survey of Robotic Peg-in-hole Assembly Strategies," arXiv:1904.05240, 2019; see Section 1.1, Learning from environments (LFE).