Safe high-precision assembly in unstructured environments

Establish safe and reliable methods for executing complex high-precision robotic assembly tasks in unstructured environments.

Background

Industrial robot manipulators are increasingly used for assembly, yet existing passive and active compliance strategies often require manual modeling and tuning, and they may not generalize well across environmental variations. Reinforcement learning has shown promise but remains difficult to deploy robustly on real, position-controlled industrial robots.
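
To make concrete why such compliance strategies demand hand tuning, the minimal sketch below implements a basic discrete-time admittance law of the kind used for active compliance on position-controlled arms. The virtual inertia, damping, and stiffness gains (M, D, K), the control period, and the function name are illustrative assumptions chosen for this sketch, not parameters from any cited system.

import numpy as np

# Hypothetical gains: in practice each must be modeled and tuned by hand
# for the specific task, tool, and environment.
M = np.diag([2.0, 2.0, 2.0])        # virtual inertia [kg]
D = np.diag([40.0, 40.0, 40.0])     # virtual damping [N*s/m]
K = np.diag([300.0, 300.0, 300.0])  # virtual stiffness [N/m]
dt = 0.002                          # control period [s]

def admittance_step(x, x_ref, xd, f_ext):
    """One admittance update: M*xdd + D*xd + K*(x - x_ref) = f_ext.

    Returns the next commanded Cartesian position and velocity that a
    position-controlled robot would be asked to track.
    """
    xdd = np.linalg.solve(M, f_ext - D @ xd - K @ (x - x_ref))
    xd_next = xd + xdd * dt
    x_next = x + xd_next * dt
    return x_next, xd_next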

This paper proposes a deep reinforcement learning framework that jointly learns motion trajectories and variable compliance control for peg-in-hole tasks under goal uncertainty, using sim-to-real transfer and domain randomization. While this advances practical performance on contact-rich tasks, it addresses only one instance of a broader challenge.
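
As a rough illustration of what jointly learning motion and variable compliance under domain randomization can look like, the sketch below perturbs the goal (hole) pose each episode and maps a single policy action to both a small Cartesian step and per-axis stiffness gains. The function names, action layout, and numeric ranges are assumptions made for this sketch and are not taken from the paper's actual interface.

import numpy as np

rng = np.random.default_rng(0)

def randomize_goal(nominal_hole_pose, pos_noise=0.002, rot_noise=0.05):
    """Domain randomization sketch: perturb the hole pose each episode so
    the learned policy must cope with goal uncertainty (noise scales are
    illustrative)."""
    pos = nominal_hole_pose[:3] + rng.normal(0.0, pos_noise, 3)
    rot = nominal_hole_pose[3:] + rng.normal(0.0, rot_noise, 3)
    return np.concatenate([pos, rot])

def apply_action(action, stiffness_range=(100.0, 1000.0), step_scale=0.002):
    """Split one policy output into a motion command and compliance gains.

    The first three entries become a small Cartesian step; the last three
    are mapped to per-axis stiffness values, so trajectory and variable
    compliance are learned jointly by the same policy.
    """
    action = np.clip(action, -1.0, 1.0)
    delta_pos = action[:3] * step_scale
    lo, hi = stiffness_range
    stiffness = lo + (action[3:6] + 1.0) * 0.5 * (hi - lo)
    return delta_pos, stiffness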

Although peg-in-hole assembly is a common industrial task that has been studied extensively, safely solving complex high-precision assembly in unstructured environments remains an open problem.