- The paper introduces a novel self-calibration framework that uses an eye-in-hand 3D sensor to optimize the robot’s kinematic model with a modified ICP algorithm.
- The method leverages bundle adjustment and noise filtering to accurately register overlapping point clouds and minimize calibration errors.
- Experimental results on a 7-DOF KUKA robot show that calibration accuracy improves with the number of scans and with sensor precision, reaching results comparable to an external 3D tracking system.
Overview of Robot Self-Calibration Using Actuated 3D Sensors
The paper "Robot Self-Calibration Using Actuated 3D Sensors" by Arne Peters addresses a significant challenge in robotic systems: the calibration of robot kinematic models. The focus is on enabling robots to self-calibrate autonomously using only an eye-in-hand 3D sensor, eliminating the need for external calibration objects or sensors. This self-calibration is conceptualized as an offline SLAM problem, adapting the Iterative Closest Point (ICP) methodology to estimate optimal parameters for the robot's kinematic model.
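One way to make this concrete (the notation here is introduced for illustration and is not taken from the thesis): let $\phi$ denote the kinematic parameters, $\theta_i$ the recorded joint configuration of scan $i$, $T(\phi, \theta_i)$ the sensor pose obtained by forward kinematics, and $\mathcal{M}_{ij}$ the set of matched point pairs $(p, p')$ with surface normal $n$ between overlapping scans $i$ and $j$. A point-to-plane, ICP-style cost then reads

$$E(\phi) = \sum_{i \neq j} \; \sum_{(p,\,p',\,n)\,\in\,\mathcal{M}_{ij}} \Big( n^\top \big( T(\phi, \theta_i)\,p - T(\phi, \theta_j)\,p' \big) \Big)^2,$$

which is minimized over $\phi$ alone: all scans are registered jointly (the bundle-adjustment aspect) while the kinematic model absorbs the misalignment.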
Methodology and Approach
The proposed calibration framework leverages a depth sensor mounted on the robot's end effector, which captures multiple overlapping point clouds of an arbitrary scene. A modified ICP algorithm then performs bundle adjustment across the collected scans, optimizing the robot's kinematic model parameters so that the registration error between overlapping clouds is minimized. The kinematic chain is modeled using a Complete and Parametrically Continuous (CPC) model. Key aspects of the implementation, illustrated by the sketch after this list, include:
- Normalization: Scaling of parameters to balance the effects of translation and rotation errors.
- Point Cloud Registration: Detection and filtering of noise and edges in the data, point matching using nearest neighbor strategies, and validation through point-to-plane metrics.
- Optimization: Use of the Levenberg-Marquardt algorithm to solve for the optimal set of kinematic parameters.
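The following Python sketch shows how such a pipeline can be assembled from standard components (SciPy's KD-tree, rotation utilities, and Levenberg-Marquardt solver). It is a simplified illustration, not the thesis' implementation: instead of the full CPC model it optimizes per-joint angle offsets plus a 6-DoF sensor-mount correction on top of a user-supplied nominal forward-kinematics function, assumes surface normals are precomputed, and registers every scan against a single reference cloud. The names `nominal_fk`, `clouds`, `normals`, and `joints` are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation as R


def sensor_pose(params, q, nominal_fk, n_joints):
    """World-from-sensor pose for joint vector q under the calibration params."""
    joint_offsets = params[:n_joints]
    rotvec = params[n_joints:n_joints + 3]        # sensor-mount rotation (axis-angle)
    trans = params[n_joints + 3:n_joints + 6]     # sensor-mount translation
    T_flange = nominal_fk(q + joint_offsets)      # 4x4 world-from-flange (placeholder FK)
    T_mount = np.eye(4)                           # flange-from-sensor correction
    T_mount[:3, :3] = R.from_rotvec(rotvec).as_matrix()
    T_mount[:3, 3] = trans
    return T_flange @ T_mount


def residuals(params, clouds, normals, joints, nominal_fk, n_joints, max_dist=0.02):
    """Point-to-plane distances between the reference scan and all other scans."""
    poses = [sensor_pose(params, q, nominal_fk, n_joints) for q in joints]
    # Reference cloud and its normals expressed in the world frame.
    ref = clouds[0] @ poses[0][:3, :3].T + poses[0][:3, 3]
    ref_n = normals[0] @ poses[0][:3, :3].T
    tree = cKDTree(ref)
    res = []
    for cloud, pose in zip(clouds[1:], poses[1:]):
        pts = cloud @ pose[:3, :3].T + pose[:3, 3]
        dist, idx = tree.query(pts, k=1)          # nearest-neighbour matching
        keep = dist < max_dist                    # reject outliers / unmatched edges
        diff = pts[keep] - ref[idx[keep]]
        res.append(np.einsum('ij,ij->i', diff, ref_n[idx[keep]]))  # point-to-plane
    return np.concatenate(res)


# Usage sketch. The parameter "normalization" (balancing rotational and
# translational effects) can be expressed via the x_scale argument:
#   params0 = np.zeros(n_joints + 6)
#   scales = np.r_[np.full(n_joints + 3, np.deg2rad(0.5)), np.full(3, 0.001)]
#   sol = least_squares(residuals, params0, method='lm', x_scale=scales,
#                       args=(clouds, normals, joints, nominal_fk, n_joints))
```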
Numerical Results
The framework was evaluated on a seven-degree-of-freedom KUKA LBR iiwa R840 robot with several types of 3D sensors: a Hokuyo UTM-30LX LiDAR, a Wenglor MLSL236 line scanner, a Microsoft Azure Kinect, and a Photoneo MotionCam-3D. Experimental results demonstrate calibration precision comparable to that obtained with a dedicated external 3D tracking system. The results indicate that the precision of the self-calibration depends on the sensor's noise characteristics and on the complexity of the calibration scene.
Key Observations
- Data Sufficiency: Seven scans are often insufficient; increased data leads to improved precision, evidenced by reduced orientation and position errors.
- Sensor Precision: Higher-precision sensors such as the Photoneo MotionCam-3D yield results comparable to the reference system, while consumer-grade sensors such as the Azure Kinect still achieve acceptable precision.
- Scene Complexity: Less complex scenes appear to yield better calibration results, perhaps because simpler geometry produces fewer mismatches during point cloud alignment.
Implications and Future Directions
The presented framework has profound implications for the deployment and maintenance of robotic systems, particularly in dynamic environments where recalibration is frequently necessary due to mechanical wear, temperature changes, or physical disturbances. The ability to self-calibrate without human intervention or external aids can significantly lower operational costs and simplify robotic maintenance.
Future research could extend the framework to include intrinsic sensor parameters in the optimization, or to accommodate less reliable actuator data such as odometry. In addition, optimization techniques that reduce the runtime of the ICP-based procedure could make the approach applicable to real-time scenarios.
In conclusion, this paper contributes an innovative solution to a long-standing challenge in robotic system calibration, offering a practical and versatile methodology that broadens the scope for autonomous robot operation in diverse settings.