Robot Self-Calibration Using Actuated 3D Sensors (2206.03430v1)

Published 7 Jun 2022 in cs.RO and cs.CV

Abstract: Both robot and hand-eye calibration have been objects of research for decades. While current approaches manage to precisely and robustly identify the parameters of a robot's kinematic model, they still rely on external devices, such as calibration objects, markers and/or external sensors. Instead of trying to fit the recorded measurements to a model of a known object, this paper treats robot calibration as an offline SLAM problem, where scanning poses are linked to a fixed point in space by a moving kinematic chain. As such, the presented framework allows robot calibration using nothing but an arbitrary eye-in-hand depth sensor, thus enabling fully autonomous self-calibration without any external tools. My new approach utilizes a modified version of the Iterative Closest Point algorithm to run bundle adjustment on multiple 3D recordings, estimating the optimal parameters of the kinematic model. A detailed evaluation of the system is shown on a real robot with various attached 3D sensors. The presented results show that the system reaches precision comparable to a dedicated external tracking system at a fraction of its cost.

Citations (6)

Summary

  • The paper introduces a novel self-calibration framework that uses an eye-in-hand 3D sensor to optimize the robot’s kinematic model with a modified ICP algorithm.
  • The method leverages bundle adjustment and noise filtering to accurately register overlapping point clouds and minimize calibration errors.
  • Experimental results on a 7-DOF KUKA robot demonstrate that increased data and sensor precision are critical to achieving high calibration accuracy.

Overview of Robot Self-Calibration Using Actuated 3D Sensors

The paper "Robot Self-Calibration Using Actuated 3D Sensors" by Arne Peters addresses a significant challenge in robotic systems: the calibration of robot kinematic models. The focus is on enabling robots to self-calibrate autonomously using only an eye-in-hand 3D sensor, eliminating the need for external calibration objects or sensors. This self-calibration is conceptualized as an offline SLAM problem, adapting the Iterative Closest Point (ICP) methodology to estimate optimal parameters for the robot's kinematic model.

Methodology and Approach

The proposed calibration framework leverages a depth sensor mounted on the robot's end effector, which captures multiple overlapping point clouds of an arbitrary scene. This approach uses a modified ICP algorithm to perform bundle adjustment across the collected data, optimizing the robot's kinematic model parameters to minimize projection error. The kinematic chain is modeled using a Complete and Parametrically Continuous (CPC) model. Key aspects of the implementation include:

  • Normalization: Scaling of parameters to balance the effects of translation and rotation errors.
  • Point Cloud Registration: Detection and filtering of noise and edges in the data, point matching using nearest neighbor strategies, and validation through point-to-plane metrics.
  • Optimization: Use of the Levenberg-Marquardt algorithm to solve for the optimal set of kinematic parameters (a simplified code sketch follows this list).
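
To make these steps concrete, here is a minimal Python sketch. It is not the author's implementation: it registers each scan against a single fixed reference cloud rather than jointly against all overlapping recordings, substitutes a plain Denavit-Hartenberg chain for the paper's CPC model, and omits the noise and edge filtering. All function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def dh_matrix(a, alpha, d, theta):
    """Standard Denavit-Hartenberg transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(params, joints):
    """Sensor pose in the base frame for one joint configuration.

    params: flat array of per-joint (a, alpha, d, theta_offset) values,
    a stand-in for the CPC parameterization used in the paper.
    """
    pose = np.eye(4)
    for (a, alpha, d, off), q in zip(params.reshape(-1, 4), joints):
        pose = pose @ dh_matrix(a, alpha, d, q + off)
    return pose

def residuals(params, scans, ref_points, ref_normals):
    """Point-to-plane residuals linking every scan to a fixed reference cloud.

    scans: list of (joint_configuration, local_point_cloud) pairs.
    """
    tree = cKDTree(ref_points)
    errs = []
    for joints, cloud in scans:        # cloud: (N, 3) in the sensor frame
        pose = forward_kinematics(params, joints)
        world = cloud @ pose[:3, :3].T + pose[:3, 3]
        _, idx = tree.query(world)     # nearest-neighbor point matching
        diff = world - ref_points[idx]
        # Signed distance along the matched normal (point-to-plane metric).
        errs.append(np.einsum("ij,ij->i", diff, ref_normals[idx]))
    return np.concatenate(errs)

# Levenberg-Marquardt refinement of the kinematic parameters:
# result = least_squares(residuals, params0,
#                        args=(scans, ref_points, ref_normals), method="lm")
```

In the paper's actual bundle adjustment, the residuals couple all overlapping recordings with each other rather than with one fixed cloud, and the normalization step rescales translational and rotational parameters; in this sketch, that rescaling could map onto the x_scale argument of least_squares.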

Numerical Results

The framework was evaluated on a seven-degree-of-freedom KUKA LBR iiwa R840 robot with several different 3D sensors: a Hokuyo UTM-30LX LiDAR, a Wenglor MLSL236 line scanner, a Microsoft Azure Kinect, and a Photoneo MotionCam-3D. Experimental results demonstrate calibration precision comparable to that obtained with a dedicated external 3D tracking system. The results also indicate that self-calibration precision depends on the sensor's noise characteristics and the complexity of the calibration scene.

Key Observations

  1. Data Sufficiency: Seven scans are often insufficient; adding more recordings improves precision, as evidenced by reduced orientation and position errors.
  2. Precision of Sensors: Higher-precision sensors such as the Photoneo MotionCam-3D yield results comparable to the reference system, while consumer-grade sensors such as the Azure Kinect still achieve acceptable precision.
  3. Scene Complexity: Simpler scenes appear to yield better calibration results, possibly because they introduce fewer ambiguities in the spatial alignment of the point clouds.

Implications and Future Directions

The presented framework has profound implications for the deployment and maintenance of robotic systems, particularly in dynamic environments where recalibration is frequently necessary due to mechanical wear, temperature changes, or physical disturbances. The ability to self-calibrate without human intervention or external aids can significantly lower operational costs and simplify robotic maintenance.

Future research could explore expanding this framework to handle intrinsic sensor parameter calibration as part of the optimization process, accommodating less reliable actuator data such as odometry. Additionally, introducing optimization techniques to expedite the runtime of the ICP algorithm in this context could enhance its applicability to real-time scenarios.

In conclusion, this paper contributes an innovative solution to a long-standing challenge in robotic system calibration, offering a practical and versatile methodology that broadens the scope for autonomous robot operation in diverse settings.
