A Calibration-Free Robot–Sensor Calibration Approach Based on Second-Order Cone Programming

Abstract. To overcome the limitation of traditional robot-sensor calibration methods, which rely on a calibration target to solve for the tool-camera and robot-world transformations, a calibration-free approach based on Second-Order Cone Programming is proposed for the robot-sensor calibration problem of the form AX = YB. First, a Structure-From-Motion approach is used to recover the camera motion up to an unknown scale. The rotation and translation matrices in the calibration equation are then parameterized using dual quaternion theory. Finally, Second-Order Cone Programming is used to simultaneously solve for the optimal scale factor of the camera motion and the robot-world and hand-eye transformations. Experimental results show a rotation relative error of 3.998% and a translation relative error of 0.117% in the absence of a calibration target as a 3D benchmark. Compared with similar methods, the proposed method effectively improves the calibration accuracy of the robot-world and hand-eye transformations and extends the range of applications of robot-sensor calibration.


Introduction
In order to relate the measurements made by a camera mounted on a robotic gripper to the tool's coordinate frame, a homogeneous transformation from the tool to the camera needs to be determined. This problem is usually called robot-sensor calibration, and it takes two forms: AX = XB and AX = YB [1]. Figure 1 schematically illustrates the geometry of AX = YB robot-sensor calibration, in which the relationship between the world frame and the robot base frame is unknown in addition to the unknown relationship between the robot tool and the rigidly mounted sensor; it has been well studied in [2][3][4][5][6]. The common aspect of all existing methods is that they do not work with the camera measurements directly, but rather with camera poses derived from them by observing a known calibration target. Since the calibration target has known dimensions, camera poses with the correct scale can be obtained [7]. However, there are many situations in which using an accurately manufactured calibration target is inconvenient or impossible. Indeed, using a calibration target in applications such as mobile robotics or endoscopy may be unacceptable due to limited on-board weight or strict sterility requirements [8].
To address this problem, a calibration-free robot-sensor calibration method based on Structure-From-Motion (SFM) was later proposed by Andreff [9]. A modified method addressed a wider problem, named extended robot-sensor calibration, which made use of SFM to recover the unknown camera poses.
Since SFM can recover camera poses only up to scale, the formulation introduced an explicit scaling factor into the robot-sensor calibration equation. Similar methods were presented in [10,11], where a scaling factor was incorporated into a branch-and-bound traversal and an angular reprojection error formulation [12].
Recently, a number of globally optimal solutions to common structure-and-motion geometry problems in computer vision were introduced [13]. Second-Order Cone Programming (SOCP) is a form of global optimization that does not require an initial value for the optimization process. In this paper we present a robot-sensor calibration approach of the form AX = YB that requires no calibration target, based on SFM. First, we estimate the rotational part of the robot-sensor calibration separately using the dual quaternion method. Next, we use SOCP to estimate the translational part based on SFM. Finally, an experimental system was established to validate the calibration accuracy. The experiments show that our method has the advantages of simple operation, low cost and wide applicability.

Problem formulation
Suppose a camera has been rigidly attached to a robot's gripper, as shown in Figure 1. A represents the transformation from the camera to the world frame, and B represents the transformation from the tool to the robot base. The unknown X represents the fixed homogeneous transformation between the tool and the camera, and the unknown Y represents the fixed homogeneous transformation from the robot base to the world frame, which leads to the calibration equation

AX = YB. (1)

Let RA, RB, RX and RY denote the respective 3×3 rotation matrices of A, B, X and Y, and let tA, tB, tX and tY denote the respective three-dimensional translation vectors. Equation (1) can thus be rewritten as

[RA tA; 0 1][RX tX; 0 1] = [RY tY; 0 1][RB tB; 0 1]. (2)

One may easily decompose equation (2) into a rotation equation and a translation equation:

RA RX = RY RB,
RA tX + tA = RY tB + tY. (3)
In the method proposed in [2], the first (rotation) equation is solved by least-squares minimization of a linear system, which is obtained by using the quaternion algebra representation of the rotation matrices. Once RY is known, the second equation in tX and tY is easily solved with linear least-squares techniques.
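As a quick sanity check, the decomposition above can be verified numerically: construct arbitrary X, Y and a robot pose B, form A = YBX⁻¹ so that AX = YB holds by construction, and confirm the rotation and translation equations. This is a minimal NumPy sketch with synthetic values, not the paper's data.

```python
import numpy as np

def rot(axis, ang):
    # Rodrigues formula: rotation matrix from an axis-angle pair (sketch helper)
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(ang) * K + (1 - np.cos(ang)) * (K @ K)

def hom(R, t):
    # Assemble a 4x4 homogeneous transformation from rotation R and translation t
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Arbitrary ground-truth X, Y and a robot pose B; then A = Y B X^{-1}
X = hom(rot([1, 0, 0], 0.3), [0.1, 0.0, 0.05])
Y = hom(rot([0, 1, 0], -0.7), [1.0, 0.5, 0.2])
B = hom(rot([0, 0, 1], 1.1), [0.4, -0.2, 0.6])
A = Y @ B @ np.linalg.inv(X)

RA, tA = A[:3, :3], A[:3, 3]
RB, tB = B[:3, :3], B[:3, 3]
RX, tX = X[:3, :3], X[:3, 3]
RY, tY = Y[:3, :3], Y[:3, 3]

# Rotation equation: RA RX = RY RB
assert np.allclose(RA @ RX, RY @ RB)
# Translation equation: RA tX + tA = RY tB + tY
assert np.allclose(RA @ tX + tA, RY @ tB + tY)
```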

Formulation by orthonormal matrix
Consider camera poses recovered up to a scaling factor λ using SFM. Equation (1), with the homogeneous transformations A(λ), B, X and Y, is rewritten as

A(λ)X = YB, where A(λ) = [RA λtA; 0 1]. (4)

Note that equation (4) can be solved for the unknown rotational parts RX and RY regardless of the value of λ, so the rotational part can be obtained by standard methods. Once RY is known, we can restructure the translational part of equation (4) for all robot motions (i = 1, 2, ..., N) as the linear system

RAi tX + λ tAi = RY tBi + tY, i = 1, 2, ..., N. (5)

We can then build an error function over the unknown translational parts and scaling factor:

min over (tX, tY, λ) of Σi ||RAi tX + λ tAi − RY tBi − tY||². (6)

This is solvable using non-linear optimization such as Levenberg–Marquardt (LM). However, since the presence of measurement noise may increase the risk of convergence to local minima, we introduce dual quaternion theory to parameterize the above error function.

Formulation by dual quaternions
This section shows how the estimation of rotation, translation and scale factor can be formulated using dual quaternions. Whereas quaternions represent only 3-D rotations, dual quaternions treat rotations and translations in a unified way. A rigid transformation is written as a dual quaternion q̂ = q + εq′, where the real part q encodes the rotation and the dual part satisfies 2q′ = tq ⊗ q, with tq the pure quaternion (0, t). Using this parameterization, equation (4) can be transformed to

â ⊗ x̂ = ŷ ⊗ b̂, (7)

where all symbols denote dual quaternions. Using quaternion multiplication and performing some algebraic operations, the rotation part of equation (7) can be written as

V(a, b)(qxᵀ, qyᵀ)ᵀ = 0, (8)

where the 4×4 quaternion multiplication matrices M(a) and M⁺(b) are stacked into V(a, b), a 4×8 matrix defined for each pair of quaternions (a, b). The unit quaternions qx and qy for the rotation are determined in the least-squares sense by SVD decomposition. The translation equation in the dual parts q′x, q′y and the scaling factor λ is obtained analogously as

C(a′, b′)(q′xᵀ, q′yᵀ, λ)ᵀ = di, (9)

where C(a′, b′) is a 4×9 matrix defined for each pair of quaternions (a′, b′), and di is a three-dimensional vector.
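The dual quaternion encoding of a rigid transformation, and the recovery of the translation from the dual part, can be sketched as follows. This is a minimal NumPy illustration using one common convention, 2q′ = tq ⊗ q; all values are arbitrary.

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions in [w, x, y, z] order
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    # Quaternion conjugate
    return q * np.array([1.0, -1.0, -1.0, -1.0])

# Rotation quaternion for angle theta about a unit axis n
theta, n = 0.8, np.array([0.0, 0.0, 1.0])
q = np.hstack([np.cos(theta / 2), np.sin(theta / 2) * n])
t = np.array([0.3, -0.1, 0.5])            # translation vector

# Dual part under the convention 2q' = tq (x) q, with tq = (0, t)
qp = 0.5 * qmul(np.hstack([0.0, t]), q)

# Translation recovered from the dual quaternion: tq = 2 q' q*
t_rec = 2 * qmul(qp, qconj(q))
assert np.isclose(t_rec[0], 0.0) and np.allclose(t_rec[1:], t)
```

This round trip shows why the dual part q′ carries the translation linearly, which is what makes equation (9) linear in q′x, q′y and λ.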

Solution
This section shows how to convert equation (9) into a Second-Order Cone Program. It was observed in [14] that a general SOCP can be expressed in the form

min cᵀx subject to Fi x + gi ∈ K, i = 1, ..., k, (10)

where K is assumed to be a quadratic cone as in equations (11) and (12), and x denotes the vector of unknown parameters. Moreover, we assume that the coefficient matrices Fi ∈ R^(m×n), gi ∈ R^m and c ∈ R^n have conforming dimensions. A primal solution x is said to be feasible if it satisfies all the constraints of equation (10); otherwise it is infeasible.

Quadratic cone: Q = { x ∈ R^n : x1 ≥ ||(x2, ..., xn)|| }. (11)

Rotated quadratic cone: Qr = { x ∈ R^n : 2x1x2 ≥ ||(x3, ..., xn)||², x1, x2 ≥ 0 }. (12)

Many optimization problems can be expressed in the above form; examples include linear, convex quadratic, and convex quadratically constrained optimization, as well as minimizing a sum of norms and robust linear optimization. In this paper we use it to solve the robot-sensor calibration convex optimization problem. The L2-norm optimization problem arising from equation (9) can be written as

min over x of max over i of ||C(a′i, b′i)x − di||, (13)

which is clearly a convex optimization problem. Introducing an additional variable δ transforms it into a Second-Order Cone Program of the form

min δ subject to ||C(a′i, b′i)x − di|| ≤ δ, i = 1, ..., N. (14)

This is easily solvable using commonly available Matlab toolboxes, for example SeDuMi or CVX [15].
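The quadratic-cone definitions and the epigraph reformulation can be illustrated with a small NumPy sketch. The matrices F and vectors g below are arbitrary stand-ins, not the paper's calibration data; the point is that the epigraph constraint is exactly membership of (δ, residual) in the quadratic cone.

```python
import numpy as np

def in_quadratic_cone(x, tol=1e-9):
    # Quadratic (Lorentz) cone of eq. (11): x1 >= ||(x2, ..., xn)||
    return x[0] >= np.linalg.norm(x[1:]) - tol

def in_rotated_quadratic_cone(x, tol=1e-9):
    # Rotated quadratic cone of eq. (12): 2 x1 x2 >= ||(x3, ..., xn)||^2
    return (x[0] >= -tol and x[1] >= -tol
            and 2 * x[0] * x[1] >= np.dot(x[2:], x[2:]) - tol)

# Epigraph trick: min_x max_i ||F_i x + g_i|| becomes "min delta subject to
# (delta, F_i x + g_i) in the quadratic cone for every i".
F = [np.eye(2), 2 * np.eye(2)]            # arbitrary illustrative data
g = [np.array([1.0, 0.0]), np.array([0.0, -1.0])]
x = np.zeros(2)                           # a candidate point

# Smallest feasible delta for this candidate x
delta = max(np.linalg.norm(Fi @ x + gi) for Fi, gi in zip(F, g))
assert all(in_quadratic_cone(np.hstack([delta, Fi @ x + gi]))
           for Fi, gi in zip(F, g))
```

A full solver would then minimize δ jointly over (x, δ), which is what SeDuMi or CVX does internally with interior-point methods.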

Experiments
Figure 2(a) shows an overview of the setup used for the real-data experiment. A DENSO VS-G serial manipulator with a DAHENG MER-500-14GM/GC industrial CMOS camera and a COMPUTAR 12 mm lens was used to acquire the data. The robot was instructed to move the gripper along the surface of a sphere with a radius of approximately 600 mm, centred in the middle of the scene objects. The gripper was positioned at ten different locations. The camera resolution is 2592×1944 pixels. Two image sets for two different scenes were acquired: a scene with a calibration target, used for internal camera calibration, and a scene with general objects, used to compare the precision of the calibration-free method against the classical hand-eye calibration approach that relies on a known calibration target.

Fig. 3. Sample images of test scenes taken by the camera mounted on the gripper of the DENSO robot.

Calibration scene
Because no ground-truth value is available for verification, to evaluate the correctness of the solution obtained by our proposed method we compare it with the solution obtained using the calibration target on the same data. The calibration scene, shown in Figure 3(a), was first used for internal camera calibration [16], and method [3] (M1) was used to perform robot-sensor calibration relying on the known calibration target. Since the calibration procedure outputs the matched feature points as a by-product, the camera motion homogeneous transformation A(λ) was obtained from these correspondences, and the robot homogeneous transformation B was obtained from the known tool-to-base transformation. Finally, we used the same correspondence points for robot-sensor calibration based on the structure-from-motion (SFM) approach with the linear Kronecker product [4] (M2), the nonlinear dual quaternion method [5] (M3) and our SOCP method (M4), and compared their results to the reference robot-sensor transformation (M1). To measure the errors in rotation and translation, we choose the common relative error, i.e., the norm of the deviation from the reference divided by the norm of the reference, expressed in percent. The average results over ten repeated experiments are given in Figure 4. It can be seen that M1 gives the smallest errors in rotation and translation due to its exact spatial constraints. M2 and M3 give larger errors, as expected, since the 3D model is not used. However, the Second-Order Cone Programming method (M4) yields a higher accuracy, very close to the result of method M1: the rotation relative error is 3.998% and the translation relative error is 0.117%, which shows the validity of the proposed method.
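Since the exact error formula is not reproduced in the text, the following sketch assumes one common definition, relative error as the norm of the deviation divided by the norm of the reference. The function name and the numerical values are illustrative assumptions, not the paper's data.

```python
import numpy as np

def relative_errors(R_est, t_est, R_ref, t_ref):
    # Frobenius-norm relative error for rotation, Euclidean for translation
    # (one common choice; the paper's exact metric is assumed, not quoted)
    e_rot = np.linalg.norm(R_est - R_ref) / np.linalg.norm(R_ref)
    e_trans = np.linalg.norm(t_est - t_ref) / np.linalg.norm(t_ref)
    return e_rot, e_trans

# Toy example: identical rotation, 0.117 mm deviation on a 100 mm translation
R_ref = np.eye(3)
t_ref = np.array([100.0, 0.0, 0.0])
e_rot, e_trans = relative_errors(np.eye(3),
                                 np.array([100.0, 0.0, 0.117]),
                                 R_ref, t_ref)
# Multiply by 100 to express the errors as percentages, as in the paper
```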

General scene
In the second experiment, as Figure 3(b-c) shows, the calibration target was removed and general scenes were used to show the performance of the method under real-world conditions. SIFT image features and the Bundler algorithm [17] were used to obtain the camera poses. The results of the general-scene experiment are given in Table 1. It can be seen that, among the three calibration-free methods, M4 is closest to the results of method M1 for the two general scenes. Even though there are no spatial constraints in the general scenes, our proposed method performs well compared with other similar methods.

Conclusion
Using a precise calibration target is not possible in many applications of robot-sensor calibration. For such situations, we have proposed a robot-sensor self-calibration method that computes the tool-camera transformation and the robot-world transformation without requiring a calibration target to obtain the camera poses. Instead, an SFM approach is used to recover the camera poses up to scale. However, due to its inherent scale ambiguity, the SFM technique introduces an additional parameter into the solution of the nonlinear equation. This paper addressed this drawback by formulating the estimation of the robot-sensor displacement as a Second-Order Cone Programming optimization problem. Compared with similar robot-sensor calibration methods that do not use a calibration target, our proposed method is guaranteed to obtain the globally optimal solution. Although the cost of relinquishing the spatial constraints is a small degradation in numerical accuracy, the advantages of the proposed method may outweigh this drawback, as it removes the restrictions of limited on-board weight and strict sterility faced by mobile robots and endoscopic surgical robots.