Research on a Real-Time Yarn Tube Grabbing System Based on Machine Vision

The current yarn tube manipulator grabs yarn tubes only at fixed, pre-programmed coordinates. In actual production, equipment faults or human factors can shift the spindles off those coordinates and damage the manipulator. A real-time yarn tube grabbing system with visual sensing has been designed, and an algorithm for extracting spindle coordinates that combines image morphology with the Hough transform has been proposed. By combining the yarn tube image features extracted by this algorithm with a visual measurement model built on the pinhole imaging principle, the mapping between yarn tube image coordinates and world coordinates is obtained, yielding the tube's location in real time. Results show that the proposed method enables the robot to complete the grabbing task precisely and efficiently, so that the system meets the requirements of a spinning and dyeing production line.


Introduction
In recent years, with the rapid development of automatic control theory and computer technology, the yarn tube manipulator has been applied ever more widely in industrial automation. It is a machine that imitates part of the action of the human hand, automatically fetching, handling or operating the workpiece according to a given program, trajectory and requirements [1][2]. It is important for raising product quality, increasing production efficiency, improving labor conditions, lightening workers' workload and speeding up product upgrades. Currently, the yarn tube coordinates are written into the manipulator program of the spinning and dyeing production line in advance [3]. If a yarn tube's position deviates during production, the manipulator cannot complete the grab, and the entire production line faults. Based on a study of the yarn tube manipulator on an automatic production line, this paper aims to innovate, optimize and redesign the manipulator structure so that it can automatically center itself and accurately grab the yarn tube at each step's working position. Flexible use of image processing and vision measurement can realize efficient online fetching and improve the flexibility and automation of the yarn tube production line [4][5].

Design of real-time grabbing yarn tube system
The system uses machine vision to preprocess the yarn tube images captured at the current moment, analyzes the tube's features to derive its location information, and passes that location to the actuator for fetching [6][7]. The overall structure is shown in figure 1, comprising: 1 manipulator, 2 circular light source, 3 execution subsystem, 4 yarn tube, 5 camera, 6 image processing and analysis software, 7 control subsystem, 8 yarn tube plate. The system adopts a hand-eye configuration. The camera is a Stingray F-201C industrial camera from the German company Allied Vision Technologies (AVT), which supports continuous shooting, externally triggered shooting, single-shot and other working modes. A circular LED light source is used to better display the edge contour of the yarn tube, increase the contrast between the tube and its surroundings, and reduce the difficulty of subsequent image processing. The system software is developed with the open-source computer vision library OpenCV 2.4.9 in the Visual Studio 2010 programming environment. The fetch subsystem is carried out by the manipulator.
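The acquire-process-locate-grab cycle described above can be outlined as follows. This is an illustrative sketch, not the authors' code; all function names are hypothetical stand-ins for the camera driver, the OpenCV-based localization, and the manipulator command.

```python
# Illustrative outline of the system's control cycle (hypothetical stubs,
# not the real camera / manipulator interfaces).

def acquire_image():
    # Stand-in for grabbing a frame from the industrial camera.
    return [[0] * 8 for _ in range(8)]

def locate_tubes(image):
    # Stand-in for preprocessing + Hough-based tube localization;
    # returns world coordinates of detected tubes.
    return [(2.0, 3.0)]

def grab(position):
    # Stand-in for sending a grab command to the manipulator.
    return f"grab at {position}"

def control_cycle():
    """One pass of the loop: acquire, locate, then grab each tube found."""
    image = acquire_image()
    return [grab(p) for p in locate_tubes(image)]
```

Each pass repeats until the production line stops, so a mislocated tube in the next frame is grabbed at its newly measured position rather than a fixed coordinate.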

Camera calibration
Calibration in OpenCV is completed by cvCalibrateCamera2() [8]. The experimental device shoots images of the calibration plate at different positions with the GigEViewer software. Calibration precision depends on the number of images, so at least 10-20 images should be chosen; this experiment used 20 images to compute the parameter matrix and the distortion coefficients with OpenCV. Because lens distortion is largest at the corners, the calibration plate positions in the selected images should cover the four corners of the image, which yields a more accurate distortion coefficient k. Some of the images are shown in figure 2.

Image preprocessing
Images of objects shot on the conveyor belt cannot be analyzed and processed directly; they are usually processed in combination with attribute features. First, Gaussian filtering and thresholding are applied.
Then a morphological transform is applied, and finally the image is analyzed.
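As a rough sketch of the first two preprocessing steps, here is a from-scratch NumPy illustration of 3×3 Gaussian smoothing followed by binary thresholding. It mirrors the effect of the OpenCV calls the system actually uses, but is not that code.

```python
import numpy as np

def gaussian3x3(img):
    """3x3 Gaussian smoothing with the usual 1-2-1 kernel (weights sum to 16)."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return out

def threshold(img, t=70):
    """Binary thresholding: pixels above t become 255 (foreground), else 0."""
    return np.where(img > t, 255, 0)
```

With the threshold at 70, bright tube regions survive as foreground while darker background is suppressed before the morphological stage.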
Stains on the tested parts or on the lens, airborne impurities, light intensity variations, electromagnetic radiation from electronic equipment and other interference, introduced either at the acquisition side or during transfer after acquisition, degrade the sharpness of the yarn tube images captured by the camera. In the experiment, the image is smoothed with a 3×3 Gaussian template and then thresholded with the cvThreshold() function, with the threshold set to 70. After thresholding, the image still contains a lot of noise, which is removed with the erode and dilate operations of the morphological transform. Formulas (1) and (2) are the erosion and dilation of image A by structuring element B respectively:

A ⊖ B = { z | B_z ⊆ A }    (1)
A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }    (2)

where B_z denotes B translated by z and B̂ the reflection of B. The image is eroded once to eliminate the white noise and dilated once to fill the concave holes, making the target accurate.
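A minimal from-scratch sketch of binary erosion and dilation with a 3×3 structuring element follows; it illustrates formulas (1) and (2) but is not the OpenCV cvErode/cvDilate implementation.

```python
import numpy as np

def erode(img, iterations=1):
    """Binary erosion, 3x3 structuring element: a pixel stays foreground
    only if its entire 3x3 neighbourhood is foreground (formula (1))."""
    out = img.copy()
    for _ in range(iterations):
        pad = np.pad(out, 1, mode="constant", constant_values=0)
        nxt = np.zeros_like(out)
        h, w = out.shape
        for i in range(h):
            for j in range(w):
                nxt[i, j] = pad[i:i + 3, j:j + 3].min()
        out = nxt
    return out

def dilate(img, iterations=1):
    """Binary dilation: a pixel becomes foreground if any pixel in its
    3x3 neighbourhood is foreground (formula (2))."""
    out = img.copy()
    for _ in range(iterations):
        pad = np.pad(out, 1, mode="constant", constant_values=0)
        nxt = np.zeros_like(out)
        h, w = out.shape
        for i in range(h):
            for j in range(w):
                nxt[i, j] = pad[i:i + 3, j:j + 3].max()
        out = nxt
    return out
```

One erosion wipes out isolated white noise pixels; one dilation restores the tube region and fills small concave holes, as in the text.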
After image processing, it is easy to obtain the regional characteristics of the target image. Since the yarn tube has standard dimensions, the centroid of the workpiece can be obtained with the Hough circle transform. The processing effect and time consumed are shown in figure 3.

The localization algorithm of the yarn tube
In order to obtain the yarn tube's position information, the tube's coordinates are calculated in space at the moment of fetching. The world coordinates of the workpiece can be calculated from the camera imaging principle. The camera imaging model is shown in figure 4: f is the camera focal length, Z is the distance from the camera to the object, P is an object point and P1 its image. To let the manipulator determine the yarn tube's position, the relation between the workpiece's world coordinate system and the imaging plane coordinate system must be determined. The calculation proceeds as follows.
(1) Image processing yields the pixel coordinates P1(x, y) of the workpiece centroid. Subtracting the image center coordinates (Cx, Cy), obtained from camera calibration, gives the pixel displacement.
(2) Multiplying the pixel displacements along the X and Y axes from step (1) by the pixel pitches dx and dy obtained from camera calibration gives the actual distance from the image target point to the image center.

The experimental results and analysis
As shown in figure 5, the manipulator is first initialized so that the camera's field of view covers the effective working range; the camera is then opened to start image acquisition and processing. Based on shape feature extraction, the system recognizes the yarn tube, calibrates the recognition result and obtains the location information. Table 3 lists the results of 10 tests: positioning coordinates, actual coordinates, and the errors along the x and y axes, where the absolute values dx and dy are the positioning coordinates minus the actual coordinates. The experimental results show that the manipulator can fetch the yarn tube according to the image processing result, the image processing algorithm is fast, and the accuracy of yarn tube identification and positioning is high.
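The |dx|, |dy| errors reported in Table 3 are simply absolute differences between positioning and actual coordinates; a one-line sketch (the coordinates in the test are made up, not Table 3 data):

```python
def positioning_errors(located, actual):
    """Absolute x/y errors: |positioning coordinate - actual coordinate|
    for each paired test, as tabulated in the error analysis."""
    return [(abs(lx - ax), abs(ly - ay))
            for (lx, ly), (ax, ay) in zip(located, actual)]
```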

(a) Working principle of the system; (b) vertical view of the yarn tube plate.
Owned by the authors, published by EDP Sciences, 2015. DOI: 10.1051/

Figure 1. The overall structure design of the system.

(3) According to the camera imaging model, the displacement the manipulator must move is obtained from the actual distance in step (2).
(4) The next image is taken, and steps (1)-(3) are repeated until the system stops.

Figure 5. The interface diagram of extracting the yarn tube position.

Table 1. The camera internal parameters.
The internal parameter calibration results of this experiment are shown in table 1, and the distortion parameters in table 2. In table 1, f is the camera focal length, dx and dy are the pixel pitches, and (Cx, Cy) is the pixel coordinate of the image center.

Table 2. The camera distortion parameters.
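The coefficients of Table 2 enter the standard radial distortion model used by OpenCV calibration (tangential terms omitted here). A minimal sketch, with illustrative rather than calibrated k1, k2 values in the test:

```python
def distort_normalized(x, y, k1, k2):
    """Radial distortion in normalized image coordinates:
    (xd, yd) = (x, y) * (1 + k1*r^2 + k2*r^4), with r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```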

Table 3. The error analysis of the test results.