A fast template matching method for LED chip localization

Efficiency determines the profits of semiconductor producers, so producers spare no effort to enhance the efficiency of every procedure. The purpose of this paper is to present a method that shortens the time required to locate LED chips on a wafer. The method consists of three steps. First, image segmentation and blob analysis are used to predict the positions of potential chips. Then the orientations of the potential chips are predicted from their dominant orientations. Finally, according to the predicted positions and orientations, the chips are located precisely based on gradient orientation features. Experiments show that the algorithm is faster than the traditional method we chose for locating LED chips. Moreover, even when the orientations of the chips on the wafer deviate greatly from the orientation of the template, the efficiency of the method is not affected.


Introduction
Machine vision, as an advanced manufacturing technology, is widely used in LED manufacturing equipment. Inspection and sorting of LED chips are two imperative procedures in the LED manufacturing process. Before LED chip inspection and sorting, machine vision must be used to scan the wafer and locate the LED chips precisely [1]. Efficiency determines the profits of semiconductor producers, so producers spare no effort to enhance the efficiency of every procedure of LED chip fabrication. Before inspection and sorting, the chips on the wafer have been cut and the blue membrane has been expanded under tension, so the chips have been set apart, as shown in figure 1(a). If the distance between two chips is too short, it is advisable to exclude these chips in case they are polycrystalline (a polycrystalline defect means a chip contains part of another chip and is bigger than a normal chip).
Template matching methods [2][3][4] are often used to detect a given object with high speed and high pose accuracy. There are three main kinds of template matching methods, namely template matching based on gray values [5], geometric features [6][7][8][9], and gradient orientation features [10,11]. The first kind can achieve relatively high pose accuracy when image edges are fuzzy, but it requires high illumination stability and uniformity. Template matching based on geometric features is very efficient, but it is easily disturbed by image noise. Template matching based on gradient orientation features is robust to illumination variation and nonuniformity.
Reference [12] proposed a new algorithm for object detection. Before detection, Alwin Anbu et al. first made use of motion information to obtain regions of interest in video surveillance and then used segmentation methods to refine those regions, which reduced the detection area greatly. In addition, references [13,14] presented similar ideas: before object detection or matching, image segmentation, blob features, and other feature analysis can be used to obtain the potential regions, called regions of interest, which greatly reduces computational complexity. In this paper, image segmentation and blob analysis are used to predict the positions, and the dominant orientations of potential objects are used to help predict the orientations of the chips, before the LED chips on the wafer are located precisely.

Presented method

Position Prediction
Here, image segmentation and blob analysis are used to predict the positions of potential objects. LED chips are usually produced on a blue membrane. When a wafer is under suitable coaxial and ring light sources, the LED chips are brighter than the background, as shown in figure 1, so this difference can be used to segment the input image into blobs. Image segmentation is the first step of the presented algorithm, and its quality determines the effectiveness of the whole algorithm. Figure 2 shows three images at different orientations. To begin with, the Otsu method is used to obtain a suitable threshold for image segmentation; figure 3 shows the segmentation results. Then the abnormal blobs are excluded based on the area and rectangularity of the blobs, leaving the normal blobs and the regions of interest. Figure 4 shows the normal blobs and figure 5 shows the regions of interest. At last, the area centers of the normal blobs are computed, which are close to the centers of the potential objects. This process has low computational complexity and greatly reduces the area for further object detection.
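The position-prediction step can be sketched as follows. This is a minimal pure-NumPy illustration; the function names, the synthetic test image, and the area/rectangularity limits are assumptions for the sketch, and a production system would use an optimized library implementation.

```python
import numpy as np
from collections import deque

def otsu_threshold(img):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the (bimodal) gray histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def blobs(binary):
    """4-connected component labeling by breadth-first search."""
    h, w = binary.shape
    seen = np.zeros((h, w), bool)
    comps = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                q, pix = deque([(y, x)]), []
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    pix.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                comps.append(np.array(pix))
    return comps

def predict_positions(img, min_area=100, min_rect=0.8):
    """Segment with Otsu, keep blobs of plausible area and rectangularity,
    and return their area centers (x, y) as predicted chip positions."""
    binary = img > otsu_threshold(img)
    centers = []
    for pix in blobs(binary):
        ys, xs = pix[:, 0], pix[:, 1]
        area = len(pix)
        bbox = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
        if area >= min_area and area / bbox >= min_rect:  # exclude abnormal blobs
            centers.append((float(xs.mean()), float(ys.mean())))
    return centers

# synthetic wafer image: two bright "chips" on a dark membrane
img = np.full((120, 120), 40, np.uint8)
img[10:40, 10:40] = 200
img[70:110, 70:100] = 200
print(predict_positions(img))  # [(24.5, 24.5), (84.5, 89.5)]
```

The blob centers only need to be close to the true chip centers, since the precise localization step searches a small window around each prediction.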

Orientation Prediction
Here, the dominant orientations of the potential objects are used to help predict the orientations. The gradient orientation angle Θ (0 ≤ Θ < 2π) of every pixel can be computed from Gx and Gy, which denote the horizontal and vertical gradient components respectively and can be obtained through differential operators such as the Sobel operator. Then the gradient orientation angle is encoded through Eq. 1:

c(i, j) = [Θ(i, j) / ΔΘ]    (1)

where c(i, j) is the code of the pixel and [·] is the rounding operation. ΔΘ is determined by the number of grades Num through Eq. 2:

ΔΘ = 2π / Num    (2)
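A minimal sketch of this encoding, assuming Num = 36 grades. The function name is ours, and `np.gradient` stands in for the Sobel operator mentioned in the text.

```python
import numpy as np

def encode_orientations(img, num=36):
    """Quantize each pixel's gradient orientation angle into `num` codes:
    dTheta = 2*pi/num (Eq. 2), code = [Theta/dTheta] mod num (Eq. 1)."""
    gy, gx = np.gradient(img.astype(float))   # vertical, horizontal gradients
    theta = np.arctan2(gy, gx) % (2 * np.pi)  # 0 <= Theta < 2*pi
    dtheta = 2 * np.pi / num
    return np.rint(theta / dtheta).astype(int) % num  # [.] = rounding

# a ramp rising to the right has Theta = 0 -> code 0;
# a ramp rising downward has Theta = pi/2 -> code 9 when num = 36
ramp = np.tile(np.arange(8.0), (8, 1))
print(encode_orientations(ramp)[0, 0], encode_orientations(ramp.T)[0, 0])  # 0 9
```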
An LED chip consists of a luminous zone, electrodes, and gold threads, as figure 6(a) shows. It is better to choose the pixels on the edges (between the background and the chip, between the electrodes and the luminous zone, and between the gold threads and the luminous zone) to compute the dominant orientation of each chip, which helps predict the orientation of the chip very well. It is easy to choose a threshold Tm for the gradient magnitude G = √(Gx² + Gy²) (Eq. 3) to limit the pixels to those edges; in figure 6(b) the red points represent the pixels on the edges. Because the orientation obtained through template matching is relative to the orientation of the template, the deviation of the dominant orientations between the template and the potential object is used to predict the orientation of the potential object. The computational complexity of this process is clearly low.
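Building on the orientation codes, the dominant orientation can be read off as the peak of the code histogram over edge pixels. The sketch below assumes the usual gradient magnitude for Eq. 3; the threshold value and helper names are illustrative.

```python
import numpy as np

def dominant_code(img, num=36, tm=10.0):
    """Histogram of gradient-orientation codes over edge pixels
    (G >= Tm, with G from Eq. 3); the peak bin is the dominant code."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)                       # gradient magnitude G (Eq. 3)
    theta = np.arctan2(gy, gx) % (2 * np.pi)
    codes = np.rint(theta / (2 * np.pi / num)).astype(int) % num
    hist = np.bincount(codes[g >= tm], minlength=num)
    return int(hist.argmax())

def predicted_angle(obj_code, tmpl_code, num=36):
    """Orientation deviation (radians) of an object from the template,
    from the difference of their dominant codes."""
    return ((obj_code - tmpl_code) % num) * (2 * np.pi / num)

# a strong vertical step edge: horizontal gradient -> dominant code 0
step = np.zeros((12, 12))
step[:, 6:] = 100.0
print(dominant_code(step))              # 0
print(round(predicted_angle(9, 0), 4))  # 1.5708 (= pi/2)
```

Only the histogram peak matters here, which is why the predicted orientation is cheap to compute compared with rotating the template over many candidate angles.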

Localization based on gradient orientation
This process is used to discriminate the object and determine its pose precisely, according to the position and orientation of the potential object predicted above. In order to reduce the computational complexity, the gradient orientation features of pixels on the edges are chosen for matching during the template making process.
The similarity measure is the normalized cross correlation (NCC) of the gradient orientation codes. Here, t(μ, ν) is the selected pixel's code in the template, with mean m_t and variance s²_t; C(i+μ, j+ν) is the corresponding pixel's code in the input image at a shifted position of the template region, with mean m_I and variance s²_I. The bigger the absolute value of the NCC is, the more similar the template is to the detected region in the input image. First, according to the poses predicted above, the NCC is used to locate the LED chips at pixel accuracy. Then, starting from the pixel-accuracy poses, the LED chips are located at subpixel accuracy according to the accuracy demand. Figure 8(a)-(c) show the NCC of every potential region around the predicted position and orientation. Because only the gradient orientation codes of certain pixels are used to match the potential chips in certain regions, the computational complexity of this process is low.
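As a rough reconstruction of this matching step (the paper's exact NCC formula is not reproduced here), the sketch below uses the standard zero-mean NCC over code arrays, with hypothetical function names and a synthetic example:

```python
import numpy as np

def ncc(t, c):
    """Standard normalized cross-correlation between template codes t
    and image-region codes c (arrays of gradient-orientation codes)."""
    t = np.asarray(t, float)
    c = np.asarray(c, float)
    tz, cz = t - t.mean(), c - c.mean()
    denom = np.sqrt((tz * tz).sum() * (cz * cz).sum())
    return 0.0 if denom == 0.0 else float((tz * cz).sum() / denom)

def locate(template, codes, center, search=3):
    """Scan a small window around the predicted center and return the
    best-scoring shift (pixel-accuracy localization; a subpixel step
    would interpolate around this maximum). Assumes the window stays
    inside the image."""
    th, tw = template.shape
    cy, cx = center
    best_score, best_pos = -2.0, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy - th // 2, cx + dx - tw // 2
            region = codes[y:y + th, x:x + tw]
            if region.shape != template.shape:
                continue
            s = ncc(template, region)
            if s > best_score:
                best_score, best_pos = s, (cy + dy, cx + dx)
    return best_score, best_pos

# plant a 5x5 code template in a larger code image, one pixel off-center
tmpl = (np.arange(25).reshape(5, 5) * 7) % 36
codes = np.zeros((20, 20), int)
codes[8:13, 9:14] = tmpl
score, pos = locate(tmpl, codes, center=(10, 10))
print(round(score, 3), pos)  # 1.0 (10, 11)
```

Because the pose prediction is already close, the search window can stay small, which is where the speed-up over a global search comes from.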

Experiments
Here, the presented method is compared to the traditional method chosen for locating LED chips on a wafer, in order to verify its efficiency. The traditional method is also a template matching method based on gradient orientation, and it searches for the objects globally. Both methods were tested on a computer with an Intel Core E6500 CPU at 2.94 GHz and 4 GB of RAM. The orientations of most chips on a wafer are within 20° of that of the template. Figure 9(a)-(c) are three images of many chips with different orientations. However, when the wafer has just been put on the worktable by the manipulator, the orientations of chips on the wafer are sometimes far more than 20°, but within 90°. In order to meet the automation demand of LED equipment, these chips must be matched to help correct the worktable. Figure 10(a) and (b) are two images with many chips whose orientations are far more than 20°. Table 1 shows the time both the presented method and the traditional method take to locate the chips on the wafer.
Table 1 suggests that the presented method is more efficient than the traditional method. When the orientations of the chips on the wafer deviate greatly from the orientation of the template, the traditional method takes much more time than it does for small deviations. However, the orientations of the chips on the wafer do not appear to affect the time the presented method takes to locate the chips. This is because the presented method uses image segmentation, blob analysis, and the dominant orientations of the regions of interest to predict the poses of the potential chips before the chips are localized precisely, whereas the traditional method searches for the chips globally, which is time consuming.

Conclusion
This paper presents a fast method to locate LED chips on a wafer. Before the LED chips are localized precisely, the method uses image segmentation, blob analysis, and the dominant orientation of each region of interest to predict the pose of each potential object, which greatly reduces the area for object matching. The method consists of three steps, and the computational complexity of each step is low. Experiments show that the algorithm is faster than the traditional method we chose for locating LED chips, and the orientations of the chips on the wafer do not affect the efficiency of the presented method.
The method can be used on LED manufacturing equipment such as LED sorters, LED probers, and LED automated optical inspection machines to locate LED chips. In addition, this method can likely also be applied to other kinds of chips or devices, such as power optical devices.
If the difference between the background and the chips becomes small due to improper illumination conditions, it is better to improve the contrast between the background and the chips before image segmentation. Therefore, in the future, we should develop a method to improve this contrast without affecting efficiency.

Figure 1(b) shows the gray histogram of figure 1(a). Apart from the gray value 255 of many pixels, figure 1(b) shows a bimodal structure, which means the Otsu method [15] and other histogram-based methods, which are classic image segmentation methods, can be used to segment the image. The first peak represents the pixel gray distribution of the background and the second peak represents the pixel gray distribution of the foreground (chips).
DOI: 10.1051/ © Owned by the authors, published by EDP Sciences, 2015

Figure 1. (a) An image of LED chips on wafer. (b) Histogram of pixel gray values.

Figure 7. Histograms of gradient orientation codes. (a) Histogram of gradient orientation codes of object B in figure 5(a). (b) Histogram of those of template A in figure 2(b). (c) Histogram of those of object C in figure 5(c). Num = 36 (Eq. 2).

Figure 9. Images of chips on wafer. A in (b) is the template for matching the chips in (a)-(c). (d)-(f) are the results of matching through both the presented method and the traditional method. The green frames indicate the matched chips.

Table 1. Time both the presented method and the traditional method take to locate the chips on wafer.

Methods \ Images | figure 9(a) | figure 9(b) | figure 9(c) | figure 10(a) | figure 10(b)