An inspection method for image overlap in UAV photogrammetry based on feature matching

The overlap of UAV aerial imagery is an important parameter for judging the quality of aerial photography. This paper applies image feature matching to automate the overlap inspection of low-altitude UAV aerial images. By combining feature point matching with a homography transformation model, the method accurately identifies the overlapping area of adjacent images and overcomes the defects caused by the large rotation angles of UAV images and by irregular overlap areas. We use several feature extraction algorithms to verify the practicability of the method. The results show that it calculates the overlap of adjacent aerial images efficiently and accurately, which improves the production efficiency of aerial photogrammetry.


Introduction
In recent years, with the rapid development of unmanned aerial vehicle (UAV) technology, its cost has been decreasing and its application scope has continuously expanded. Low-altitude photogrammetry is an important UAV application. Compared with traditional manned aircraft, a UAV flies below 1000 meters and can operate below the clouds; it supports highway take-off, catapult launch, gliding, parachute recovery, and other landing modes, so it can be used without an airport. Compared with traditional aerial remote sensing and photogrammetry, it can compensate for cloud cover while obtaining high-resolution images. Its operating cost is relatively low, it is highly maneuverable, and images of the working area can be acquired quickly and in real time, which overcomes problems such as the long acquisition period and poor mobility of traditional aerial photography. However, because a UAV is light, it is easily affected by external factors such as wind. As a result, image parameters such as overlap, deflection angle, and flying height are not sufficiently stable [1,2]. Image overlap is an important indicator of aerial photography quality: for example, if the end overlap is less than 53%, a stereopair cannot be produced and subsequent production is affected. Checking the overlap of aerial images is therefore an important step in ensuring the quality of aerial photography.
Traditional aerial photography has a high flying height, wide image coverage, and relatively few images stored on aerial film, so overlap inspection was traditionally performed by manually examining developed photos. This not only takes time and effort; the overlap result, determined by the subjective judgment of the inspector, is also inaccurate. Moreover, manual inspection cannot be applied at the aerial photography site [3], so the overlap cannot be checked in time, and when a photographic gap occurs the flight must be repeated at another time. Corresponding research has been conducted in recent years [4-8]: for example, Chen et al. [7] realized a flight quality inspection method for frame-type aerial photography, and Li et al. [8] proposed an overlap detection method for the A3 digital aerial camera based on an improved SIFT algorithm. However, because of the large rotation angles of UAV images and the irregularity of their overlapping areas, how to check the overlap of UAV imagery accurately and efficiently remains an urgent problem in the quality inspection of aerial photography data.
With the development of computer vision, feature matching algorithms have achieved significant improvements in efficiency, stability, and accuracy. Using feature matching to search for corresponding features ensures the accuracy and reliability of overlap inspection.

Related Work
The basic idea of the UAV image overlap checking method based on image matching is as follows. First, down-sample the images through an image pyramid to reduce the amount of data and improve efficiency. Then, use a feature point extraction algorithm to extract feature points from adjacent images and match the corresponding feature points one by one. The initial matches are refined with the ratio method [9] and filtered with the RANSAC algorithm [10]. Finally, check whether the number of matching points satisfies the requirements for solving the homography matrix [11]. If it does, calculate the homography matrix to obtain the relative position of the adjacent images, from which the image overlap is derived.
The detailed processing flow is shown in Figure 1.

Feature point extraction algorithm
In image matching, feature points are usually extracted from adjacent images and the similarity of their feature descriptors is compared so that corresponding feature points can be matched. The extraction of image features therefore matters greatly. Different feature point extraction algorithms respond with different sensitivities to different features such as corners, edges, and blobs. Considering the drastic attitude changes of UAV images and the down-sampling used in this paper, the feature extraction algorithm must be insensitive to rotation and scale changes. Commonly used scale-invariant feature point extraction algorithms include SIFT, SURF, ORB, AKAZE, and BRISK [9,12,13].
To comprehensively compare the efficiency and performance of the feature extraction algorithms above, the experimental section of this paper analyzes the time consumption, correct matching rate, and accuracy of the computed overlap for each algorithm, so that the most efficient feature extraction algorithm can be selected for overlap checking.

Feature point matching and mismatching point elimination
After feature points are extracted from adjacent images, the corresponding feature points need to be matched. The purpose of image matching is to find the mapping relationship between adjacent images by identifying corresponding image points in a pair of images. The ratio method [9] is first used to filter out obvious mismatches and obtain a rough matching result (shown in Figure 2), ensuring a reasonable correct matching rate. With a small threshold, large numbers of mismatches can be eliminated, but some correct matches are excluded as well. Lowe reports that a threshold of 0.8 eliminates about 90% of the false matches while discarding less than 5% of the correct matches [9]. Considering the actual filtering effect, we set the filter threshold to 0.7 in our experiments.
After filtering with the ratio method, the rough matching result must be filtered a second time. Since the correct matching points of adjacent images share the same projective transformation parameters, we use the RANSAC (random sample consensus) algorithm [10] with a projective transformation as the fitting model. Points that do not fit the model are excluded as false matches, yielding an accurate matching result. As shown in Figure 2, the red cross points, which do not satisfy the projective model, are mismatches; the green dots are the model inliers.

The homography matrix and overlap calculation
Traditional aerial photography is not easily disturbed by wind, and the swing of the flight attitude is small. The overlap of such images can therefore be computed as shown in Figure 3: the shaded overlap region of adjacent images is nearly rectangular, so an approximate overlap can be calculated with formula (1), where Px and Py are the width and height of the image overlap area, and Lx and Ly are the width and height of the image:

End overlap = (Px / Lx) × 100%,  Side overlap = (Py / Ly) × 100%  (1)

However, because of the unstable shooting attitude in low-altitude UAV photography, the overlapping area often becomes an irregular polygon. As shown in Figure 4, the lateral overlap region takes an irregular hexagonal shape because of an excessively large rotation angle. If formula (1) is still used, the reliability of the computed overlap is reduced. To solve this problem, this paper uses the homography transformation between the images obtained during image matching to find their relative position and intersect the boundary rectangles of the two images, which yields the accurate overlap of the adjacent images.

In computer vision, the homography of a plane is defined as the projective mapping from one plane to another. The projection model between a pair of adjacent images can be expressed by formula (2) [11]:

x1 = (h1·x + h2·y + h3) / (h7·x + h8·y + 1)
y1 = (h4·x + h5·y + h6) / (h7·x + h8·y + 1)  (2)

Here (x, y) are the initial image plane coordinates of a feature point and (x1, y1) are its coordinates after the homography projection. h1, h2, h4, and h5 are the scale and rotation parameters; h3 is the horizontal displacement parameter and h6 the vertical displacement parameter; h7 is the horizontal deformation parameter and h8 the vertical deformation parameter. Using this formula, the projected boundary of the left image can be obtained. As shown in Figure 5, the white border is the boundary of the left image projected into the right image by the homography, the red border is the right image boundary, and the yellow border is the overlapping boundary of the two images. The accurate area of the overlap region is calculated by using the intersection points of the left and right image borders together with the corner coordinates to generate a polygon geometry object of the overlap region.

Experiments and Analysis
In this paper, we compare and analyze the computational efficiency and the validity of the results of five feature extraction algorithms in two experiments. All data were taken by a Nikon D810 non-metric camera during a UAV aerial photogrammetry project in Xiangtan City, Hunan Province. The image size is 7360×4912 pixels, and the focal length of the camera is 35.7 mm. In the image down-sampling experiment, the designed flight altitude is about 650 m and the designed end overlap is 65%. In the overlap calculation experiment, the designed flight altitude is about 570 m and the designed end overlap is 75%.

Experiment of image down-sampling
High-resolution UAV images increase the computational cost of the algorithm, and such high resolution is unnecessary for checking the overlap. However, down-sampling causes a loss of image information, which can make matching of the overlapping region unstable. An appropriate pyramid down-sampling level is therefore particularly important for improving computational efficiency while keeping the overlap results accurate and reliable. This experiment compares the overall running efficiency and the accuracy of the computed overlap for each algorithm at different down-sampling levels. The original image size is 7360×4912 pixels; each down-sampling halves the width and height of the image. The lowest level in the experiment was level 7, with an image size of 58×39 pixels. The experimental results are shown in Table 1 and Table 2.

According to Table 1, the computational efficiency of each algorithm improves continuously as the down-sampling level increases, especially from the first to the third level, where the time consumption drops sharply. When the image is down-sampled four times (i.e. to 460×307 pixels), the time consumption of each algorithm becomes stable, and the entire overlap check takes about 3 s. Combined with Table 2, when fewer than four down-samplings are applied, the overlap results of all the feature algorithms remain in the range of 63% to 64%, with little difference between them. Beyond four down-samplings, the computational efficiency increases only slightly.
From the overlap results, when the down-sampling level runs from 5 to 7, the results clearly begin to deviate from the correct range of 63% to 64%. Since its feature point extraction is performed in an image pyramid space, the SIFT algorithm remains highly stable at every level and can still extract enough feature points at levels 6 and 7. The other algorithms become unstable at levels 6 to 7 and cannot extract enough feature points for subsequent matching beyond level 7.
The analysis above shows that when the image is down-sampled four times (i.e. to 460×307 pixels), the efficiency of the method increases significantly while the accuracy of the results is still guaranteed.

Experiment of overlapping degree calculation
To evaluate the computational efficiency and accuracy of each feature matching algorithm when calculating the overlap, we randomly selected one of the aerial strips in the survey area. The strip contains 80 UAV images; each pair of adjacent images forms a matching group, so the end overlaps of the whole strip can be calculated in turn. According to the analysis above, the level-4 image pyramid ensures both the efficiency of the algorithm and the stability of the results, so the overlap calculation is performed at pyramid level 4. Figure 5 shows the matching pattern of the end overlap; the efficiency and results of each algorithm are shown in Figure 6 and Figure 7.

According to Figure 6, the ORB algorithm has the highest running efficiency of all the algorithms. As shown in Figure 7, the overlaps calculated by the different algorithms are essentially the same, and so are their result curves: even where they differ most, the difference is no more than 5%. Moreover, when the flight is stable, the average overlap of the entire strip lies between 72% and 76%, close to the designed end overlap of 75%, which further demonstrates the reliability and accuracy of the UAV image overlap inspection method based on feature matching. Thus, the overlap calculated by this method is accurate and reliable, and the ORB algorithm has a significant advantage in computational efficiency over the other four algorithms.
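The strip-level procedure amounts to pairing consecutive images and evaluating each pair; a minimal sketch follows, where the `compute_overlap` callable is a placeholder standing in for the full matching pipeline.

```python
def strip_end_overlaps(image_paths, compute_overlap):
    """Form a matching group from each pair of adjacent images in the
    strip and return the end overlap of every group, in flight order."""
    return [compute_overlap(left, right)
            for left, right in zip(image_paths, image_paths[1:])]
```

For an 80-image strip this yields 79 end-overlap values, one per adjacent pair.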

Conclusion
To solve the problem of evaluating the overlap of low-altitude UAV aerial images, this paper proposes an accurate inspection method for UAV image overlap based on image feature matching. Multiple feature matching algorithms are tested on real aerial image data to analyze the running efficiency of the method and the accuracy of its results. The results show that the UAV image overlap calculation method based on feature matching is more efficient and convenient than the traditional method: it needs only the aerial images and strip information to run automatically, and its results are highly accurate and reliable.