A combined deblurring and denoising algorithm for video frames based on the search for local features in images

Abstract: In this paper, we propose an approach that reduces distortions in the form of noise and blur. To improve processing speed and enable parallelization, the approach is based on the search for local features in the image.


Introduction
The primary data received by cameras contain a noise component. Sources of noise include the sensitive elements of the sensor matrix, cross-talk, electromagnetic interference, the ADC bit depth, the data transmission channel, primary codecs, etc. When a frame is captured by a standard camera with adjustable focus, a blurring effect is observed on secondary objects. Blur can also be caused by movement of the subject in the frame, slow shutter speed, disturbing factors, and the shooting conditions. Eliminating these types of distortion improves the accuracy of subsequent processing and decision making.
Many methods have been proposed [1-5] to automatically reduce the noise component in a two-dimensional signal. Some of them are fully automatic but consume substantial computing resources. These methods work well when data about the noise can be obtained; if this is not possible, the operator must tune them for each type of scene, otherwise their effectiveness drops sharply. The result depends on the variance of the noise component and on the type of two-dimensional signal (image). Methods based on first-order squared differences reduce the effect of noise but blur the edges of the picture. Methods based on the L2 norm preserve the edges between differences of image brightness but suppress little of the noise near those edges [6,7].
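As a minimal illustration of the first-order squared-difference idea (a textbook formulation assumed here, not one of the cited methods): a 1-D gradient-descent denoiser that penalizes squared neighbour differences. The quadratic penalty suppresses noise but also rounds off sharp edges, which is the drawback the text describes.

```python
def quad_smooth(signal, lam=0.5, iters=50):
    """Denoise a 1-D signal by gradient descent on
    E(u) = sum_i (u_i - y_i)^2 + lam * sum_i (u_{i+1} - u_i)^2.

    The quadratic neighbour penalty suppresses noise but also
    blurs sharp edges, as noted in the text.
    """
    y = list(signal)
    u = list(signal)
    step = 0.1
    for _ in range(iters):
        # Gradient of the data-fidelity term.
        g = [2 * (u[i] - y[i]) for i in range(len(u))]
        # Gradient of the squared-difference smoothness term.
        for i in range(len(u) - 1):
            d = 2 * lam * (u[i + 1] - u[i])
            g[i] -= d
            g[i + 1] += d
        u = [u[i] - step * g[i] for i in range(len(u))]
    return u

# A sharp step edge gets rounded off by the quadratic penalty.
smoothed = quad_smooth([0, 0, 0, 10, 10, 10])
```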
The effectiveness of decision making depends on the results obtained in the first stages of data processing. For high-quality processing, the following basic steps must be performed: elimination of the noise component, elimination of blur, and reconstruction and correction of the images.
In this paper, we propose an approach that reduces distortions in the form of noise and blur. To improve processing speed and enable parallelization, the approach is based on the search for local features in the image.

Mathematical model of image blur and noise
A sequence of frames is a multidimensional array of data. The object captured in the frame can be non-stationary. A simplified mathematical model is presented in [1]:
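The model itself is not reproduced in this excerpt. A standard per-frame blur-plus-additive-noise formulation consistent with the description (an assumption, since [1] is not quoted) is:

```latex
% Observed frame g_t is the latent frame f_t convolved with a
% blur kernel h_t, plus an additive noise term n_t.
g_t(x, y) = (h_t \ast f_t)(x, y) + n_t(x, y)
```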

Algorithm for eliminating blur and filtering
The step of reducing the effect of interference is realized by the algorithm shown in Figure 2 and implemented as follows:
- At the first step, the video data is recorded;
- At the next stage, the video stream is divided into scenes. The separation is performed at time intervals specified by the operator;
- At the third stage, the video frames are reduced in size. This procedure combines pixel blocks with colour-priority reassignment;
- The next step is the search for local features. This stage includes the search for base points, edges, and corners on the objects located in the frames, and is discussed in Section 3;
- At the fifth stage, we perform in parallel the search for objects that are detailed or meaningful to humans (discussed in Section 4) and the search for correspondences between base points, for which we use the SURF detector;
- At the next stage, a multidimensional array of objects cut from each frame is assembled. The resulting array carries data about the structure of each object and its changes;
- The next stage is related to the previous one: stable links in the changes of object positions are found. This stage determines the significance of objects in relation to the frame. The construction of the trees is standard; the most stable branches are considered basic;
- At this stage, the results are scaled back to the original video size, and a multidimensional structure is constructed for a group of full frames;
- The next two steps perform the denoising and deblurring procedures (Section 5);
- At the final stage, the results are transferred to the final video and the video data are restored frame by frame.
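As an illustration of the frame-reduction step (the third stage above), here is a minimal sketch of 2x2 block combining on a grayscale frame. The colour-priority reassignment described in the paper is not specified, so plain block averaging is used as an assumed stand-in:

```python
def downscale_2x2(frame):
    """Reduce a grayscale frame by combining 2x2 pixel blocks.

    Each block is replaced by the mean of its four pixels. The paper
    reassigns colour priorities inside each block; plain averaging is
    used here as a simplified stand-in.
    """
    h, w = len(frame), len(frame[0])
    out = []
    for i in range(0, h - 1, 2):
        row = []
        for j in range(0, w - 1, 2):
            s = (frame[i][j] + frame[i][j + 1] +
                 frame[i + 1][j] + frame[i + 1][j + 1])
            row.append(s / 4.0)
        out.append(row)
    return out

frame = [[10, 10, 40, 40],
         [10, 10, 40, 40],
         [90, 90, 20, 20],
         [90, 90, 20, 20]]
print(downscale_2x2(frame))  # [[10.0, 40.0], [90.0, 20.0]]
```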

Search for local features on images
An analysis of the literature revealed that the main edge and corner detectors used in practice are: Roberts, Sobel, Prewitt, Laplace, Kirsch, Laplacian of Gaussian (LoG), Difference of Gaussians (DoG), Canny, and Harris.
For blurred images, the best quantitative results under the SNR, PSNR, and MSE metrics are shown by Roberts' method. However, the result has a large number of errors, and an almost complete loss of edges in low-contrast areas is observed. The Harris detector and the LoG detector show similar results. The Canny detector handles the task of detecting local features: its quantitative indicators are not high, but visually it finds all the edges. The same detector shows the best quantitative results on the noisy image and detects edges in the test image even in the presence of low-contrast areas. The Sobel and Prewitt detectors give results similar to the Canny detector.
On smooth images, the Canny detector shows the best quantitative results; only this filter accurately detects low-contrast edges. The Harris detector has the worst quantitative indicators, forming many false pixels and double edges. No detector except Canny defines the low-contrast areas on the smooth images [8].
In this work, we applied the Canny method, which showed high efficiency on both smoothed and noisy images.
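Full Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of a gradient estimate. As a minimal sketch of the gradient step shared by the Sobel- and Canny-style detectors listed above (a simplification, not the full method used in the paper):

```python
def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel operator.

    This is only the gradient step common to Sobel/Canny-style
    detectors; full Canny additionally performs smoothing,
    non-maximum suppression, and hysteresis thresholding.
    """
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i + a - 1][j + b - 1]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i + a - 1][j + b - 1]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the gradient peaks along the transition.
img = [[0, 0, 255, 255]] * 4
mag = sobel_magnitude(img)
```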

Definition of significant areas
Among the large group of approaches to data analysis, one can single out approaches based on a mathematically light apparatus and methods requiring large computational costs, for example neural networks.
The first group includes window-based processing methods and simple gradient-analysis methods. The following approaches can be distinguished in this group: "White/black edges", "Number of transitions", "Density", LPI-ICI, etc.
The studies in [8] showed that computationally simple algorithms have a number of limitations:
- The "Number of transitions" method shows the lowest efficiency among the methods for selecting detailed objects. While finding detailed areas on the image, it marks a rather large number of erroneous areas and requires an empirical selection of the detail coefficient.
- The "Density" method is highly effective: it defines detailed objects more precisely and has a smaller error. However, it requires a large sliding-window size, and reducing the window increases the number of erroneously found, visually non-detailed areas. An important advantage of the "Density" method is that it does not require an empirical threshold but determines one automatically for each image.
- The results of the "White/Black Ratio" method are similar to those of the "Density" method, but it uses a smaller sliding window, which minimizes the error of false regions and localizes detailed areas more precisely. Its main drawback is the empirical choice of the detail coefficient for each image.
- The LPI-ICI method effectively detects locally stationary regions, but it is slow and performs poorly on noisy images and images with high detail.
Methods based on the search for human attention maps allow localizing areas that have a higher priority for the user. Most often these approaches are based on modelling the human visual system and on machine learning, and they have a high computational complexity.
In this paper, both computationally simple algorithms and neural networks are used as a block for localizing such areas.
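The exact "Density" method is not specified in this excerpt; as a minimal sketch of a sliding-window detail measure in its spirit (assumed: a window is "detailed" when the density of high-gradient pixels inside it exceeds a fraction threshold):

```python
def detail_mask(img, win=3, grad_thr=30, frac_thr=0.3):
    """Mark pixels whose surrounding window is 'detailed'.

    A simplified stand-in for the 'Density' method: a pixel is
    detailed when the fraction of high-horizontal-gradient pixels
    in its win x win neighbourhood exceeds frac_thr. Unlike the
    method described in the paper, the threshold here is fixed,
    not chosen automatically per image.
    """
    h, w = len(img), len(img[0])
    edges = [[1 if j + 1 < w and abs(img[i][j + 1] - img[i][j]) > grad_thr
              else 0 for j in range(w)] for i in range(h)]
    mask = [[0] * w for _ in range(h)]
    r = win // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            total = sum(edges[i + a][j + b]
                        for a in range(-r, r + 1) for b in range(-r, r + 1))
            if total / (win * win) > frac_thr:
                mask[i][j] = 1
    return mask

# Flat left half vs a sharp transition on the right.
mask = detail_mask([[0, 0, 0, 0, 255, 0]] * 6)
```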

Noise reduction and blur reduction operation
To reduce the effect of interference, the paper proposes combining the regions containing the object into a three-dimensional array. Layered data generation allows the detail masks obtained from one frame to be used for the next. The layered structure also makes it possible to refine the object boundaries and to speed up the primary data processing.
To find the correspondences between layers, we applied the SURF method [9]. Cross-check layers were assigned to each pair. On the basis of the resulting data set, a link tree is built; the stable branches form the basis. The operation of eliminating noise and blur is performed using the methods proposed in [10,11]. The filtering is combined with a boundary-restoration method that works as follows: a mask of blurred boundaries is drawn; the inner and outer borders are displaced pixel by pixel towards the centre; the united boundaries are taken as the final border. The operation is performed step by step. As the recovery algorithm, the approach to suppressing Gaussian blur is used. The stopping criterion is that the requirement on the minimum boundary width in pixels (n) is met for the maximum number of objects (k).
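The pixel-by-pixel displacement of the inner and outer borders can be sketched in 1-D as follows. This is a heavily simplified stand-in, assuming the blurred transition is represented by two scalar border positions rather than the full 2-D mask described in the paper:

```python
def restore_boundary(inner, outer, n=1):
    """Shrink a blurred boundary band towards its centre.

    inner/outer are 1-D pixel positions of the inner and outer
    borders of the blurred transition (a simplified 1-D stand-in
    for the boundary mask in the paper). Both are moved one pixel
    per step until the band is at most n pixels wide; the midpoint
    is taken as the restored edge.
    """
    steps = 0
    while outer - inner > n:
        inner += 1
        outer -= 1
        steps += 1
    return (inner + outer) // 2, steps

edge, steps = restore_boundary(inner=3, outer=9, n=1)
```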
An example of restoring a frame of video data is shown in Figure 3.

Conclusion
As a result of the conducted research, an approach to the analysis of video frames has been developed. The proposed approach makes it possible to eliminate the noise component while retaining the edges. The algorithm is based on the following steps: the search and selection of regions with boundaries in areas of overlap; finding reference points and correlating them; and a transformation that reduces the visible distortion.