Image Fusion Based on Principal Component Analysis and Slicing Image Transformation

Image fusion deals with the ability to integrate data from image sensors acquired at different instants when the source information is uncertain. Although there exist many techniques on the subject, in this paper we develop two original techniques, based on principal component analysis and slicing image transformation, to efficiently fuse a small set of noisy images. By contrast, neural data fusion requires a considerable number of corrupted images to produce the desired outcome, and also a considerable computing time because of the dynamics involved in the fusion process. In our approaches, the computation time is considerably smaller, which makes them appealing, for instance, in remote sensing or wireless sensor networks. Moreover, according to our numerical experiments, when our methods are compared against the neural data fusion algorithm, they present better performance.


Introduction
On one hand, data fusion consists of the integration of measurements from different sensors at different time instants when the original information is uncertain [1][2][3]. As a result, the fused data are more valuable for situation-awareness understanding because they complete notable information. Nowadays, and because of the availability of high technology and faster computers, data fusion has been applied in many research activities such as medical imaging, machine vision, remote sensing, vehicle driving, robotics, secure communication design, pattern recognition, diagnosis, job shop scheduling, etc. [1,3-11]. Generally speaking, image multi-sensor data fusion refers to the acquisition, processing and synergistic combination of information gathered by various knowledge sources to provide a better understanding of a phenomenon. It is worth noticing that data fusion presents a notable difficulty when the available data are incomplete, inconsistent or imprecise, and additionally when the information is corrupted by external additive noise [1,3,12-14].
Additionally, data fusion consists not only of the integration of sensor measurements but also of the integration of decisions, conflicting facts, and human intelligence assessments [15-17]. Even the human brain uses data fusion to make inferences about the surrounding environment by fusing data coming from sight, smell, hearing, taste, and touch [18]. Moreover, in multi-sensor data fusion theory there are many related techniques. For instance, the well-known Kalman filter can be viewed as a data fusion algorithm [18,19]. Fuzzy logic [20,21], genetic algorithms [22,23], and wavelet analysis [24] can also be sorted as artificial intelligence techniques for data fusion, and statistical methods and Bayesian theory can both be invoked for data fusion too [25,26], just to name a few. For complementary information, see, for instance, [27]. Finally, it is interesting to highlight that there exist other definitions of data fusion: a 'multilevel, multifaceted process handling the automatic detection, association, correlation, estimation, and combination of data and information from several sources' [27].
On the other hand, the design of new fault detection systems based on image or signal processing is an important and challenging task in many engineering applications, such as chemical processes, nuclear engineering, automotive systems, wind turbines, and so on [4,12,28], where, basically, the main motivation for designing new monitoring processes is to maintain the safe and proper operation of these plants. In the recent literature, the design of fault detection systems can be classified into two main categories (see, for instance, [12]): model-based and data-based methods, where some of the model-based techniques invoke statistical hypothesis testing for monitoring and diagnosis [12], among many other ideas. Nevertheless, these and other applications of image processing require a pre-processing stage (like, for instance, noise filtering, data fusion, noise analysis, and so on) before going further in diagnosis and/or control. Therefore, the typical pre-processing stage consists of a kind of noise data filtering; data fusion may be employed as a kind of noise filtering, although data fusion may also mitigate intrinsic sensor uncertainties too. Moreover, among today's different image processing methods (see, for instance, [29-34] and references therein), we especially focus on principal component analysis (PCA) along with slicing image transformation. The principal motivation to invoke PCA is to separate redundant information from the contaminated data [12,35-38], and slicing image transformation (or bit-plane slicing) is commonly employed to extract a significant amount of visual entropy from a given image [39]. Hence, the main objective of this paper is to analyze image data fusion by invoking the previously cited methods and to provide new numerical algorithms for noisy image data fusion under the situation of limited access to image sensor information. It is
worth noticing that data fusion by using a Lagrangian network converges to its optimal fusion only if we have access to a huge number of discrete measurements. In summary, taking into account that image data fusion can be realized by a proper linear combination of the acquired image samples at certain instants of time [1,2,40-43], the main objective of this paper is to present two novel numerical techniques on data fusion by using PCA and slicing image transformation. To support our designs, several numerical scenarios were prepared. According to our numerical experiments, our approaches present beneficial performance because they are efficient at separating the useful image data from the noisy information, showing both of them separately. The performance evaluation of our processes was carried out by utilizing the structural similarity index measurement (SSIM); see [44]. So far, and to our best knowledge, the combination of the PCA technique with the slicing image transformation has not been reported. Especially, and according to the SSIM index, our second approach presents better performance. This fact is illustrated in our numerical experiments, especially when the least significant bit-plane images are incorporated into the image fusion process.
The rest of this paper is organized as follows. Section 2 gives the standard theory and the main statement on image data fusion by using Lagrangian networks in the continuous-time domain. In Section 3, we present our main contribution on PCA image data fusion. The related numerical experiments are depicted in Section 4. Results and discussion are shown in Section 5; in this section, we also test the robustness of our design by proposing another example that uses three captured frames coming from a film and corrupted by noise, among other examples. Finally, some concluding remarks are stated in Section 6.
Notations. Throughout this paper, symbols in boldface represent vectors or matrices, un-boldface symbols denote scalar variables, and T denotes the transpose. E[·] represents the expected value of a random variable, ∇f(x) is the gradient of the given scalar function f(x), and ∇²f(x) is the Hessian matrix of the stated function f(x).

Image multi-sensor data system modeling
Consider an image multi-sensor system with K (K ≥ 2) image sensors, processed at the pixel level. Let the k-th image sensor measurement be given by [1]:

x_k(t) = a_k s(t) + n_k(t), k = 1, ..., K,

where a_k is the sensor scaling coefficient (or sensor gain), N is the number of sensor measurements, s(t) is the mother signal, and n_k(t) represents the additive white Gaussian noise at the k-th sensor with zero mean value. Recall that noise is a random variation of the image density [39]. Principal sources of Gaussian noise in digital images arise during the acquisition of the image by the sensor due to, for instance, poor illumination, high temperature, and the noise induced by the electronic circuit system. Furthermore, s(t) and n_k(t) are mutually independent random processes. See Fig. 1. Hence, in the mentioned scheme, a pixel-by-pixel image processing system is adopted. Then, by defining a = [a_1, ..., a_K]^T, x(t) = [x_1(t), ..., x_K(t)]^T (called the non-mean-deviation observation vector), and n(t) = [n_1(t), ..., n_K(t)]^T, the above system can be represented as follows:

x(t) = a s(t) + n(t).

The principal objective of data fusion consists in finding a weighting vector w = [w_1, ..., w_K]^T such that E[(z(t) − s(t))^2] is minimized, with

z(t) = w^T x(t).

In this way, z(t) represents the optimal fused data, so that the uncertainty (for instance, the set of additive noises) is optimally attenuated. See Fig. 1. Hence, z(t) is in fact a linear combination of the sensor measurements. In this way, the corresponding optimal 'weighting' vector w linearly combines the pixels coming from the images to be processed within the restricted class of linear optimization theory.
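To make the measurement model concrete, the following Python/NumPy sketch (not part of the original Matlab code; the gains and noise level are illustrative assumptions) simulates K = 3 sensor measurements x_k(t) = a_k s(t) + n_k(t) and verifies that an unbiased linear combination of the sensors already attenuates the additive noise:

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 3, 10_000                       # three sensors, N pixel samples
a = np.array([1.0, 0.9, 1.1])          # sensor gains a_k (illustrative values)
sigma = 0.2                            # std of the zero-mean Gaussian noise

s = rng.uniform(0.0, 1.0, N)           # mother signal s(t)
n = sigma * rng.standard_normal((K, N))
x = a[:, None] * s[None, :] + n        # x_k(t) = a_k * s(t) + n_k(t)

# An unbiased weighting: w_k = (1/K)/a_k, so that w^T a = 1
w = (1.0 / K) / a
z = w @ x                              # fused signal z(t) = w^T x(t)

err_single = np.mean((x[0] / a[0] - s) ** 2)   # one sensor, gain-corrected
err_fused = np.mean((z - s) ** 2)              # fused estimate
print(err_single, err_fused)
```

Because the weights satisfy w^T a = 1, the fused estimate is unbiased, and its mean-squared error is roughly a factor K below that of a single sensor.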
The above data fusion statement is also equivalent to [1]:

minimize f1(w) = w^T R w subject to a^T w = 1, (5)

where R = E[x(t) x^T(t)] is the correlation matrix of the observations, estimated in practice from N samples as

R_hat = (1/N) sum_{i=1}^{N} x(t_i) x^T(t_i). (6)

Then, by solving the above optimization problem, we give a solution to the data fusion statement. Obviously, it is assumed that the probabilistic density model of the noise is known (as previously mentioned, the Gaussian case in our setting). Moreover, the obtained optimal fused signal z(t) is an unbiased estimation of the mother signal s(t) [1]. A solution to the above optimization problem can be realized by invoking the following Lagrangian network [1,2]:

dw/dt = −R w + a y, dy/dt = 1 − a^T w. (7)

It is worth noticing that another option for the previous optimization problem is to minimize the sample-based objective f2(w) = w^T R_hat w under the same constraint [1], its corresponding Lagrangian network being

dw/dt = α(−R_hat w + a y), dy/dt = α(1 − a^T w), (9)

where α is a given positive constant. In fact, by manipulating the value of α, the convergence speed of the Lagrangian dynamics is also manipulated (see, for instance, [40]); y = y(t) is the Lagrange multiplier. Moreover, the following important statistical property is cited [1]: R_hat converges to E[x(t) x^T(t)] with probability one (w.p. 1) as N grows.
Finally, the optimal fusion solution is z*(t) = (w*)^T x(t), where w* is the minimizer of the stated problem and, by the convergence property (10) of [1], f2(w) converges to f1(w) with probability one (w.p. 1). From the previous properties, we can observe that the optimal solution is attainable only if we have access to a sufficiently large number of images to process.
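Since the problem of minimizing w^T R w subject to a^T w = 1 is an equality-constrained quadratic program, its minimizer has the standard Lagrange-multiplier closed form w* = R^{-1} a / (a^T R^{-1} a), which is the equilibrium the Lagrangian network converges to. The Python sketch below (with illustrative gains and noise levels; this is not the paper's Matlab code) builds the sample correlation matrix from N measurements and evaluates this closed form directly:

```python
import numpy as np

rng = np.random.default_rng(1)

K, N = 3, 20_000
a = np.array([1.0, 0.9, 1.1])              # sensor gains (illustrative)
noise_std = np.array([0.1, 0.3, 0.2])      # per-sensor noise levels (illustrative)

s = rng.standard_normal(N)                                  # mother signal
x = a[:, None] * s + noise_std[:, None] * rng.standard_normal((K, N))

# Sample correlation matrix R_hat = (1/N) sum_i x(t_i) x(t_i)^T
R = (x @ x.T) / N

# Closed-form minimizer of w^T R w subject to a^T w = 1
Ri_a = np.linalg.solve(R, a)
w_opt = Ri_a / (a @ Ri_a)

mse_opt = np.mean((w_opt @ x - s) ** 2)

w_eq = (1.0 / K) / a                        # naive unbiased equal weighting
mse_eq = np.mean((w_eq @ x - s) ** 2)
print(mse_opt, mse_eq)
```

The optimal weights favor the least noisy sensor, so the resulting mean-squared error is lower than that of the naive unbiased equal weighting.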
In summary, we have the following result [2,40]: assume that the Hessian ∇²f1(w) is positive definite; then, the Lagrangian network in (7) (or (9)) is stable in the Lyapunov sense and is globally convergent to an equilibrium point of (7) (or (9)), which corresponds to the unique optimal solution of the stated minimization problem.
Next, we comment on a key reflexion that justifies the use of PCA as an alternative data fusion technique. According to [1], the optimization problem stated in (5) follows after treating R as the covariance matrix of the observations, which is true if E[x(t)] = 0. This condition can be obtained if we produce the corresponding mean-deviation observation matrix from the non-mean-deviation observation matrix. In fact, this is exactly what the PCA technique realizes. Hence, by producing the mean-deviation matrix, we arrive at a sub-optimal data fusion by just using PCA. And because of this, the Lagrangian dynamics is not needed anymore. Obviously, this brings a potential improvement in computing cost.
In other words, by invoking (6) and extracting its noise information (by employing, for instance, the PCA method), we solve the unconstrained sub-optimal case of the cited image fusion objective. Finally, although data fusion by using Lagrangian networks assumes additive Gaussian noise in the sensor signals, our approach is not restricted by this assumption.

Introduction to principal component analysis
In signal processing, principal component analysis can be efficiently utilized as a mathematical tool able to separate noisy data into two sets: one containing its filtered version, and the other an estimation of the noise entropy affecting the original clean data. To summarize this mathematical method, let

X = [X_1 X_2 · · · X_N]

be the observation matrix, where X_1, ..., X_N ∈ R^k are the non-mean-deviation observation vectors. Then, its sample mean is given by

M = (1/N)(X_1 + X_2 + · · · + X_N).

The mean-deviation matrix is then stated as

B = [X̂_1 X̂_2 · · · X̂_N], where X̂_i = X_i − M.

Finally, the PCA covariance matrix is defined as

S = (1/(N − 1)) B B^T.

The essential PCA procedure consists in finding an orthogonal matrix P such that

P^T S P = D,

where D is a diagonal matrix with the eigenvalues of S ordered from the largest to the smallest, and so on. Then, a filtering image fusion is obtained by realizing the following linear combination:

NI = f_1 Im_1 + f_2 Im_2 + · · · + f_k Im_k, (24)

and the next represents the estimation of the noisy image data affecting the measured image information:

NI2 = K_1 Im_1 + K_2 Im_2 + · · · + K_k Im_k, (25)

where Im_1, Im_2, ..., Im_k are the noisy images to be processed and invoked to produce the PCA covariance matrix. Note that the representations in (24) and (25) are linear vector combinations, with f_1, f_2, and f_3, and K_1, K_2, and K_3 being the weighting elements.
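As a minimal illustration of this procedure (in Python/NumPy rather than the paper's Matlab; the synthetic gradient image and the noise level are assumptions made for the example), the sketch below stacks three noisy copies of an image as the rows of the observation matrix, forms the mean-deviation matrix B and the covariance S = BB^T/(N − 1), and uses the first and last principal components as the weights of the two linear combinations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic clean image (a gradient) and three noisy observations of it
h, w = 64, 64
clean = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
imgs = [clean + 0.1 * rng.standard_normal((h, w)) for _ in range(3)]

# Observation matrix X: row i is the i-th flattened image (pixels = samples)
X = np.stack([im.ravel() for im in imgs])          # shape (3, h*w)
M = X.mean(axis=1, keepdims=True)                  # sample mean
B = X - M                                          # mean-deviation matrix
S = (B @ B.T) / (X.shape[1] - 1)                   # 3x3 PCA covariance matrix

evals, evecs = np.linalg.eigh(S)                   # eigenvalues ascending
f = evecs[:, -1]                                   # first principal component
f = f / f.sum()                                    # normalize weights to sum to 1
k3 = evecs[:, 0]                                   # smallest component: noise direction

fused = sum(fi * im for fi, im in zip(f, imgs))        # filtered image fusion
noise_est = sum(ki * im for ki, im in zip(k3, imgs))   # noise estimation

mse_fused = np.mean((fused - clean) ** 2)
mse_single = np.mean((imgs[0] - clean) ** 2)
print(mse_fused, mse_single)
```

The first principal component captures the direction common to all three observations (the underlying image), so the weighted sum is a denoised fusion, while the smallest component mostly contains noise.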

Numerical experiments
Decomposing an image into its bit planes (see Figure 2) is useful for analyzing the relative entropy of the number of bits used to quantize an image [39]. The reconstruction is realized by multiplying the pixel intensity of the n-th bit plane by the constant 2^(n−1); this corresponds to converting the n-th significant binary bit into its decimal format. On the other hand, using the two most significant bit planes yields the most relevant image information. We use them in our next algorithms.
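A minimal Python sketch of bit-plane slicing and reconstruction (mirroring Matlab's bitget; the toy random image is an assumption made for the example) is:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy 8-bit image

# Bit-plane slicing: plane n holds the n-th significant bit (n = 1..8),
# mirroring Matlab's bitget(img, n)
planes = [(img >> (n - 1)) & 1 for n in range(1, 9)]

# Exact reconstruction: multiply plane n by 2^(n-1) and sum over all planes
recon = sum(p.astype(np.uint16) * (1 << (n - 1))
            for n, p in enumerate(planes, start=1))

# Approximation from the two most significant planes only (n = 8 and n = 7)
approx = planes[7].astype(np.uint16) * 128 + planes[6].astype(np.uint16) * 64
print(np.array_equal(recon, img))
```

Summing all eight weighted planes recovers the image exactly, while the two most significant planes alone give a coarse approximation whose per-pixel error is bounded by the six discarded bits (at most 63 gray levels).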
For easy reference and without loss of generality, we invoke three noisy images.

First approach on data fusion based on PCA
In this subsection we present our first approach on data fusion based on PCA and slicing image transformation. Given three noisy images, L1, L2 and L3, we obtain their two-most-significant-bit-plane slicing transformations La, Lb and Lc (we use the bitget Matlab command) and compute the first and third principal components by following the PCA algorithm previously commented:

NI = −(f1 La + f2 Lb + f3 Lc), (20)

followed by the normalization

NI = (NI − min(NI)) / (max(NI) − min(NI)), (21)

and

NI2 = K1 La + K2 Lb + K3 Lc.

Then NI and NI2 represent the obtained fused image (a filtered image version) and the noisy image estimation, respectively. We decided to use the slicing transformation in order to illustrate the potential of the method to be used, for instance, on compressed images. Moreover, the negative sign in (20) is inserted to avoid image inversion and is followed by a mathematical normalization (in equation (21)) to avoid image saturation. For comparison purposes, we also realize the average filtering:

NI3 = (1/3)(La + Lb + Lc).

Finally, for performance evaluation, we use the structural similarity index (we invoke the ssim Matlab command) between the corresponding processed image and the original one. See [44]. For the sake of simplicity, Appendix A.1 shows the corresponding Matlab code. By using the original image depicted in Figure 3, and its noise-corrupted images shown in Figure 4 (in these figures, the noisy images are corrupted by uncorrelated Gaussian noise of zero mean and variance 0.01), numerical experiment results are shown in Figures 5-6. A second round of numerical experiments is repeated by using the noisy images now affected by uncorrelated Poisson noise and shown in Figure 7. Figures 8-9 show the agreeing simulation results.
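The Matlab code lives in Appendix A.1 and is not reproduced here; the following Python sketch is a hypothetical reconstruction of the first approach under two assumptions: that La, Lb, Lc are the images rebuilt from the two most significant bit planes, and that the eigenvector sign is fixed to be positive instead of applying the explicit negative sign of (20) (the subsequent normalization makes the two choices equivalent). Mean-squared error replaces Matlab's ssim to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(4)

# Clean 8-bit test image (a horizontal gradient) and three noisy copies
h, w = 64, 64
clean = (np.tile(np.linspace(0.0, 1.0, w), (h, 1)) * 255).astype(np.uint8)

def corrupt(im):
    # additive zero-mean Gaussian noise, clipped back to 8-bit range
    noisy = im.astype(float) / 255.0 + 0.1 * rng.standard_normal(im.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255).astype(np.uint8)

def two_msb(im):
    # keep only the two most significant bit planes (bits 8 and 7)
    return (im & 0b11000000).astype(float)

L1, L2, L3 = (corrupt(clean) for _ in range(3))
La, Lb, Lc = (two_msb(x) for x in (L1, L2, L3))

# PCA over the three sliced images (pixels act as the samples)
X = np.stack([La.ravel(), Lb.ravel(), Lc.ravel()])
B = X - X.mean(axis=1, keepdims=True)
S = (B @ B.T) / (X.shape[1] - 1)
evals, evecs = np.linalg.eigh(S)

f = evecs[:, -1]                 # first principal component -> fusion weights
f = f * np.sign(f.sum())         # fix the arbitrary eigenvector sign

NI = f[0] * La + f[1] * Lb + f[2] * Lc
NI = (NI - NI.min()) / (NI.max() - NI.min())      # normalization, cf. eq. (21)

NI3 = (La + Lb + Lc) / 3.0                        # average filtering, for comparison
NI3 = (NI3 - NI3.min()) / (NI3.max() - NI3.min())

ref = clean.astype(float) / 255.0
print(np.mean((NI - ref) ** 2), np.mean((NI3 - ref) ** 2))
```

The normalized fused image stays in [0, 1] and remains strongly correlated with the clean reference despite the coarse two-bit-plane quantization.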

Second approach on data fusion based on PCA
In this subsection we present our second approach on data fusion based on PCA and slicing image transformation. Given three noisy images, L1, L2 and L3, we again obtain their two-most-significant-bit-plane slicing transformations La, Lb and Lc and compute the first and third principal components by using the PCA method. Then, we propose the combinations given in equations (25) and (26) (see Appendix A.2). Once again, NI and NI2 represent the obtained fused image (a filtered image version) and the noisy image estimation, respectively. Over again, we decided to employ the slicing transformation in order to illustrate the potential of the method to be used on compressed images. Then, the negative sign in (25) is inserted to avoid image inversion and is followed by a mathematical normalization (in equation (26)) to avoid image saturation. For comparison purposes, once more we realize the related average filtering:

Results and Discussion
Nonlinear convex programming by using Lagrangian networks in the sense of [2,41,42,45-48] is stated in the continuous-time domain. One of its main applications is linked to data fusion when linear equality constraints are imposed on the optimization objective. However, taking into account that digital devices are usually employed for the physical realization, translating the above statement into the discrete-time domain requires special modifications due to the following facts:
• Stability of discrete-time systems depends on the sampling rate of the employed digital device.
• Stability of the Lagrangian network dynamics will depend on the positive definiteness of ∇ 2 f 1 (w).
• Positive definiteness of ∇ 2 f 1 (w) will depend on the available data to process and captured in R.
• The available captured data used to produce R may cause the Lagrangian network dynamics to be non-convergent.
From the above, data fusion by using Lagrangian networks results inappropriate if the number of sensors is small. Hence, our approaches are simpler and more effective due to the following facts:
• They produce a fused image.
• They provide an estimation of the noise affecting our image sensors.
The second one is attractive because, by knowing an estimation of the noise, we can figure out its cause. For instance, speckle noise can be induced by a kind of radiation, and so on.
On the other hand, in some applications, the set of images to be processed arrives from the same image sensor, which means that the image samples can be considered perfectly aligned. But if different image sensors are used, or if the scene presents slightly moving agents, then a perfect alignment among the image pixels is not true anymore. With this issue in mind, we prepared the following experiment. From a film, three different screen captures were obtained (see Figure 17). These images, in fact, capture a slightly moving environment. Moreover, because the captures were realized by using the classical selecting, copying and pasting method into an image file, and then adding, for instance, uncorrelated Gaussian noise, spatial variation among them is also present. The numerical results by using, for example, our first approach, are shown in Figures 18-19. Clearly, this experiment shows the robustness of our approach.
To extend our numerical examples, we slightly modify our second approach by replacing the slicing images La, Lb, and Lc (see Appendix A.2). In this way, by adding the least significant slicing images, we increase the noise information fed to the data fusion procedure. By employing the clean image given in Figure 20 and its noisy samples shown in Figure 21, the obtained results are shown in Figures 22 and 23. Reading the structural similarity index measurement (SSIM), our approach presents better performance. Moreover, the histogram corresponding to the third principal component gives important information about the noise in the images; for instance, the average gray scale of the image is the most affected by the related noise. To conclude our numerical realizations, Figure 25 shows the corresponding first and third principal components obtained by invoking our second approach (given in Appendix A.2) on the noisy images shown in Figure 24.
We selected this experiment to illustrate the ability of the slicing process to highlight the influence of the noise on the rocket instead of on the image background. This behavior follows from the nature of the algorithm.
In addition, compared with the data fusion methods reviewed in the Introduction, our design has the following advantages. In comparison to the method given in [18], our proposal does not require clustering or update prediction rules. In contrast to the fuzzy logic technique shown in [20,21], our strategy does not involve implementing inference rules, implication, aggregation, and defuzzification processes. These tasks are not always easy to realize, especially writing down the corresponding input and output membership functions. On the other hand, the genetic algorithms granted in [22,23] basically need a kind of intelligent exploitation of a random search to solve the optimization part. This may consume notable machine computation time. Lastly, the statistical and Bayesian designs given in [25,26] essentially require decision rules based on probabilistic loss functions requiring a training process. In our approaches, we just follow a deterministic solution by employing PCA; in our opinion, it is easier to program. On the other hand, image processing by using the wavelet transformation requires, as in Fourier theory, the following steps:
• Compute the transform.
• Alter the transform.
• Compute the inverse transform.
Hence, to essentially capture the design philosophy of this technique, one first needs some basic knowledge of the Fourier transform [39]. Therefore, we can say that our design and the wavelet transformation tool are two completely different approaches, each with its own advantages and disadvantages depending on the application, because data fusion has the main aim of obtaining information of greater quality from the supplied image information, where the exact definition of "greater quality" will especially depend upon the application [49]. Furthermore, by using the data fusion stated in [1], the image fusion obtained from the Lena Gaussian noisy images is shown in Figure 26. It is clear that this technique does not converge due to the small number of images to process. We highlight that the computation time to obtain this result was around 75 minutes on a common personal computer; hence, the PCA technique is substantially faster.
To close this section, although there are some PCA solutions to image fusion, our main approach presents an alternative solution. For instance, in [50], the proposed PCA algorithm is used in combination with the intensity-hue-saturation transform; however, that technique was not conceived as an image fusion method to attenuate the external noise effect on the images to be fused. In [51], some other PCA procedures are commented on to integrate the geometric detail of a given high-resolution panchromatic image. Finally, in [52], more PCA techniques for image applications are documented. However, the combination of PCA with the slicing image transformation has not been reported.

Conclusions
In this paper we have presented two data fusion methods based on PCA and slicing image transformation. According to our numerical experiments, our approaches are able to separate the noise from the fused image. This is useful to study, for instance, the environmental noise affecting our sensors; in this way, we can detect, for instance, some kind of external radiation. Moreover, in our approaches, the computation time is considerably smaller in comparison to other image fusion techniques. This makes them appealing for increasing feasibility, for instance, in remote sensing or wireless sensor networks. Finally, we have used a neuronal Lagrangian technique to develop image fusion by using the given Gaussian noisy Lena images, and the obtained result did not converge. Therefore, our PCA-based approach performs better.

MATEC Web of Conferences 210, 04020 (2018). https://doi.org/10.1051/matecconf/201821004020. CSCC 2018.


Figure 1. Processing image data fusion technique based on K image sensors at the pixel level.

Figure 2. Slicing bit-plane representation of an 8-bit image.

Figure 4. The uncorrelated Gaussian noisy images for processing.

Figure 6. The noisy image estimation affecting our original image and its histogram.

Figure 7. The uncorrelated Poisson noisy images for processing.

Figure 9. The noisy image estimation affecting our original image and its histogram.
NI3 = (1/3)(La + Lb + Lc). (29)

Lately, for performance evaluation, we keep using the structural similarity index between the corresponding processed image and the original one. Appendix A.2 shows the corresponding Matlab code. By using the original image depicted in Figure 10, and its noise-corrupted images shown in Figure 11 (in these figures, the noisy images are corrupted by uncorrelated Gaussian noise of zero mean and variance 0.01), numerical experiment results are shown in Figures 12-13. A second round of numerical experiments is repeated by using the noisy images now affected by uncorrelated Poisson noise and shown in Figure 14. Figures 15-16 show the agreeing simulation results.

Figure 11. The uncorrelated Gaussian noisy images for processing.

Figure 13. The noisy image estimation affecting our original image and its histogram.

Figure 16. The noisy image estimation affecting our original image and its histogram.

Figure 18. The obtained image fusion by using our first approach.

Figure 19. The noisy image estimation affecting our original image and its histogram.

Figure 21. The uncorrelated speckle noisy images with multiplicative noise 0.7 for processing.

Figure 23. The noisy image estimation affecting our original image and its histogram.

Figure 24. The uncorrelated Poisson noisy images for processing.

Figure 25. The obtained image fusion by using our second approach: the related first and third principal components.

Figure 26. Data fusion by using the Lena noisy images and the Lagrangian neuronal method stated in [1].