Multimodal medical image fusion using Butterworth high pass filter and cross bilateral filter (ICAET 2016)

Multimodal medical image fusion is a prominent area of interest. Medical image fusion combines images from different modalities to improve imaging quality and reduce redundant information. Its main aim is to obtain a fused image of better quality for diagnostic purposes. Image fusion improves the capability and reliability of images; in a medical context, the sharpness of the fused image is the basic criterion of quality. In this paper, the quality of the fused image is enhanced by combining a Butterworth high pass filter with a cross bilateral filter. The cross bilateral filter is a nonlinear filter that takes both range-domain and spatial-domain filtering into account; it is an edge-preserving filter that fuses images by taking a weighted average of source image pixels. To obtain better-quality fused results, the input images are first sharpened using a high-order Butterworth high pass filter, and the sharpened images are then fused with the cross bilateral filter. Results show that the modified image fusion framework is effective in preserving the fine details, information content and contrast of the image.


Introduction
Image fusion is the process of combining two registered images of the same scene into a single fused image that is useful for both human observers and machine perception. It finds major applications in medical diagnosis, robotics, military, remote sensing and surveillance, owing to its advantages of improved reliability and capability. Image registration is the essential prior step of image fusion: since images are taken at different times, with different modalities or from different viewpoints, they differ in localization, resolution and dimensions, and must first be aligned spatially [1].
Image fusion can take place in both the spatial and frequency domains. It is difficult to gather enough information from a single image. For example, when images of the same scene are taken at different focus settings by shifting the focal plane of a camera, combining them is called multifocus image fusion [2]. Likewise, multispectral image fusion combines images acquired at different wavelengths. A particularly important type is multimodal image fusion: the fusion of different modalities to conglomerate the necessary information. Magnetic Resonance Imaging (MRI), Computerized Tomography (CT), Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) are the modalities used in multimodal medical image fusion [3]. Depending on the level of abstraction, image fusion can be performed at pixel level, feature level or decision level [4].
The paper is organized as follows: Section 2 reviews previous work. Section 3 describes the modified methodology and briefly illustrates the cross bilateral filter and the Butterworth filter. Results and discussion, along with a description of the statistical parameters, are given in Section 4. Conclusions are drawn in Section 5.

Literature review
Image fusion can be performed at signal, pixel, feature and symbol levels. Most image fusion algorithms operate at pixel level, which is very commonly used in multisensor and multimodal image fusion [3,4]. Morphological operators such as opening, closing, erosion, dilation and the top-hat transformation, described in [5], are useful for detecting spatially relevant information. The morphological pyramid repeatedly filters and samples the image, decreasing its resolution and size [6]. Principal Component Analysis (PCA) is an advanced fusion algorithm that computes the eigenvalues of the image matrix [7]. Multiresolution approaches such as the discrete wavelet transform (DWT) and curvelet transform, described in [8], offer both spatial and spectral resolution. The wavelet transform combined with the Fuzzy C-Means clustering algorithm is a better approach, as it also greatly improves the Peak Signal-to-Noise Ratio (PSNR) [9]. Similarly, the dual-tree complex wavelet transform (DTCWT) is shift invariant and, combined with PCA, gives an improved fusion approach [10]. A fusion scheme using Daubechies complex wavelet transforms (DCxWT), described in [11], achieves the highest edge strength owing to its phase information and shift sensitivity. The ridgelet and ripplet transforms are modifications of the contourlet transform that eradicate the singularities present in 2D images [12,13]. An artificial neural network (ANN) with back-propagation for weight updating is useful in detecting cancer [14]. Multiple data present in different modalities can best be analysed with Independent Component Analysis (ICA) and Independent Vector Analysis (IVA) [15].
C. Tomasi and R. Manduchi [16] proposed the bilateral filter (BF), a nonlinear filter that combines images based on both photometric similarity and geometric closeness. The BF has many applications in image denoising [17], video fusion [18], tone mapping, etc. The context bilateral filter proposed in [19] is used for image denoising, as its range filter depends on context rather than on gray levels. The cross bilateral filter (CBF), a variant of the BF modified by B. K. Shreyamsha Kumar, fuses images using weights computed from the detail images [2].
The Butterworth high pass filter (BHPF), used to preserve image lines and edges, is better than the ideal high pass filter, as described in [20]. The Butterworth low pass filter (BLPF) is used to eliminate noise from ECG signals [21]. The sharpness of the BHPF can be controlled by its order, which is not possible in the Gaussian high pass filter (GHPF) [22,23].

Modified image fusion framework
The modified image fusion framework combines the Butterworth high pass filter and the cross bilateral filter.
The input source images are first sharpened using a high-order, low cut-off frequency Butterworth filter [22]. The sharpened source images are the inputs of the cross bilateral filter. Taking both gray-level similarity and geometric closeness into account, the CBF uses one input image to shape the kernel and filters the other source image. Weights are computed from the detail image, which is the difference between an input image and its CBF output [2]. The framework is shown below for two images P and Q:
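As a minimal sketch, the two-step framework above can be expressed as follows (assuming registered grayscale images as 2-D arrays; `sharpen` and `fuse` are placeholder callables standing in for the Butterworth sharpening and CBF-based fusion detailed in the following subsections):

```python
import numpy as np

def fuse_pipeline(P, Q, sharpen, fuse):
    """Modified fusion framework: sharpen both sources, then fuse them.

    P, Q    : registered grayscale source images (2-D arrays)
    sharpen : callable applying Butterworth high pass sharpening
    fuse    : callable applying cross-bilateral-filter-based fusion
    """
    P_sharp = sharpen(P)           # step 1: sharpen source P
    Q_sharp = sharpen(Q)           # step 1: sharpen source Q
    return fuse(P_sharp, Q_sharp)  # step 2: fuse the sharpened sources

# Example wiring with trivial stand-ins (identity sharpening, mean fusion):
F = fuse_pipeline(np.zeros((4, 4)), np.ones((4, 4)),
                  sharpen=lambda x: x,
                  fuse=lambda a, b: (a + b) / 2.0)
```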

Cross bilateral filter
The bilateral filter is a smoothing filter that nonlinearly combines image values while maintaining edges.
It filters images by taking a weighted average of pixels, similar to Gaussian convolution; but whereas the Gaussian filter (GF) performs only domain filtering, the BF performs both range and domain filtering [16]. The weights thus also depend on image intensity, so the BF can be used for both color and gray-scale images. Because pixels on opposite sides of an edge differ strongly in intensity, they receive small range weights, and edges remain preserved rather than impaired.
For source images P and Q, the cross bilateral filter outputs are given as

$$P_{CBF}(m) = \frac{1}{W_P(m)} \sum_{n \in S} G_{\sigma_s}(\lVert m-n \rVert)\, G_{\sigma_r}(\lvert Q(m)-Q(n) \rvert)\, P(n) \qquad (1)$$

$$Q_{CBF}(m) = \frac{1}{W_Q(m)} \sum_{n \in S} G_{\sigma_s}(\lVert m-n \rVert)\, G_{\sigma_r}(\lvert P(m)-P(n) \rvert)\, Q(n) \qquad (2)$$

where $W_P(m) = \sum_{n \in S} G_{\sigma_s}(\lVert m-n \rVert)\, G_{\sigma_r}(\lvert Q(m)-Q(n) \rvert)$ is a normalization constant (and similarly $W_Q$), and $S$ is a neighbourhood window around pixel $m$. Here

$G_{\sigma_s}(\lVert m-n \rVert) = \exp\!\left(-\lVert m-n \rVert^2 / 2\sigma_s^2\right)$ is the domain (spatial) filtering function,

$G_{\sigma_r}(\lvert \cdot \rvert) = \exp\!\left(-\lvert \cdot \rvert^2 / 2\sigma_r^2\right)$ is the range filtering function,

and $\lVert m-n \rVert$ is the Euclidean distance between pixel locations $m$ and $n$. The parameters $\sigma_s$ and $\sigma_r$ control the filtering of the bilateral filter: increasing $\sigma_r$ smooths image features and also decreases the mean square error (MSE), while with increasing $\sigma_s$ the filter becomes closer to a Gaussian blur [17]. The CBF output is subtracted from the original image to obtain the detail images of P and Q: $P_D = P - P_{CBF}$ and $Q_D = Q - Q_{CBF}$. The eigenvalues of a covariance matrix, obtained by considering a window of size $w \times w$ around the detail coefficients $P_D$ and $Q_D$, are used to calculate the horizontal and vertical detail strengths; the sum of these detail strengths gives the weights of the detail images [2]. The fused image is then obtained as the weighted average of the source images:

$$F(m) = \frac{P(m)\, W_P^{d}(m) + Q(m)\, W_Q^{d}(m)}{W_P^{d}(m) + W_Q^{d}(m)} \qquad (3)$$

where $W_P^{d}$ and $W_Q^{d}$ are the detail-strength weights.
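A straightforward (unoptimized) implementation of the cross bilateral filter described above might look like the following — a sketch assuming grayscale float images of equal size, with illustrative parameter defaults; the range kernel of the filtered image P is shaped by the other source image Q:

```python
import numpy as np

def cross_bilateral_filter(P, Q, radius=5, sigma_s=1.8, sigma_r=30.0):
    """Cross bilateral filter: smooths P using a range kernel taken from Q.

    Spatial kernel: Gaussian on the Euclidean distance ||m - n|| (sigma_s).
    Range kernel:   Gaussian on the intensity difference |Q(m) - Q(n)| (sigma_r).
    """
    P = P.astype(np.float64)
    Q = Q.astype(np.float64)
    h, w = P.shape
    pad = radius
    Pp = np.pad(P, pad, mode='reflect')
    Qp = np.pad(Q, pad, mode='reflect')

    # Precompute the spatial (domain) Gaussian over the window offsets.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    G_s = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    out = np.zeros_like(P)
    norm = np.zeros_like(P)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Shifted neighbours of every pixel at offset (dy, dx).
            Pn = Pp[pad + dy: pad + dy + h, pad + dx: pad + dx + w]
            Qn = Qp[pad + dy: pad + dy + h, pad + dx: pad + dx + w]
            # Spatial weight from the offset, range weight from Q.
            wgt = G_s[dy + radius, dx + radius] * \
                np.exp(-(Q - Qn)**2 / (2.0 * sigma_r**2))
            out += wgt * Pn
            norm += wgt
    return out / norm

# Detail images, as used to compute the fusion weights in [2]:
# P_D = P - cross_bilateral_filter(P, Q)
# Q_D = Q - cross_bilateral_filter(Q, P)
```

The double loop runs over window offsets rather than pixels, so the cost is O(radius²) vectorized passes over the image rather than a Python loop per pixel.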

Butterworth high pass filter
The main notion behind filtering is to obtain filtered images whose results are more useful for medical applications. Image filtering is useful for removing noise and for enhancing image details such as edges or lines. A low pass filter (LPF) smooths the image by removing high-frequency components, while a high pass filter (HPF) is used for sharpening [20]. For sharpening purposes, the ideal HPF (IHPF) has a sharp discontinuity that produces an unwanted ringing effect. The BHPF has no sharp discontinuity and therefore produces far fewer ringing artifacts; its frequency response is maximally flat. The Gaussian HPF is smoother than both the BHPF and the IHPF; the BHPF is the transition between the IHPF and the GHPF. The BHPF has a gradual attenuation profile in which the cut-off and the slope can be adjusted independently [22]. The transfer function of the BHPF is

$$H(u,v) = \frac{1}{1 + \left[D_0 / D(u,v)\right]^{2n}} \qquad (4)$$

where $D(u,v)$ is the distance of frequency $(u,v)$ from the centre of the frequency rectangle, $D_0$ is the cut-off frequency, and $n$ is the order of the filter. It passes the frequencies above $D_0$ and rejects the lower frequencies.
In the BHPF, both the cut-off frequency and the order can be changed to yield a variety of results. As the cut-off frequency increases, the filter becomes smoother and the resulting filtered images milder [23]. This effect is not very pronounced, because the order can be controlled independently to obtain sharper images. In the GHPF, however, the order cannot be changed, so an increase in cut-off frequency simply results in more smoothing. Hence, images filtered with the BHPF are superior in quality to those filtered with the GHPF.
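The BHPF can be applied in the frequency domain in a few lines of numpy. As one common convention (an assumption, since the paper does not spell out how sharpening is performed), the high-pass result is added back to the input to produce the sharpened image (high-boost sharpening):

```python
import numpy as np

def butterworth_sharpen(img, D0=50.0, n=5):
    """Sharpen an image with a Butterworth high pass filter.

    Transfer function: H(u, v) = 1 / (1 + (D0 / D(u, v))^(2n)),
    where D(u, v) is the distance from the centre of the shifted
    frequency rectangle, D0 the cut-off frequency, n the order.
    """
    img = img.astype(np.float64)
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U**2 + V**2)
    D[D == 0] = 1e-8                      # avoid division by zero at DC
    H = 1.0 / (1.0 + (D0 / D)**(2 * n))  # Butterworth high pass response

    # Filter in the frequency domain: FFT -> multiply by H -> inverse FFT.
    F = np.fft.fftshift(np.fft.fft2(img))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return img + high                     # high-boost: add detail back
```

A high order (e.g. n = 5) with a low cut-off frequency gives a steep transition, which is the configuration used to sharpen the source images in this framework.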
Figure 1 shows the effect of increasing the cut-off frequency of the BHPF on an MRI image.

Results and discussion
The quality of the fused images is evaluated using the following statistical parameters.

1) Average Pixel Intensity (Mean)
The average pixel intensity (API) is given as

$$API = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} f(i,j) \qquad (5)$$

where $m \times n$ is the size of the image and $f(i,j)$ is the pixel intensity of the fused image at pixel location $(i,j)$. The mean measures contrast; the closer it lies to the middle of the dynamic range, the higher the contrast.

2) Standard Deviation
Standard deviation is the square root of the variance, the second central moment about the mean:

$$SD = \sqrt{\frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left(f(i,j) - API\right)^2} \qquad (6)$$

It signifies the spread in the data; the higher the standard deviation, the higher the contrast.

3) Entropy
Entropy is the average information contained in an image and signifies the quality of the fused image: the higher its value, the higher the quality. However, entropy does not distinguish changes in the information content of the fused image. It is given as

$$H = -\sum_{k=0}^{L-1} p_k \log_2 p_k \qquad (7)$$

where $L$ is the number of gray levels in the image and $p_k$ is the probability of intensity value $k$.

4) Average Gradient
Average gradient is a measure of the sharpness and clarity of a fused image; it captures the contrast between the minute details of the image. It is given by

$$AG = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \sqrt{\frac{\Delta f_x^2(i,j) + \Delta f_y^2(i,j)}{2}} \qquad (8)$$

where $m \times n$ is the region size around pixel $(i,j)$, and $\Delta f_x$ and $\Delta f_y$ are the differences in pixel intensity along the x and y directions.
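The four metrics can be computed as below — a sketch assuming an 8-bit gray-level range (L = 256, one common choice the paper does not state explicitly) and simple forward differences for the gradients:

```python
import numpy as np

def fusion_metrics(f, L=256):
    """Quality metrics for a fused image f (2-D array):
    average pixel intensity, standard deviation, entropy, average gradient."""
    f = f.astype(np.float64)
    api = f.mean()                                # average pixel intensity
    sd = np.sqrt(((f - api)**2).mean())           # standard deviation

    # Entropy from the normalized gray-level histogram.
    hist, _ = np.histogram(f, bins=L, range=(0, L))
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins (0*log 0 = 0)
    entropy = -np.sum(p * np.log2(p))

    # Average gradient from forward differences, cropped to equal shape.
    dx = np.diff(f, axis=1)[:-1, :]               # horizontal differences
    dy = np.diff(f, axis=0)[:, :-1]               # vertical differences
    ag = np.mean(np.sqrt((dx**2 + dy**2) / 2.0))

    return api, sd, entropy, ag
```

A perfectly flat image gives SD, entropy and AG of zero, which is a quick sanity check for the implementation.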
The parameters of the BHPF are $n = 5$ and $D_0 = 50$; for the CBF, the parameters are a neighbourhood window of 11 × 11, $\sigma_s = 1.9$ and $\sigma_r = 30$. Simulations are performed on four pairs of CT-MRI images; for reasons of space, results for two of the four pairs are shown below. It can be inferred from Table 1 that, for both datasets, the modified method gives higher values of API, SD, entropy and AG than the algorithms described in [2,24]. The increase in API and SD signifies that the fused images have higher contrast and higher information content, which is due to the sharpening of the input images. The large difference in average gradient clearly signifies that the improved method has much better clarity. Figure 5 shows a comparison graph of the modified image fusion framework and the algorithms described in [2,24] for Dataset 2; the modified method is superior, with higher values of the qualitative parameters.

Conclusions
For better quality of the fused results, the input images are sharpened using a high-order Butterworth high pass filter and then fused using the cross bilateral filter.
Results show that the modified image fusion framework is effective in preserving the brightness, fine details, information content, texture and contrast of the image. The modified method shows better results in terms of both visual quality and quantitative parameters.

Figure 2. Effect of cut-off frequencies of the BHPF on sharpening

Table 1. Qualitative comparison of fused images of Dataset 1 and Dataset 2

Table 2. Qualitative comparison of fused images of Dataset 1 by varying the range filtering parameter

The improved method also focuses on the tiny details of the images. For Dataset 1, comparison tables are formulated by varying the value of the range-domain filtering parameter $\sigma_r$ and the kernel window. Taking $\sigma_s = 1.8$ and kernel window = 11 × 11, increasing $\sigma_r$ from 10 to 300 leads the cross bilateral filter towards a Gaussian blur; this blurring decreases the values of entropy and average gradient, as shown in Table 2. Taking $\sigma_s = 1.8$ and $\sigma_r = 30$ and varying the kernel window size from 3 × 3 to 13 × 13, Table 3 compares the values of the statistical parameters; 11 × 11 is the best neighbourhood window size for calculating the weight coefficients. Comparisons are tabulated for a standard dataset.

Table 3. Qualitative comparison of fused images of Dataset 1 by varying kernel window size