Using Statistical Algorithms for Image Reconstruction in EIT

Image reconstruction in electrical impedance tomography (EIT) is an ill-posed inverse problem. To obtain quantitative information on the change in conductivity, it is preferable to use a nonlinear model in the differential imaging solution. Statistical methods such as PCR, PLSR, the elastic net, LARS and SVR were used to reconstruct the image. The discussed techniques can be applied to the problem of electrical tomography, where the algorithms are used to identify unknown material coefficients.


Introduction
This article proposes algorithms based on statistical methods to obtain more accurate and stable reconstruction results when solving the inverse problem in electrical impedance tomography (EIT) [3,8,13-16]. In EIT, electrical voltages are applied to the object through a set of electrodes attached to its surface, and the resulting potentials are measured. The object's conductivity is reconstructed on the basis of the known voltages and measured potentials, which requires accurate modelling [6,11,17,18,20]. EIT thus deals with an inverse problem: given the measured voltages on the electrodes, the conductivity distribution is estimated by means of an image reconstruction algorithm. To reconstruct the image in electrical impedance tomography, PCR, PLSR, the elastic net, LARS and SVR were used [1,4,5,7,9,10,19,21-24]. This work gives promising results and opens a new horizon for solving practical problems. The main objective of tomography is to perform image reconstruction. During the measurements, it can be seen that the values measured on some electrodes are strongly correlated (due to the way the measurement is taken); in this case we face a multicollinearity problem [2,12,13].

Problem definition
Let the linear system be described by the state equation

$Y = X\beta + \varepsilon,$     (1)

where $Y \in \mathbb{R}^n$ and $X \in \mathbb{R}^{n \times (k+1)}$ denote the observation matrices of the response and input variables respectively, and $\beta \in \mathbb{R}^{k+1}$ denotes the vector of unknown parameters. When the linear model (1) contains the intercept, the first column of the matrix $X$ is a column of ones. The object $\varepsilon \in \mathbb{R}^n$ in the linear system (1) represents a sequence of disturbances, usually defined as a vector of independent, identically distributed random variables with normal distribution $N(0, \sigma^2 I_n)$, where $I_n$ is an identity matrix. The classical least squares method relies on identification of the unknown parameters by minimising $\|Y - X\beta\|^2$; the best linear estimator of the unknown parameters $\beta$ is then

$\hat{\beta} = (X^T X)^{-1} X^T Y.$     (2)

A problem often arises when $X^T X$ is singular.
The main goal of tomography is to perform image reconstruction. During measurements it can be seen that the values measured on some electrodes are strongly correlated; in this case we have a multicollinearity problem. When the independent variables (predictors) are correlated (collinear), the matrix $X^T X$ tends to a singular matrix. Using the least squares method, we then obtain large absolute values for some estimators of the unknown parameters, and predictions based on such a model are unstable.
The most common approach relies on reducing the set of input variables (deleting those predictors which are involved in the multicollinearity). We then face the problem of deciding which predictor variables should be removed.
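The instability can be seen in a small numerical experiment. The sketch below (NumPy; all data and variable names are illustrative) builds two almost identical predictors, as with strongly correlated electrode measurements, and shows that $X^T X$ becomes nearly singular while the individual least-squares coefficients are poorly determined, even though their sum is still estimated well:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Two strongly correlated predictors (as with neighbouring electrodes).
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)          # x2 is almost identical to x1
X = np.column_stack([np.ones(n), x1, x2])    # first column of ones = intercept
beta_true = np.array([1.0, 2.0, 2.0])
Y = X @ beta_true + 0.1 * rng.normal(size=n)

# X^T X is nearly singular, so its condition number is huge ...
cond = np.linalg.cond(X.T @ X)

# ... and while the sum beta_1 + beta_2 is well determined, the individual
# least-squares estimates of the two collinear coefficients are unstable.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
```

Only the well-conditioned directions (the intercept and the sum of the two collinear coefficients) are reliably recovered here; the difference between the two coefficients is essentially arbitrary.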

PCR and PLSR
From the singular value decomposition (SVD) the data matrix can be presented as $X = U D V^T = T P^T$, where $T = U D$ is the matrix of scores and the matrix of loadings $P$ is the matrix of the right singular vectors, $P = V$. In PCR we use the scores to explain the linear regression (1); because the scores are orthogonal, there is no multicollinearity problem. Analysing the root mean square error of prediction, we usually approximate the data matrix $X$ by an $a$-number of principal components, $X \approx T_a P_a^T$. Next, we regress the response variable $Y$ on the scores $T_a$ instead of on $X$. From (3) and the unitarity of $V$, the unknown parameters in the linear model (1) are estimated as

$\hat{\beta} = P_a (T_a^T T_a)^{-1} T_a^T Y.$     (5)

In practice PCR is performed on the mean-centred data matrix $X$ and the intercept is calculated afterwards.
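The estimator above can be implemented directly from the SVD. The following is a minimal sketch (NumPy; the function and variable names are illustrative, and a single response vector is assumed):

```python
import numpy as np

def pcr_fit(X, Y, a):
    """Principal component regression keeping `a` components
    (sketch; assumes a <= rank of the centred data matrix)."""
    x_mean, y_mean = X.mean(axis=0), Y.mean()
    Xc, Yc = X - x_mean, Y - y_mean                 # mean-centre, as in the text
    U, d, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :a] * d[:a]                            # scores  T_a = U_a D_a
    P = Vt[:a].T                                    # loadings P_a = V_a
    # Regress Y on the orthogonal scores: T_a^T T_a is diagonal (= D_a^2),
    # so there is no multicollinearity problem in this regression.
    gamma = (T.T @ Yc) / (d[:a] ** 2)
    beta = P @ gamma                                # back to the original predictors
    intercept = y_mean - x_mean @ beta              # intercept recovered afterwards
    return intercept, beta
```

With collinear predictors, choosing $a$ smaller than the number of columns discards the ill-conditioned directions, which is what removes the multicollinearity problem.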
The prediction based on model (1) is $\hat{Y} = X \hat{\beta}$, where $\hat{\beta}$ is the vector of estimators of the unknown parameters $\beta$ given by formula (5).
Just like PCR, partial least squares regression (PLSR) defines latent variables to explain the relations in the linear model (1). In PCR the principal components are chosen to explain the variance of the data matrix $X$ and come from the singular value decomposition. In PLSR the components (called latent variables) are chosen to capture as much as possible of the covariance between $X$ and $Y$. Like PCR, PLSR is performed on the mean-centred data matrix $X$ and the intercept is calculated afterwards. The latent variables are calculated iteratively. The PLSR algorithm is presented below.
1. First, we put $E_0 = X$ and $F_0 = Y$ (mean-centred).
2. The $i$-th weight vector is computed as $w_i = E_{i-1}^T F_{i-1} / \|E_{i-1}^T F_{i-1}\|$.
3. The $i$-th score is $t_i = E_{i-1} w_i$.
4. The $i$-th loadings of $X$ and $Y$ are obtained by $p_i = E_{i-1}^T t_i / (t_i^T t_i)$ and $q_i = F_{i-1}^T t_i / (t_i^T t_i)$.
5. The information related to the latent variable based on the score $t_i$ is subtracted from the data matrices: $E_i = E_{i-1} - t_i p_i^T$ and $F_i = F_{i-1} - t_i q_i^T$.
Finally, from (7) we obtain the estimators of the unknown parameters, $\hat{\beta} = W (P^T W)^{-1} q$. The image reconstruction is shown in Fig. 1.
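The steps above can be sketched as follows (NumPy; a single response vector is assumed, and the names are illustrative — this follows the common NIPALS formulation, which may differ in detail from the variant used here):

```python
import numpy as np

def plsr_fit(X, Y, a):
    """PLS regression with `a` latent variables
    (NIPALS-style sketch, single response)."""
    x_mean, y_mean = X.mean(axis=0), Y.mean()
    E, f = X - x_mean, Y - y_mean                   # step 1: E_0 = X, F_0 = Y (centred)
    W, P, q = [], [], []
    for _ in range(a):
        w = E.T @ f
        w /= np.linalg.norm(w)                      # step 2: weight vector
        t = E @ w                                   # step 3: score t_i = E w_i
        tt = t @ t
        p = E.T @ t / tt                            # step 4: X-loading
        c = f @ t / tt                              #         Y-loading
        E = E - np.outer(t, p)                      # step 5: deflate E and f
        f = f - t * c
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    beta = W @ np.linalg.solve(P.T @ W, q)          # estimators of the coefficients
    intercept = y_mean - x_mean @ beta
    return intercept, beta
```

Unlike PCR, each latent variable here is driven by the covariance with the response, so directions of $X$ irrelevant to $Y$ are down-weighted from the first component on.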

Elastic net
Another way to determine the linear regression when the input variables are collinear relies on solving the task

$\hat{\beta} = \arg\min_{\beta} \|Y - X\beta\|^2 + \lambda \left( \alpha \|\beta\|_1 + (1-\alpha) \|\beta\|_2^2 \right).$

For ridge regression the penalty is calculated in the $L_2$ norm, whereas for LASSO it is calculated in $L_1$. The difference between ridge regression and LASSO is symbolic: only the norms are changed. Ridge regression shrinks the coefficients of correlated predictors towards each other; when the correlated predictors depend on some latent factor, ridge regression uniformly distributes the strength of that latent factor across these predictors. LASSO, in contrast, tends to select one preferred predictor from a correlated group and ignore the rest. By applying the LASSO method we obtain a model in which many coefficients are equal to zero, and as a result we receive a sparse model. The elastic net is a combination of ridge regression and LASSO: by choosing an appropriate $\alpha$ we can strike a compromise between the two.
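A minimal coordinate-descent sketch of this task follows (NumPy; the exact penalty parametrisation, the function names and the values of `lam` and `alpha` are illustrative — a production solver such as scikit-learn's `ElasticNet` would normally be used):

```python
import numpy as np

def soft_threshold(rho, thresh):
    return np.sign(rho) * max(abs(rho) - thresh, 0.0)

def elastic_net(X, Y, lam, alpha, n_iter=200):
    """Coordinate descent for
    (1/2n)||Y - Xb||^2 + lam * (alpha*||b||_1 + (1-alpha)/2 * ||b||_2^2),
    assuming mean-centred X and Y."""
    n, k = X.shape
    b = np.zeros(k)
    z = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(k):
            r_j = Y - X @ b + X[:, j] * b[j]        # partial residual excluding x_j
            rho = X[:, j] @ r_j / n
            # The L1 part gives soft-thresholding (sparsity), the L2 part
            # adds shrinkage through the denominator.
            b[j] = soft_threshold(rho, lam * alpha) / (z[j] + lam * (1 - alpha))
    return b
```

Setting `alpha = 1` recovers LASSO and `alpha = 0` recovers ridge regression, matching the compromise described above.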

Lars
Another possible way of reducing the multicollinearity problem between predictors relies on applying the least angle regression (LARS) algorithm. This algorithm includes in the linear model only causal variables: from the set of predictors, those input variables are chosen which directly influence the response variable. In this case the linear model is built by employing forward stepwise regression, where at each step the best variable is added to the model (Fig. 3).
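The full LARS update advances the coefficients only until the next predictor becomes equally correlated with the residual; the simpler greedy forward stepwise variant described above can be sketched as follows (NumPy; names are illustrative):

```python
import numpy as np

def forward_stepwise(X, Y, n_steps):
    """Greedy forward selection: at each step add the predictor most
    correlated with the current residual (simplified sketch)."""
    active = []
    residual = Y.copy()
    for _ in range(n_steps):
        corr = np.abs(X.T @ residual)
        corr[active] = -np.inf                      # skip already-selected predictors
        j = int(np.argmax(corr))                    # best variable at this step
        active.append(j)
        Xa = X[:, active]
        # Refit least squares on the active set only
        b, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
        residual = Y - Xa @ b
    return active, b
```

Because the final model contains only the selected columns, the collinear predictors left out of the active set no longer destabilise the estimates.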

Conclusion
This article has proposed algorithms based on statistical methods to obtain more accurate and stable reconstruction results when solving the inverse problem in EIT. The algorithms allow us to estimate adequate parameters for linear models that describe the relationships between conductivity and the voltage measurements on the electrodes. The method presented in the article allows conductivity to be predicted accurately for image reconstruction, and an effective algorithm for solving the inverse problem also improves numerical efficiency. The reconstruction is satisfactory, since the reconstructed region borders lie almost exactly at the edges of the objects being sought. The presented methods have been successfully applied in many areas of scientific modelling in EIT.