High-resolution Distributed ISAR Imaging based on Sparse Representation

The distributed ISAR technique has the potential to increase cross-range resolution by exploiting multichannel echoes from distributed virtual equivalent sensors. In existing imaging approaches, the echoes acquired from different sensors are rearranged into an equivalent single-channel ISAR signal, and the missing data between the observation angles of any two adjacent sensors is then restored by interpolation. However, interpolation can be very inaccurate when the gap is large or the signal-to-noise ratio (SNR) of the echoes is low. In this paper, we discuss the sparse representation of distributed ISAR echoes: since the scattering field of the target is usually composed of only a limited number of strong scattering centres, it exhibits strong spatial sparsity. Then, by using a sparse recovery algorithm (Orthogonal Matching Pursuit, OMP), the positions and amplitudes of the scattering points in every range bin can be recovered, and a final ISAR image with high cross-range resolution can be obtained. Results show the effectiveness of the proposed method.


Introduction
The cross-range resolution of conventional inverse synthetic aperture radar (ISAR) is obtained by exploiting the relative rotation between the target and the radar [1,2]. To achieve a high cross-range resolution, a long observation time is needed, which makes translational motion compensation complicated. To overcome this drawback, an antenna array can be used to obtain the image of a maneuvering target from a single snapshot [3]. However, this technique needs a large number of antenna elements, which is costly. To increase the cross-range resolution of ISAR images with limited antenna elements, the distributed ISAR technique has been considered [4][5][6]. This technique makes it possible to utilize the data acquired by multiple radar sensors. Each sensor has either transmitting capability or receiving capability, and the receiving sensors can receive and separate all the transmitted signals. Therefore, any transmitting sensor and any receiving sensor can form a virtual equivalent sensor that autonomously transmits and receives radar waveforms. The synthetic aperture formed by target motion is utilized to fill the gap between adjacent virtual sensors. With an appropriate radar formation, each scatter can be observed over a much wider range of observation angles, and a higher cross-range resolution can thus be achieved.
A very significant step in distributed ISAR imaging is how to deal with the multi-channel echoes. In existing approaches, the multi-channel echoes are rearranged according to the change of the observation angle [5,6]. Three basic cases are distinguished: (1) overlap in observation angle, (2) no overlap, with a small gap (sensors close), and (3) no overlap, with a large gap (sensors far apart) [7]. In case (1), time selection is needed to remove the overlap. In case (2), a gap-filling interpolation technique can be utilized. In case (3), interpolation is unsuitable because the gap is too large for the missing data to be restored accurately.
In this paper, we mainly focus on case (3). While the interpolation technique may recover the missing data in case (2), its performance deteriorates when the gap in observation angle is large. The interpolation technique is also susceptible to noise. Therefore, a sparse imaging method is proposed to overcome these drawbacks. Moreover, the sparse imaging method produces no sidelobes in the final image.
The rest of this paper is organized as follows. After presenting the signal model of distributed ISAR in Section 2, the sparse imaging method is introduced in Section 3. Then, simulation results are presented in Section 4 to validate the effectiveness of the proposed method. Finally, we conclude this paper in Section 5.


Signal model

After range compression, the echo in the m–nth observation channel is a superposition over the scatters, where σ_{m,n,q} is the scattering coefficient of the qth scatter in the m–nth observation channel, R_{m,n,q}(t) is the propagation distance from sensor m to scatter q and back to sensor n, and p(·) denotes the point spread function, with p(t) a sinc function [6]. The position vector of the qth scatter can be written as

r_q(t) = r_q [sin(θ_q0 + ωt), cos(θ_q0 + ωt)]^T,

where r_q is the vector length, θ_q0 is the initial azimuth angle of the scatter, and ω is the constant clockwise rotation speed.
R_m0 and R_n0 denote the position vectors of the mth transmitting sensor and the nth receiving sensor, respectively, which can be expressed as R_m0 = R_0[sin θ_m, cos θ_m]^T and R_n0 = R_0[sin θ_n, cos θ_n]^T, where θ_m and θ_n are the azimuth angles measured clockwise. Under the far-field assumption, the propagation distance R_{m,n,q}(t) can be approximated as

R_{m,n,q}(t) ≈ 2R_mn − r_{i,q}(t),  with  r_{i,q}(t) = 2 r_q cos(Δθ_i) cos(θ_q0 + ωt − θ_i),

where R_mn is the mean distance and θ_i and Δθ_i are the mean angle and half-difference angle of the m–nth channel. The term r_{i,q}(t) determines both the range compression position in the range dimension and the phase signal in the cross-range dimension. We will simplify r_{i,q}(t) in the range dimension and the cross-range dimension respectively.
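As a numerical sanity check on the far-field approximation above, the following sketch compares the exact two-way path with the approximate one; the geometry values (sensor range, angles, scatter position) are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative geometry (all values are assumptions for this sketch)
R0 = 50e3                          # sensor range, m
theta_m = np.deg2rad(6.0)          # transmitter azimuth, clockwise from Y
theta_n = np.deg2rad(-1.0)         # receiver azimuth
r_q, theta_q0 = 4.0, np.deg2rad(30.0)  # scatter polar coordinates

def pos(theta, r):
    # Azimuth measured clockwise from the Y axis
    return np.array([r * np.sin(theta), r * np.cos(theta)])

# Exact two-way propagation distance: sensor m -> scatter q -> sensor n
p_q = pos(theta_q0, r_q)
R_exact = (np.linalg.norm(pos(theta_m, R0) - p_q)
           + np.linalg.norm(pos(theta_n, R0) - p_q))

# Far-field approximation with mean angle and half-difference angle
theta_i = 0.5 * (theta_m + theta_n)    # mean angle of the equivalent sensor
dtheta_i = 0.5 * (theta_m - theta_n)   # half-difference (bistatic) angle
R_approx = 2 * R0 - 2 * r_q * np.cos(dtheta_i) * np.cos(theta_q0 - theta_i)

print(abs(R_exact - R_approx))  # far smaller than a 0.5 m range bin
```

The residual error is on the order of r_q²/(2R_0), i.e. sub-millimetre here, well below a range resolution cell.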
In the range dimension, we assume the accumulation angle ωT is very short, so range migration can be neglected with the approximations sin(ωt) ≈ 0 and cos(ωt) ≈ 1, giving

r_{i,q}(t) ≈ 2 cos(Δθ_i)(y_q cos θ_i + x_q sin θ_i),

where (x_q, y_q) = r_q (sin θ_q0, cos θ_q0) is the initial coordinate of the qth scatter. Although the angle between equivalent sensors is assumed to be large, it is large only relative to the accumulation angle; we assume θ_i and Δθ_i are not very large, so they have little influence on the range position after compression. For this to hold, the range displacement induced by θ_i and Δθ_i must stay within half a range resolution cell, which requires θ_m − θ_n to be less than 5.7°, so Δθ_i is less than 2.85°; indeed, θ_i cannot be very large either if condition (6) is to be satisfied. Thus, for the echoes of different equivalent sensors, each scatter is compressed into the same range bin and, from (5), its compressed range position reduces to 2y_q, which simplifies the processing in the range dimension.
Meanwhile, in the cross-range dimension, the observation angle can be regarded as θ_i + ωt. For simplicity, we assume the mean angles of the equivalent sensors are uniformly spaced, θ_{i+1} − θ_i = Δθ. To guarantee a gap between the observation angles of two adjacent equivalent sensors, the spacing must satisfy Δθ > ωT. The first addend of (7) is constant and has no effect on the imaging result; discarding it, the received signal of the ith equivalent sensor can be expressed as

s_i(t) = Σ_q σ_{i,q} exp(j(4π/λ) r_q cos(Δθ_i) cos(θ_q0 + ωt − θ_i)).   (8)

Now, the question is how to use the multiple echoes from all the equivalent sensors to increase the cross-range resolution. In existing approaches, the echoes of all equivalent sensors are rearranged according to the observation angle; after uniform interpolation to fill all the gaps, standard ISAR focusing techniques can be applied. However, this method cannot handle the case in which the gap is large or the SNR of the echoes is low. In this paper, we use a sparse imaging method to obtain a well-focused result.
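The gap condition Δθ > ωT can be illustrated numerically. The sketch below lists the observation-angle support of each equivalent sensor and checks whether gaps remain; the mean-angle spacing and timing values are illustrative (chosen in the spirit of the later simulation):

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch)
omega = 0.02                              # rad/s, target rotation speed
T = 1.5                                   # accumulation time, s
theta_means = np.deg2rad([-2.5, 0.0, 2.5])  # mean angles of equivalent sensors

# Each sensor observes angles theta_i + omega*t for t in [-T/2, T/2]
for th in theta_means:
    lo, hi = th - omega * T / 2, th + omega * T / 2
    print(f"sensor support: [{np.rad2deg(lo):+.2f}, {np.rad2deg(hi):+.2f}] deg")

# A gap exists between adjacent sensors iff the spacing exceeds omega*T
spacing = np.diff(theta_means)
print("gaps exist:", bool(np.all(spacing > omega * T)))
```

Here the 2.5° spacing (0.0436 rad) exceeds ωT = 0.03 rad, so the supports do not touch and blocks of cross-range data are missing.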

Sparse imaging method
If the multiple echoes had no overlap and no gap, the standard ISAR focusing method could be applied after rearranging them. However, when gaps exist, it is difficult to obtain the desired imaging result. I equivalent sensors provide I echoes with I − 1 blocks of missing data. Each equivalent sensor acquires P valid sampling points in the cross-range dimension, so the total number of valid points is L_v = IP. The number of sampling points to be recovered is L, with L_v < L; that is, imaging in this situation is an underdetermined problem. Compressive sensing theory has proved that if the original signal is sparse under an observation basis that complies with the Restricted Isometry Property (RIP), then the original sparse signal can be recovered from fewer samples than its length [8]. In general, an ISAR image appears as a superposition of scattering points in several distinct Doppler frequency units in the cross-range dimension. We therefore process the distributed ISAR signals with missing data in a sparsity-driven manner.
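The sample bookkeeping above can be sketched as follows; I, P, and the gap length G are illustrative values, not the paper's simulation parameters:

```python
import numpy as np

# Illustrative counts (assumptions for this sketch)
I, P, G = 3, 100, 40          # sensors, valid samples per sensor, missing per gap
L_v = I * P                   # valid samples actually acquired
L = I * P + (I - 1) * G       # full-aperture length to be recovered

# Boolean mask over the full aperture: True where a sample was acquired
mask = np.zeros(L, dtype=bool)
for i in range(I):
    start = i * (P + G)
    mask[start:start + P] = True

print(L_v, L)                 # 300 380: fewer samples than unknowns
```

Since L_v < L, recovering the full-aperture signal from the masked samples is underdetermined, which is exactly where the sparsity prior enters.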

Sparse representation
We only consider the cross-range signal in one range bin, as shown in Figure 3. According to (8), this signal for the ith equivalent sensor can be written as

s_i(t) = Σ_{q=1}^{Q} σ_{i,q} exp(j(4π/λ) r_q cos(Δθ_i) cos(θ_q0 + ωt − θ_i)),   (9)

and the global signal, of length L_v = IP, is obtained by stacking the I sensor signals. Combined with (9), the global azimuthal signal can further be expressed as

S = Φσ,   (12)

where Φ is the overcomplete basis built from θ_i, Δθ_i, the target rotation speed ω, and the other parameters. The essence of Φ is a Fourier transform matrix, Φ = [F_1^T, F_2^T, …, F_I^T]^T, where F_i is the partial Fourier block whose rows correspond to the observation angles of the ith equivalent sensor.
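A minimal construction of Φ as stacked partial Fourier blocks is sketched below, assuming the phase is linearized in the observation angle θ_i + ωt so that each row of F_i is a Fourier atom sampled at that sensor's observation angles; the grid, wavelength, and angles are illustrative values:

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch)
lam = 0.03                                # wavelength, m (10 GHz)
omega, T, P = 0.02, 1.5, 64               # rotation speed, time, samples/sensor
theta_means = np.deg2rad([-2.5, 0.0, 2.5])  # mean angles of equivalent sensors
x_grid = np.linspace(-5, 5, 256)          # hypothesized cross-range grid, m

t = np.linspace(-T / 2, T / 2, P)
blocks = []
for th in theta_means:
    angles = th + omega * t               # observation angles of sensor i
    # Partial Fourier block: rows = observation angles, columns = grid cells
    F_i = np.exp(-1j * 4 * np.pi / lam * np.outer(angles, x_grid))
    blocks.append(F_i)
Phi = np.vstack(blocks)                   # shape (I*P, L) = (192, 256)

print(Phi.shape)
```

The stacked matrix has more columns than rows, matching the underdetermined model S = Φσ; the gaps between sensors simply mean some rows of the full Fourier matrix are absent.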

OMP method
Reconstruction of the cross-range image can be regarded as finding the sparsest solution to (12). This sparsity-constrained problem is non-convex, and greedy algorithms solve it approximately. Orthogonal Matching Pursuit (OMP) is an improvement on the matching pursuit algorithm [9,10]. OMP orthogonalizes the selected atoms by the Gram–Schmidt method and then projects the signal onto the space spanned by these orthogonal atoms to obtain the signal components on the selected atoms and the residual component. The residual component is then decomposed in the same way. After V decompositions, the original signal is represented as a linear combination of V atoms. Because the atom selected at each step is the one most correlated with the residual, the residual decreases rapidly, so the original signal can be represented by a small number of atoms and the iteration converges after a limited number of steps. The imaging steps are shown in Fig. 4 and explained as follows:
(1) rearrange the echoes of all equivalent sensors, after matched filtering, according to the observation angle;
(2) perform coarse imaging to find the range bins in which scattering points exist;
(3) select a range bin and extract its cross-range signal as S;
(4) construct the Φ matrix;
(5) obtain σ for this range bin with the OMP method;
(6) select the next range bin's signal and repeat from step (3).
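The OMP step can be sketched as below. This is a minimal textbook implementation, not the authors' code; the joint least-squares refit plays the role of the Gram–Schmidt orthogonalization described above. The toy check uses an orthonormal Fourier dictionary, where exact recovery of a sparse vector is guaranteed:

```python
import numpy as np

def omp(Phi, S, n_iter):
    """Greedy OMP: pick the atom most correlated with the residual,
    then refit all selected atoms jointly by least squares."""
    residual, support = S.copy(), []
    coef = np.zeros(0, dtype=complex)
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        support.append(k)
        coef, *_ = np.linalg.lstsq(Phi[:, support], S, rcond=None)
        residual = S - Phi[:, support] @ coef
    sigma = np.zeros(Phi.shape[1], dtype=complex)
    sigma[support] = coef
    return sigma

# Toy check on an orthonormal (unitary) 64-point DFT dictionary
Phi = np.fft.fft(np.eye(64)) / 8.0          # columns are orthonormal
sigma_true = np.zeros(64, dtype=complex)
sigma_true[[5, 20]] = [2.0, 1.5]            # two "scatters" in one range bin
sigma_hat = omp(Phi, Phi @ sigma_true, n_iter=2)
print(np.allclose(sigma_hat, sigma_true))   # True
```

With the overcomplete Φ of the previous subsection, recovery is approximate rather than exact, but the same loop applies; the number of iterations plays the role of V.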

Simulation results
In this section, simulations are conducted to demonstrate the effectiveness of the proposed method. The distributed sensors are composed of two transmitting sensors and two receiving sensors (M = 2, N = 2). The azimuth angles θ_m of the transmitting sensors are (6°, 1°) and the azimuth angles θ_n of the receiving sensors are (−6°, −1°). Therefore, three equivalent sensors with distinct mean angles (−2.5°, 0°, 2.5°) can be obtained (actually four equivalent sensors are formed, two of which share the same mean angle 0°). The transmitted signal is a set of orthogonal signals with the same center frequency of 10 GHz and the same bandwidth of 300 MHz, which achieves a range resolution of 0.5 m.
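The equivalent-sensor mean angles and the range resolution quoted above follow directly from the stated geometry and bandwidth, as this short check shows (only the values given in the text are used):

```python
from itertools import product

# Geometry from the text: 2 Tx at 6 and 1 deg, 2 Rx at -6 and -1 deg
tx = [6.0, 1.0]            # transmitter azimuths, deg
rx = [-6.0, -1.0]          # receiver azimuths, deg

# Mean angle of each Tx/Rx couple: (theta_m + theta_n) / 2
means = sorted(0.5 * (m + n) for m, n in product(tx, rx))
print(means)               # [-2.5, 0.0, 0.0, 2.5] -> three distinct mean angles

# Range resolution of a 300 MHz waveform: c / (2B)
c, B = 3e8, 300e6
print(c / (2 * B))         # 0.5 m
```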
The target is composed of eight scattering points (see Figure 5), which are isotropic and independent of each other. Assume the target rotates uniformly with an angular speed of 0.02 rad/s. Since the mean angles θ_i and the angular speed ω are fixed, the gap between observation angles depends only on the accumulation time; accumulation times of 2 s and 1.5 s are considered separately. For the former, the accumulation angle of a single sensor is 2.3°; for the latter, it is 1.72°. In these two cases, the range-compressed echoes rearranged according to the observation angle are shown in Figures 6(a) and 6(b). When the accumulation time is 2 s, the cross-range resolution of a single sensor is 0.375 m, so the two scatters at (0, 0) and (0.5, 0) cannot be distinguished clearly. In Figure 7(a), all four scatters are distinguished, which means the interpolation technique recovers the missing data well, and a higher resolution is achieved by exploiting all echoes to increase the accumulation angle. When the accumulation time is 1.5 s, however, there is more missing data, the interpolation technique fails to fill the gap, and the FFT cannot give a good result.
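The accumulation angles and single-sensor cross-range resolutions quoted above can be reproduced from λ/(2Δθ) with the stated rotation speed and carrier frequency:

```python
import numpy as np

lam = 3e8 / 10e9           # wavelength at 10 GHz: 0.03 m
omega = 0.02               # rad/s, rotation speed from the text

for T in (2.0, 1.5):
    dtheta = omega * T     # single-sensor accumulation angle, rad
    print(T, round(np.rad2deg(dtheta), 2), round(lam / (2 * dtheta), 3))
# 2.0 s -> 2.29 deg (the text rounds to 2.3), 0.375 m
# 1.5 s -> 1.72 deg, 0.5 m
```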
Then, we examine the performance of the OMP method, using the echoes in the 25th range bin for the second case. The result, exhibiting a super-resolution feature, is shown in Figure 8(a). From the three simulations, we conclude: (1) the interpolation technique can fill the gap between adjacent observation angles when the gap is small, and by exploiting all the echoes to increase the accumulation angle, the cross-range resolution can be enhanced; however, it cannot give good results when the gap is large or the SNR of the echoes is low;
(2) the OMP method performs well and obtains a high-resolution result compared with the interpolation technique.

Conclusion
In this paper, a sparse method is proposed to obtain the cross-range image of distributed ISAR. Firstly, the signal model of distributed ISAR is introduced, and the constraint condition that makes each scatter compress into the same range bin for the echoes of different equivalent sensors is derived. Secondly, the sparse representation of the cross-range signals shows that the sparse method, based on a Fourier transform matrix, can be used to estimate the positions and amplitudes of the scattering points. The simulation results show that the OMP method obtains a high-resolution result without explicitly recovering the missing data, and that it works well when the gap is large or the SNR of the echoes is low. However, when migration through resolution cells exists between the echoes of different equivalent sensors, the cross-range signal cannot be processed jointly; the correction of range migration therefore needs further research. Likewise, how to apply this sparse representation method when the target has three-dimensional rotation remains to be discussed.
Each transmitting/receiving couple (m, n) can be regarded as the ith virtual equivalent sensor, and the number of all equivalent sensors is I. We set θ_i = (θ_m + θ_n)/2 and Δθ_i = (θ_m − θ_n)/2, and neglect the constant distance R_mn under the assumption that the translational motion has been compensated.



Figure 6. Range-compressed echoes rearranged according to the observation angle: (a) and (b). The gap is larger when the accumulation time is shorter.

Figure 8(a): compared with Figure 7(b), OMP obtains the estimated positions and amplitudes of all scatters. Figure 8(b) is the final result over all selected range bins.

Figure 9.
Figure 8. OMP results (T = 1.5 s): (a) the 25th range bin, (b) all range bins.

Consider a 2D coordinate system (O, X, Y) with the origin at the target's rotation centre. The target is modelled as a rigid body, consisting of Q scatters and rotating clockwise. The M transmitting sensors and N receiving sensors are placed in the XOY plane (see Figure 1).