Construction of Deterministic Measurements Matrix Using Decimated Legendre Sequences

This paper proposes and constructs a new class of deterministic measurement matrices for convolutional Compressed Sensing by using decimated binary Legendre sequences. We prove that when the measurement matrix is formed by random subsampling, it offers stable sparse reconstruction. Moreover, the simulation results show that when a deterministic subsampler is used, the proposed matrix also guarantees reconstruction as stable as that of the random Gaussian or Bernoulli matrices commonly used in CS.


INTRODUCTION
With the rapid development of information technology, data and information have become increasingly important. At the same time, because the quantity of data is very large, the methods used to acquire information have attracted great attention. As we know, when the amount of data is huge, the traditional acquisition approach must follow the Nyquist sampling principle. Under this approach, the consumption of software and hardware resources can be enormous, sometimes exceeding the largest processing capacity we can supply. In recent years, a new signal processing theory named Compressed Sensing (CS) [1] has been put forward, which effectively resolves the predicament described above.
As is known to all, our modern technology-driven civilization acquires and exploits ever-increasing amounts of data [1], but most of the data that we acquire is redundant. In fact, much of it can be thrown away with almost no perceptual loss. Having noticed this, we can acquire the important information by directly measuring only the part of the data that will not end up being discarded. Compressed Sensing (CS), proposed by David L. Donoho in 2006, is exactly such a method: it acquires information by measuring only part of the original data. Since most of the original data are redundant, we call such data sparse. For example, let x ∈ R^N be a sparse signal with at most k nonzero components, and let Φ ∈ R^{M×N} be a measurement matrix with M < N. Signals like x are called k-sparse; according to Compressed Sensing, x can be recovered from its measurements y = Φx if it can be approximated by k nonzero entries. In order to recover the signal without losing important information, two distinct questions should be considered:
1) How many measurements are necessary to reconstruct the original signal?
2) Given these measurements, what algorithms can perform the reconstruction task?
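The measurement model described above can be sketched in a few lines. The dimensions N, M, and k below are illustrative choices, not values taken from the paper; a random Gaussian matrix is used as a common baseline sensing operator.

```python
import numpy as np

# Minimal sketch of the CS measurement model y = Phi @ x.
rng = np.random.default_rng(0)

N, M, k = 256, 64, 8          # signal length, number of measurements, sparsity

# Build a k-sparse signal: k nonzero entries at random positions.
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

# A random Gaussian measurement matrix (a common baseline in CS).
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ x                   # M measurements instead of N samples
print(y.shape)                # (64,)
```

The point of the two questions above is that M = 64 measurements replace N = 256 samples, and a reconstruction algorithm must then recover x from y.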
For the first question, previous research shows that on the order of k measurements are needed to ensure a stable recovery; in other words, sparse signals can be reconstructed from far less information than Nyquist sampling requires [2]. As we know, the recovery problem is dual to sparse approximation. In CS theory, if the restricted isometry property (RIP) is satisfied, robust and stable reconstruction of sparse signals is guaranteed [1]. In this paper, our research mainly focuses on aspects related to the first question.
For the second question, researchers have proposed many reconstruction algorithms over the years of CS development. These works pointed out that sparse signal reconstruction can be converted into a convex optimization problem. In this framework, the Basis Pursuit (BP) [3] algorithm proposed by Chen, Donoho and Saunders can be used to recover the original signal. However, BP sometimes brings high computational complexity and poor reconstruction results. In 2005, Matching Pursuit (MP) [4] was introduced into CS. Tropp and Gilbert pointed out that MP and Orthogonal Matching Pursuit (OMP) [2] can guarantee good reconstruction with lower computational complexity. As CS developed further, many algorithms based on the Matching Pursuit principle were proposed, for example Stagewise Orthogonal Matching Pursuit (StOMP) [5] and Regularized Orthogonal Matching Pursuit (ROMP) [6]. Meanwhile, in order to achieve better reconstruction, another family of algorithms was proposed that uses a sparse matrix as the measurement matrix for sampling; this family includes "Chaining Pursuit (CP)", "Fourier sampling", "Heavy Hitters on Steroids (HHS)", and so on.
As we know, research on compressed sensing theory focuses on three aspects: sparse representation, construction of the measurement matrix, and reconstruction algorithms. As mentioned earlier, in this paper we propose to construct a new kind of deterministic measurement matrix. For a k-sparse signal x ∈ R^N, Φ ∈ R^{M×N} is a measurement matrix with M < N, and the measurements are y = Φx. To guarantee a stable reconstruction, Φ has to satisfy the RIP. According to paper [7], Φ satisfies the RIP of order k with constant δ_k ∈ (0, 1) if

(1 − δ_k)‖v‖₂² ≤ ‖Φv‖₂² ≤ (1 + δ_k)‖v‖₂²

for every k-sparse vector v ∈ R^N. It is well known that Φ obeys the RIP condition with M on the order of k·log(N/k) if Φ is a fully random matrix, such as the Gaussian or Bernoulli matrix [7]. After years of research, we also know that a structured sensing matrix can play an important role in reconstruction. There are many methods to construct a structured sensing matrix; one of them is to construct Φ from a partial circulant matrix [8]:

Φ = R_Ω A, (1)

where R_Ω is a sampling operator selecting the M rows of A indexed by Ω ⊂ {0, 1, …, N − 1}, and A is an N×N circulant matrix, which can be expressed as follows:

A = [ a_0      a_{N−1}  …  a_1
      a_1      a_0      …  a_2
      ⋮        ⋮        ⋱  ⋮
      a_{N−1}  a_{N−2}  …  a_0 ].   (2)

According to convolutional Compressed Sensing, the matrix A can be factorized into

A = Fᴴ Σ F,  Σ = diag(σ_0, σ_1, …, σ_{N−1}),   (3)

where F is the unitary N-point DFT matrix and a = [a_0, a_1, …, a_{N−1}]ᵀ is the first column of A.
In equation (3), σ = [σ_0, σ_1, …, σ_{N−1}]ᵀ is the vector obtained by taking the normalized FFT of a, and it is easy to find that

a = (1/√N) Fᴴ σ.   (4)

Moreover, if σ is unimodular (|σ_n| = 1 for all n), A is a unitary matrix satisfying AᴴA = I. In existing works, the vector σ can be either a random unimodular sequence or a deterministic unimodular sequence. When σ is random, A is also random, and it has been proved that if M is on the order of k·log N (up to further logarithmic factors), the RIP of Φ is satisfied. When σ is deterministic and R_Ω is a deterministic sampling operator, the RIP of Φ holds for any orthonormal sparsifying basis provided M is on the order of k² (up to logarithmic factors) [8]. In this paper, we construct a new kind of σ by using a decimated binary Legendre sequence [9]. We find that the coefficients of the corresponding a are real-valued. Interestingly, the resulting measurement matrix offers performance similar to that of a Gaussian or Bernoulli operator over a wide range of signal lengths, whether the sampler is random or deterministic. This is significant, as it implies that the decimated Legendre sequence has great potential in convolutional CS.
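Equations (1)-(4) can be verified numerically. The sketch below, with illustrative length N and a random bipolar σ (any unimodular sequence would do), builds A from the factorization (3), checks that A is unitary and circulant with first column (4), and forms Φ by random row subsampling as in (1).

```python
import numpy as np

# Sketch of equations (1)-(4): A = F^H diag(sigma) F, Phi = R_Omega A.
rng = np.random.default_rng(1)
N, M = 32, 12

F = np.fft.fft(np.eye(N)) / np.sqrt(N)          # unitary DFT matrix
sigma = rng.choice([-1.0, 1.0], size=N)         # bipolar, hence unimodular
A = F.conj().T @ np.diag(sigma) @ F             # equation (3)

# Unimodular sigma makes A unitary: A^H A = I.
assert np.allclose(A.conj().T @ A, np.eye(N))

# The first column of A is a = F^H sigma / sqrt(N)  (equation (4)).
a = F.conj().T @ sigma / np.sqrt(N)
assert np.allclose(A[:, 0], a)

# Random row subsampling R_Omega gives the CS operator Phi (equation (1)).
Omega = rng.choice(N, size=M, replace=False)
Phi = A[Omega, :]
print(Phi.shape)                                # (12, 32)
```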

REVIEW
In this section, some related results in CS and convolutional CS are reviewed.
In convolutional CS, there are two ways to construct a deterministic matrix A. One is the frequency-domain approach: construct σ first, then obtain A from (3); it is easy to see that A is unitary if σ is a unimodular sequence. The other is the time-domain approach: construct the first column a of A first, then obtain A from (4) and (3). However, obtaining a unitary A via the time-domain approach is not easy. Thus, we mainly focus on the former approach, in which σ is a unimodular sequence. Besides, many applications, such as coded aperture imaging and Fourier optics, require A to be a real-valued matrix; moreover, a real-valued matrix is easier to compute with and to store. To generate a real sensing matrix, σ needs to satisfy the condition given in Lemma 1.
Lemma 1: For a bipolar sequence σ of length N, if it satisfies

σ_n = σ*_{N−n},  n = 1, 2, …, N − 1,   (5)

where the superscript * denotes the complex conjugate operation (so that for a real bipolar sequence the condition reduces to σ_n = σ_{N−n}), then the matrix A in (3) is real-valued.
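Lemma 1 is easy to check numerically. The sketch below, at an illustrative length N, builds a bipolar σ satisfying the symmetry condition (5) and verifies that the resulting A has a vanishing imaginary part.

```python
import numpy as np

# Sketch of Lemma 1: a bipolar sigma with sigma[n] = sigma[N-n] (n = 1..N-1)
# makes A = F^H diag(sigma) F real-valued.
rng = np.random.default_rng(2)
N = 16

half = rng.choice([-1.0, 1.0], size=N // 2 + 1)   # sigma[0..N/2]
sigma = np.concatenate([half, half[1:-1][::-1]])  # enforce sigma[n] = sigma[N-n]
assert sigma.shape == (N,)

F = np.fft.fft(np.eye(N)) / np.sqrt(N)
A = F.conj().T @ np.diag(sigma) @ F

# The imaginary part vanishes up to floating-point error, so A is real.
print(np.max(np.abs(A.imag)) < 1e-12)
```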
Theorem 1 [8]: Consider the matrix Φ in (1), where R_Ω is a random sampling operator and σ is unimodular. If

M ≥ C · k · log N   (6)

for some constant C (up to further logarithmic factors), then with high probability all k-sparse signals in the canonical basis (Ψ = I) or the FFT basis (Ψ = F) can be stably recovered. We know that the RIP is an essential requirement for stable reconstruction. However, for a given CS operator, it is not easy to check whether the RIP is satisfied. As an alternative, deterministic designs of CS operators with small spectral norm and small column coherence are widely used in signal recovery. For simplicity, suppose that s is sparse in the canonical basis, Ψ = I, and normalize Φ = [φ_1, φ_2, …, φ_N] so that each column of Φ has unit ℓ2-norm. Then reconstruction by OMP is guaranteed if k < (1 + 1/μ)/2 [2], where μ = max_{i≠j} |⟨φ_i, φ_j⟩| is the column coherence of Φ. According to [10], if s follows the generic k-sparse model, the signs of the nonzero entries are independent and equally likely to be +1 or −1, and their positions are uniformly distributed at random. In this case, the sparse vector can be reconstructed stably by ℓ1-minimization if μ ≤ C/log N and k ≤ C·N/(‖Φ‖² log N) [10]. When Φ is a tight frame with ΦΦᴴ = (N/M)·I, so that ‖Φ‖² = N/M, this bound becomes k ≤ C·M/log N, which is similar to the bound achieved by a random Gaussian or Bernoulli operator.
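The coherence quantities above are straightforward to compute. The sketch below normalizes the columns of an illustrative Gaussian Φ, computes μ = max_{i≠j} |⟨φ_i, φ_j⟩|, and evaluates the OMP sparsity bound k < (1 + 1/μ)/2 cited from [2].

```python
import numpy as np

# Column coherence mu and the coherence-based OMP sparsity bound.
rng = np.random.default_rng(3)
M, N = 64, 256

Phi = rng.standard_normal((M, N))
Phi = Phi / np.linalg.norm(Phi, axis=0)      # unit l2-norm columns

G = np.abs(Phi.T @ Phi)                      # Gram matrix magnitudes
np.fill_diagonal(G, 0.0)                     # exclude i == j
mu = G.max()                                 # column coherence

k_max = 0.5 * (1.0 + 1.0 / mu)               # OMP guarantee: k < k_max
print(mu, k_max)
```

A smaller μ admits a larger recoverable sparsity, which is why the paper compares the coherence of the proposed Φ against Gaussian and Bernoulli operators.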

CONVOLUTIONAL CS USING DECIMATED LEGENDRE SEQUENCE
This section studies the construction of σ using a decimated binary Legendre sequence and the sparse recovery performance of the corresponding CS operator Φ. In our previous research, we constructed a deterministic measurement matrix from a σ formed by an original Legendre sequence, which offers good reconstruction performance. However, the length of an original Legendre sequence must be an odd prime, which limits the size of the measurement matrix and thus its scope of application. To solve this problem, we propose to use a transform of the Legendre sequence to construct a σ whose length can be any positive integer.

Construction of Decimated Legendre Sequences
Definition 1 [9]: Let p be an odd prime. The Legendre symbol (a/p) is a multiplicative function with values 1, −1 and 0; it is the quadratic character modulo the prime p, which can be represented as follows:

(a/p) = 1, if a is a quadratic residue modulo p;
(a/p) = −1, if a is a quadratic non-residue modulo p;
(a/p) = 0, if p divides a.   (7)

Based on (1) and (2), we can then easily get the deterministic measurement matrix Φ. Besides, if we need the length of σ to be odd, we simply abandon the first element of u and rearrange the remaining sequence as a new σ. In this way, the length of σ can be either odd or even as long as we adjust the value of c, which means the restriction that N has to be a prime number is lifted.
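The Legendre symbol, and with it a bipolar Legendre sequence of prime period, can be generated via Euler's criterion. The convention x[0] = 1 used below is one common choice; the paper's exact convention for the zero index may differ.

```python
import numpy as np

# Sketch of Definition 1: a bipolar Legendre sequence of prime period p,
# using Euler's criterion a^((p-1)/2) mod p to test for quadratic residues.
def legendre_sequence(p):
    x = np.empty(p)
    x[0] = 1.0                       # convention for the zero index
    for n in range(1, p):
        # n is a quadratic residue mod p iff n^((p-1)/2) = 1 (mod p)
        x[n] = 1.0 if pow(n, (p - 1) // 2, p) == 1 else -1.0
    return x

x = legendre_sequence(11)
# The quadratic residues mod 11 are {1, 3, 4, 5, 9}.
print(x.astype(int).tolist())
```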

Performance in Sparse Reconstruction
In this subsection, we discuss the sparse reconstruction performance of the Φ that we construct when Ψ = I or Ψ = F. When R_Ω is a random sampling operator, we first need to check whether the RIP is satisfied. Consider the σ and A given in subsection 3.1. The required conditions clearly hold when N is sufficiently large, so the Φ we construct can satisfy the RIP; if M satisfies (6), stable reconstruction is guaranteed when Ψ = I or Ψ = F. When R_Ω is a deterministic sampling operator, we instead use simulations of the column coherence of different CS operators to assess the reconstruction performance of Φ after deterministic sampling. For ease of computation, we take signal lengths N ∈ {121, 156, 256, 365, 644, 871, 1200, 1641, 2456, 3661, 47704, 6097, 8404}.
In this experiment, the values of N include both even and odd numbers, in order to verify that the length of σ need not be a prime number. After simulation, we check μ and ‖Φ‖ for each of the values of N. The simulation results for signal length N = 1200 are shown in Fig. 1.
In Figure 1, the column coherence of the Φ with N = 1200 constructed from a decimated Legendre sequence (p = 1201 and c = 2) shows performance similar to that of the random Gaussian or Bernoulli matrices. According to our simulations, the Φ constructed in subsection 3.1 is nearly an incoherent tight frame, which means that stable reconstruction can be guaranteed under the coherence bound discussed in Section 2. In the next section, extensive simulations are carried out to further clarify the performance of the proposed measurement matrix.
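The coherence comparison behind Fig. 1 can be sketched at a small illustrative size. The equispaced deterministic subsampling pattern, the sizes, and the use of an undecimated Legendre-based σ below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

# Compare the column coherence of a Legendre-based partial circulant matrix
# (deterministic equispaced subsampling) against a random Gaussian matrix.
rng = np.random.default_rng(4)

def coherence(Phi):
    Phi = Phi / np.linalg.norm(Phi, axis=0)   # unit-norm columns
    G = np.abs(Phi.conj().T @ Phi)
    np.fill_diagonal(G, 0.0)
    return G.max()

p = 31                                        # odd prime, sequence length N = p
sigma = np.array([1.0] + [1.0 if pow(n, (p - 1) // 2, p) == 1 else -1.0
                          for n in range(1, p)])

F = np.fft.fft(np.eye(p)) / np.sqrt(p)
A = F.conj().T @ np.diag(sigma) @ F           # unitary circulant matrix

M = 15
Omega = np.arange(0, p, p // M)[:M]           # deterministic equispaced rows
Phi_det = A[Omega, :]
Phi_gauss = rng.standard_normal((M, p))

print(round(coherence(Phi_det), 3), round(coherence(Phi_gauss), 3))
```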

SIMULATION RESULTS
In this section, we present the simulation results of the following two experiments.

Experiment 1: We take a k-sparse signal x of length N as the original signal, where k = N/10. Using the OMP algorithm, we measure the output SNR (in dB),

SNR = 20 · log₁₀( ‖x‖₂ / ‖x − x̂‖₂ ),   (12)

where x̂ is the reconstructed signal, together with the compatibility defined in (13).

Experiment 2: We study the application of the proposed measurement matrix to the reconstruction of 2D image signals. In this application, A needs to be real-valued. The OMP reconstruction algorithm is applied here as well. As shown in Figure 3, three 8-bit, 256×256 images are used: Figure 3(a) "Lena" is a portrait image, Figure 3(f) "Stars" is an astronomical image, and Figure 3(k) "Brain" is a medical image. The measurement matrices used in this experiment are all of size M×N with N = 256 and M = 0.6N. The reconstructed SNRs are also shown in Fig. 2, where the "D" in "D-Gaussian" stands for "deterministic", meaning the measurement matrix is constructed by deterministic subsampling, and the "R" in "R-Gaussian" stands for "random".
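Experiment 1 can be sketched at a small illustrative size with a basic OMP loop and the output SNR of equation (12). The Gaussian measurement matrix and the dimensions below are assumptions for illustration; the paper's experiment uses the proposed Legendre-based matrix at larger sizes.

```python
import numpy as np

# Sketch of Experiment 1: recover a k-sparse signal with OMP and report
# SNR = 20*log10(||x|| / ||x - x_hat||) as in equation (12).
rng = np.random.default_rng(5)
N, M, k = 128, 64, 4

x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k)   # bipolar nonzero entries

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

def omp(Phi, y, k):
    residual = y.copy()
    chosen = []
    for _ in range(k):
        # pick the column most correlated with the residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in chosen:
            chosen.append(j)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, chosen], y, rcond=None)
        residual = y - Phi[:, chosen] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[chosen] = coef
    return x_hat

x_hat = omp(Phi, y, k)
snr = 20 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))
print(snr)   # a large SNR indicates near-exact recovery
```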
The simulation results indicate that the proposed deterministic measurement matrix offers performance similar to, and under some conditions even better than, that of commonly used random measurement matrices such as the Gaussian and Bernoulli matrices, both in terms of SNR and in the visual quality of the reconstructed images.

Let x_0 = 1 and let n range from 1 to p − 1, where p is the prime length. The Legendre sequence can then be defined as follows:

x_n = 1, if n is a quadratic residue modulo p;
x_n = −1, otherwise,   (8)

where n ∈ [0, p − 1] is an integer and p is an odd prime of the finite field GF(p). This gives a binary Legendre sequence x = (x_0, x_1, …, x_{L−1}) of period L = p, as in Definition 1.   (9)

Construction: Having obtained a binary Legendre sequence x of period L = p in Definition 1, we construct a binary sequence u = [u_0, u_1, …, u_{q−1}]ᵀ of length q by decimating x with a positive integer c. We then abandon the first and the last element of u and rearrange the remaining sequence in reverse order to obtain the bipolar vector σ. It is clear that the matrix A constructed from this σ is real-valued according to Lemma 1.

Figure 1. Column coherence of different CS operators where N = 1200 (p = 1201 and c = 2).

The simulations also include the reconstruction performance of a random Gaussian operator and a random Bernoulli operator; the reconstruction results are shown in Figure 2. They indicate that the σ constructed in Section 3 provides reconstruction performance similar to that of a random Gaussian or Bernoulli matrix.

Figure 3. Original images and reconstructed results.

Table 1. Output SNRs and compatibility in sparse signal reconstruction.