Convolutional Noise PDF at the Convergence State of a Blind Adaptive Equalizer

In the literature, the convolutional noise obtained at the output of a blind adaptive equalizer is often modeled as a Gaussian process during the latter stages of the deconvolution process, where the process is close to optimality. However, up to now, no rigorous mathematical justification has been given for this assumption. Furthermore, no closed-form or approximated closed-form expression exists that specifies the constraints on the system's parameters (equalizer's tap-length, input signal statistics, channel power, chosen equalization method and step-size parameter) under which the Gaussian model for the convolutional noise holds. In this paper, we consider the two independent quadrature carrier input case and the type of blind adaptive equalizers in which the error fed into the adaptive mechanism updating the equalizer's taps can be expressed as a polynomial function of the equalized output up to order three. We show, on a rigorous mathematical basis, that the convolutional noise pdf at the latter stages of the deconvolution process, where the process is close to optimality, is approximately Gaussian provided that certain constraints, depending on the step-size parameter, input constellation statistics, channel power, chosen equalization method and equalizer's tap-length, are satisfied. Simulation results confirm our findings.


Introduction
In this paper, we deal with the convolutional noise arising at the output of a blind deconvolution process. A blind deconvolution process arises in many applications such as seismology, underwater acoustics, image restoration and digital communication [1]-[28]. Let us consider for a moment the digital communication case. During transmission, a source signal undergoes a convolutive distortion between its symbols and the channel impulse response [29]. This distortion is referred to as intersymbol interference (ISI) [29], [30]. Thus, a blind adaptive filter is used to remove the convolutive effect of the system and recover the source signal [30]. This process is called blind deconvolution. Since the updated coefficients used in the blind adaptive filter are not the ideal values, a noise named "convolutional noise" is observed at the output of the deconvolution process in addition to the source signal [29]. In the literature, the convolutional noise is often modeled as a zero-mean white noise for a zero-mean input sequence [20], [31]-[37]. However, none of those works ([20], [31]-[37]) supplies a closed-form expression specifying the constraints on the system's parameters (equalizer's tap-length, input signal statistics, channel power, chosen equalization method and step-size parameter) under which the Gaussian model for the convolutional noise holds. It should be pointed out that in the early stages of the iterative deconvolution process, the ISI is typically large, with the result that the input data sequence and the convolutional noise are strongly correlated, and the convolutional noise sequence is more uniform than Gaussian [37], [38]. According to [37], the convolutional noise is produced from a long and oscillatory wave (denoted in [37] as ∇[n]) that is convolved with the transmitted data sequence. This wave (∇[n]) is, according to [37], a sequence of small numbers representing the residual impulse response of the
channel due to imperfect equalization (deconvolution). If the input sequence is independent and identically distributed (i.i.d.), as is often the case in the digital communication area, then according to [37], if ∇[n] is long enough, the central limit theorem makes a Gaussian model for the convolutional noise plausible. It is well known [34] that the equalizer's tap-length, input signal statistics, channel power, chosen equalization method and equalizer's step-size parameter play a major role in the equalization performance from the residual ISI point of view. Choosing a higher-valued step-size parameter leads to a faster convergence rate but at the same time leaves the system with a higher residual ISI, where it is not clear whether the convolutional noise can still be modeled as Gaussian. In addition, if the equalizer's tap-length is set too small or much higher than needed, we again get a higher residual ISI, where it is not clear whether the convolutional noise can still be modeled as Gaussian. Up to now, there have been no constraints on the equalizer's tap-length, input signal statistics, channel power, chosen equalization method and equalizer's step-size parameter under which the convolutional noise can be approximately considered Gaussian at the convergence state, where the deconvolution process is close to optimality.
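The central-limit-theorem argument above can be checked numerically: convolving an i.i.d. 16QAM symbol stream with a long, oscillatory residual response made of small numbers (a stand-in for ∇[n]; the particular response below is invented purely for illustration) drives the excess kurtosis of the result toward the Gaussian value of zero. A minimal sketch in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# i.i.d. real-part 16QAM symbols (levels +/-1, +/-3)
x = rng.choice([-3.0, -1.0, 1.0, 3.0], size=500_000)

# Hypothetical residual impulse response nabla[n]: long, oscillatory,
# made of small numbers (illustrative stand-in for the paper's wave)
n = np.arange(200)
nabla = 0.01 * np.cos(0.9 * n) * 0.98**n

# Convolutional noise model: nabla[n] convolved with the data sequence
p = np.convolve(x, nabla, mode="valid")

def excess_kurtosis(v):
    """E[v^4]/E[v^2]^2 - 3: zero for a Gaussian random variable."""
    v = v - v.mean()
    return np.mean(v**4) / np.mean(v**2) ** 2 - 3.0

print(excess_kurtosis(x))  # strongly negative: the 4-level input is far from Gaussian
print(excess_kurtosis(p))  # much closer to zero: the central limit theorem at work
```

The exact value for the 4-level input is 41/25 - 3 = -1.36; after the convolution it shrinks roughly by the factor Σ∇^4/(Σ∇^2)^2, which is small for a long response.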
In this paper, we consider the two independent quadrature carrier input case and the type of blind adaptive equalizers in which the error fed into the adaptive mechanism updating the equalizer's taps can be expressed as a polynomial function of the equalized output up to order three, as given in the next section. We show, on a rigorous mathematical basis, that the convolutional noise pdf at the latter stages of the deconvolution process, where the process is close to optimality, is approximately Gaussian provided that certain constraints, depending on the step-size parameter, input constellation statistics, channel power, chosen equalization method and equalizer's tap-length, are satisfied. In addition, we supply simulation results that confirm our findings. The paper is organized as follows: after describing the system under consideration in Section 2, we prove in Section 3 that the convolutional noise pdf at the latter stages of the deconvolution process, where the process is close to optimality, is approximately Gaussian under the above-mentioned constraints. In Section 4, simulation results are presented, and the conclusion is given in Section 5.

System Description
The system under consideration is illustrated in Fig. 1, where we make the following assumptions:

1. The input sequence x[n] belongs to a two independent quadrature carrier constellation with zero mean and variance σ_x^2, where x_r[n] and x_i[n] are the real and imaginary parts of x[n], respectively.

2. The unknown channel h[n] is a possibly nonminimum-phase linear time-invariant filter in which the transfer function has no "deep zeros", namely, the zeros lie sufficiently far from the unit circle.

3. The equalizer c[n] is a tap-delay line.

4. The noise w[n] is an additive white Gaussian noise with zero mean and variance σ_w^2.

In the following, E[·] is the expectation operator and (·)* is the conjugate operator.

The sequence x[n] is transmitted through the channel h[n] and is corrupted by the noise w[n]. Therefore, the equalizer's input sequence y[n] may be written as:

y[n] = x[n] * h[n] + w[n]    (1)

where "*" denotes the convolution operation. The equalized output sequence is defined by:

z[n] = y[n] * c[n] = x[n] + p[n]    (2)

where p[n] is the convolutional noise (convolutional error) due to the non-ideal equalizer's coefficients (h[n] * c[n] does not form an ideal impulse). The adaptation mechanism of the equalizer is given by:

c_l[n+1] = c_l[n] - μ (∂F(z[n])/∂z[n]) y*[n-l]    (3)

where l = 0, 1, 2, ..., (N-1), N is the equalizer's tap-length, μ is the step-size parameter and ∂F(z[n])/∂z[n] is considered in this paper as:

∂F(z[n])/∂z[n] = a_1 z_r[n] + a_3 z_r^3[n] + j (a_1 z_i[n] + a_3 z_i^3[n])    (4)

where z_r[n] and z_i[n] are the real and imaginary parts of z[n], respectively. The constants a_1 and a_3 depend on the chosen algorithm. In the following, we denote ∂F(z[n])/∂z[n] as P[z[n]]. Next, we multiply both sides of (3) by the horizontal channel vector, as was done by [34], [35], and obtain according to [34], for the noiseless case, an expression (5) for ∆p[n], the change in the convolutional error. In the following, we denote ∆p_r[n] as the real part of ∆p[n]. For the noiseless case and based on (2), (4) and (5), ∆p_r[n] is given by (6). Suppose we have a function g(p_r[n]); then we may write, by Taylor expansion [39] and [34], that:

g(p_r[n] + ∆p_r[n]) = g(p_r[n]) + (dg/dp_r[n]) ∆p_r[n] + (1/2)(d^2 g/dp_r^2[n]) (∆p_r[n])^2 + O((∆p_r[n])^3)    (7)

where g in (7) is g(p_r[n]), O(q) is defined by lim_{q→0} (O(q)/q) = r_const and r_const is a constant. Based on [34] and (7), ∆g with g = p_r^2[n] can be approximately given for the noiseless case by (8).
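Since equations (5), (6) and (8) refer to expressions not reproduced here, the following sketch only illustrates the generic building blocks (3) and (4): the order-three polynomial error P[z[n]] and a single tap update. The MMA-style choice a_1 = -R, a_3 = 1 with R = E[x_r^4]/E[x_r^2] is an assumption made for illustration, not a quantity taken from the missing equations:

```python
import numpy as np

def poly_error(z, a1, a3):
    """P[z[n]] of (4): a1*z_r + a3*z_r^3 + j*(a1*z_i + a3*z_i^3)."""
    return (a1 * z.real + a3 * z.real**3) + 1j * (a1 * z.imag + a3 * z.imag**3)

def tap_update(c, y_vec, mu, a1, a3):
    """One step of (3): c_l[n+1] = c_l[n] - mu * P[z[n]] * y*[n-l]."""
    z = np.dot(c, y_vec)                 # z[n] = sum_l c_l[n] y[n-l]
    return c - mu * poly_error(z, a1, a3) * np.conj(y_vec), z

# 16QAM per-quadrature moments (levels +/-1, +/-3): E[x_r^2] = 5, E[x_r^4] = 41
R = 41.0 / 5.0                            # = 8.2, the MMA-style dispersion constant
a1, a3 = -R, 1.0

# With this choice the real error vanishes exactly on the modulus z_r^2 = R
z = complex(np.sqrt(R), 0.5)
print(poly_error(z, a1, a3).real)         # 0 up to rounding

# One update with a center-tap-only equalizer simply passes y[n - N//2] through
c = np.zeros(5, dtype=complex)
c[2] = 1.0
y_vec = np.array([1 + 1j, 3 - 1j, -1 + 3j, -3 - 3j, 1 - 1j])
c_new, z_out = tap_update(c, y_vec, 1e-4, a1, a3)
```

The same two functions cover any algorithm of this class: only the constants a_1 and a_3 change.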

The Obtained Convolutional Noise PDF
In this section, we derive the approximated convolutional noise pdf valid for the convergence state, where the deconvolution process is close to optimality. In the following, we ignore the input noise to the equalizer, which is a Gaussian process, in order to have at the equalizer's output only the desired signal in addition to the convolutional noise.

Theorem
Under the following additional assumptions:

Assumption 1: The step-size parameter, input constellation statistics, channel power, the constants of the chosen equalization method and the equalizer's tap-length satisfy the constraint given in (9), where R is the channel coefficients' length.

Assumption 2: The real and imaginary parts of the convolutional noise are independent at the convergence state, where the process is close to optimality.

the convolutional noise pdf at the convergence state, where the deconvolution process is close to optimality, is approximately a Gaussian pdf.

Comments
Assumption 2 was also used in [34] and [35], where satisfactory results were obtained. Since the real and imaginary parts of the input source signal are independent, it is reasonable to assume that the real and imaginary parts of the convolutional noise are also independent at the convergence state, where the process is close to optimality.

Proof
In the following, we consider only the real part of the convolutional noise and show that its pdf is approximately Gaussian. We start our proof by recalling the one-dimensional Edgeworth expansion up to order four, following [33], [40], [41], for approximating the unknown convolutional noise pdf, given in (10). Next, we take the expectation operator on both sides of (8) and leave only terms of m_v = E[p_r^v[n]] with v = 0, 2 (which are the leading terms). Thus we may have (11). At the convergence state, ∆[m_2] ≅ 0. Thus, we may write, based on (11), that at the convergence state, where the deconvolution process is close to optimality, (12) holds. Next, we turn to find a closed-form approximated expression for E[p_r^4[n]] valid at the convergence state, where the deconvolution process is close to optimality. For that purpose, we use (7) with g = p_r^4[n] and obtain (13). Next, we take the expectation operator on both sides of (13) and leave only terms of m_j = E[p_r^j[n]] with j = 0, 2, 4, which yields (14). At the convergence state, ∆[m_4] ≅ 0. Thus, we may write, based on (14) and (12), that at the convergence state, where the deconvolution process is close to optimality, (15) holds. Next, by using (14) and (11), we have (16). Now, substituting (16) into (15), we obtain (17). Let us come back to (14). According to (14), if (E[B])^2 is negligible (as was assumed in [34], [35]), then according to (11) we also have (18), (19). Thus, based on Assumption 1 from this section, we have (20). Substituting (20) into (17) and using (12) leads to (21), namely E[p_r^4[n]] ≅ 3(E[p_r^2[n]])^2, so the order-four term in the Edgeworth expansion vanishes and the pdf (10) reduces to a Gaussian pdf. This completes our proof.
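The endpoint of the proof, E[p_r^4[n]] ≅ 3(E[p_r^2[n]])^2, is exactly the statement that the fourth cumulant of p_r[n] (the coefficient of the order-four Edgeworth correction) vanishes, which is the defining moment relation of a zero-mean Gaussian variable. A quick numerical sanity check of that relation:

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.normal(0.0, 0.3, 1_000_000)   # stand-in for p_r[n] at convergence

m2 = np.mean(p**2)
m4 = np.mean(p**4)
kappa4 = m4 - 3.0 * m2**2             # fourth cumulant: Edgeworth order-four coefficient

print(m4 / (3.0 * m2**2))             # ~1 for a Gaussian variable
print(kappa4)                         # ~0, so the order-four correction vanishes
```

Conversely, any measurable gap of m4 from 3·m2^2 would keep the order-four term of the Edgeworth expansion alive and pull the pdf away from the Gaussian shape.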

Simulation
In this section, we show via simulation results that the convolutional noise pdf at the latter stages of the deconvolution process, where the process is close to optimality, is approximately Gaussian provided that the constraints depending on the step-size parameter, input constellation statistics, channel power, chosen equalization method and equalizer's tap-length are satisfied. For this purpose, we use two different blind equalization methods for the 16QAM constellation input (a modulation using ±{1,3} levels for the in-phase and quadrature components) with the following channel (initial ISI = 0.44), where the channel parameters were determined according to [1]:

h_n = 0 for n < 0;  h_n = -0.4 for n = 0;  h_n = 0.84 · 0.4^(n-1) for n > 0.

The equalizer taps for the MMA algorithm [13], [42], [43] were updated according to (22), (23). Based on Assumption 1, the step-size parameter μ_MMA was chosen according to (24), where for the 16QAM input constellation and the above-mentioned channel we have (25). The equalizer taps for the WNEW algorithm [20] were updated according to (26), (27). Based on Assumption 1, the step-size parameter μ_WNEW was chosen according to (28), where for the 16QAM input constellation and the above-mentioned channel we have (29). For both equalization methods, the equalizer's tap-length was set to 13. Both equalizers were initialized by setting the center tap equal to one and all others to zero. In the following, we denote f as E[p_r^4[n]] - 3(E[p_r^2[n]])^2 and the normalized f as |f|/|f|_first, where |f|_first is the absolute value of f obtained at the first iteration during the deconvolution process. Fig. 2 and Fig. 3 show the normalized f as a function of iteration number for the MMA and WNEW algorithms for two cases of step-size parameters. For Fig. 2, we chose the step-size parameters for the MMA and WNEW algorithms approximately ten times smaller than the values given in (25) and (29), respectively. According to Fig. 2, the normalized f achieved a value of approximately 10^-6 for both algorithms. This value (10^-6), compared to the value of the normalized f at the first iteration (which is 1), can be considered negligible, namely approximately zero. For Fig. 3, we chose the step-size parameters for the MMA and WNEW algorithms approximately two hundred times smaller than the values given in (25) and (29), respectively. According to Fig. 3, the normalized f achieved a value of approximately 10^-8 for both algorithms. This value (10^-8), compared to the value of the normalized f at the first iteration (which is 1), can again be considered negligible, namely approximately zero. Thus, according to Fig. 2 and Fig. 3, if (9) holds, E[p_r^4[n]] - 3(E[p_r^2[n]])^2 turns approximately to zero and makes the approximated convolutional noise pdf given in (10) Gaussian.
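A single-trial sketch of the MMA experiment can be assembled from the quantities this section does spell out (the channel, the 16QAM levels, tap-length 13, center-tap initialization, and the Fig. 2 step size μ_MMA = 0.00002). The MMA error form z_r(z_r^2 - R) + j·z_i(z_i^2 - R) with R = E[x_r^4]/E[x_r^2], the windowed estimate of f, and the delay/gain alignment are assumptions of this sketch, since equations (22)-(25) are not reproduced above:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 80_000                                   # symbols in the single trial
levels = np.array([-3.0, -1.0, 1.0, 3.0])    # 16QAM levels per quadrature
x = rng.choice(levels, M) + 1j * rng.choice(levels, M)

# Channel from this section: h[0] = -0.4, h[n] = 0.84*0.4**(n-1) for n > 0
h = np.concatenate(([-0.4], 0.84 * 0.4 ** np.arange(10)))
y = np.convolve(x, h)[:M]                    # noiseless case, as analyzed

N, mu = 13, 2e-5                             # tap-length and the Fig. 2 step size
R = 41.0 / 5.0                               # E[x_r^4]/E[x_r^2] for levels +/-1, +/-3
c = np.zeros(N, dtype=complex)
c[N // 2] = 1.0                              # center-tap initialization

z_hist = np.empty(M - N, dtype=complex)
for n in range(N, M):
    y_vec = y[n - N + 1 : n + 1][::-1]       # y[n], y[n-1], ..., y[n-N+1]
    z = np.dot(c, y_vec)
    err = (z.real**3 - R * z.real) + 1j * (z.imag**3 - R * z.imag)
    c -= mu * err * np.conj(y_vec)           # MMA-style tap update
    z_hist[n - N] = z

# Align the output to the source up to the delay/gain of the combined response
s = np.convolve(h, c)
d = int(np.argmax(np.abs(s)))
p_r = (z_hist - s[d] * x[np.arange(N, M) - d]).real  # real convolutional noise

def f_hat(v):
    """Windowed estimate of f = E[p_r^4[n]] - 3 E^2[p_r^2[n]]."""
    return np.mean(v**4) - 3.0 * np.mean(v**2) ** 2

ratio = abs(f_hat(p_r[-2000:])) / abs(f_hat(p_r[:500]))
print(ratio)   # orders of magnitude below 1, echoing the trend of Fig. 2
```

One trial is noisier than the paper's 1000-trial Monte Carlo average, but it reproduces the qualitative behavior: the normalized f collapses by several orders of magnitude as the residual ISI shrinks.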

Conclusion
In this paper, we considered the two independent quadrature carrier input case and the type of blind adaptive equalizers in which the error fed into the adaptive mechanism updating the equalizer's taps can be expressed as a polynomial function of the equalized output up to order three, as described in the system description section. We have shown, on a rigorous mathematical basis, that the convolutional noise pdf at the latter stages of the deconvolution process, where the process is close to optimality, is approximately Gaussian provided that certain constraints, depending on the step-size parameter, input constellation statistics, channel power, chosen equalization method and equalizer's tap-length, are satisfied. Simulation results confirmed our findings.

Figure 1. Block diagram of a baseband communication system.

Figure 2. Normalized f as a function of iteration number for two equalization methods (MMA and WNEW). The step-size parameters were set to μ_MMA = 0.00002 and μ_WNEW = 0.00008. The results were obtained from 1000 Monte Carlo trials. The obtained residual ISI for the MMA and WNEW algorithms was approximately −30 dB and −33 dB, respectively.

Figure 3. Normalized f as a function of iteration number for two equalization methods (MMA and WNEW). The step-size parameters were set to μ_MMA = 0.000001 and μ_WNEW = 0.000004. The results were obtained from 100 Monte Carlo trials. The obtained residual ISI for the MMA and WNEW algorithms was approximately −44 dB and −46 dB, respectively.