A Blind Source Separation Algorithm Based on Dynamic Niching Particle Swarm Optimization

In this paper, dynamic niching particle swarm optimization (DNPSO) is proposed to solve the linear blind source separation problem. The key point is to use DNPSO, rather than particle swarm optimization (PSO) or FastICA, as the optimization algorithm in Independent Component Analysis (ICA). Because DNPSO has stronger global search ability, it improves the accuracy and convergence rate of ICA. The sub-population idea in DNPSO also makes it more efficient than other methods when optimizing the high-dimensional cost functions arising in ICA. The performance of ICA based on DNPSO is investigated by numerical experiments.


Introduction
Blind source separation (BSS) is an advanced method in signal processing. In recent years, BSS has gained increasing interest in various fields of application. The problem of linear BSS is, in essence, the problem of separating and estimating the waveforms of the source signals from their linear mixtures, without knowing the characteristics of the transmission channels [1, 8-10]. The traditional method for linear BSS is Independent Component Analysis (ICA).
The main objective of ICA is to estimate an un-mixing matrix by using an optimization algorithm to find the optimal solution. Since the traditional optimization algorithms, including the gradient descent algorithm and the natural gradient algorithm, are easily trapped in local optima and converge slowly, researchers have proposed Particle Swarm Optimization (PSO) and its improved variants as the optimization algorithm for ICA and have achieved remarkable experimental results [1]. However, the standard PSO algorithm can also be trapped in local optima, and its convergence slows down in the later stages of the search [6]. The Chaos-PSO algorithm proposed by Ding [7] improved the search efficiency and accuracy of standard PSO and performs well on low-dimensional functions; however, its accuracy on high-dimensional functions still needs improvement [3]. All of the above demonstrates the need for an optimization algorithm with stronger global search ability for BSS.
In this paper, we propose ICA based on Dynamic Niching Particle Swarm Optimization (DNPSO), which has stronger global search ability in the optimization stage of the ICA algorithm. DNPSO maintains the diversity of its particle swarm by applying the niche technique.
The wide use of the niche technique in Genetic Algorithms (GA) in recent years has demonstrated its remarkable performance in the global optimization of high-dimensional functions [5]. In DNPSO, the particle swarm is divided into several sub-swarms, which dynamically form independent search spaces so as to ensure that every extreme point is examined. Meanwhile, the free particles keep searching for new optima, which helps the algorithm avoid being trapped in local optima [6].
The remainder of this paper is organized as follows. Section 2 presents an overview of the BSS problem based on ICA. Section 3 describes ICA based on standard PSO. Section 4 presents the proposed ICA based on DNPSO. The simulation results are presented in Section 5.

Problem description of BSS
The problem of linear BSS can be modeled as follows: x(t) = A s(t), where s(t) is the vector of source signals, A is the unknown mixing matrix, and x(t) is the vector of observed mixtures. The separation output y(t) = W x(t) is the estimation of the source signal s(t), where W is the separating matrix. Without considering additive noise, the basic idea of BSS is to separate the independent sources from the mixtures by estimating the separating matrix W, so as to realize blind source separation.
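As a minimal illustration of this model (a NumPy sketch with toy sine and square-wave sources standing in for real signals; the separating matrix is taken as the true inverse here, whereas in practice W must be estimated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy source signals (one per row): a sine and a square wave.
t = np.linspace(0, 1, 500)
s = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])

A = rng.uniform(-1, 1, (2, 2))   # unknown mixing matrix
x = A @ s                        # observed mixtures: x(t) = A s(t)

# With the true inverse as separating matrix, y(t) = W x(t)
# recovers the sources exactly.
W = np.linalg.inv(A)
y = W @ x
```

In blind separation neither A nor s is available, so W has to be found by optimizing a cost function of y alone, which is the role of ICA below.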

Independent Component Analysis (ICA) and its optimization algorithm
Independent Component Analysis (ICA) is currently the major method for solving the BSS problem. The most widely used ICA algorithms are adaptive optimization algorithms, which find the separating matrix by applying an optimization algorithm to a cost function built from a measure of independence between the separated signals. When using ICA to solve the BSS problem, the choice of the cost function, which estimates the similarity between the source signals and the separated signals, and the optimization algorithm used to determine the separating matrix greatly influence the separation quality [1].
The core strategy for improving the ICA method lies in the global optimization of the cost function. Traditional ICA uses the gradient descent algorithm and the natural gradient algorithm as optimization algorithms, but these gradient-based methods have the disadvantage of being easily trapped in local optima [2]. In 2007, Vrins, Pham et al. proved that the cost function of the non-Gaussianity maximization principle has a large number of local optima [4]. This demonstrates that an optimization algorithm with poor global search ability may yield unsatisfactory separation precision, since gradient-based algorithms are easily trapped in local optima.
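To make the kind of cost function at stake concrete, the following sketch computes a classic one-unit non-Gaussianity contrast based on excess kurtosis; this particular choice of contrast is an illustration, not necessarily the exact cost function used later in the paper:

```python
import numpy as np

def kurtosis_contrast(w, x):
    """Absolute excess kurtosis of the projection y = w^T x: a
    classic one-unit non-Gaussianity measure (near 0 for a Gaussian
    projection, larger for more non-Gaussian ones)."""
    w = np.asarray(w, dtype=float)
    w = w / np.linalg.norm(w)           # fix the scale of w
    y = w @ np.asarray(x, dtype=float)  # x: (n_mixtures, n_samples)
    y = (y - y.mean()) / y.std()        # standardise the projection
    return abs(np.mean(y ** 4) - 3.0)
```

A gradient ascent on such a contrast can stall at one of its many local maxima [4], which is what motivates population-based optimizers in the following sections.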

ICA based on particle swarm optimization
Particle Swarm Optimization (PSO) is an optimization algorithm inspired by the foraging behavior of bird flocks and fish schools, and in particular by swarm theory. In PSO, each searching individual is regarded as a particle, which represents a potential solution in the D-dimensional solution space. All the particles in the solution space together form the particle swarm.
Assume there are m particles in the swarm, and let p_i = (p_i1, p_i2, ..., p_iD) denote the D-dimensional position vector of the i-th particle. The quality of the current position in the solution space can be estimated by calculating the fitness of p_i with the chosen cost function. PSO disperses the particles over the whole solution space and adjusts each particle's velocity according to its previous velocity, its individual cognition (the best position this particle has found), and the social cognition (the best position the whole swarm has found), so as to make the swarm converge on the position of the optimal solution.
The PSO-based ICA algorithm can be implemented by the following iterative procedure [7]:
1) Set the cost function f(x).
2) Preprocess the source signals and randomly initialize the particles' positions and velocities. Each particle's position represents a candidate separating matrix W (reshaped according to the dimensions of the signals), and its fitness is calculated from the cost function.
3) Update the best previous position p_i of the i-th particle, i.e. the position giving its best fitness value so far, and update the best position p_g found by the whole swarm.
4) Update the velocity and position of each particle:
v_id(t+1) = w v_id(t) + c1 r1 (p_id - x_id(t)) + c2 r2 (p_gd - x_id(t)),
x_id(t+1) = x_id(t) + v_id(t+1),
where w is the inertia weight, the learning factors c1 and c2 are nonnegative real numbers, and r1 and r2 are independent random numbers drawn from the uniform distribution on [0, 1].
5) Repeat steps 3) and 4) until convergence.
6) Use the particle with the best fitness value as the separating matrix W and compute the separated signals y(t) = W x(t).
In the traditional PSO algorithm, every particle shares the best position of the whole swarm, which makes it likely that the swarm converges on an optimum found early, with a slow convergence rate [6]. These limitations call for a more efficient and accurate method.
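The iterative procedure above can be sketched as a minimal standard PSO; the `pso` helper below is an illustrative implementation minimising a generic cost function, not the authors' code:

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal standard PSO; minimises `cost` starting from [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))             # velocities
    pbest = x.copy()                             # personal bests
    pbest_f = np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_f.argmin()].copy()           # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive term + social term
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Example: minimise the sphere function; the swarm converges near 0.
best, best_f = pso(lambda p: np.sum(p ** 2), dim=4)
```

Note that every particle is attracted to the single global best `g`, which is exactly the early-convergence weakness discussed above.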

ICA based on dynamic niching particle swarm optimization
Dynamic Niching Particle Swarm Optimization (DNPSO), proposed by Nickabadi et al. in 2008, is based on the niche technique [6] and is mainly applied as an optimization algorithm for multimodal problems.
DNPSO uses the concept of sub-populations. In this method, the particles are divided into several sub-swarms and a group of free particles. The sub-swarms are created and destroyed dynamically at each generation, and the number of particles in each sub-swarm changes. For each particle, the best-fitted neighbor is taken as its lbest and is called the master of the particle. Any particle that is its own best neighbor can be a candidate solution, or a niche, in the objective function, and is referred to as a self-master particle. For every other particle, the particle's master is replaced with the master of its master repeatedly until a self-master particle is reached. If two masters are too close to each other, the particle with the better fitness value is set as the master of the other particle, and together they form a sub-swarm. All particles that are not members of any sub-swarm are set as free particles [6].
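The master-assignment rule above can be sketched as follows; the neighbourhood size and the use of plain Euclidean distance are illustrative assumptions, and higher fitness is taken to be better:

```python
import numpy as np

def find_masters(x, fitness, n_neighbors=3):
    """For each particle, the master (lbest) is the fittest particle
    among its n_neighbors nearest neighbours, itself included.
    A particle whose master is itself is a self-master (a niche)."""
    n = len(x)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    masters = np.empty(n, dtype=int)
    for i in range(n):
        neigh = np.argsort(d[i])[:n_neighbors]   # nearest neighbours
        masters[i] = neigh[np.argmax(fitness[neigh])]
    return masters            # masters[i] == i  ->  self-master

def resolve(masters):
    """Follow master links until a self-master particle is reached."""
    out = masters.copy()
    for i in range(len(out)):
        while out[i] != masters[out[i]]:
            out[i] = masters[out[i]]
    return out
```

Each self-master then seeds a sub-swarm; particles resolving to the same self-master search the same niche, and the rest remain free particles.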
Owing to the use of the GCPSO method for sub-swarm search and the cognition-only model for free-particle search, each sub-swarm forms a niching search area near a promising solution and converges quickly to the optimum point, while the free particles can find all of the extreme points in the search space and form new sub-swarms around them, which protects the algorithm from being trapped in local optima. DNPSO improves the exploration capability of standard PSO and ensures both the accuracy and the efficiency of the results when applied to high-dimensional functions with multiple local optima.
The most widely used cost functions in the ICA method, such as the non-Gaussianity maximization principle and mutual information estimation, have been shown to have multiple local optima in practice [4]. Considering how these multiple local optima affect the performance of ICA, as discussed above, we use DNPSO in the ICA method for BSS. ICA based on dynamic niching particle swarm optimization (DNPSO) can be implemented as follows:
1) Set the cost function f(x); preprocess and whiten the blind source signals to make them zero mean and unit covariance.
2) Randomly initialize the positions and velocities of the particles; at the beginning, all particles are free particles.
3) Apply the cognition-only PSO model to the group of free particles, updating positions and velocities by
v_id(t+1) = w v_id(t) + c r (p_id - x_id(t)),
x_id(t+1) = x_id(t) + v_id(t+1),
where p_i is the best position of the i-th particle, w is the inertia weight, c is the learning factor, and r is a random number uniformly distributed on [0, 1].
4) For each particle that is not a self-master, replace the particle's master with the master of its master repeatedly until a self-master particle is reached.
5) If the Euclidean distance between two self-master particles is smaller than the threshold value d_min, set the particle with the better fitness value as the master of the other particle, and form a sub-swarm.
6) Update the velocities and positions of the particles in each sub-swarm according to GCPSO [6]; in particular, the best particle of each sub-swarm is updated by
v(t+1) = -x(t) + p̂(t) + w v(t) + ρ(t)(1 - 2r),
where p̂ is the best position of the sub-swarm, ρ(t) is an adaptive search radius, and r is a random number uniformly distributed on [0, 1].
7) Update the members of each sub-swarm. If the Euclidean distance between any two particles belonging to different sub-swarms is smaller than the threshold, merge the two sub-swarms.
8) Repeat steps 3) to 7) until convergence, until the maximum number of iterations is reached, or until all particles have converged.
9) Use the particle with the best fitness value as the separating matrix W and compute the separated signals y(t) = W x(t).
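The two velocity rules used for free particles and for sub-swarm search can be sketched as follows; the GCPSO form follows van den Bergh's formulation, and the parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def free_particle_update(x, v, pbest, w=0.7, c=1.5):
    """Cognition-only model for free particles: each particle is
    drawn only toward its own best position, with no social term."""
    r = rng.random(np.shape(x))
    v_new = w * v + c * r * (pbest - x)
    return x + v_new, v_new

def gcpso_best_update(x_best, v_best, sbest, rho, w=0.7):
    """GCPSO update for the best particle of a sub-swarm: the particle
    resets onto the sub-swarm best `sbest` and probes a random point in
    a box of radius rho around it, so the sub-swarm keeps searching even
    when position, pbest and sbest coincide."""
    r = rng.random(np.shape(x_best))
    v_new = -x_best + sbest + w * v_best + rho * (1 - 2 * r)
    return x_best + v_new, v_new
```

Note that in the GCPSO rule the new position equals sbest + w v + ρ(1 − 2r): with zero velocity, the best particle samples uniformly in a box of radius ρ around the sub-swarm best.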

Simulation Results
ICA based on DNPSO was tested with three speech signals, shown in Figure 2, each with 100000 sampling points on the abscissa and amplitude on the ordinate. In this paper we use the widely used negentropy as the cost function of ICA; its common approximation is
J(y) ≈ [E{f(y)} - E{f(G)}]²,
where G is a Gaussian random variable with zero mean and unit variance, and f is a non-quadratic function that can be chosen according to the requirements.
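The approximation above can be computed as in the following sketch; the log-cosh choice of f and the Monte Carlo estimate of E{f(G)} are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def negentropy_approx(y, f=lambda u: np.log(np.cosh(u)), n_mc=100000):
    """One-unit negentropy approximation J(y) ~ (E[f(y)] - E[f(G)])^2,
    with G standard Gaussian (estimated by Monte Carlo here) and f a
    non-quadratic function; log cosh is a common choice."""
    y = (y - y.mean()) / y.std()      # zero mean, unit variance
    g = rng.standard_normal(n_mc)     # samples of G for E[f(G)]
    return (np.mean(f(y)) - np.mean(f(g))) ** 2
```

By construction the approximation is near zero for a Gaussian signal and grows with non-Gaussianity, so maximising it drives the separated signals away from Gaussianity.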

The results of the three algorithms are shown in the following tables. The performance results show that ICA based on DNPSO outperforms both FastICA and ICA based on standard PSO in separation precision. As to the convergence rate, ICA based on DNPSO also shows remarkable superiority over ICA based on standard PSO.
Owing to its fast convergence rate, ICA based on DNPSO requires fewer iterations and has great superiority in stability. In all of the methods except the proposed algorithm, the search tends to focus on an optimum found early and become trapped in the nearby region. In ICA based on DNPSO, by contrast, the free particles keep searching for better solutions. In general, ICA based on DNPSO solves the problem of low separation accuracy in blind source separation at an acceptable processing rate.

Conclusion
In this paper, an ICA method based on Dynamic Niching Particle Swarm Optimization (DNPSO) is proposed for blind source separation, in view of the difficulties of local optima and slow convergence. To the best of our knowledge, this paper is the first to use DNPSO in the ICA method. The division of the main swarm in DNPSO not only helps guarantee that the global optimum in the search domain is found, but also remarkably improves the separation precision of blind source separation. The simulation results, which compare the proposed method with FastICA and PSO-ICA, show that the proposed method significantly improves the performance of BSS and gives better results.
The parameter settings of ICA based on DNPSO still leave much room for optimization, and additive noise is not considered in this paper. Our future work will focus on parameter optimization, taking additive noise into consideration, and further addressing the nonlinear BSS problem.


Figure 1. Source signals. The mixture signals are generated by mixing the source signals with a random 3×3 matrix.

The parameters of DNPSO are set as follows: the number of particles n = 150; the maximum number of sub-swarms M = 30; the learning rate c = 1.5. The maximum number of particles in a sub-swarm is also limited. To evaluate the results of ICA based on DNPSO, we performed blind source separation with the proposed algorithm and two other algorithms, the well-known FastICA and ICA based on standard PSO. Their separation results are shown in Figure 3 to Figure 5.

Table 1. Performance of the three algorithms in the simulation.