Parameter Selection of the PSO Algorithm and Its Influence on Fault Diagnosis Performance

The particle swarm optimization (PSO) algorithm is an intelligent optimization method. The selection of its parameters plays an important role in the performance and efficiency of the algorithm. In this paper, the performance of PSO is analyzed as the control parameters vary, including the particle number, the acceleration constants, the inertia weight, and the maximum velocity. PSO with dynamic parameters is then applied to neural network training for gearbox fault diagnosis, and the results obtained with different PSO parameters are compared and analyzed. Finally, some suggestions for parameter selection are proposed to improve the performance of PSO.


Introduction
Particle swarm optimization (PSO) is an optimization algorithm based on swarm intelligence [1]. The algorithm is initialized with a population of random candidate solutions. Swarm intelligence, produced by cooperation and competition among the particles of the population, guides the optimization and search. PSO has a profound intelligence background and can be implemented easily and simply. Owing to these advantages, PSO has been applied to function optimization, artificial neural network training, fuzzy control systems, and other fields.
Parameter selection is the key influence on the performance and efficiency of PSO. How to determine the optimum parameters for optimal performance is itself a complicated optimization problem. There is no general method to determine the optimum parameters, which are usually selected by user experience, because different parameters have different parameter spaces and interact with each other [2]. However, regularities in how each parameter influences the performance of the algorithm can be found. In this paper, the performance of PSO is analyzed as the control parameters vary, including the particle number m, the acceleration constants c1 and c2, the inertia weight w, and the maximum velocity v_max. The effect of these parameters is then studied by applying PSO to neural network training for a gearbox, and the fault diagnosis results obtained with different PSO parameters are compared and analyzed. Finally, some suggestions for parameter selection are proposed.

Particle Swarm Optimization Algorithm
The particle swarm optimization algorithm makes each particle in the population follow the current superior particles at a certain velocity, searching for the optimal solution in the solution space. The flying velocity of a particle is adjusted dynamically according to the flight experience of the individual and of the community [3]. The update formulas of the particle swarm algorithm are as follows:

v_id(t+1) = w * v_id(t) + c1 * r1 * (p_id - x_id(t)) + c2 * r2 * (p_gd - x_id(t))   (1)
x_id(t+1) = x_id(t) + v_id(t+1)   (2)

where x_id(t) is the current position of particle i, v_id(t) is its current velocity, r1 and r2 are random numbers uniformly distributed in [0, 1], p_id is the best position found by particle i so far, and p_gd is the best position found by the entire swarm. w is a positive constant called the inertia weight, which decreases linearly with the iterations; c1 and c2 are non-negative constants called the cognitive and social learning rates.
Iteration stops when the maximum iteration number is reached or when the best position found so far satisfies a given fitness threshold.
In formula (1), the velocity update of PSO consists of three components: the momentum component, the cognitive component, and the social component. How these three components are balanced determines the performance of PSO. This evolution equation is used as the standard PSO algorithm.
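The update of a single particle described by formulas (1) and (2) can be sketched in Python as follows; the default values w = 0.7 and c1 = c2 = 2.0 are illustrative, not the settings used in the experiments of this paper:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle.

    x, v, pbest are the position, velocity, and personal-best lists of
    particle i; gbest is the swarm-best position.
    """
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]                        # momentum component
              + c1 * r1 * (pbest[d] - x[d])   # cognitive component
              + c2 * r2 * (gbest[d] - x[d]))  # social component
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

With c1 = c2 = 0 and w = 1 the update reduces to pure momentum, which makes the three-component structure easy to verify.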

Parameters Selection
The PSO algorithm includes some tuning parameters, such as inertia weight w , particle number m , accelerate

Analysis of inertia weight selection
The inertia weight describes how much the previous velocity influences the current velocity, and controlling it tunes the balance between the global and local search abilities of PSO. If w = 0, the velocity of a particle depends only on its current position, the personal best pbest, and the global best gbest; the velocity has no memory. If w != 0, the particle has a tendency to explore new space: the larger w is, the larger the flying velocity of the particle, and the longer the steps with which it explores. The smaller w is, the smaller the flying velocity, and the more the particle tends toward local exploitation.
At present, the most widely used inertia weight is the linearly decreasing weight (LDW) proposed by Shi:

w(t) = w_max - (w_max - w_min) * t / T   (3)

where w_max is the maximum inertia weight, w_min is the minimum inertia weight, t is the current iteration number, and T is the maximum iteration number.
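The linearly decreasing weight can be sketched as follows; the defaults w_max = 0.9 and w_min = 0.4 are commonly used illustrative values, not necessarily those used in this paper:

```python
def ldw(t, T, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight (LDW).

    t is the current iteration number, T the maximum iteration number.
    """
    return w_max - (w_max - w_min) * t / T
```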

Analysis of accelerate constant selection
The acceleration constants c1 and c2 represent the stochastic acceleration weights pulling each particle toward the personal best (pbest) and the global best (gbest). Small acceleration constants may let the particle wander far from the goal area; large acceleration constants may make the particle move quickly toward the goal area, or even fly past it. If c1 = c2 = 0, the particle flies with its current velocity until it reaches the border; it is then difficult for the particle to find a good solution, because it can only search within a limited area. When c1 = 0, the particle has no cognitive ability (the social-only model), and the algorithm reaches new search areas through the cooperation of the particles. Its convergence is faster than that of the standard algorithm, but on complicated problems it easily gets trapped in local optima.
When c2 = 0, no information is shared between particles (the cognition-only model), so a swarm with m particles is equivalent to m independent single particles. The probability of finding the optimum is very small because the individuals have no interaction.
In previous research, Kennedy and Eberhart observed that a relatively high value of the cognitive component, compared with the social component, results in excessive wandering through the search space [1]. In contrast, a relatively high value of the social component may lead particles to rush prematurely toward a local optimum. From their experiments they suggested setting both acceleration coefficients to 2, so that the mean of each stochastic factor in (1) is 1. In reality, these studies were restricted to particular applications, so the recommendation cannot be generalized to all fields. Suganthan tested a method of linearly decreasing both acceleration coefficients with time, but observed that fixed acceleration coefficients of 2 generated better solutions [4]. However, through empirical studies he suggested that the acceleration coefficients should not be equal to 2 all the time [5].
Generally, in population-based optimization methods it is desirable to encourage the individuals to wander through the entire search space during the early stages, without clustering around local optima; during the latter stages, it is important to enhance convergence toward the global optimum, so that the optimal solution is found efficiently.
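A time-varying schedule in the spirit of the one tested by Suganthan [4] decreases both acceleration coefficients linearly with the iteration number; the endpoint values c_start = 2.5 and c_end = 0.5 below are illustrative assumptions, not values from the cited work:

```python
def decreasing_acceleration(t, T, c_start=2.5, c_end=0.5):
    """Linearly decrease both acceleration coefficients with iteration t.

    Sketch of a time-varying coefficient schedule; c_start and c_end
    are illustrative endpoints. Returns (c1, c2).
    """
    c = c_start - (c_start - c_end) * t / T
    return c, c
```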

Analysis of particle number of population selection
Shi and Eberhart found that PSO is not sensitive to the particle number [6]. Their conclusion was based on the mean number of iterations needed to reach a given precision. Zhang Li-ping studied the mean values for various benchmark functions as the population size varied, including Sphere, De Jong's f4, Rosenbrock, Rastrigin, Griewank, Schaffer's f6, and Ackley [7]. The conclusion was that the effect on PSO is small when the particle number exceeds 50, in agreement with Shi and Eberhart; however, when the particle number is below 50, it strongly influences the performance of PSO. From a computational-complexity viewpoint, more particles require more function evaluations and therefore more computing time, but they also increase the reliability of the algorithm.

Analysis of the max velocity selection
The velocity of a particle is usually restricted to a range in order to reduce the probability of the particle leaving the search space during the evolutionary process.
v_max is the maximum velocity of a particle. If v_max is excessively large, the velocity is effectively unlimited and the particle may fly past good solutions; the performance of PSO then depends on the inertia weight. In contrast, if v_max is excessively small, the particle cannot explore the space beyond the local best area and easily gets trapped in a local optimum. For these reasons, an excessively large or small maximum velocity degrades the performance of the algorithm.
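The velocity restriction described above is a simple component-wise clamp, which can be sketched as:

```python
def clamp_velocity(v, v_max):
    """Restrict each velocity component to the range [-v_max, v_max]."""
    return [max(-v_max, min(v_max, vd)) for vd in v]
```

This clamp is applied to each particle's velocity after the update of formula (1) and before the position update of formula (2).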

Structure of neural network for gearbox fault diagnosis
In this diagnostic system, the gearbox is the object of study. Because the gearbox is a very complicated transmission mechanism, the relations between its faults and symptoms are not well defined and form a nonlinear mapping. BP neural network models have good self-learning, self-adaptation, associative memory, and nonlinear identification properties, and are particularly suitable for complicated pattern recognition, so they are widely used in gear fault diagnosis.
Here, the gearbox of a tractor is the object of study, and some frequency-domain feature parameters are selected according to the gear faults. The gearbox fault diagnosis system is therefore defined by a three-layer BP neural network with a 15-31-3 topology. The outputs of the neural network correspond to the fault types. There are three fault types: no fault, root-of-tooth crack, and gear-tooth collapse.
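When PSO trains such a network, each particle typically encodes all weights and biases of the network as one position vector, so the particle dimension follows directly from the topology. A minimal sketch, assuming fully connected layers with one bias per neuron:

```python
def particle_dimension(layers):
    """Number of weights and biases a PSO particle must encode for a
    fully connected feed-forward network with the given layer sizes.

    Each layer pair contributes (n_in + 1) * n_out parameters
    (n_in weights plus one bias per output neuron).
    """
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layers, layers[1:]))
```

For the 15-31-3 network this gives (15 + 1) * 31 + (31 + 1) * 3 = 592 dimensions per particle.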

Analysis of the influence of PSO parameter selection on fault diagnosis performance
The simulation experiments were performed on the neural network with standard PSO while the main control parameters, including the particle number m and the maximum velocity v_max, were tuned. Their effect on the neural network training process and on the fault diagnosis results was then studied and analyzed.

The effect of different particle number
In the simulation experiments, the particle number m was varied, and 9 groups of gearbox samples were input to train the neural network. The training error curves of the network with different particle numbers are shown in Figure 1(a) and (b). When m is 15 or 20, the error curves converge quickly, as observed in Figure 1(a): at about 150 iterations, the mean square errors reach the specified precision of 0.001. When m is 30 or 50, the error curves converge slowly, as observed in Figure 1(b): at the maximum of 2000 iterations, the individual mean square errors are 0.1668 and 0.3333, which do not reach the specified precision of 0.001, and the algorithm gets stuck in local optima.

The effect of the max velocity
In previous research, v_max was usually a constant, typically 1. In this study, the performance of PSO with a varying v_max is observed. v_max is changed with the iteration number as in formula (4), and the PSO with dynamic v_max is called VPSO:

v_max(t) = V_c * (1 - t / T_max)   (4)

where t is the current iteration number, T_max is the maximum iteration number, and V_c is a set velocity constant.
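A v_max that decreases linearly with the iteration number, as described for VPSO, can be sketched as follows; the exact form V_c * (1 - t / T_max) is an assumption reconstructed from the symbols in the text, and V_c = 1.0 is an illustrative default matching the constant value used in earlier work:

```python
def dynamic_vmax(t, T_max, V_c=1.0):
    """Maximum velocity decreasing linearly with the iteration number t.

    Sketch of the dynamic v_max used by VPSO; the exact functional form
    is a reconstruction, not taken verbatim from the source.
    """
    return V_c * (1.0 - t / T_max)
```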
This algorithm was applied to simulation experiments on the neural network with the 15-31-3 topology; the other parameters were kept the same as above. Figure 2 shows the training error curves of VPSO and PSO. For the same group of samples, both algorithms converge quickly during the first 50 steps; but between 50 and 100 evolution steps, VPSO clearly converges faster than PSO. This indicates that a v_max varying linearly with the iteration number accelerates convergence.
The two output mean-error curves are shown in Figure 3.

Conclusion
In this paper the fault diagnosis results with different PSO parameters have been compared and analyzed. The conclusions are as follows: (1) When the particle number m < 100, the effect of varying m on the training performance of the network and on the diagnostic accuracy is obvious; when m > 100, the effect is not obvious. As the particle number increases, the reliability of the PSO algorithm improves, but more particles require more function evaluations, so the running time of the algorithm increases and its efficiency drops. Therefore, the particle number should be selected according to the network architecture, balancing the reliability of the algorithm against the required time. Generally speaking, a particle number of 20 is enough for ordinary problems, while 50 may be selected for complicated problems.
(2) When the maximum velocity v_max is set as a dynamic parameter that varies with the iteration number, the performance of the algorithm is better, the accuracy and efficiency of fault diagnosis are clearly improved, and convergence is accelerated. v_max indirectly influences the global search ability. Dynamically modifying v_max keeps the particles searching in a reasonable space, preventing them from wandering through the whole search space or rushing toward local optima.