Application of GA-LM-BP Neural Network in Fault Prediction of Drying Furnace Equipment

In the foundry, the surface dry furnace is a special piece of equipment used to dry the surface of sand cores after hydrophobic coating. In order to accurately predict whether the equipment is likely to malfunction, four objective variables were used as inputs and the health status of the equipment was used as the output, and a prediction model based on the traditional BP neural network was established. The model combines a genetic algorithm (GA) to optimize the initial weights of the BP neural network, and the LM (Levenberg-Marquardt) algorithm to address the problem that the error decreases too slowly as the predicted value approaches the target value. Four evaluation methods were used in Matlab to compare the prediction results of the three models in simulation training. The research shows that the improved algorithm overcomes the slow convergence of the traditional BP neural network and its tendency to fall into local optimal solutions, and achieves higher prediction accuracy, providing a new solution for fault prediction of the surface dry furnace.


Introduction
In the foundry, the working principle of the dry furnace is to generate heat through gas combustion and then use a circulating fan to send the hot air of the combustion chamber into the furnace to spray and dry the workpieces [1]. In order to improve the intelligence level of the workshop and save manpower and material costs, it is of great significance to establish a model for predicting failures of the dry furnace.
The BP (back propagation) neural network has good nonlinear mapping ability and is widely used in many fault prediction fields. Due to limitations of the algorithm itself, the BP neural network still has two main shortcomings: (1) convergence is too slow; (2) it easily falls into local optimal solutions. In response to these problems, domestic and foreign scholars have proposed many improvements. For example, [2] proposed a general modeling method for machine tool thermal error that uses a GA to optimize the structure and initial values of a BP neural network; this method improves the prediction accuracy of the network, but the convergence speed is not discussed in depth. [3] introduced an improved BP neural network based on the simulated annealing algorithm, which mitigates the tendency of the BP network to fall into local optima by finding a better-optimized sample subset, but the convergence speed remains limited. [4] proposed a method combining a GA with rough sets to optimize the neural network, which improved the prediction accuracy of the traditional BP neural network and enhanced its generalization ability.
Based on domestic and foreign research, this paper constructs a prediction model from the historical operation data of the dry furnace on the work site and proposes a BP neural network optimized by a GA-LM combination to predict whether the surface dry furnace may fail in the future. Experiments demonstrate the effectiveness of the proposed method.

BP neural network model
The BP neural network is capable of learning and storing a large number of input-output mapping relationships without knowing the mathematical functions that explicitly describe these relationships. Figure 1 shows the basic structure of a BP neural network, where n is the number of neurons in the input layer, r is the number of hidden layers, and m is the number of neurons in the output layer.
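As an illustrative sketch (in Python rather than the paper's Matlab), the forward pass of such a three-layer network with a sigmoid hidden layer and a linear output layer can be written as follows; the layer sizes match the values chosen later in the paper (n = 4 inputs, 9 hidden neurons, m = 1 output), while the weight values here are random placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    # Sigmoid hidden layer followed by a linear (purelin) output layer
    h = sigmoid(W1 @ x + b1)
    return W2 @ h + b2

# Layer sizes: n = 4 inputs, 9 hidden neurons, m = 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(9, 4)), np.zeros(9)
W2, b2 = rng.normal(size=(1, 9)), np.zeros(1)
y = forward(np.array([0.5, 0.2, 0.8, 0.1]), W1, b1, W2, b2)
```

Training then adjusts W1, b1, W2 and b2 so that the squared error between y and the target output is minimized.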

GA applied to BP neural network
Genetic Algorithm (GA) is a global optimization search algorithm developed by simulating the selection and elimination mechanisms of biological evolution in nature.
The steps of using the GA to optimize the BP neural network are as follows:
(1) Encoding. The GA needs to encode the object of study. For a network with one hidden layer of R neurons, the code length S (the total number of weights and biases) is expressed as:
S = n·R + R·m + R + m (2)
(2) Calculation of fitness values. The fitness value is the standard for judging individual performance.
The fitness F is taken as the reciprocal of the sum of squared errors SE:
F = 1 / SE (3)
SE = Σ_{i=1}^{t} Σ_{k=1}^{m} (y_k − y_ok)² (4)
In equation (4), t is the number of training samples, m is the number of output neurons, y_k is the output value of the output layer of the BP neural network, and y_ok is the expected output value.
(3) Selection operation. According to the fitness values, individuals are selected by the roulette-wheel method, with selection probability
p_i = F_i / Σ_{j=1}^{z} F_j (5)
where p_i is the probability that the i-th individual is selected, F_i is the fitness of the i-th individual, and z is the population size.
(4) Crossover operation. Assuming that the h-th chromosome and the l-th chromosome cross at the j-th position, the arithmetic crossover is
d_hj' = b·d_hj + (1 − b)·d_lj (6)
d_lj' = b·d_lj + (1 − b)·d_hj (7)
where b is a random number between 0 and 1, d_hj is the j-th gene of the h-th chromosome, and d_lj is the j-th gene of the l-th chromosome.
(5) Mutation operation. The mutation operation is what gives the GA its variability. An adaptive mutation operator is used:
C = k1·(Fmax − F) / (Fmax − Favg) when F ≥ Favg; C = k2 when F < Favg (8)
where Fmax is the maximum fitness of the population, Favg is the average fitness of the population, F is the fitness of the individual, k1 and k2 are random numbers between 0 and 1, and C is the mutation operator.
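The steps above can be sketched as a minimal GA loop, shown here in illustrative Python. The fitness (reciprocal of squared error), roulette-wheel selection and arithmetic crossover follow the steps described; the mutation is simplified to a small random perturbation rather than the adaptive operator, and the squared-error function is a stand-in for a BP network's training error:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(se_value):
    # F = 1 / SE: lower squared error means higher fitness
    return 1.0 / se_value

def roulette_select(pop, F):
    # p_i = F_i / sum_j F_j: sample the next generation proportionally to fitness
    p = F / F.sum()
    idx = rng.choice(len(pop), size=len(pop), p=p)
    return pop[idx]

def crossover(pop, pc=0.7):
    # Arithmetic crossover of paired chromosomes at a random gene position j
    pop = pop.copy()
    for h in range(0, len(pop) - 1, 2):
        if rng.random() < pc:
            j, b = rng.integers(pop.shape[1]), rng.random()
            dh, dl = pop[h, j], pop[h + 1, j]
            pop[h, j] = b * dh + (1 - b) * dl
            pop[h + 1, j] = b * dl + (1 - b) * dh
    return pop

def mutate(pop, pm=0.05, scale=0.1):
    # Simplified mutation: perturb each gene with probability pm
    mask = rng.random(pop.shape) < pm
    return pop + mask * rng.normal(scale=scale, size=pop.shape)

# Demo: evolve a weight vector toward a target (stand-in for minimizing BP training error)
target = np.array([0.3, -0.7, 0.5])
se = lambda w: ((w - target) ** 2).sum() + 1e-9
pop = rng.normal(size=(20, 3))
for _ in range(100):
    F = np.array([fitness(se(ind)) for ind in pop])
    pop = mutate(crossover(roulette_select(pop, F)))
best = pop[np.argmax([fitness(se(ind)) for ind in pop])]
```

In the actual model, each chromosome would hold all S weights and biases of the BP network, and the best individual found by the GA becomes the network's initial weights.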

GA-LM applied to BP neural network
The LM (Levenberg-Marquardt) algorithm is also called the damped least squares method. It combines the rapid initial error reduction of the gradient descent method with the ability of the quasi-Newton method to find a good search direction when the error is very small, and it is not prone to oscillation. Its convergence is therefore faster and more stable than that of the gradient descent method. The process of combining the three methods is shown in Figure 2.
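The core of the LM algorithm is the damped update Δw = −(JᵀJ + μI)⁻¹Jᵀe, where the damping factor μ is increased when a step fails (so the update behaves like gradient descent) and decreased when it succeeds (so it approaches Gauss-Newton). A minimal illustrative sketch on a toy least-squares problem, not the paper's implementation:

```python
import numpy as np

def lm_step(w, jacobian, residuals, mu):
    # One Levenberg-Marquardt update: dw = -(J^T J + mu*I)^(-1) J^T e
    # Large mu ~ gradient descent (robust far from the minimum);
    # small mu ~ Gauss-Newton (fast near the minimum).
    J = jacobian(w)
    e = residuals(w)
    H = J.T @ J + mu * np.eye(len(w))
    return w - np.linalg.solve(H, J.T @ e)

# Demo: fit y = a*x + b; the residuals are linear in w, so the Jacobian is constant
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
residuals = lambda w: w[0] * x + w[1] - y
jacobian = lambda w: np.stack([x, np.ones_like(x)], axis=1)

w, mu = np.zeros(2), 0.01
for _ in range(20):
    w_new = lm_step(w, jacobian, residuals, mu)
    # Standard mu schedule: shrink on improvement, grow otherwise
    if (residuals(w_new) ** 2).sum() < (residuals(w) ** 2).sum():
        w, mu = w_new, mu / 10
    else:
        mu *= 10
```

For BP training, the residual vector holds the per-sample output errors and J their derivatives with respect to every weight; this is the update that Matlab's LM training function applies at each epoch.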

Input and output determination
The writing background of this paper is the foundry of an engine manufacturer in China. The four main parameters of the dry furnace equipment are shown in Table 1.

Table 1. Main parameters of the dry furnace equipment.
Drying zone temperature (℃): the temperature at which the drying function is carried out in the dry furnace.
Residual temperature zone temperature (℃): the temperature of the area through which the sand core is transported out after drying.
Chain speed (m/min): the speed of the ramp on which the sand core is located.
Air volume (m³/s): the air volume blown by the circulating fan.
Workshop maintenance personnel evaluate the health status of the dry furnace equipment once a week through a comprehensive evaluation plan based on these four factors, and classify the health status of the equipment accordingly [5].

Determination of GA-LM-BP neural network parameters
(1) Choice of the number of hidden layers. Studies have shown that a 3-layer BP network with an S-shaped (sigmoid) hidden layer and a linear output layer can approximate any function, so one hidden layer is chosen: r = 1.
(2) Selection of the number of neurons in the output layer. The output of the neural network is the health status, so there is only one neuron in the output layer: m = 1.
(3) Selection of the number of neurons in the input layer. The model has four influencing factors, so n = 4.
(4) The selection of the number of neurons in the network hidden layer. There is no mature theory to determine its number. In practical applications, it is usually determined according to formula (9) combined with experience.

R = √(m + n) + s (9)
where s is a positive integer no greater than 10. Substituting the values of n and m gives a value range for R of 3-13. After repeated testing, R = 9 was finally chosen.
(5) Sample normalization. The sample data need to be normalized so that the input values all lie in the interval [0, 1]. The formula for normalization is
x' = (x − xmin) / (xmax − xmin)
where xmin and xmax are the minimum and maximum values in the sample data.
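The min-max normalization above can be sketched as follows (illustrative Python; the temperature readings are hypothetical example values, not data from the paper):

```python
import numpy as np

def min_max_normalize(x):
    # Map sample values into [0, 1]: x' = (x - xmin) / (xmax - xmin)
    xmin, xmax = x.min(), x.max()
    return (x - xmin) / (xmax - xmin)

temps = np.array([180.0, 200.0, 220.0, 195.0])  # hypothetical drying-zone readings
scaled = min_max_normalize(temps)
```

In practice the same xmin and xmax computed on the training set must also be applied to the test set, so that both are scaled consistently.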
(6) Selection of training function. The LM algorithm is used as a training function.
(7) Determination of the transfer functions. The transfer function of the hidden layer of the neural network is a nonlinear sigmoid function; the transfer function of the output layer is the linear purelin function.
(8) Determination of the cost function. The default root mean square error function is used.
(9) Learning rate. The value is generally 0.01-0.8. In order to ensure the stability of the neural network, a small learning rate is usually selected; here it is set to 0.01.

Comparative analysis of prediction results
Combined with the historical data collected from the workshop's surface dry furnace equipment, 150 sets of data from the past 3 years were selected as samples: the first 130 sets as the training set and the remaining 20 sets as the test set. The normalized data were used as samples for the traditional BP neural network model, the GA-BP neural network model and the GA-LM-BP neural network model, and Matlab was used for simulation training.
(1) Comparative analysis of model validity. From Figure 3a, Figure 3b and Figure 3c, it is easy to see that the discrete points in Figure 3c are concentrated closer to the theoretical curve y = x than in the other two figures, so its prediction effect is the best. After calculation, the determination coefficients R² of the three prediction models are 0.75758, 0.96768 and 0.97511 respectively, all greater than 0.75, indicating that all three models can effectively reflect the mapping from the four objective parameters to the health status. The GA-LM-BP neural network has the largest R² and the best prediction effect for faults of the dry furnace.
(2) Comparative analysis of whether the models fall into local optimal solutions. In Figure 4a, when the gradient reaches 9.52e-11, the result falls into a local optimum. In Figure 4b and Figure 4c, the gradients are 0.00146 and 0.0550 respectively, neither of which has reached the set gradient of 1.00e-10. Therefore, the GA solves, to some extent, the problem that the traditional BP neural network easily falls into local optimal solutions.
(3) Comparative analysis of prediction error. The average relative errors of the three models in Table 3 are 50%, 5.65% and 4.39%. With the GA-BP neural network, the error is reduced from 50% to 5.65%; with the GA-LM-BP neural network, the error decreases from 5.65% to 4.39%, further improving the prediction accuracy.
(4) Comparative analysis of convergence speed. It can be seen from Figure 5 that the traditional BP neural network takes 739 iterations. With the GA-BP algorithm, the number of iterations is shortened from 739 to 275 while still achieving the target error. With the LM algorithm added, the number of iterations is shortened from 275 to 3, further improving the convergence speed.
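The two evaluation measures quoted above, the determination coefficient R² and the average relative error, can be computed as in this illustrative sketch (the arrays below are made-up example values, not the paper's data):

```python
import numpy as np

def r_squared(y_true, y_pred):
    # Coefficient of determination: R^2 = 1 - SS_res / SS_tot
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def mean_relative_error(y_true, y_pred):
    # Average relative error |y_pred - y_true| / |y_true|, as a percentage
    return 100.0 * np.mean(np.abs(y_pred - y_true) / np.abs(y_true))

y_true = np.array([1.0, 2.0, 3.0, 4.0])  # hypothetical expected health values
y_pred = np.array([1.1, 1.9, 3.2, 3.8])  # hypothetical model outputs
r2 = r_squared(y_true, y_pred)
mre = mean_relative_error(y_true, y_pred)
```

An R² close to 1 means the predicted points lie close to the theoretical curve y = x in Figure 3, which is why the model with the largest R² is judged to have the best prediction effect.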

Conclusion
The experimental results show that the prediction model established by the GA-LM-BP neural network can better guide workers in the maintenance of the surface dry furnace, improves their work efficiency to a certain extent and reduces the daily production cost of the foundry, which is of great significance to the foundry.