Prediction of Cable Junction Temperature in Power Transmission Systems Based on a BP Neural Network Optimized by Genetic Algorithm

In order to predict the conductor temperature at cable joints in a power transmission system, this study uses a genetic algorithm (GA) to optimize a BP neural network and establishes an effective prediction model based on an analysis of the relevant reflection factors. Two feed-forward neural networks were established: network 1 is trained on the reflection factor data, and network 2 on the resulting predictions; the weights and thresholds of both networks are then optimized by the genetic algorithm, so that target temperatures can still be predicted when no reflection factor data are available. The model combines the strong learning ability of the BP neural network with the excellent global search ability of the genetic algorithm. The innovation of this work is that network 1 is trained on reflection factor data to obtain the temperature at the corresponding time point, and network 2 is then trained on the results for three consecutive time points to obtain the temperature at the fourth time point; the entire process of solving for the fourth temperature never requires the reflection factor data of that time point.


Introduction
In a power transmission system the cable is essential equipment, and the cable joint is its weak link. At present, cable detection equipment has low sensitivity and high maintenance cost. More importantly, once a fault occurs, it can disrupt the normal operation of part of the power system or even cause a serious outage. A key approach in cable joint research is detecting the joint conductor temperature [1], because both the leakage current caused by cable aging and the increased losses triggered by overload are reflected in a temperature rise, and temperature rise is the main cause of cable joint failure. The conductor temperature of a cable joint can be predicted from the temperatures of touchable surface locations, such as the ambient temperature and the joint insulation layer temperature. In order to enable real-time cooling or maintenance and ensure the normal, safe operation of the circuit, this paper selects several surface-touchable temperature reflection factors to predict the conductor temperature precisely [2].
An artificial neural network mimics, to some extent, the information processing, storage and retrieval functions of the human brain's nervous system; it is a simplification, abstraction and simulation of the brain's neural network. The Back Propagation (BP) learning algorithm [3], proposed by Rumelhart et al. in 1985, is the most commonly used. It uses the output error to estimate the error of the layer immediately preceding the output layer, then uses that estimate for the layer before it, and so on until error estimates for all layers are obtained.

Genetic algorithm
The Genetic Algorithm (GA) is a model that simulates the genetic selection and natural elimination of organisms during evolution [4]. GA uses a population-based search technique, in which the population represents a group of candidate solutions. By applying the genetic operations of selection, crossover and mutation to the parent population, a new population is produced, and the population gradually evolves toward the optimal solution.

BP neural network
The BP neural network is also called the back-propagation neural network. The network model generally consists of an input layer, hidden layers and an output layer, with nodes connecting adjacent layers [5]. Every neuron in one layer is connected to every neuron in the next, but there are no connections between neurons within the same layer. The main idea of the BP algorithm is as follows: for input learning samples x1, x2, ..., xm with known corresponding output samples t1, t2, ..., tn, the error between the actual network output and the target vector (t1, t2, ..., tn) is used to modify the weights, so that each output yl (l = 1, 2, ..., n) approaches the expected tl; that is, the adjusted weights minimize the total network error. A typical three-layer BP neural network model is shown in Figure 1. In summary, the BP neural network has good learning ability and precise local search ability, while GA has strong global search ability. Therefore, GA is used to narrow the search scope, and the BP neural network is then used to solve the problem accurately.
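The weight-update idea just described can be sketched numerically. The following is a minimal illustration, not the paper's implementation: one gradient-descent BP step for a 9-19-1 network shaped like network 1, with made-up random data.

```python
# Minimal sketch of one BP training step for a 3-layer network
# (illustrative only; the data and learning rate here are made up).
import numpy as np

rng = np.random.default_rng(0)

def tansig(x):
    # Hyperbolic-tangent sigmoid used for the hidden layer
    return np.tanh(x)

def bp_step(x, t, W1, b1, W2, b2, lr=0.1):
    """One forward pass and one gradient-descent weight update."""
    h = tansig(W1 @ x + b1)        # hidden-layer activation
    y = W2 @ h + b2                # linear (purelin) output layer
    e = y - t                      # output error
    # Backpropagate: output-layer gradients, then hidden-layer gradients
    dW2 = np.outer(e, h)
    db2 = e
    dh = (W2.T @ e) * (1 - h**2)   # tanh derivative is 1 - tanh^2
    dW1 = np.outer(dh, x)
    db1 = dh
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return float(0.5 * np.sum(e**2))

# Toy example: 9 inputs -> 19 hidden -> 1 output, matching network 1's shape
W1 = rng.normal(scale=0.1, size=(19, 9)); b1 = np.zeros(19)
W2 = rng.normal(scale=0.1, size=(1, 19)); b2 = np.zeros(1)
x = rng.normal(size=9); t = np.array([0.5])

errors = [bp_step(x, t, W1, b1, W2, b2) for _ in range(50)]
```

Repeated steps shrink the squared error, which is the sense in which "the adjusted weights minimize the total network error".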

Data selection
In this study, nine temperature reflection factors were selected: the ambient temperature, the right sheath temperature, the left sheath temperature, the insulation layer temperature at the joint, and the cable surface temperature at five positions (the leftmost end, between the leftmost end and the middle, the middle, between the middle and the rightmost end, and the rightmost end). The data used in this paper were provided by the temperature monitoring terminal service center of a cable project. Temperature values at the 17 whole-hour points from 8:00 to 24:00 of a single day were selected for study.
As mentioned above, two BP neural networks were constructed and trained in turn. Network 1 is trained on the reflection factor data: the reflection factor data and target values from 8:00 to 21:00 are used as training samples. Network 1 takes the nine reflection factors as the input vector and the actual temperature at the corresponding moment as the target output, so its input layer has 9 neurons and its output layer has 1. Network 2 uses the network-1 prediction results for three consecutive moments as the input vector and the prediction result for the fourth moment as the output: the prediction results for 8:00, 9:00 and 10:00 are input to predict the result for 11:00, which is then compared with the actual value at 11:00, and so on, until the results for 18:00, 19:00 and 20:00 are used to predict the result for 21:00. These 11 groups of prediction results were used to train network 2, so its input layer has 3 neurons and its output layer has 1. The remaining data for 22:00, 23:00 and 24:00 are used as test data to verify prediction accuracy, as follows: the reflection factor data for 19:00, 20:00 and 21:00 are fed to network 1 to obtain the predictions for those moments, then network 2 predicts the 22:00 value from those three predictions; the reflection factor data of 22:00 never appear in the process, and likewise for 23:00 and 24:00 [6,7].
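The two-stage data layout described above can be sketched as follows; the hour labels stand in for the actual nine-factor measurement vectors, which are not reproduced here.

```python
# Sketch of the sliding-window layout described in the text
# (hour labels are placeholders for the real nine-factor vectors).
hours = list(range(8, 25))                  # 8:00 .. 24:00, 17 time points

# Network 1: one (factors -> temperature) pair per hour from 8:00 to 21:00.
train_hours = [h for h in hours if h <= 21]  # 14 training time points

# Network 2: three consecutive network-1 outputs predict the fourth;
# 11 windows from (8,9,10)->11 up to (18,19,20)->21.
windows = [(train_hours[i:i + 3], train_hours[i + 3])
           for i in range(len(train_hours) - 3)]

# Test hours 22, 23, 24 are predicted the same way, e.g. (19,20,21)->22,
# so no reflection factor data for the target hour is ever needed.
```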

Normalization of data
The magnitudes of the individual factors in the original data samples differ greatly. To facilitate the network computation, the original factor samples are normalized. Network 1 uses the premnmx function to map the processed data uniformly into the range [-1, 1]. The conversion formulas are

P = 2(p - minp)/(maxp - minp) - 1,  T = 2(t - mint)/(maxt - mint) - 1,

where p and t are the original input samples and target outputs, minp and maxp are the minimum and maximum of p, and mint and maxt are the minimum and maximum of the target vector t; P and T are the normalized input and output samples, respectively. The simulated training outputs are mapped back to the original scale by postmnmx:

t = (T + 1)(maxt - mint)/2 + mint.

Network 2 normalizes the factor-data prediction results in the same way, and its training results are restored by the same inverse formula.
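A minimal sketch of this normalization and its inverse, mirroring the behavior of MATLAB's premnmx/postmnmx on a made-up sample:

```python
# premnmx-style mapping to [-1, 1] and its postmnmx-style inverse
# (illustrative values; the real inputs are the nine factor series).
def premnmx(p):
    lo, hi = min(p), max(p)
    scaled = [2 * (x - lo) / (hi - lo) - 1 for x in p]
    return scaled, lo, hi

def postmnmx(scaled, lo, hi):
    # Restore normalized values to the original quantity level
    return [(x + 1) * (hi - lo) / 2 + lo for x in scaled]

data = [20.0, 25.0, 30.0]        # made-up temperature readings
scaled, lo, hi = premnmx(data)   # -> [-1.0, 0.0, 1.0]
restored = postmnmx(scaled, lo, hi)
```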

Initialization of parameters
Both networks use the BP training function with gradient-descent momentum and an adaptive learning rate. The learning rate of network 1 is 0.035, its maximum number of iterations is 20000 and its target error is 10^-6; the learning rate of network 2 is 0.05, its maximum number of iterations is 30000 and its target error is 10^-5. The performance function is MSE, the training progress is displayed every 50 epochs, and all other parameters keep their default values.

Transfer function
The transfer function of the hidden layer of network 1 is the hyperbolic tangent sigmoid function tansig, and its output layer uses the linear function purelin. The hidden layer of network 2 also uses tansig, while its output layer uses the logarithmic sigmoid function logsig.
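For reference, the standard definitions of the three transfer functions named above can be written out as follows (these match the usual MATLAB definitions):

```python
# Standard definitions of the three transfer functions used by the networks.
import math

def tansig(n):
    # Hyperbolic tangent sigmoid, range (-1, 1); equals tanh(n)
    return 2 / (1 + math.exp(-2 * n)) - 1

def purelin(n):
    # Linear transfer function: output equals input
    return n

def logsig(n):
    # Logarithmic sigmoid, range (0, 1)
    return 1 / (1 + math.exp(-n))
```

Note that logsig's (0, 1) range on network 2's output layer is consistent with targets that have been normalized, as described in the previous section.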

The determination of the number of nodes in the hidden layer
The number of hidden-layer neurons d is related to the number of input-layer neurons N approximately by d = 2N + 1. Network 1 has 9 input neurons and 1 output neuron, so its hidden layer has 19 nodes; network 2 has 3 input neurons and 1 output neuron, so its hidden layer has 7 nodes.

The implementation of BP neural network optimized by GA
The basic flow of the genetic algorithm optimization is as follows [8,9]:
(1) Population initialization. Real coding is used to encode each individual, which contains all the weights and thresholds of the BP neural network. Each individual consists of four parts: the weights between the input layer and the hidden layer, the thresholds of the hidden layer, the weights between the hidden layer and the output layer, and the thresholds of the output layer.
Let R be the number of input-layer nodes, S1 the number of hidden-layer nodes, and S2 the number of output-layer nodes.
The number of weights is S1*R + S2*S1 and the number of thresholds is S1 + S2, so the individual encoding length is S = S1*R + S2*S1 + S1 + S2. The GA population size for optimizing network 1 is set to 250 with 1000 generations of evolution; for network 2 the population size is 50, also with 1000 generations.
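The encoding-length formula above, evaluated for both networks using the layer sizes stated earlier:

```python
# Chromosome length S = S1*R + S2*S1 + S1 + S2 for a real-coded individual
# holding all weights and thresholds of one BP network.
def encoding_length(R, S1, S2):
    weights = S1 * R + S2 * S1   # input->hidden and hidden->output weights
    thresholds = S1 + S2         # hidden-layer and output-layer thresholds
    return weights + thresholds

len_net1 = encoding_length(R=9, S1=19, S2=1)   # network 1: 9-19-1 -> 210
len_net2 = encoding_length(R=3, S1=7, S2=1)    # network 2: 3-7-1  -> 36
```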
(2) Fitness function. In this study, the fitness is defined as the reciprocal of the sum of squared errors (SE) of the neural network:

val=1/SE
(3) Genetic manipulation: selection. The roulette-wheel selection method is simple and practical. If the fitness of individual i is fi and the population size is NP, the probability of individual i being selected is pi = fi / (f1 + f2 + ... + fNP).
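Roulette-wheel selection can be sketched as follows; the fitness values here are made up for illustration:

```python
# Roulette-wheel selection: individual i is picked with probability
# f_i / sum(f). Fitness values below are illustrative only.
import random

def roulette_select(fitness, rng):
    r = rng.random() * sum(fitness)   # spin the wheel
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1           # guard against floating-point round-off

rng = random.Random(0)
fitness = [1.0, 3.0, 6.0]             # expected pick rates 10%, 30%, 60%
picks = [roulette_select(fitness, rng) for _ in range(10000)]
share_of_best = picks.count(2) / len(picks)   # close to 0.6
```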

(4) Genetic manipulation: crossover.
The crossover operation is carried out by the monarch scheme, with crossover probability Pc = 0.8.
(5) Genetic manipulation: mutation. With mutation probability Pm = 0.09, some individuals are selected at random; for each, a gene position within the individual is chosen at random and its value is altered, generating a new individual.
The newly generated population returns to step (2); another round of operations is carried out, the individual fitness values are evaluated again, and the cycle repeats until the number of iterations reaches the set value or the fitness reaches the set target, at which point the optimal weights and thresholds are obtained.
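Steps (1) to (5) can be sketched as a compact loop. The BP network's squared error is replaced here by a stand-in quadratic error so the example stays self-contained; the TARGET values, the arithmetic crossover, and the GA settings below are illustrative assumptions, not the paper's.

```python
# Minimal GA loop: real-coded individuals, fitness 1/SE, roulette selection,
# arithmetic crossover and random mutation. SE here is a stand-in quadratic
# error; in the paper it is the BP network's squared error over the training set.
import random

rng = random.Random(1)
TARGET = [0.3, -0.7, 0.5]              # stand-in "optimal weights" (made up)

def fitness(ind):                      # val = 1/SE, as in step (2)
    se = sum((x - t) ** 2 for x, t in zip(ind, TARGET))
    return 1.0 / (se + 1e-12)

def select(pop, fits):                 # roulette-wheel selection, step (3)
    r = rng.random() * sum(fits)
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if r <= acc:
            return ind
    return pop[-1]

def crossover(a, b, pc=0.8):           # arithmetic crossover for real coding
    if rng.random() < pc:
        alpha = rng.random()
        return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]
    return list(a)

def mutate(ind, pm=0.09):              # perturb one random gene, step (5)
    ind = list(ind)
    if rng.random() < pm:
        i = rng.randrange(len(ind))
        ind[i] += rng.gauss(0, 0.1)
    return ind

pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
best, best_fit = None, 0.0
for gen in range(200):                 # stop after a set number of iterations
    fits = [fitness(ind) for ind in pop]
    for ind, f in zip(pop, fits):      # track the best individual seen so far
        if f > best_fit:
            best, best_fit = list(ind), f
    pop = [mutate(crossover(select(pop, fits), select(pop, fits)))
           for _ in range(len(pop))]
```

In the paper, the individual returned by this loop holds the initial weights and thresholds handed to the BP network for its final, local training.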

Model training and simulation results and analysis
The BP neural network optimized by the genetic algorithm was implemented in MATLAB. The 17 collected data sets are divided into two parts: training 1 used the reflection factor data and results from 8:00 to 21:00, training 2 used the prediction results of that reflection factor data, and the remaining 3 data sets serve as test validation samples. That is, after the two trainings are complete, the reflection factor data of 19:00, 20:00 and 21:00 are entered to obtain the 22:00 temperature forecast, and so on [10]. The predicted and actual values are compared as follows: Table 1 analyzes the prediction results of the BP neural algorithm, and Table 2 those of the GA-BP neural algorithm. Comparing the relative errors in Tables 1 and 2 shows that the prediction accuracy of GA-BP is much higher than that of the plain BP neural network.
Figures 2 to 5 compare the whole operation process under the GA-BP algorithm and the BP algorithm. The fitness function values are computed on the normalized experimental data. The fitness values of the first and second trainings reached their maxima after about 800 and 900 generations, respectively.

Conclusion
After the two-stage training, the temperature at a given moment can be predicted without any reflection factor data for that moment. Analysis shows that the BP neural network easily falls into local minima, resulting in insufficient accuracy or even large deviations. Using GA to optimize the weights and thresholds of the BP neural network greatly improves the prediction accuracy. In this experiment, the traditional BP neural network and the GA-optimized BP neural network were each used to predict the temperatures at 22:00, 23:00 and 24:00, and the relative errors of the predictions were output. The results show that the BP neural network optimized by GA achieves good results.