Asynchronous differential evolution with self-adaptive parameter control for global numerical optimization

In this paper, we propose an extended self-adaptive differential evolution algorithm, called A-jDE. A-jDE combines the jDE algorithm with the asynchronous method. jDE is one of the most popular DE variants and shows robust optimization performance on a wide range of problems. However, jDE uses a slow mutation strategy, so its convergence speed is low compared to several state-of-the-art DE algorithms. The asynchronous method is a recently investigated approach in which a newly found better solution is included in the current population immediately, so that it can serve as a donor individual; this can improve the convergence speed significantly. We evaluated the optimization performance of A-jDE on 13 scalable benchmark problems in 30 and 100 dimensions. Our experiments demonstrate that incorporating the asynchronous method into jDE significantly improves optimization performance on not only unimodal but also multimodal benchmark problems.


Introduction
Differential Evolution (DE), proposed by Storn and Price [1], is a popular evolutionary algorithm, especially for solving continuous-domain optimization problems. DE stands out among the many evolutionary algorithms thanks to the following two features. First, the structure of DE is simple and easy to implement, so researchers and practitioners can readily apply it to their problems. Second, DE usually finds more accurate solutions than other population-based metaheuristics, which has been demonstrated on many optimization benchmark suites, e.g., the IEEE CEC benchmarks, as well as on real-world problems such as neural network learning [2,3] and generating artworks [4]. Since DE was proposed, many studies have aimed to improve its search ability, including adaptive parameter control [5][6][7][8][9][10], adaptive strategy control [11], and hybridizing DE with other methods [12], in order to solve more complicated optimization problems. Zhabitskaya and Zhabitsky proposed a modified DE algorithm called Asynchronous DE (ADE) [12], which is designed for parallel optimization. In contrast to the classical DE algorithm, which generates trial vectors for all target vectors of the current population at each generation, ADE generates a trial vector for a single target vector, chosen either randomly or as the worst individual with respect to the fitness value of the given optimization problem. Although ADE was designed for parallel optimization, its optimization performance is competitive with classical DE even in the sequential mode. Several interesting questions remain; for example, whether the asynchronous method is still useful when applied to state-of-the-art DE algorithms.
In this paper, we extend the jDE algorithm [5], one of the state-of-the-art DE algorithms, with the asynchronous method and test its optimization performance on 13 scalable benchmark problems [13] in 30 and 100 dimensions. Although many other methods for automatically tuning control parameters exist, we selected jDE because its self-adaptive parameter control method is well suited to parallel optimization. The proposed algorithm significantly outperformed the compared algorithms, which implies that the self-adaptive parameter control method still works well with the asynchronous DE algorithm.

Classical DE Algorithm
Since DE is an evolutionary algorithm, it has the usual operators: mutation, crossover, and selection. In DE, the mutation and crossover operators generate a new candidate solution from a current solution, and the selection operator chooses the better of the current and candidate solutions as a member of the next generation. Therefore, the mutation and crossover operators increase the diversity of solutions, while the selection operator decreases it. DE maintains $NP$ individuals, each of which is a $D$-dimensional vector. At the beginning of the evolutionary process, DE scatters the individuals uniformly over the search space of the given problem. The $i$th individual at generation $G$ is denoted by $x_i^G = (x_{i,1}^G, x_{i,2}^G, \dots, x_{i,D}^G)$, and the lower and upper boundaries of the search space are denoted by $x_{min} = (x_{min,1}, x_{min,2}, \dots, x_{min,D})$ and $x_{max} = (x_{max,1}, x_{max,2}, \dots, x_{max,D})$, respectively. Each individual is initialized as follows: $x_{i,j}^0 = x_{min,j} + rand_{i,j} \cdot (x_{max,j} - x_{min,j})$, where $rand_{i,j}$ represents a uniform random number within the range $[0, 1]$. After that, DE repeats the three evolutionary operators, mutation, crossover, and selection, until one of the termination criteria is met. In this section, we describe one of the most commonly used combinations of mutation and crossover operators, called DE/rand/1/bin. The mutant individual $v_i^G$ is generated from three donor individuals as follows: $v_i^G = x_{r_1}^G + F \cdot (x_{r_2}^G - x_{r_3}^G)$, where $F$ represents the scaling factor, $x_{r_1}^G$, $x_{r_2}^G$, and $x_{r_3}^G$ represent the three donor individuals, and $i \neq r_1 \neq r_2 \neq r_3$. After the mutation operator, a trial individual $u_i^G = (u_{i,1}^G, u_{i,2}^G, \dots, u_{i,D}^G)$ is generated from the target and mutant individuals as follows: $u_{i,j}^G = v_{i,j}^G$ if $rand_{i,j} \leq CR$ or $j = j_{rand}$, and $u_{i,j}^G = x_{i,j}^G$ otherwise, where $CR$ represents the crossover rate and $j_{rand}$ represents a uniform random index within the range $\{1, \dots, D\}$, which guarantees that the trial individual inherits at least one element from the mutant individual.
After the crossover operator, the selection operator chooses the member of the next generation between the target and trial individuals with respect to their fitness values as follows: $x_i^{G+1} = u_i^G$ if $f(u_i^G) \leq f(x_i^G)$, and $x_i^{G+1} = x_i^G$ otherwise, where $f$ denotes the objective function to be minimized.
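The DE/rand/1/bin generation described above can be summarized in a minimal sketch (Python/NumPy; the function and parameter names are our own, not from the paper). Note that the synchronous version writes survivors into a new population, so trial vectors cannot act as donors within the same generation:

```python
import numpy as np

def de_rand_1_bin(pop, fitness, f_obj, F=0.5, CR=0.9, rng=None):
    """One synchronous generation of classical DE/rand/1/bin (minimization).

    pop:     (NP, D) array of individuals
    fitness: (NP,) array of their objective values
    f_obj:   objective function mapping a D-vector to a scalar
    """
    rng = np.random.default_rng() if rng is None else rng
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        # pick three distinct donors r1, r2, r3, all different from i
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i],
                                size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover; index j_rand guarantees at least one mutant element
        j_rand = rng.integers(D)
        mask = rng.random(D) < CR
        mask[j_rand] = True
        trial = np.where(mask, mutant, pop[i])
        # greedy selection: trial replaces the target only if it is not worse
        f_trial = f_obj(trial)
        if f_trial <= fitness[i]:
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```

Because selection is greedy, no individual's fitness ever worsens, so the best-so-far value is monotonically non-increasing over generations.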

Asynchronous DE Algorithm
Classical DE generates the trial vectors of all target vectors of the current population at each generation, and the surviving trial vectors only replace their targets when the generation ends. In contrast, asynchronous DE (ADE) [12] generates a trial vector for a single target vector at a time, chosen either randomly or as the worst individual with respect to the fitness value, and if the trial vector is better than its target, it is included in the current population immediately. Consequently, an improved solution can serve as a donor individual for the very next trial vector, which is the source of the improved convergence speed. Although ADE was designed for parallel optimization, it remains competitive with classical DE even in the sequential mode.
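The asynchronous update can be sketched as a single-target step that replaces the target in place (a minimal Python/NumPy sketch with a randomly selected target; the names are our own, and the worst-individual selection variant mentioned in the introduction is omitted for brevity):

```python
import numpy as np

def ade_step(pop, fitness, f_obj, F=0.5, CR=0.9, rng=None):
    """One asynchronous DE step: a single target is chosen, and an improving
    trial enters the current population immediately, so it can act as a
    donor for the very next step. Arrays are modified in place."""
    rng = np.random.default_rng() if rng is None else rng
    NP, D = pop.shape
    i = rng.integers(NP)                       # randomly selected target vector
    r1, r2, r3 = rng.choice([k for k in range(NP) if k != i],
                            size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    j_rand = rng.integers(D)
    mask = rng.random(D) < CR
    mask[j_rand] = True
    trial = np.where(mask, mutant, pop[i])
    f_trial = f_obj(trial)
    if f_trial <= fitness[i]:                  # immediate, in-place replacement
        pop[i], fitness[i] = trial, f_trial
    return pop, fitness
```

One call of this function corresponds to one trial-vector evaluation, so $NP$ calls are roughly comparable in cost to one synchronous generation; the difference is that improvements propagate within those $NP$ calls.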

Asynchronous DE with Self-Adaptive Parameter Control
The optimization performance of DE is significantly affected by its control parameters. With poor control parameters, DE may get stuck in a local optimum or converge very slowly. Therefore, it is important to apply proper control parameters to DE to reach a satisfactory optimum. However, finding proper control parameters requires considerable computational cost because the proper values vary from problem to problem. To address this issue, many adaptive and self-adaptive parameter control methods, which tune the control parameters automatically, have been studied. jDE is one of the best-known self-adaptive parameter control methods; it adjusts two control parameters, the scaling factor and the crossover rate. In jDE, each individual carries its own control parameters $F_i$ and $CR_i$, and an individual with better control parameters is more likely to have a better fitness value; such individuals should propagate their control parameters to the next generation. The control parameters of each individual are updated as follows: $F_i^{G+1} = F_l + rand_1 \cdot F_u$ if $rand_2 < \tau_1$, and $F_i^{G+1} = F_i^G$ otherwise; $CR_i^{G+1} = rand_3$ if $rand_4 < \tau_2$, and $CR_i^{G+1} = CR_i^G$ otherwise, where $rand_j$ for $j \in \{1, 2, 3, 4\}$ denotes a uniform random number within the range $[0, 1]$. In this paper, we extend jDE with the asynchronous method. jDE uses a robust but slow mutation strategy, DE/rand/1. Although this strategy helps jDE avoid getting stuck in local optima, it makes the convergence speed significantly slow. Therefore, incorporating the asynchronous method into jDE can improve its convergence speed.
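The jDE parameter update above can be sketched per individual as follows (a minimal Python sketch; $F_l = 0.1$, $F_u = 0.9$, and $\tau_1 = \tau_2 = 0.1$ are the standard settings from the jDE literature, assumed here since the paper does not restate them):

```python
import numpy as np

# Standard jDE settings (assumed): F is regenerated in [F_L, F_L + F_U],
# CR in [0, 1], each with probability TAU1 / TAU2 per individual.
F_L, F_U = 0.1, 0.9
TAU1, TAU2 = 0.1, 0.1

def jde_update_params(F_i, CR_i, rng):
    """Self-adaptive update of one individual's control parameters.
    With small probability the parameter is regenerated uniformly at
    random; otherwise the individual inherits its previous value."""
    r1, r2, r3, r4 = rng.random(4)
    F_new = F_L + r1 * F_U if r2 < TAU1 else F_i
    CR_new = r3 if r4 < TAU2 else CR_i
    return F_new, CR_new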
To evaluate the optimization performance of the extended algorithm, we compared it with the jDE algorithm without the asynchronous method. We conducted our experiments on the scalable benchmark problems in [13], on which many single-objective optimization algorithms have been tested. We performed a total of 100 independent runs on the benchmark problems in 30 and 100 dimensions, using a population size of 100 for the 30-dimensional problems and 400 for the 100-dimensional problems. Table 1 presents the experimental results in 30 dimensions, with the better results marked in boldface. As the table shows, jDE with the asynchronous method outperformed the original jDE. In detail, jDE with the asynchronous method significantly outperformed the original on all of the unimodal benchmark problems ($f_1$–$f_4$). This result is natural because the asynchronous method applies a better solution immediately, which increases the convergence speed significantly; moreover, the unimodal benchmark problems have no local optima, so increasing the convergence speed carries no risk of getting stuck in a local optimum. Regarding the multimodal benchmark problems, jDE with the asynchronous method still outperformed the original jDE on several benchmark problems ($f_7$, $f_{12}$, and $f_{13}$); on the remaining problems, both algorithms obtained the same results. This implies that the asynchronous method also works on the multimodal benchmark problems without degrading the optimization performance of jDE. In addition, Table 2 presents the experimental results in 100 dimensions. Again, jDE with the asynchronous method outperformed the original jDE, significantly so on all of the unimodal benchmark problems ($f_1$–$f_4$).
Regarding the multimodal benchmark problems, jDE with the asynchronous method again outperformed the original jDE on several benchmark problems ($f_5$, $f_7$, $f_{12}$, and $f_{13}$); on the remaining problems, both algorithms obtained the same results. An interesting point is that jDE with the asynchronous method finds a better solution than the original jDE on the Generalized Rosenbrock's Function ($f_5$). This implies that the asynchronous method can work better even on multimodal benchmark problems, despite increasing the greediness of jDE.

Conclusion
In this paper, we presented an extended self-adaptive DE algorithm, A-jDE. jDE shows robust optimization performance on various problems, but it uses the slow mutation strategy DE/rand/1. Therefore, we applied the asynchronous method to jDE to increase the greediness of its search. The asynchronous method is a recently investigated approach in which, in contrast to the original DE, a trial individual generated for a target individual becomes a member of the current population immediately if it has a better fitness value than the target. In this way, a newly found better solution is included in the current population immediately and can serve as a donor individual, which can improve the convergence speed significantly. We evaluated the optimization performance of the extended DE algorithm on 13 scalable benchmark problems in 30 and 100 dimensions. Our experiments demonstrate that incorporating the asynchronous method into jDE significantly improves optimization performance on not only unimodal but also multimodal benchmark problems. Notably, in our experimental results, incorporating the asynchronous method never degraded the optimization performance of jDE; therefore, we recommend using A-jDE instead of the original jDE. In future work, we will investigate a theoretical analysis of the asynchronous method, which is critical to understanding its characteristics. In addition, we will extend A-jDE to more complex problems such as constrained optimization, multiobjective optimization, and large-scale optimization.