An Improved Brain Storm Optimization with Dynamic Clustering Strategy

Abstract: Intelligence algorithms play an increasingly important role in the field of intelligent control. Brain storm optimization (BSO) is a new kind of swarm intelligence algorithm inspired by the collective behavior of human beings in the problem-solving process. Many variants of BSO have been proposed to improve the performance of the original algorithm. In this paper, an improved BSO algorithm with a dynamic clustering strategy (BSO-DCS) is proposed as a variant of BSO for global optimization problems. The basic framework of BSO is first introduced. Then, to reduce the time complexity of the original BSO, a new grouping method named dynamic clustering strategy (DCS) is proposed to improve the clustering method of the original BSO. To verify the effectiveness of the proposed BSO-DCS, it is tested on 12 benchmark functions of CEC 2005 with 30 dimensions. Experimental results show that DCS is an effective strategy for reducing time complexity, and that the improved BSO-DCS performs significantly better than the original BSO algorithm.

Brain storm optimization (BSO) is a new kind of swarm intelligence algorithm inspired by the collective behavior of human beings in the problem-solving process [10,11]. Like other swarm intelligence algorithms, BSO has been applied successfully in areas such as optimal satellite formation reconfiguration [12], the design of DC brushless motors [13], economic dispatch considering wind power [14], and multi-objective optimization problems (MOPs) [15,16].
However, as in all swarm intelligence research, there is a continual effort to further improve the performance of existing algorithms. In this paper, an improved BSO algorithm with a dynamic clustering strategy (BSO-DCS) is proposed as a variant of BSO for global optimization problems. After first introducing the basic framework of BSO, we propose a new grouping method named dynamic clustering strategy (DCS) that improves the clustering method of the original BSO and reduces its time complexity. To verify the effectiveness of the proposed DCS, it is tested on 12 benchmark functions of CEC 2005 with 30 dimensions.
The rest of this paper is organized as follows. Section 2 briefly introduces the BSO algorithm. Section 3 describes the BSO with a dynamic clustering strategy (BSO-DCS). Section 4 presents the benchmark functions, the experimental settings for each algorithm, and the experimental results. Section 5 discusses the differences between BSO-DCS and the original BSO algorithm and gives conclusions.

Brain storm optimization
The BSO algorithm is motivated by the philosophy of brainstorming. Brainstorming is a widely used tool for increasing creativity in organizations and has achieved wide acceptance as a means of facilitating creative thinking [17]. Like other swarm intelligence algorithms, BSO is a population-based stochastic optimization technique. A potential solution in the fitness landscape is called an idea in BSO. BSO follows the rules of idea exchange within a team and uses clustering, replacing, and creating operators to approach the global optimum generation by generation. As presented in [10,11], the process of BSO can be described as follows.
First, N ideas are randomly initialized within the search space. Each idea is then evaluated and its fitness value is calculated. Next, m cluster centers are randomly initialized in the same way, where m is smaller than N.
As in other evolutionary algorithms and swarm intelligence algorithms, the main operations are divided into a converging operation and a diverging operation.

Converging operation
The converging operation is primarily the clustering of ideas. Clustering is the process of grouping similar objects together; during each generation, all ideas are clustered into m clusters according to individual features such as distance. The best idea in each cluster is chosen as the cluster center, so the clustering operation can refine a search area. In BSO, the basic clustering method is k-means.
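To make the converging operation concrete, the following is a minimal Python sketch (not from the paper) that clusters the idea population with k-means and selects the best idea of each cluster as its center; the array names `ideas` and `fitness` and the use of scikit-learn's `KMeans` are our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def converge(ideas: np.ndarray, fitness: np.ndarray, m: int):
    """Cluster the N ideas into m groups and pick the best (lowest-fitness)
    idea of each group as its cluster center (minimization assumed)."""
    labels = KMeans(n_clusters=m, n_init=10).fit(ideas).labels_
    centers = np.empty(m, dtype=int)
    for c in range(m):
        members = np.where(labels == c)[0]
        centers[c] = members[np.argmin(fitness[members])]
    return labels, centers
```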

Diverging operation
The diverging operation mainly includes disrupting cluster centers and creating new individuals.

Cluster center disrupting
The cluster center disrupting operation randomly chooses a cluster center and replaces it with a newly generated idea with probability p_replace; this is also called the replacing operation. The value of p_replace controls the probability of replacing a cluster center with a randomly generated solution. This helps avoid premature convergence and helps ideas jump out of local optima.
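A minimal sketch of the replacing operation, under the same array conventions as above; `f` is the objective function and `lo`/`hi` are the search bounds (our names, for illustration only).

```python
import numpy as np

def disrupt(ideas, fitness, centers, f, p_replace, lo, hi, rng):
    """With probability p_replace, replace one randomly chosen cluster
    center with a freshly generated random idea and re-evaluate it."""
    if rng.random() < p_replace:
        j = centers[rng.integers(len(centers))]   # pick one center at random
        ideas[j] = rng.uniform(lo, hi, ideas.shape[1])
        fitness[j] = f(ideas[j])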

Creating individuals operation
To maintain the diversity of the population, a new idea can be generated based on one idea or on two ideas, drawn from one cluster or from two clusters, respectively. In the creating operation, BSO first chooses one cluster or two clusters according to a probability p_one. Then, on the basis of the chosen cluster(s), either a cluster center or a random idea is selected with probability p_one_center or p_two_center, respectively. The selecting operation is defined as

$$x_{selected}^{d} = \begin{cases} x_{i1}^{d}, & \text{one cluster} \\ r \cdot x_{i1}^{d} + (1 - r) \cdot x_{i2}^{d}, & \text{two clusters} \end{cases} \qquad (1)$$

where $r$ = rand is a random value between 0 and 1, and $x_{i1}$ and $x_{i2}$ denote the ideas selected from the first and second chosen cluster. After choosing one idea or two, the selected idea is updated according to

$$x_{new}^{d} = x_{selected}^{d} + \xi \cdot \mathrm{normrnd}(0, 1) \qquad (2)$$

where normrnd(0, 1) is a Gaussian random value with mean 0 and variance 1, and $\xi$ is an adjusting factor that slows the convergence speed down as the evolution proceeds, expressed as

$$\xi = \mathrm{logsig}\!\left(\frac{0.5 \cdot max\_iteration - current\_iteration}{k}\right) \cdot \mathrm{rand} \qquad (3)$$

where rand is a random value between 0 and 1, and max_iteration and current_iteration denote the maximum and current number of iterations, respectively. logsig is the logarithmic sigmoid transfer function; this form benefits the global search ability at the beginning of the evolution and enhances the local search ability as the process approaches the end. k is a predefined parameter that changes the slope of the logsig function. The newly created idea is evaluated, and if its fitness value is better than that of the current idea, the old idea is replaced by the new one.
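The creating operation in Formulas (1)-(3) can be sketched in Python as follows; the defaults (p_one = 0.8, p_one_center = 0.4, p_two_center = 0.5, k = 20) are typical values from the BSO literature and are used here purely for illustration, as are the array names.

```python
import numpy as np

def logsig(x):
    """Logarithmic sigmoid transfer function used in Formula (3)."""
    return 1.0 / (1.0 + np.exp(-x))

def create_idea(ideas, labels, centers, t, T, rng, k=20.0,
                p_one=0.8, p_one_center=0.4, p_two_center=0.5):
    m, D = len(centers), ideas.shape[1]              # assumes m >= 2 clusters
    if rng.random() < p_one:                         # one cluster
        c = rng.integers(m)
        members = np.where(labels == c)[0]
        base = (ideas[centers[c]] if rng.random() < p_one_center
                else ideas[rng.choice(members)])
    else:                                            # two clusters, Formula (1)
        c1, c2 = rng.choice(m, size=2, replace=False)
        if rng.random() < p_two_center:
            x1, x2 = ideas[centers[c1]], ideas[centers[c2]]
        else:
            x1 = ideas[rng.choice(np.where(labels == c1)[0])]
            x2 = ideas[rng.choice(np.where(labels == c2)[0])]
        r = rng.random()
        base = r * x1 + (1.0 - r) * x2
    xi = logsig((0.5 * T - t) / k) * rng.random()    # Formula (3)
    return base + xi * rng.standard_normal(D)        # Formula (2)
```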

Dynamic clustering strategy
In the original BSO, the k-means clustering method is used in the converging operation to group similar ideas. As is well known, k-means clustering is time consuming and imposes a heavy computational burden, and the original BSO executes it in every generation. However, it is not necessary to re-group the ideas in every generation. In our proposed algorithm, a dynamic clustering strategy is used to improve the k-means clustering method. The main idea of the dynamic clustering strategy is to execute k-means clustering only periodically, after a certain number of generations (the re-clustering period), so that the exchange of information still covers all ideas in the clusters and proper exploration ability is preserved.
Hence, the key point of our dynamic clustering strategy is the size of the re-clustering period. If re-clustering is performed more frequently, the algorithm attains a higher degree of exploitation and a higher convergence rate, but its time complexity also becomes correspondingly higher. Conversely, if re-clustering is performed less frequently, the algorithm achieves more exploration and diversity, and its run time is reduced in proportion. An appropriate re-clustering period therefore helps balance the exploitation and exploration of the algorithm and yields better performance than the original BSO.
In this paper, we use a probabilistic parameter p_dynamic to denote the re-clustering period; p_dynamic is a value between 0 and 1, and in each generation the k-means clustering is executed with probability p_dynamic. According to the above analysis, the pseudo-code of the dynamic clustering strategy is summarized in Algorithm 1. The setting of the parameter p_dynamic is discussed in detail in Section 4.2.
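A minimal sketch of Algorithm 1 under this reading: k-means runs only with probability p_dynamic per generation, otherwise the previous grouping is kept. The cheap per-generation refresh of the best-of-group centers is our assumption (idea fitness changes between re-clusterings), and the sketch reuses the `converge` helper from Section 2.1.

```python
import numpy as np

def dynamic_cluster(ideas, fitness, m, labels, p_dynamic, rng):
    """Re-run k-means only with probability p_dynamic; otherwise keep the
    previous grouping and just refresh the best-of-group centers."""
    if labels is None or rng.random() < p_dynamic:
        return converge(ideas, fitness, m)        # full k-means pass
    centers = np.empty(m, dtype=int)
    for c in range(m):                            # cheap center refresh only
        members = np.where(labels == c)[0]
        centers[c] = members[np.argmin(fitness[members])]
    return labels, centers
```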

Dynamic step size parameter control
For an optimization algorithm, a good balance between exploration and exploitation is essential. Exploration refers to the ability of the algorithm to search new regions of the solution space, while exploitation refers to the ability to refine the search around promising solutions already found. To adjust the convergence speed during idea generation, the original BSO algorithm defines the adjusting factor ξ described in Formula (3). Through numerical experiments, we find that the adjusting factor initially stays around 1, and once half the number of generations has been reached it rapidly drops to near 0. This method of controlling the step size can balance exploration and exploitation at different search generations, but it takes effect only over a very short interval. Hence, we introduce a simple dynamic step size strategy, described as

$$\xi = \frac{max\_iteration - current\_iteration}{max\_iteration} \cdot \mathrm{rand} \qquad (4)$$

where rand is a random value between 0 and 1, and max_iteration and current_iteration again denote the maximum and current number of iterations, respectively.
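To contrast the two step-size controls, a small sketch follows; note that the linear form of Formula (4) is our reconstruction from the surrounding description (it uses only rand, max_iteration, and current_iteration, and removes the parameter k), not a verbatim transcription of the authors' function.

```python
import numpy as np

def xi_original(t, T, rng, k=20.0):
    """Original adjusting factor, Formula (3): ~rand before T/2, ~0 after."""
    return rng.random() / (1.0 + np.exp(-(0.5 * T - t) / k))

def xi_dynamic(t, T, rng):
    """Dynamic step size, Formula (4) as reconstructed here: decays
    smoothly over the whole run and needs no slope parameter k."""
    return rng.random() * (T - t) / T
```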

Pseudo-code of the BSO-DCS
As described above, the pseudo-code of the improved BSO-DCS is summarized in Algorithm 2.
Algorithm 2: BSO-DCS()
The main process of BSO-DCS in Algorithm 2 is similar to that of the original BSO. In the converging operation, we improve the k-means clustering method with the dynamic clustering strategy, and in the creating-individuals operation, we adopt the new dynamic step size strategy described in Section 3.2. On the whole, we do not change the main framework of the original BSO, which allows a fair comparison with BSO.
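To complement the pseudo-code, the following is a compact, self-contained Python sketch of the full BSO-DCS loop assembled from the components above; the function names, default parameter values, the greedy per-idea replacement, and the generation-based budget are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def bso_dcs(f, D, lo, hi, N=30, m=5, T=1000, p_dynamic=0.5, p_replace=0.2,
            p_one=0.8, p_one_center=0.4, p_two_center=0.5, seed=None):
    rng = np.random.default_rng(seed)
    ideas = rng.uniform(lo, hi, (N, D))
    fit = np.array([f(x) for x in ideas])
    labels = None
    for t in range(T):
        # Converging with DCS: full k-means only with probability p_dynamic.
        if labels is None or rng.random() < p_dynamic:
            labels = KMeans(n_clusters=m, n_init=4).fit(ideas).labels_
        groups = [np.where(labels == c)[0] for c in range(m)]
        centers = [g[np.argmin(fit[g])] for g in groups]
        # Replacing: disrupt one cluster center with probability p_replace.
        if rng.random() < p_replace:
            j = centers[rng.integers(m)]
            ideas[j] = rng.uniform(lo, hi, D)
            fit[j] = f(ideas[j])
        # Diverging: create one candidate per idea, keep it if it improves.
        for i in range(N):
            if rng.random() < p_one:                 # one cluster
                c = rng.integers(m)
                base = (ideas[centers[c]] if rng.random() < p_one_center
                        else ideas[rng.choice(groups[c])])
            else:                                    # two clusters
                c1, c2 = rng.choice(m, size=2, replace=False)
                if rng.random() < p_two_center:
                    x1, x2 = ideas[centers[c1]], ideas[centers[c2]]
                else:
                    x1 = ideas[rng.choice(groups[c1])]
                    x2 = ideas[rng.choice(groups[c2])]
                r = rng.random()
                base = r * x1 + (1.0 - r) * x2
            xi = rng.random() * (T - t) / T          # dynamic step, Formula (4)
            cand = np.clip(base + xi * rng.standard_normal(D), lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                ideas[i], fit[i] = cand, fc
    b = np.argmin(fit)
    return ideas[b], fit[b]
```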

Benchmark functions
In this paper, we choose 12 well-known shifted and rotated benchmark functions from CEC 2005, listed in Table 1 [18]. All functions are tested on 30 dimensions. The search range and theoretical optimum of each function are also given in Table 1. Among the 12 benchmark functions, F1 to F5 are shifted unimodal functions, and F6 to F12 are shifted or rotated multimodal functions.
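As a concrete example of this benchmark style, here is a hedged sketch of a shifted sphere function in the spirit of CEC 2005 F1; the real suite prescribes fixed shift data and bias values, whereas the shift vector `o` below is randomly generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2005)
D = 30
o = rng.uniform(-80.0, 80.0, D)   # illustrative shift vector, not the CEC data

def f1_shifted_sphere(x, bias=-450.0):
    """Shifted sphere in the style of CEC 2005 F1; the real suite uses a
    fixed shift vector with f(o) = -450 on the range [-100, 100]^D."""
    z = np.asarray(x) - o
    return float(np.dot(z, z)) + bias
```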

Parameter settings of comparative algorithms
For a fair comparison, all experiments are conducted on the same machine with an Intel 3.4 GHz CPU and 4 GB of memory. The operating system is Windows 7, and the implementation platform is MATLAB 8.0 (R2012b). Compared with the original BSO, our BSO-DCS algorithm requires one new parameter, the re-clustering period p_dynamic. Fig. 1 shows the convergence performance on function F1 under different values of p_dynamic: 0.1, 0.3, 0.5, 0.7, and 0.9.
From Fig. 1, we observe that for function F1 a higher re-clustering period leads to higher accuracy, but it also incurs higher time complexity. We set p_dynamic = 0.5 as a trade-off between solution accuracy and time complexity. Because the BSO algorithm is analogous to the PSO algorithm, we also compare the performance of BSO-DCS with PSO. To eliminate the influence of statistical errors, each benchmark function is independently run 25 times, as prescribed by the CEC 2005 evaluation criteria [18]. For all algorithms, the population size is set to 30, and the same stopping criterion is used, i.e., reaching a given number of iterations or function evaluations (FEs).
In our experiments, all algorithms are run on the benchmark functions with the same budget of 10000*D FEs for a fair comparison, where D is the problem dimension. The parameter settings of all algorithms are given in Table 2.
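A small evaluation harness matching the protocol just described (25 independent runs, a 10000*D FE budget, mean and standard deviation of the best fitness); it assumes the `bso_dcs` and `f1_shifted_sphere` sketches above and derives the generation count from the FE budget assuming N evaluations per generation.

```python
import numpy as np

def evaluate(algorithm, f, D=30, lo=-100.0, hi=100.0, runs=25, N=30):
    """Run `algorithm` independently `runs` times with a 10000*D FE budget
    and report mean and standard deviation of the best fitness found."""
    T = (10000 * D) // N          # generations implied by the FE budget
    best = [algorithm(f, D, lo, hi, N=N, T=T, seed=s)[1] for s in range(runs)]
    return float(np.mean(best)), float(np.std(best))

# Example: mean_f1, std_f1 = evaluate(bso_dcs, f1_shifted_sphere)
```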

Comparisons on solution accuracy
Table 3 reports the solution accuracy in terms of the mean optimum and the standard deviation of the solutions obtained in the 25 independent runs of each algorithm over 300,000 FEs on the 12 benchmark functions. In all experiments, the dimension of every problem is 30. In each row of the table, the mean value is listed first and the standard deviation second, separated by the symbol "±". The best results among the algorithms are shown in boldface. The comparison between BSO-DCS and the other algorithms is summarized as "w/t/l" in the last row of the table, meaning that BSO-DCS wins on w functions, ties on t functions, and loses on l functions. From Table 3 it can be observed that BSO-DCS achieves better mean and standard deviation values on all 12 functions than the original BSO and PSO. We conclude that BSO-DCS attains better solution accuracy on complex shifted and rotated benchmark functions than the original BSO and PSO.

Comparisons on convergence speed
To compare the convergence speed of the different algorithms, the mean CPU time of each algorithm for one run is shown in Table 4. Table 4 shows that the mean CPU times of BSO-DCS are shorter than those of BSO and PSO on functions F1 to F12. Fig. 2 presents the convergence graphs in terms of the mean fitness values achieved by each of the 3 algorithms over 25 runs. From Fig. 2 we observe that BSO-DCS converges faster than or similarly to the other two algorithms.

Discussions and conclusions
There are mainly two differences between the improved BSO-DCS and the original BSO.
First, to reduce the time complexity of the algorithm, the proposed BSO-DCS adopts the dynamic clustering strategy to adjust the k-means clustering method. The experimental results show that the mean CPU times of BSO-DCS are significantly lower than those of the original BSO algorithm.
Second, we modified the creating-individuals operator with a dynamic step size parameter control strategy. The step size dynamically adjusts the convergence speed as the evolution proceeds, which may overcome the shortcoming of the original adjusting factor that takes effect only over a very short interval. In addition, the dynamic step size strategy reduces the burden of parameter setting, since the parameter k is no longer required.
In this paper, an improved BSO with a dynamic clustering strategy, named BSO-DCS, is proposed for solving complex global optimization problems. Experiments on the 12 chosen test problems show that the dynamic clustering strategy greatly reduces the run time of the algorithm, and, with the aid of dynamic step size parameter control, the solution accuracy of BSO-DCS is also better than that of the original BSO. From this analysis, we conclude that BSO-DCS significantly improves the performance of the original BSO.
We expect that BSO-DCS can be further applied to constrained, uncertain, dynamic, and large-scale optimization domains. Moreover, the algorithm may be used to solve real-world intelligent control problems, such as fuzzy model predictive control of nonlinear processes and chaotic time series prediction.