Porcellio scaber algorithm (PSA) for solving constrained optimization problems

In this paper, we extend a bio-inspired algorithm called the porcellio scaber algorithm (PSA) to solve constrained optimization problems, including a constrained mixed discrete-continuous nonlinear optimization problem. Our extensive experimental results on benchmark optimization problems show that the PSA outperforms many existing methods and algorithms. The results indicate that the PSA is a promising algorithm for constrained optimization.


Introduction
Modern optimization algorithms may be roughly classified into deterministic optimization algorithms and stochastic ones. The former are theoretically sound for well-posed problems but not efficient for complicated ones. For example, when it comes to nonconvex or large-scale optimization problems, deterministic algorithms may not be a good tool for obtaining a globally optimal solution within a reasonable time due to the high complexity of the problem. Stochastic algorithms, by contrast, may lack a strong theoretical basis, but they are efficient in engineering applications and have become popular in recent years due to their capability of efficiently solving complex optimization problems, including NP-hard problems such as the travelling salesman problem. Bio-inspired algorithms play an important role among stochastic algorithms for optimization. These algorithms are designed based on observations of animal behaviors. For example, one of the well-known bio-inspired algorithms, particle swarm optimization, initially proposed by Kennedy and Eberhart [1], is inspired by the social foraging behavior of some animals, such as the flocking behavior of birds.
There are some widely used benchmark problems in the field of stochastic optimization. The pressure vessel design optimization problem is an important benchmark problem in structural engineering optimization [2]. The problem is a constrained mixed discrete-continuous nonlinear optimization problem, and in recent years many bio-inspired algorithms have been proposed to solve it [3][4][5][6]. The widely used benchmark problems also include a nonlinear optimization problem proposed by Himmelblau [7]. Recently, a novel bio-inspired algorithm called the porcellio scaber algorithm (PSA) was proposed by Zhang and Li [8], which is inspired by two behaviors of porcellio scaber. In this paper, we extend the result in [8] to solve constrained optimization problems. As the original algorithm proposed in [8] deals with the unconstrained case, we make some improvements to the original PSA so as to make it capable of solving constrained optimization problems. Then, we compare the corresponding experimental results with reported ones for the aforementioned benchmark problems as case studies. Our extensive experimental results show that the PSA performs much better than many existing algorithms in solving optimization problems. Before ending this introductory section, the main contributions of this paper are listed as follows:
1) We extend the PSA to solve constrained optimization problems, including a constrained mixed discrete-continuous nonlinear optimization problem.
2) We show, through extensive numerical experiments, that the PSA outperforms many other existing algorithms in solving constrained optimization problems.

Problem Formulation
The constrained optimization problem (COP) considered in this paper is presented as follows:

minimize f(x),
subject to g_j(x) ≤ 0, j = 1, 2, · · · , m,
l_i ≤ x_i ≤ u_i, i = 1, 2, · · · , d,     (1)

where x = [x_1, x_2, · · · , x_d]^T is a d-dimensional decision vector; l_i and u_i are the corresponding lower bound and upper bound of the ith decision variable; and f(x) : R^d → R is the cost function to be minimized. For the case that the problem is convex, there are many standard algorithms to solve it. However, for the case that the problem is not convex, the problem is difficult to solve.

Algorithm 1 Original PSA [8]
Generate initial position of porcellio scaber x_i^0 (i = 1, 2, · · · , N)
Environment condition E_x at position x is determined by f(x)
Set weighted parameter λ for decision based on aggregation and the propensity to explore novel environments
Initialize f* to an extremely large value
Initialize each element of vector x* ∈ R^d to an arbitrary value
while k < MaxStep do
  Get the position with the best environment condition, i.e., arg min_{x_j^k} { f(x_j^k) }
  Randomly choose a direction τ = [τ_1, τ_2, · · · , τ_d]^T to detect
  Detect the best environment condition min{E_x} and the worst environment condition max{E_x} at positions x_i^k + τ for all N porcellio scaber
  for i = 1 : N (all N porcellio scaber) do
    Determine the difference with respect to the position to aggregate, i.e., x_i^k − arg min_{x_j^k} { f(x_j^k) }
    Determine where to explore, i.e., pτ
    Move to a new position according to (2)
  end for
end while
Output x* and the corresponding function value f*

Algorithm Design
In this section, we modify the original PSA [8] and provide an improved PSA for solving COPs.

Original PSA
For the sake of understanding, the original PSA is given in Algorithm 1 [8], which aims at solving unconstrained optimization problems of the following form:

minimize f(x), x ∈ R^d,

where x is the decision vector and f is the cost function to be minimized. The main formula of the original PSA is given as follows [8]:

x_i^{k+1} = x_i^k − (λ(x_i^k − arg min_{x_j^k} { f(x_j^k) }) + (1 − λ)pτ),     (2)

where λ ∈ (0, 1), τ is a vector with each element being a random number, and p is defined as follows:

p = ( f(x_i^k + τ) − min{E_x} ) / ( max{E_x} − min{E_x} ).

Evidently, the original PSA does not take constraints into consideration. Thus, it cannot be directly used to solve COPs.
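To make the update rule concrete, the following is a minimal sketch of one iteration of the original PSA in Python with NumPy. The function name `psa_step`, the Gaussian choice for τ, and the normalized form of p are our assumptions; only the aggregation/exploration structure of (2) comes from the paper.

```python
import numpy as np

def psa_step(X, f, lam=0.6, sigma=0.1, rng=None):
    """One iteration of the unconstrained PSA update, a sketch of Eq. (2).

    X   : (N, d) array, current positions of the N porcellio scaber
    f   : cost function mapping a d-vector to a scalar
    lam : weight lambda in (0, 1) between aggregation and exploration
    """
    rng = np.random.default_rng() if rng is None else rng
    N, d = X.shape
    tau = rng.normal(0.0, sigma, size=d)         # common detected direction (assumed Gaussian)
    E = np.array([f(x) for x in X])              # environment condition E_x = f(x)
    best = X[np.argmin(E)]                       # position with the best environment condition
    E_probe = np.array([f(x + tau) for x in X])  # conditions detected at x + tau
    lo, hi = E_probe.min(), E_probe.max()
    # p normalizes how bad the probed condition is (assumed form of p)
    p = (E_probe - lo) / (hi - lo) if hi > lo else np.zeros(N)
    # Eq. (2): aggregate toward the best position, explore along tau
    return X - (lam * (X - best) + (1.0 - lam) * p[:, None] * tau[None, :])
```

Since λ ∈ (0, 1), the aggregation term contracts the swarm toward the current best position at rate (1 − λ) per step, while the pτ term keeps injecting exploration.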

Inequality constraint conversion
In this subsection, we provide some improvements to the original PSA to make it capable of solving COPs. As the original PSA focuses on solving unconstrained problems, we first incorporate the inequality constraints g_i(x) ≤ 0 (i = 1, 2, · · · , m) into the cost function. To this end, the penalty method is used, and a new cost function is obtained as follows:

f̃(x) = f(x) + γ Σ_{i=1}^m h(g_i(x)),     (3)

where h(g_i(x)) is defined as

h(g_i(x)) = max{0, g_i(x)},

and γ ≫ 1 is the penalty parameter. By using a large enough value of γ (e.g., 10^12), unless all the inequality constraints g_i(x) ≤ 0 (i = 1, 2, · · · , m) are satisfied, the penalty term γ Σ_{i=1}^m h(g_i(x)) takes a dominant role in the cost function. On the other hand, when all the inequality constraints g_i(x) ≤ 0 (i = 1, 2, · · · , m) are satisfied, h(g_i(x)) = 0, ∀i, and thus f̃(x) = f(x).

Addressing simple bounds
In terms of the simple bounds l_j ≤ x_j ≤ u_j with j = 1, 2, · · · , d, they are handled via two methods. Firstly, to satisfy the simple bounds, the initial position of each porcellio scaber is set via the following formula:

x_{i,j}^0 = l_j + rand(0, 1)(u_j − l_j),     (4)

where x_{i,j}^0 denotes the initial value of the jth variable of the position vector of the ith (with i = 1, 2, · · · , N) porcellio scaber, and rand(0, 1) denotes a random number in the interval (0, 1), which can be realized by using the rand function in Matlab. Formula (4) guarantees that the initial positions of all the porcellio scaber satisfy the simple bounds l_j ≤ x_j ≤ u_j with j = 1, 2, · · · , d.
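The bound-respecting initialization is a one-liner in vectorized form; this sketch (with the assumed name `init_positions`) draws all N initial positions at once.

```python
import numpy as np

def init_positions(l, u, N, rng=None):
    """Initial positions per Eq. (4): x0[i, j] = l[j] + rand(0, 1) * (u[j] - l[j]),
    so every porcellio scaber starts inside the box [l, u]."""
    rng = np.random.default_rng() if rng is None else rng
    l, u = np.asarray(l, float), np.asarray(u, float)
    return l + rng.random((N, l.size)) * (u - l)
```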
Secondly, if the positions of all the porcellio scaber are updated according to (2) with f(x) replaced by f̃(x) defined in (3) for the constrained optimization problem (1), then the updated position vectors x_i^k may violate the simple bound constraints. To handle this issue, based on (2), a modified evolution rule is proposed as follows:

x_i^{k+1} = P_Ω(x_i^k − (λ(x_i^k − arg min_{x_j^k} { f̃(x_j^k) }) + (1 − λ)pτ)),     (5)

where λ ∈ (0, 1) and τ is a vector with each element being a random number. Besides, P_Ω is a projection function that makes the updated position satisfy the simple bound constraints, where Ω = {x ∈ R^d | l_i ≤ x_i ≤ u_i, i = 1, 2, · · · , d}. The mathematical definition of P_Ω(x) is P_Ω(x) = arg min_{y∈Ω} ||y − x||_2, with ||·||_2 denoting the Euclidean norm. The algorithm for the evaluation of P_Ω(x) is given in Algorithm 2.

Algorithm 2 Algorithm for the evaluation of P_Ω(x)
for i = 1 : d do
  if x_i < l_i then set x_i = l_i
  else if x_i > u_i then set x_i = u_i
  end if
end for
Output P_Ω(x) = x
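Because Ω is a box, the Euclidean projection has a closed-form componentwise clip, which is what Algorithm 2 computes; the function name `project_box` is ours.

```python
import numpy as np

def project_box(x, l, u):
    """P_Omega(x) for the box Omega = {x : l <= x <= u}: the componentwise
    clip, which is the closed-form solution of argmin_{y in Omega} ||y - x||_2."""
    return np.minimum(np.maximum(np.asarray(x, float), l), u)
```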

PSA for COPs
Based on the above modifications, the resultant PSA for solving COPs is given in Algorithm 3. In the following section, we will use some benchmark problems to test the performance of the PSA in solving COPs.
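Putting the penalty cost (3), the initialization (4), and the projected update (5) together gives a compact sketch of the PSA for COPs. All names, the Gaussian τ, and the normalized form of p are our assumptions; the overall loop structure follows the description of Algorithm 3.

```python
import numpy as np

def psa_cop(f, gs, l, u, N=40, lam=0.6, sigma=0.1, gamma=1e12,
            max_step=1500, rng=None):
    """Sketch of the PSA for COPs: PSA on the penalized cost with box projection."""
    rng = np.random.default_rng() if rng is None else rng
    l, u = np.asarray(l, float), np.asarray(u, float)
    ft = lambda x: f(x) + gamma * sum(max(0.0, g(x)) for g in gs)   # Eq. (3)
    X = l + rng.random((N, l.size)) * (u - l)                       # Eq. (4)
    x_star, f_star = X[0].copy(), np.inf
    for _ in range(max_step):
        E = np.array([ft(x) for x in X])          # environment conditions
        if E.min() < f_star:                      # record best-ever position
            f_star, x_star = float(E.min()), X[np.argmin(E)].copy()
        best = X[np.argmin(E)]
        tau = rng.normal(0.0, sigma, size=l.size) # detected direction (assumed Gaussian)
        Ep = np.array([ft(x + tau) for x in X])
        p = ((Ep - Ep.min()) / (Ep.max() - Ep.min())
             if Ep.max() > Ep.min() else np.zeros(N))
        # Eq. (5): aggregation + exploration, then projection onto the box
        X = X - (lam * (X - best) + (1.0 - lam) * p[:, None] * tau[None, :])
        X = np.clip(X, l, u)                      # P_Omega via componentwise clip
    return x_star, f_star
```

On a toy COP (minimize x_1^2 + x_2^2 subject to x_1 ≥ 1 on the box [−5, 5]^2) this sketch settles near the constrained minimizer (1, 0); of course, nothing here reproduces the exact experimental setup of the paper.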

Case Studies
In this section, we present experimental results of using the PSA to solve COPs.

Case I: Pressure vessel problem
In this subsection, the pressure vessel problem is considered. The pressure vessel problem is to find a set of four design parameters, which are demonstrated in Fig. 1, to minimize the total cost of a pressure vessel considering the cost of material, forming, and welding [1]. The four design parameters are the inner radius R, the length L of the cylindrical section, the thickness T_h of the head, and the thickness T_s of the body. Note that T_s and T_h are integer multiples of 0.0625 in., and R and L are continuous variables.

Algorithm 3 PSA for COPs
Cost function f̃(x) as defined in (3), x = [x_1, x_2, · · · , x_d]^T
Generate initial position of porcellio scaber x_i^0 (i = 1, 2, · · · , N) according to (4)
Environment condition E_x at position x is determined by f̃(x)
Set weighted parameter λ for decision based on aggregation and the propensity to explore novel environments
Set penalty parameter γ in f̃(x) to a large enough value
Initialize f* to an extremely large value
Initialize each element of vector x* ∈ R^d to an arbitrary value
while k < MaxStep do
  Get the position with the best environment condition, i.e., arg min_{x_j^k} { f̃(x_j^k) }
  Randomly choose a direction τ = [τ_1, τ_2, · · · , τ_d]^T to detect
  Detect the best environment condition min{E_x} and the worst environment condition max{E_x} at positions x_i^k + τ for all N porcellio scaber
  for i = 1 : N (all N porcellio scaber) do
    Determine the difference with respect to the position to aggregate, i.e., x_i^k − arg min_{x_j^k} { f̃(x_j^k) }
    Determine where to explore, i.e., pτ
    Move to a new position according to (5), where P_Ω(x) is evaluated via Algorithm 2
  end for
end while
Output x* and the corresponding function value f*

With x = [T_s, T_h, R, L]^T, the pressure vessel problem can be formulated as follows [9]:

minimize f(x) = 0.6224T_sRL + 1.7781T_hR² + 3.1661T_s²L + 19.84T_s²R,
subject to g_1(x) = −T_s + 0.0193R ≤ 0,
g_2(x) = −T_h + 0.00954R ≤ 0,
g_3(x) = −πR²L − (4/3)πR³ + 1296000 ≤ 0,
g_4(x) = L − 240 ≤ 0.

Evidently, this problem has a nonlinear cost function, three linear inequality constraints, and one nonlinear inequality constraint. Besides, there are two discrete and two continuous design variables. Thus, the problem is relatively complicated. As this problem is a mixed discrete-continuous optimization problem, the projection function P_Ω(x) is slightly modified and presented in Algorithm 4.
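As an illustration, the cost and constraints of the pressure vessel benchmark, as commonly stated in the literature, can be coded directly, together with a mixed discrete-continuous projection in the spirit of Algorithm 4 (clip to the box, then snap T_s and T_h to multiples of 0.0625). The variable ordering x = [T_s, T_h, R, L] and the box bounds below are assumptions for illustration.

```python
import numpy as np

def pv_cost(x):
    """Pressure vessel cost (material, forming, welding), x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

PV_CONSTRAINTS = [                                          # each g(x) <= 0
    lambda x: -x[0] + 0.0193 * x[2],                        # g1: shell thickness
    lambda x: -x[1] + 0.00954 * x[2],                       # g2: head thickness
    lambda x: (-np.pi * x[2]**2 * x[3]
               - (4.0 / 3.0) * np.pi * x[2]**3 + 1296000.0),# g3: volume
    lambda x: x[3] - 240.0,                                 # g4: length limit
]

def project_mixed(x, l, u, step=0.0625, discrete=(0, 1)):
    """Algorithm-4-style projection: clip to the box, then round the discrete
    coordinates (Ts, Th) to the nearest integer multiple of 0.0625."""
    y = np.clip(np.asarray(x, float), l, u)
    for j in discrete:
        y[j] = step * round(y[j] / step)
    return np.clip(y, l, u)
```

Evaluating `pv_cost` at a frequently reported good design (T_s = 0.8125, T_h = 0.4375, R ≈ 42.0984, L ≈ 176.6366) gives a cost of roughly 6.06 × 10^3, in line with the values typically quoted for this benchmark.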
Besides, the initialization of the positions of the porcellio scaber is modified accordingly: for the discrete variables T_s and T_h, the value generated by (4) is rounded to the nearest integer multiple of 0.0625. The best result we obtained using the PSA in 1000 instances of execution, together with those obtained by various existing algorithms or methods for solving this problem, is listed in Table 1. Note that, in the experiments, 40 porcellio scaber are used, the parameter λ is set to 0.6, and MaxStep is set to 100000, with each element of τ being a zero-mean random number with standard deviation 0.1. As seen from Table 1, the best result obtained by the PSA is better than most of the existing results. Besides, the difference between the best function value among all those in the table and the best function value obtained via the PSA is quite small.

Case II: Himmelblau's nonlinear optimization problem
In this subsection, we consider a nonlinear optimization problem proposed by Himmelblau [7]. This problem is also one of the well-known benchmark problems for bio-inspired algorithms. The problem is formally described as follows [7]:

minimize f(x) = 5.3578547x_3² + 0.8356891x_1x_5 + 37.293239x_1 − 40792.141,
subject to 0 ≤ g_1(x) ≤ 92,
90 ≤ g_2(x) ≤ 110,
20 ≤ g_3(x) ≤ 25,
78 ≤ x_1 ≤ 102, 33 ≤ x_2 ≤ 45, 27 ≤ x_i ≤ 45 (i = 3, 4, 5),

with

g_1(x) = 85.334407 + 0.0056858x_2x_5 + 0.0006262x_1x_4 − 0.0022053x_3x_5,
g_2(x) = 80.51249 + 0.0071317x_2x_5 + 0.0029955x_1x_2 + 0.0021813x_3²,
g_3(x) = 9.300961 + 0.0047026x_3x_5 + 0.0012547x_1x_3 + 0.0019085x_3x_4,

and x = [x_1, x_2, x_3, x_4, x_5]^T being the decision vector. In this problem, each double-sided nonlinear inequality can be represented by two single-sided nonlinear inequality constraints. For example, the constraint 90 ≤ g_2(x) ≤ 110 can be replaced by the following two constraints: −g_2(x) ≤ −90 and g_2(x) ≤ 110.

Table 2. Comparisons of best results for Himmelblau's nonlinear optimization problem
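The double-sided-to-single-sided conversion is mechanical and can be sketched as follows; `split_two_sided` is an assumed helper name, and the `g2` below is a hypothetical stand-in, not the actual g_2 of Himmelblau's problem.

```python
# Split a double-sided constraint a <= g(x) <= b into the two single-sided
# constraints a - g(x) <= 0 and g(x) - b <= 0, the form the penalized
# cost in (3) expects.
def split_two_sided(g, a, b):
    return [lambda x: a - g(x), lambda x: g(x) - b]

g2 = lambda x: x[0] + x[1]                 # illustrative placeholder only
low, high = split_two_sided(g2, 90.0, 110.0)
```

Both returned functions are nonpositive exactly when the original double-sided constraint holds, so they can be appended directly to the constraint list fed to the penalty method.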
Thus, this problem can also be solved by the PSA proposed in this paper. The best result we obtained via the PSA in 1000 instances of execution, together with the results obtained by other algorithms or methods, is listed in Table 2. In the experiments, 40 porcellio scaber are used, the parameter λ is set to 0.6, and MaxStep is set to 100000, with each element of τ being a zero-mean random number with standard deviation 0.1. Evidently, the best result generated by the PSA ranks second among all the results in Table 2.
From the above results, we conclude that the PSA is a relatively promising algorithm for solving constrained optimization problems. The quite small performance difference between the PSA and the best algorithm may result from the use of the penalty method with a constant penalty parameter.

Conclusions
In this paper, the bio-inspired algorithm PSA has been extended to solve nonlinear constrained optimization problems by using the penalty method. Case studies have validated the efficacy and superiority of the resultant PSA. The results have indicated that the PSA is a promising algorithm for solving constrained optimization problems. There are several issues that require further investigation, e.g., how to select the best penalty parameter, one that guarantees not only compliance with the constraints but also the optimality of the obtained solution. Besides, how to enhance the efficiency of the PSA is also worth investigating.