Local search for parallel optimization algorithms for high dimensional optimization problems

— Local search algorithms play an important role when employed with optimization algorithms tackling numerous optimization problems, since they often lead to better solutions. However, in many applications this coupling is not practical, as the local search contributes little to the search process. This issue has received little prior study, for either traditional or parallel optimization algorithms. This paper investigates it for parallel optimization algorithms tackling high dimensional subset problems. The obtained results lead to concrete recommendations.


Introduction
Optimization algorithms play a crucial role, since many real-world applications are optimization problems that can be tackled using them. They involve searching for the best configuration of the problem factors (the best solution) in order to achieve particular objectives [1].
Many fields nowadays involve dealing with high dimensional optimization problems. One example is subset problems, which require choosing, as opposed to ordering or sequencing [2], because the order between the solution components in a partial solution is not significant. This raises the need for developing suitable algorithms to tackle them, especially when they are high dimensional.
Subset problems are important because a variety of real-world applications can be modeled as subset problems, in addition to their own significant applications [3]-[4]. However, these applications cannot be tackled by traditional optimization algorithms within reasonable time, and such algorithms may also fail to provide high quality solutions. This has motivated researchers to develop more appropriate optimization algorithms, such as hybrid and parallel algorithms. The latter have proven successful in handling complex optimization problems such as high dimensional ones, since dimensionality reduction is not applicable to all of these problems.
Genetic algorithms are inherently parallelizable search methods, since they require a large number of independent calculations with insignificant synchronization and communication costs [5]. Moreover, parallelism emerges naturally in population-based algorithms, as every member of the population is an autonomous unit; running these algorithms in parallel therefore improves their performance considerably [6]. Can these algorithms be further enhanced by employing local search algorithms with them? This paper investigates this question, i.e., the role that local search algorithms may play when employed with parallel optimization algorithms tackling high dimensional subset problems. This is accomplished through two kinds of experiments: running the optimization algorithm alone, and coupling it with a variety of local search algorithms.
The rest of this paper is structured as follows. The next section addresses local search. Section 3 discusses the parallel optimization algorithms. The experiments are described in Section 4, and Section 5 details the obtained results. The conclusions and future work are given in the last section.

Local search and optimization algorithms
Local search algorithms start with an initial solution and try to find a better one in a suitably defined neighborhood of the current solution. In the simplest variant, the neighborhood is examined and, if a better solution is discovered, it replaces the current one. Such algorithms are called iterative improvement methods, since every move is performed only if the discovered solution is better than the current one, and they terminate once they reach a local minimum. The choice of a suitable neighborhood structure is vital for these algorithms and must be made in a problem-dependent manner [1], [7]-[9].
Local search methods come in two variants. The first examines the neighborhood and accepts the first solution that is better than the current one. The second examines the complete neighborhood and returns a solution with the best objective function value in it. Both terminate at local minima [1].
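The two iterative improvement variants described above can be sketched as follows. This is a minimal illustration over a 1-bit-flip neighborhood with a maximization objective; the function names and the choice of neighborhood are assumptions for illustration, not the implementation used in this paper.

```python
def local_search(x, f, strategy="first"):
    """Iterative improvement over the 1-bit-flip neighborhood of x.

    strategy="first": accept the first improving neighbor found.
    strategy="best":  scan the whole neighborhood, move to the best neighbor.
    Stops at a local optimum (no single flip improves f). Maximization assumed.
    """
    improved = True
    while improved:
        improved = False
        best_i, best_val = None, f(x)
        for i in range(len(x)):
            x[i] ^= 1                      # flip bit i
            val = f(x)
            if val > best_val:
                if strategy == "first":
                    improved = True
                    break                  # keep the flip, rescan from here
                best_i, best_val = i, val
            x[i] ^= 1                      # undo the flip
        else:
            if strategy == "best" and best_i is not None:
                x[best_i] ^= 1             # apply the best flip found
                improved = True
    return x
```

On a simple objective such as maximizing the number of ones, both strategies converge to the all-ones local (here also global) optimum; on rugged objectives they may stop at different local optima.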
Local search methods have been employed with many optimization algorithms to keep them from getting stuck in local minima when tackling various applications. For example, local search is used with ant colony optimization algorithms to perform coordinated actions that cannot be performed by a single ant, such as observing the path constructed by each agent and selecting one or more ants that are then allowed to deposit additional pheromone on the solution components they used. Since solution construction in ant colony optimization uses a different neighborhood than local search, there is a high probability that the latter will improve a solution constructed by an ant.
However, there are circumstances where such coupling turns out to be of little use, such as very strongly constrained applications. These are applications for which polynomial-size local search neighborhoods contain few feasible solutions or none at all; local search therefore becomes of very limited use, and even obtaining a feasible solution is extremely hard [4], [10]. A good example is subset applications that require selecting, and hence adding a local search method to the optimization algorithm used to handle them is not generally recommended.
On the other hand, local search used as a standalone solver suffers from the difficulty of finding good starting solutions [7]. Moreover, the performance of iterative improvement techniques applied to optimization applications is typically unsatisfactory [1].

Parallel algorithms
There are circumstances where conventional optimization algorithms cannot successfully tackle high dimensional optimization problems in reasonable time. In these situations, parallel algorithms can be employed, as they have proven successful in handling numerous optimization problems. However, tackling real-world optimization problems with complex features, such as high dimensionality or being subset problems (which lack many helpful features that facilitate solving other optimization problems), requires additional capabilities for such problems to be solved within reasonable time; that is, the overall system may run more than one algorithm in parallel. This paper investigates such systems when tackling high dimensional subset problems.
Island population-based approaches are favored over other parallel approaches, since parallel searches that exchange information are frequently superior to independent ones. They can converge rapidly because each subpopulation contains fewer individuals than the single population used by traditional genetic algorithms. Additionally, every island searches a different region of the whole search space, which improves the exploratory behavior of these techniques. The island approaches therefore combine the speed of using multiple processors with the benefits of cooperating parallel searches; they consist of parallel computational components executed on a parallel machine, making them parallel search methods at both the software and hardware levels. Island GAs are used in the experiments performed in this work, since they have been widely applied to various real-world optimization problems [5], [11]-[14]. Generally speaking, parallel genetic algorithms have been widely applied to many fields such as numerical mathematics, graph theory, computer science, engineering, finance, and economics [15].
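The island-model cooperation described above can be sketched as follows. This is an illustrative, sequential simulation of the island scheme with ring migration; the `evolve` callback, the ring topology, and the migration policy (best replaces worst) are assumptions for illustration, not the exact configuration used in the paper's experiments.

```python
import random

def migrate_ring(islands, fitness):
    """Ring migration: the best individual of island i replaces the
    worst individual of island (i+1) mod n."""
    bests = [max(pop, key=fitness) for pop in islands]
    for i, pop in enumerate(islands):
        worst = min(range(len(pop)), key=lambda j: fitness(pop[j]))
        pop[worst] = bests[(i - 1) % len(islands)]   # migrant from the left neighbor

def island_ga(evolve, fitness, islands, generations=100, migrate_every=10):
    """Each island evolves its subpopulation independently via `evolve`;
    every `migrate_every` generations the islands exchange individuals."""
    for g in range(1, generations + 1):
        islands = [evolve(pop) for pop in islands]
        if g % migrate_every == 0:
            migrate_ring(islands, fitness)
    return max((max(pop, key=fitness) for pop in islands), key=fitness)
```

In a genuinely parallel implementation each island would run on its own processor, with migration implemented as message passing; the sequential loop above only shows the logical structure.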
The idea of GAs for handling optimization applications is that they begin with a set of randomly created candidate solutions. The genetic operators are then applied to this population to obtain a new one. A fitness function is used to measure the quality of a solution; an individual with higher fitness is likely to be selected many times, in contrast to weaker ones [16].
Moving from one population to the next is accomplished through the genetic operators. The selection operator selects pairs of chromosomes that reproduce to form the next population. Mutation and crossover are two of the most widely used operators in genetic algorithms with binary representation: mutation operates on a single chromosome and flips a bit at random, while crossover operates on two parents to create two offspring [17].
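The three operators can be sketched as follows for binary chromosomes. Tournament selection, one-point crossover, and a per-gene mutation rate are common concrete choices, assumed here for illustration; the paper does not specify which variants its GA uses.

```python
import random

def select(pop, fitness):
    """Tournament selection of size two: the fitter of two random
    individuals wins (ties go to the first)."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """One-point crossover: cut both parents at the same random point
    and swap the tails, producing two offspring."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(chrom, rate=0.01):
    """Bit-flip mutation: each gene flips independently with probability `rate`."""
    return [g ^ (random.random() < rate) for g in chrom]
```

Note that one-point crossover of two complementary parents conserves the total number of ones across the two offspring, which makes it easy to sanity-check.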
For subset problems with binary representation, the aim is to find the best individual, where each bit represents an element of the original set: one means the element is selected and zero means it is not. In other words, the goal is to find the chromosome with the smallest number of ones that achieves the best performance [16]-[17].

Computational experiments
Several experiments were carried out using different instances of the chosen benchmark subset problem, as follows.

Method
The following experiments were performed:
• without adding a local search method to the utilized algorithm, and
• with a local search method added to it.
7. This procedure stops when the stopping condition is satisfied; otherwise, go to step 4.
The genetic algorithm that does not utilize local search follows the same steps except the sixth one (the local search step).
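The earlier steps of the procedure are not fully legible in the source, but the overall hybrid loop it describes — standard GA steps with a local search refinement as the sixth step, repeated until a stopping condition — can be sketched generically. All names, parameters, and operator choices below are illustrative assumptions, not the paper's exact procedure.

```python
import random

def memetic_ga(fitness, local_search, n_bits, pop_size=30, generations=50):
    """Generic GA + local search hybrid on binary chromosomes.

    Each generation: tournament selection, one-point crossover, bit-flip
    mutation, then local search refinement of every offspring (the step
    that is skipped in the plain-GA runs). The stopping condition here is
    simply a fixed generation budget."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(random.sample(pop, 2), key=fitness)      # selection
            p2 = max(random.sample(pop, 2), key=fitness)
            cut = random.randrange(1, n_bits)                 # crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (random.random() < 1 / n_bits)       # mutation
                     for g in child]
            nxt.append(local_search(child, fitness))          # refinement (step 6)
        pop = nxt
    return max(pop, key=fitness)
```

Passing an identity function as `local_search` recovers the plain GA, which mirrors the two experimental settings compared in this paper.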

The utilized benchmark problem
In the experiments, we utilized the knapsack problem (KP): the problem of selecting a subset out of a superset of elements such that the total profit is maximized while the total weight of the selected objects does not exceed the maximum capacity. Each element has a weight and a profit and can be selected at most once. Like other high dimensional NP-hard optimization problems, this problem cannot be solved exactly in polynomial time unless P = NP.
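Under the binary encoding discussed earlier, a knapsack chromosome can be evaluated as follows. This is a minimal sketch; handling infeasible selections by assigning them zero fitness is one common choice (penalty and repair schemes are alternatives), and is an assumption here rather than the paper's stated method.

```python
def knapsack_fitness(chrom, weights, profits, capacity):
    """Total profit of the selected items; chrom[i] == 1 means item i
    is packed. Selections whose total weight exceeds the capacity are
    treated as infeasible and scored 0 in this sketch."""
    total_weight = sum(w for g, w in zip(chrom, weights) if g)
    total_profit = sum(p for g, p in zip(chrom, profits) if g)
    return total_profit if total_weight <= capacity else 0
```

For example, with weights (2, 3, 4), profits (3, 4, 5), and capacity 7, packing items 1 and 3 weighs 6 and scores 8, while packing all three items exceeds the capacity and scores 0.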
The knapsack problem has been broadly investigated due to its enormous practical applicability in numerous domains such as industry, management, and operations research. Besides direct applications such as shipping, numerous real-world industrial applications from different fields can be modeled as knapsack problems [3], [19]-[20].
The experiments utilized the knapsack instances shown in Table 1.

Results
Table 2 presents the results of the two systems, with and without local search, on the knapsack instances of Table 1. The results are averages over ten independent runs. The experiments were developed using the R language [21]. The same settings were used for the genetic algorithm in all runs, and binary representation was used since it is the most suitable one for subset applications. The second, fifth, eighth, and eleventh columns of Table 2 list the average number of items selected for inclusion in the knapsack; the third, sixth, ninth, and twelfth columns list the average total weight of the selected items; and the fourth, seventh, tenth, and thirteenth columns show the average total profit of including them. These four column groups correspond to employing BFGS with box constraints, the conjugate-gradient method, the Nelder-Mead method, and no local search, respectively. We do not report computational time, since adding an extra computational component such as a local search algorithm obviously increases it.

Discussion
This work investigates the effect of including local search on the performance of parallel optimization algorithms through two kinds of experiments on several knapsack instances.
The results show that integrating a local search technique into the optimization algorithm can lead to better results, e.g., the conjugate-gradient method on the first instance and the Nelder-Mead method on the fourth. However, on the same instances other local search methods can degrade performance, e.g., Nelder-Mead and BFGS with box constraints on the first three instances. This emphasizes that, when designing a hybrid system, one should check whether the employed local search method actually improves the results; consequently, different local search techniques have to be tried, in addition to considering the general requirements for applying local search methods: the objective function, the search space, and the neighborhood structure [1].
Moreover, one should not consider only known local search methods: one can design a local search method that particularly suits the given optimization problem, or represent the problem in a new way so that a particular known local search method can be employed with the optimization algorithm tackling it, as in the work of Bondarenko [22].

Conclusions and future work
The aim of this paper was to investigate the impact of adding local search to a host parallel optimization algorithm. This was achieved through two kinds of experiments, with and without local search, on high dimensional 0-1 knapsack instances.
When integrated into an optimization algorithm handling a subset problem, one local search method may lead to better solutions, while others can worsen the performance of the host optimization algorithm, in addition to the extra computation time they require.
As future work, further testing on other subset problems should be performed, and other parallel algorithms should be tried.

Table 1 .
The details of the utilized instances.

Table 2 .
The obtained results.