Accepted: 2024-10-12
(Communicated by Kok Lay Teo)
Abstract: Global optimization problems arise in numerous fields, such as molecular biology, economics and finance, environmental engineering, and industrial manufacturing. If a global optimization problem is a convex program, a local minimization algorithm suffices to find its global optimal solution. Unfortunately, in most cases these problems are non-convex optimization problems with multiple minimizers, which raises a scientific question: how can a better minimizer be found from a known local minimizer of the objective function?
Traditional nonlinear programming algorithms, such as quasi-Newton methods, cannot solve this problem directly, so finding the global minimizer of a non-convex objective function remains a great challenge. In recent years, many studies have focused on global optimization theory and algorithms. In general, global optimization methods can be divided into two categories: stochastic methods and deterministic methods. Stochastic methods include evolutionary algorithms, simulated annealing, and particle swarm optimization, among others. These methods are inspired by physical principles or natural evolution and escape local minima through probability-based mechanisms. Although stochastic methods have been applied effectively to many intricate problems, some issues remain open, such as convergence analysis and parameter selection. Deterministic methods, such as the filled function method, the tunnelling method, and covering methods, converge faster and can often quickly find a minimizer better than the current one. Compared with other deterministic methods, the filled function method is popular because it is relatively easy to implement. Its basic idea is to construct an auxiliary function at the current local minimizer of the objective function; by minimizing this auxiliary function, the current local minimizer is escaped and a better local minimizer is reached. This process is repeated until no better local minimizer can be found.
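The cycle just described can be sketched in a few lines of Python. This is an illustrative one-dimensional sketch, not the algorithm proposed in this paper: `local_min`, the step sizes, and the perturbation `delta` are hypothetical stand-ins for a proper gradient-based local solver and a systematic choice of search directions.

```python
def local_min(g, x0, step=2e-3, tol=1e-9, iters=20000):
    """Crude 1-D gradient descent with a central-difference derivative,
    standing in for any gradient-based local minimization procedure."""
    x = x0
    for _ in range(iters):
        grad = (g(x + 1e-6) - g(x - 1e-6)) / 2e-6
        x_next = x - step * grad
        if abs(x_next - x) < tol:
            break
        x = x_next
    return x

def filled_function_method(f, make_filled, x0, delta=0.1, max_cycles=10):
    """Alternate between minimizing f and minimizing a filled function
    built at the current minimizer; keep only strictly better minimizers."""
    x_star = local_min(f, x0)              # phase 1: a local minimizer of f
    for _ in range(max_cycles):
        P = make_filled(f, x_star)         # auxiliary function at x_star
        # Try to escape the current basin from small perturbations on either
        # side of x_star (a 1-D analogue of trying several directions), then
        # re-minimize the objective from wherever the descent on P ends.
        candidates = [local_min(f, local_min(P, x_star + d))
                      for d in (delta, -delta)]
        best = min(candidates, key=f)
        if f(best) < f(x_star) - 1e-10:    # strictly better minimizer found
            x_star = best
        else:
            break                          # no improvement: terminate
    return x_star
```

With a filled function of Ge's form as `make_filled`, this loop escapes the first basin of a two-minimum quartic and returns its global minimizer; the quality of the result hinges entirely on how well the auxiliary function is constructed, which is the subject of this paper.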
The first filled function, introduced by Ge, is
$P(x,x_{1}^{\ast}, r, \rho)=\frac{1}{r+f(x)}\exp\left(-\frac{\|x-x_{1}^{\ast}\|^{2}}{\rho^{2}}\right),$
where the parameters $r$ and $\rho$ need to be chosen appropriately. However, Ge's filled function has some shortcomings: because the parameters $r$ and $\rho$ are subject to many restrictions, it is difficult to select them so that $P(x,x_{1}^*, r, \rho)$ satisfies the definition of a filled function. Constructing a filled function with good properties is one of the keys to a successful filled function algorithm, and researchers have therefore done much beneficial work enriching the forms of filled functions. These filled functions possess several merits, yet they also exhibit drawbacks that call for further refinement and improvement. For some two-parameter or even single-parameter filled functions, parameter adjustment remains difficult. Non-smooth, non-differentiable, or even discontinuous filled functions cannot be minimized by efficient local optimization procedures based on gradient information, which reduces the efficiency of the filled function algorithm. To address the difficulty of adjusting parameters, some scholars have proposed parameter-free filled functions. However, some parameter-free filled functions carry no information about the objective function when $f(x)\geq f(x_{1}^{*})$, which may make it difficult to find the global minimum point. In addition, without adjustable parameters, a proposed function in the literature may fail to satisfy the definition of a filled function.
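Ge's function is straightforward to transcribe into code, which also makes its parameter restrictions concrete: $r$ must keep the denominator $r+f(x)$ positive on the region of interest, and $\rho$ governs how quickly the exponential factor suppresses $P$ away from $x_{1}^{\ast}$. A minimal sketch (the vector handling and parameter values below are illustrative only):

```python
import math

def ge_filled_function(f, x1_star, r, rho):
    """Return Ge's filled function
        P(x) = exp(-||x - x1*||^2 / rho^2) / (r + f(x)).
    r must keep r + f(x) > 0 on the region of interest; rho controls the
    radius of the exponential 'bump' centred at x1*."""
    def P(x):
        sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, x1_star))
        return math.exp(-sq_dist / rho ** 2) / (r + f(x))
    return P
```

With suitable parameters, $x_{1}^{\ast}$ becomes a local maximizer of $P$, so a descent method started near $x_{1}^{\ast}$ is pushed out of its basin; the difficulty discussed above is precisely that suitable values of $r$ and $\rho$ are hard to pin down in advance.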
Based on the above considerations, a new one-parameter filled function is proposed in this paper. The main contributions of this work are summarized below:
(i) The new filled function is continuously differentiable, and its gradient information can be exploited in the local minimization procedure. This simplifies the minimization process and makes the algorithm more efficient and convenient.
(ii) In all cases, the proposed filled function contains information about the objective function.
(iii) The new filled function has a single, easily adjustable parameter that can be tuned to the specific problem, enabling the algorithm to adapt to the problem at hand and avoiding issues similar to those encountered in the literature.
Next, we conduct a theoretical analysis of the properties of the new filled function. These properties ensure the convergence of the minimization process for the proposed filled function, so the iterative minimization in the new algorithm runs successfully over the whole region. Without them, the filled function could behave like the one in the literature that has no local minimizer within the entire feasible region. Furthermore, supported by this theoretical analysis, we design a filled function method that is both easy to implement and efficient. A series of rigorous numerical experiments verifies the effectiveness and reliability of the new algorithm; in comparative tests against two other filled function methods, our algorithm shows clear advantages, and the experimental results support its superiority.