
Just accepted

Accepted, unedited articles published online and citable. The final edited and typeset version of record will appear in the future.
  • Zihao Jia, Youlin Shang, Yan Feng, Yufei Ren
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-023
    Accepted: 2024-10-12

    (Communicated by Xinmin Yang)

    Abstract: A large number of problems in the social and natural sciences can be cast as global optimization problems, which arise widely in economic models, finance, network transportation, databases, integrated circuit design, image processing, chemical engineering design and control, molecular biology, environmental engineering, and so on. At present, methods for solving global optimization problems fall into two categories: deterministic algorithms and stochastic algorithms. Stochastic algorithms, such as genetic algorithms and simulated annealing, describe the iterative process through point sequences generated by a probabilistic mechanism. Deterministic algorithms use the analytic properties of the problem to generate deterministic finite (or infinite) point sequences that converge to a global optimal solution; examples include the interval method, D.C. programming algorithms, the branch and bound method, the auxiliary function method and the integral level set method. Among these, the filled function method is a deterministic method that can search for a global solution. Renpu Ge first proposed the filled function method [1]. Its main idea is to use the filled function to jump out of the current local minimizer and find a better one. The filled function method consists of two phases, local minimization and filling. In the minimization phase, a local minimization algorithm is used to obtain a local minimizer of the original problem. In the filling phase, the filled function constructed at the current local minimizer is minimized in order to escape it; the two phases alternate until a global minimizer is found.

         The following global optimization problem will be discussed

    min $f(x)$,

    s.t.  $x \in \mathbb{R}^n$.                                                                           (1)

    Ge states that if a function satisfies the filled function definition and the filling property, then it can be used to solve the problem (1). The first generation filled function formula given in [1] is as follows:

    $G(x,x_k^*,r,\rho)=\frac{1}{r+f(x)}\exp(-\frac{\|x-x_k^*\|^2}{\rho^2})$,                                                   (2)

    where $r$ and $\rho$ are given parameters. Selecting suitable parameters in formula (2) is quite difficult, and this difficulty has motivated the development of new filled function methods.

    The filled function (2) also has shortcomings, which have been discussed in depth; they mainly concern the two parameters $r$ and $\rho$. According to Gao, Ge's filled function contains the exponential term $\exp(-\frac{\|x-x_k^*\|^2}{\rho^2})$, whose graph is almost flat when $\|x-x_k^*\|^2$ is large or $\rho$ is small. This makes it difficult for a computer to discern changes in the function values.

    In this paper, a new non-parameter filled function is constructed. It satisfies the definition of a filled function and has excellent properties. A corresponding algorithm is then designed based on the new filled function. Numerical experiments and comparisons on several test problems demonstrate the feasibility and effectiveness of the algorithm.
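    As a rough illustration of the two-phase scheme described above, here is a minimal sketch in Python, assuming a SciPy local solver; `filled` stands for a concrete filled function such as (2), and all names are ours rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_method(f, filled, x0, max_cycles=20, tol=1e-8):
    """Generic two-phase filled function loop (illustrative sketch)."""
    x_star = minimize(f, x0).x                    # phase 1: local minimization
    for _ in range(max_cycles):
        # Phase 2: minimize the filled function built at x_star, starting
        # slightly away from x_star, in order to escape the current basin.
        x_shift = x_star + 1e-2 * np.ones_like(x_star)
        x_try = minimize(lambda x: filled(x, x_star), x_shift).x
        x_new = minimize(f, x_try).x              # phase 1 again, from the new basin
        if f(x_new) < f(x_star) - tol:
            x_star = x_new                        # better minimizer found; repeat
        else:
            break                                 # no improvement; accept x_star
    return x_star
```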



  • Qiao Chen, Liping Tang, Ke Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-024
    Accepted: 2024-10-12

    (Communicated by Xinmin Yang)

    Abstract: Global optimization problems are a common issue in numerous fields, such as molecular biology, economics and finance, environmental engineering, and industrial manufacturing. If a global optimization problem is a convex program, a local minimization algorithm can find its global optimal solution. Unfortunately, in most cases, these problems are non-convex optimization problems with multiple minimum points, which raises a key question: how to find a better minimizer from a known local minimizer of the objective function.

    Traditional nonlinear programming algorithms, such as quasi-Newton methods, cannot directly solve this problem, so finding the global minimizer of a non-convex objective function is a great challenge. In recent years, many studies have focused on global optimization theory and algorithms. In general, global optimization methods can be divided into two categories: stochastic methods and deterministic methods. Stochastic methods for global optimization include evolutionary algorithms, simulated annealing, and particle swarm optimization, among others. These methods are inspired by physical principles or natural evolution and escape local minima by probability-based moves. Although stochastic methods have been effectively applied to many intricate problems, some issues remain unsolved, such as convergence analysis and parameter selection. Deterministic methods, for example the filled function method, the tunnelling method and the covering method, often converge faster and quickly find a minimizer better than the current one. Compared with other deterministic methods, the filled function method is more popular because it is relatively easy to implement. The basic idea of the filled function is to construct an auxiliary function at the current local minimizer of the objective function. By minimizing the auxiliary function, the iterate quickly escapes the current local minimizer and reaches a better one. This process repeats until no better local minimizer can be found.

    The first filled function, introduced by Ge, is as follows:

    $P(x,x_{1}^*, r, \rho)=\frac{1}{r+F(x)}\text{exp}(-\frac{\|x-x_{1}^{\ast}\|^{2}}{\rho^{2}}),$

    where the parameters $r$ and $\rho$ need to be chosen appropriately. However, Ge's filled function has some shortcomings. Because there are so many restrictions on the parameters $r$ and $\rho$, it is difficult to select them so that $P(x,x_{1}^*, r, \rho)$ satisfies the definition of the filled function. Constructing a filled function with good properties is one of the keys to a successful filled function algorithm, and researchers have done much beneficial work enriching the forms of filled functions. These filled functions possess several merits, yet they also exhibit drawbacks that call for further refinement. The adjustment of parameters for some two-parameter or single-parameter filled functions is still difficult. Non-smooth, non-differentiable or even discontinuous filled functions cannot be minimized by efficient gradient-based local optimization procedures, which decreases the efficiency of the filled function algorithm. To address the difficulties in adjusting parameters, some scholars have proposed parameter-free filled functions. However, some parameter-free filled functions carry no information about the objective function when $f(x)\geq f(x_{1}^{*})$, which may make it difficult to find the global minimum point. In addition, without parameter adjustment, some functions proposed in the literature may fail to satisfy the definition of the filled function.

        Based on the above considerations, a new one-parameter filled function is proposed in this paper. The main contributions of this work are summarized below:

        (i) The new filled function is continuously differentiable and its gradient information can be used in local minimization procedure. Thus, the realization of the minimization process is simplified, and the algorithm is more efficient and convenient.

          (ii)  In any case, our proposed filled function always contains information about the objective function.

         (iii)  The new filled function has an easily adjustable parameter that can be tuned according to the specific problem, enabling the algorithm to adapt to the problem and avoiding issues similar to those encountered in the literature.

    We then analyze the theoretical properties of the new filled function. These properties ensure the convergence of the minimization process applied to it, so the minimization iterations of the new filled function algorithm run successfully over the whole region. Without them, the filled function could behave like some in the literature and have no local minimizer within the entire feasible region. Supported by thorough theoretical analysis, we design a filled function method that is both easy to implement and efficient. A series of rigorous numerical experiments verifies the effectiveness and reliability of the new algorithm; in comparative tests with two other filled function methods, our algorithm demonstrates significant advantages.

  • Feifei Shen, Liu Yang, Jiaxin Yang, Xiaojiao Tong
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-016
    Accepted: 2024-09-04
    (Communicated by Guihua Lin)
    Abstract: With the development of financial investment risk management, utility-based shortfall risk measures have received increasing attention for their potential to quantify the risk of large tail losses more effectively than conditional value at risk. For portfolio selection, transaction costs incurred by changes of the investment proportions on risky assets have a significant impact on the investment strategy and the return. In this paper, we discuss a distributionally robust portfolio selection problem with shortfall risk measures and investigate the impact of transaction costs on portfolio selection. We first construct distributionally robust shortfall risk (DRSR) portfolio models without and with transaction costs. We then construct moment ambiguity sets based on empirical data for the unknown probability distributions of the random variables and discuss the convergence of the optimal values and solutions of the models as the sample size increases. Under the condition that the loss function is piecewise affine, we transform the models into equivalent semi-definite programming problems. Numerical tests illustrate the validity of our models. Our main contributions can be summarized as follows:
         (i) We construct the DRSR portfolio selection models without and with transaction costs based on the moment ambiguity set, and analyze the convergence of the optimal values and solutions of the models, with moment ambiguity sets built from empirical data, as the sample size increases.
         (ii) We use a linearization method to convert the V-shaped transaction cost function into inequality constraints (see the sketch after this list); this overcomes the nonsmoothness of the transaction cost function. We then transform the DRSR portfolio selection models into computable equivalent semi-definite programming (SDP) problems by using moment theory and duality theory.


        (iii) Based on actual stock data, the influence of different parameters (such as the transaction cost rate, risk level, ambiguity set parameters, and sample sizes) on the optimal solution and optimal value of the models is analyzed, and the validity and convergence of the new models are verified.
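    As a simple illustration of the linearization in (ii) (our notation, not necessarily the paper's): a V-shaped cost term $c_j|x_j-x_j^0|$ with cost rate $c_j>0$ and reference holding $x_j^0$ can be replaced by an auxiliary variable $y_j$ and two linear inequalities

    $y_j \geq c_j(x_j-x_j^0), \qquad y_j \geq -c_j(x_j-x_j^0),$

    with $\sum_j y_j$ added to the objective; at any optimal solution $y_j=c_j|x_j-x_j^0|$, so the nonsmooth cost is recovered exactly.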


  • Annamaria Barbagallo
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-017
    Accepted: 2024-09-04
    (Communicated by Andreas Fischer )
    Abstract: In this paper, a link-based emission pollution permit system for dynamic transportation network equilibrium problems is considered. The governing equilibrium conditions for this model are given. Then, their time-dependent variational formulation is obtained. Thanks to that, existence and continuity results for equilibrium distributions are established. Finally, the existence of Lagrange multipliers is proved, and the behavior of the model is analyzed.
  • Danni Xu, Suxiang He, Haibin Chen
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-018
    Accepted: 2024-09-04
    (Communicated by Zheng-Hai Huang)
    Abstract: In this paper, we consider the bi-quadratic optimization problem with a partially symmetric fourth-order tensor. When the objective function is concave, it has the same optimal value as a multilinear optimization problem. The relation between the augmented bi-quadratic optimization problem and the original problem is then established by restricting the domain of the solution to a unit sphere. A fast alternating proximal gradient method is proposed to find the approximate optimal value of the augmented bi-quadratic problem. Furthermore, we present an enhanced block improvement method to obtain the approximate optimal value of the original problem. Finally, we perform convergence analysis and numerical experiments to show the efficiency of the method.
  • Jianchao Bai, Yang Chen, Yuxue Ma
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-019
    Accepted: 2024-09-04
    (Communicated by Xinmin Yang)
    Abstract: In this paper, we develop an Inexact Relaxed Augmented Lagrangian Method (IR-ALM) for solving a class of convex optimization problems. Flexible relative error criteria are designed for approximately solving the resulting subproblem, and a relaxation step is exploited to accelerate its convergence numerically. By a unified variational analysis, we establish the global convergence of this IR-ALM and its sublinear convergence rate in terms of the primal iterative residual, the objective function value gap and constraint violation, respectively. Experimental results on testing the image restoration problem with different types of images show that the proposed method is effective.
  • Qiao Zhu, Liping Tang, Yibin Xiao
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-020
    Accepted: 2024-09-04
    (Communicated by Xinmin Yang)
    Abstract: The present paper proposes a new piecewise convexification method to approximate the globally efficient solution set of non-convex multi-objective optimization problems (NC-MOP) with box constraints. Initially, we utilize the α-based Branch-and-Bound (αBB) method to formulate the piecewise convexification problem for NC-MOP, which consists of a series of convex relaxation optimization problems. Subsequently, we introduce a box classification strategy to define the efficient solution set of the piecewise convexification problem and demonstrate that it effectively approximates the globally efficient solution set of NC-MOP. Finally, a new two-stage piecewise convexification algorithm combining αBB and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) is proposed, incorporating novel techniques such as box partitioning, deletion methods, and termination tests. Moreover, the effectiveness of our approach compared to a competitor is validated through experimental results.
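    For context, a standard form of the αBB convex underestimator of $f$ on a box $[x^L,x^U]$ (our notation; the paper's construction may differ in details) is

    $\check{f}(x)=f(x)+\sum_{i=1}^{n}\alpha_i(x_i^L-x_i)(x_i^U-x_i), \qquad \alpha_i\geq\max\{0,\,-\tfrac{1}{2}\min_{x\in[x^L,x^U]}\lambda_{\min}(\nabla^2 f(x))\},$

    where the added quadratic term is nonpositive on the box and, for $\alpha_i$ large enough, makes $\check{f}$ convex there.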
  • Xianjun Long, Xiaoting Wang, Gaoxi Li
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-021
    Accepted: 2024-09-04
    (Communicated by Donghui Li)
    Abstract: In this paper, we study a class of fractional optimization problems whose numerator is the sum of a nonsmooth convex function and a twice differentiable convex function, and whose denominator is a twice differentiable concave function. We propose an inexact proximal quasi-Newton algorithm with line search for solving this problem. Under moderate conditions, we prove that the sequences generated by the proposed algorithm converge to optimal solutions. In addition, the inexact proximal quasi-Newton algorithm exhibits an O(1/k) global convergence rate, where k denotes the iteration counter. Finally, two numerical experiments illustrate the effectiveness and competitiveness of the proposed algorithm.
  • Yazheng Dang, Jingwen Zhang, Jinglei Lu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-022
    Accepted: 2024-09-04
    (Communicated by Mabel Chou)
    Abstract: The symmetric alternating direction method of multipliers (ADMM) for solving nonconvex separable optimization problems has been widely studied. However, the symmetric ADMM may fail to converge when the problem is nonconvex, nonsmooth and nonseparable. This paper proposes a proximal symmetric ADMM for nonconvex nonseparable optimization, which employs a linearization technique in each subproblem and introduces a relaxation factor in the Lagrangian multiplier update step. In addition, when the potential function satisfies the Kurdyka-Łojasiewicz (KL) property, the convergence of the algorithm is established. Finally, numerical experiments verify the effectiveness of the proposed method.
  • Ying Gao, Yunfei Qu, Chunfeng Cui, Deren Han
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-010
    Accepted: 2024-03-11

    (Communicated by Zheng-Hai Huang)

        Overfitting is a common phenomenon in machine learning, wherein a model fits the training samples well but generalizes poorly to the test set. Regularization tackles this problem by imposing a penalty on the complexity or smoothness of the model. However, the performance of regularization is usually limited by its lack of correlation with the data samples, which restricts its efficiency for many practical models. In this paper, following the seminal work of Zhu et al. (LDMNet), we develop a coupled tensor norm regularization that can be customized to models with small-sized structural samples. Its main idea, built upon the empirical observation that input data and output features have a low-dimensional structure, is an alternative representation of low-dimensionality: concretely, the coupled tensor norm regularization is the low-rank approximation of the coupled tensor rank function. Related theoretical properties are presented, and we further test this regularization for multinomial logistic regression and deep neural networks through theoretical algorithm analysis and numerical experiments. Numerical simulations on real datasets demonstrate the compelling performance of the proposed regularization.

  • Yu You, Jirui Ma
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-011
    Accepted: 2024-03-11

    (Communicated by Guihua Lin)

        We consider the optimization problem of minimizing a smooth and convex function. Based on the accelerated coordinate descent method (ACDM) using probabilities $L_i^{1/2}[\sum_{k=1}^n L_k^{1/2}]^{-1}$ for non-uniform sampling (Nesterov Yu. et al., SIAM J. Optim., 110–123, 2017 [3]), we propose an adaptive accelerated coordinate descent method (AACDM) with the same probability distribution determined by $\{L_i\}$ as in ACDM.

        In [1, 3], the step sizes of the algorithms are fixed and determined by the (global) parameters $\{L_i\}$. This may not be preferable for practical applications where the (local) parameter values differ from their global counterparts to some extent, which suggests that methods adaptive to the local parameters might perform better in practice. Motivated by this, we study the adaptive ACDM, which still requires the (global) Lipschitz constants for non-uniform sampling as a prior, while the (local) coordinate Lipschitz constants are determined by backtracking (not necessarily monotone) to achieve better performance. Both the strongly convex and non-strongly convex cases are discussed in this paper.

        The non-monotone backtracking line search is included in our adaptive scheme; it performs better than the monotone one for applications whose local coordinate Lipschitz constants oscillate along the trajectory or become smaller toward the tail. The adaptive ACDM is indeed not a monotone method, meaning that the sequence of function values it produces is not necessarily nonincreasing. Since a monotone approach can improve numerical stability (see monotone FISTA in [2]), we also propose a monotone version of the adaptive ACDM.
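    As a toy illustration of the backtracking idea (our sketch, not the authors' scheme): a local coordinate Lipschitz estimate can be restarted below its last accepted value, so it may decrease along the trajectory, and it is enlarged only until a standard sufficient decrease test holds.

```python
def coord_step(f, partial_i, x, i, L_prev, shrink=0.5, grow=2.0):
    """One coordinate gradient step with a backtracked local Lipschitz estimate."""
    L = max(L_prev * shrink, 1e-12)    # optimistic restart below the last value
    g = partial_i(x)                   # i-th partial derivative at x
    while True:
        x_trial = x.copy()
        x_trial[i] -= g / L            # coordinate step with stepsize 1/L
        if f(x_trial) <= f(x) - 0.5 * g * g / L:   # sufficient decrease test
            return x_trial, L          # accept the step and keep the estimate
        L *= grow                      # test failed: enlarge the estimate
```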

        Numerical results on some classic problems show the efficiency of the adaptive scheme.


  • Hao Wu, Liping Wang, Hongchao Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-013
    Accepted: 2024-03-11

    (Communicated by Xinwei Liu)

        This paper introduces a novel conjugate gradient method that exploits the m-th order Taylor expansion of the objective function and cubic Hermite interpolation conditions. We derive a set of modified secant equations with enhanced accuracy in approximating the Hessian matrix of the objective function. Additionally, we develop a modified Wolfe line search to address the limitations of the conventional constraint imposed on modified secant equations while ensuring the fulfillment of the curvature condition. Consequently, an improved spectral conjugate gradient algorithm is proposed based on the modified secant equation and Wolfe line search. Under standard assumptions, the algorithm is proven to be globally convergent for minimizing general nonconvex functions. Numerical results are provided to demonstrate the effectiveness of this new proposed algorithm.       

  • Hui Zhang, Xin Liu, Naihua Xiu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-014
    Accepted: 2024-03-11

    (Communicated by Zheng-Hai Huang)

        Recently, many new challenges in Compressed Sensing (CS) have arisen. In this paper, we present a new algorithm for solving CS with block sparse constraints (BSC) in complex fields. Firstly, based on block sparsity characteristics, we propose a new model for CS with BSC and analyze the properties of the functions involved in this model. We then present a new notion of stationary point and analyze the corresponding first-order sufficient and necessary conditions. We further develop a block Newton hard-thresholding pursuit (BNHTP) algorithm for efficiently solving CS with BSC. Finally, numerical experiments demonstrate that the BNHTP algorithm has superior performance in terms of recovery accuracy and time.

  • A. Rikouane, M. Laghdir, M.B. Moustaid
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-007
    Accepted: 2024-01-28

    (Communicated by Nobuo Yamashita)

         In this paper, in the absence of any constraint qualifications, we develop sequential necessary and sufficient optimality conditions for a constrained multiobjective fractional programming problem, characterizing a Henig proper efficient solution in terms of the $\epsilon$-subdifferentials and the subdifferentials of the functions. This is achieved by employing a sequential Henig subdifferential calculus rule for the sum of $m\ (m\geq 2)$ proper convex vector-valued mappings with a composition of two convex vector-valued mappings. In order to present an example illustrating our results, we establish the classical optimality conditions under the Moreau-Rockafellar qualification condition. Our results are presented in the setting of reflexive Banach spaces in order to avoid the use of nets.

  • Yixuan Chen, Haitao Che, Haibin Chen
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-009
    Accepted: 2024-01-28

    (Communicated by Guanglu Zhou)

        Consider the linear equation

    $Kx = y,$                                                                                    (1)

    where $x\in\mathbb{R}^n $ is the original signal or image, $K:\mathbb{R}^n\rightarrow \mathbb{R}^m$ is a degradation operator, and $y\in\mathbb{R}^m$ is the observed signal or image. This problem has important applications in signal processing, non-destructive testing, and related areas. The typical $\ell_1$-norm sparse regularization method lacks stability for ill-posed problems with a large condition number; moreover, in many applications the solution $x$ of problem (1) is not truly sparse. To overcome these shortcomings of typical sparsity regularization, some researchers have focused on the elastic-net regularization problem, that is,

    $\min_{x\in \mathbb{R}^n} J(x)=\frac{1}{2}\|Kx-y\|^2+\alpha\|x\|_{1}+\frac{\beta}{2}\|x\|^2,$                                                           (2)

    where $\alpha>0, \beta>0, $ $\|\cdot\|$ denotes the $\ell_2$-norm and $\|\cdot\|_1$ the $\ell_1$-norm. As a type of multi-parameter Tikhonov regularization, elastic-net regularization is better suited to ill-conditioned problems. Statistically speaking, it is more stable than typical sparsity regularization: compared with classical $\ell_1$-norm regularization it carries an additional $\ell_2$-norm term, and thus combines the advantages of both $\ell_2$-regularization and $\ell_1$-regularization.

        Although various approaches exist for solving the elastic-net regularization problem, they mainly focus on the convergence of the algorithm; little work has addressed its convergence time. It is therefore necessary to devise a method whose convergence time can be estimated. For (2), we can relax the problem to the following constrained optimization problem

    min $g(x)=\frac{1}{2} \|Kx-y\|^2+\frac{\beta}{2}\|x\|^2$

    s.t.  $x\in\Theta,$                                                                                                        (3)

    where $\Theta=\{x\in\mathbb{R}^n|\|x\|_{1}\leq r\}, r>0,$ and transform the problem (3) into the fixed point equation form

    $x=P_\Theta(x-\lambda K^T(Kx-y)-\lambda\beta x),$                                                          (4)

    where the stepsize $\lambda>0$. Obviously, the solution of problem (1) can be obtained by taking the solution of problem (3) when $\beta\rightarrow0$.
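    As a discrete illustration of the fixed-point form (4), here is a minimal Python sketch (our code; the paper's method is the continuous-time system below, and the projection uses the standard sorting-based $\ell_1$-ball routine):

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of v onto {x : ||x||_1 <= r}."""
    if np.abs(v).sum() <= r:
        return v
    u = np.sort(np.abs(v))[::-1]                 # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - r)[0][-1]
    theta = (css[k] - r) / (k + 1.0)             # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def fixed_point_iteration(K, y, r, beta, lam, x0, iters=500):
    """Iterate x <- P_Theta(x - lam*K^T(Kx - y) - lam*beta*x), as in (4)."""
    x = x0.copy()
    for _ in range(iters):
        x = project_l1_ball(x - lam * (K.T @ (K @ x - y)) - lam * beta * x, r)
    return x
```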

        Based on (4), we propose the following new continuous-time projection algorithm for solving the elastic-net regularization problem

    $\dot{x}(t)=-\varphi(x)(x(t)-z(x(t))),$                                                                       (5)

    where

    $\varphi(x)=\left\{\begin{array}{l}\frac{k_1}{\|z(x(t))-x(t)\|^{1-\alpha_1}}+\frac{k_2}{\|z(x(t))-x(t)\|^{1-\alpha_2}}+\frac{k_3}{\|z(x(t))-x(t)\|},\mathrm{if}~ x\in\mathbb{R}^n\setminus \{x\in \mathbb{R}^n:x=z(x)\},\\0,~~~~~~~~~\mathrm{otherwise}, \end{array}\right. $

    $z(x(t))=P_\Theta(x(t)-\lambda(K^TKx(t)-K^Ty)-\lambda\beta x(t)),$ $k_1>0, k_2>0, k_3>0,\alpha_1\in(0,1),  \alpha_2>1, \lambda>0$ and $\beta>0$.

        The equivalence between the equilibrium points of (5) and the solutions of (4) is established. Moreover, we prove that for every $\beta>0$ and $\lambda\in(0,\frac{2\beta}{L^2})$, the solution of (5) exists uniquely on $[0,+\infty)$ for any given initial value. Furthermore, the convergence time of the proposed method can be bounded by the settling time

    $T(x(0))\leq T_{\max}=\frac{1}{d_1(1-p_1)}+\frac{1}{d_2(p_2-1)},$

    where $k_1>0, k_2>0, k_3>0, \alpha_1\in(0,1), \alpha_2>1, L=\|K\|^2+\beta, \lambda\in(0,\frac{2\beta}{L^2}), \Phi=\sqrt{\frac{1}{1-\lambda^2 L^2+2\lambda\beta}}\in (0,1),$ $p_1:=\frac{1+\alpha_1}{2},$ $p_2:=\frac{1+\alpha_2}{2},$ $d_1=k_1(1-\Phi)\frac{2^{p_1}}{(\frac{4\beta}{4\beta-\lambda L^2})^{1-\alpha_1}}>0,$ $d_2=2^{p_1}(1-\Phi)^{\alpha_2}k_2>0. $

        To validate the convergence and effectiveness of the proposed algorithm, we apply it to numerical examples from signal processing and image restoration. Compared with state-of-the-art methods in the literature such as the CAPPA, THZF and PNN algorithms, our algorithm attains a lower mean squared error (MSE) in signal processing, and it performs better in terms of the signal-to-noise ratio (SNR), structural similarity index (SSIM) and relative error (RE) in image restoration.

  • Zhibao Li, Deren Han, Guangming Zhou and Huan Gao
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-003
    Accepted: 2024-01-15

    (Communicated by Jie Sun)

        In this paper, we propose an extended CQ algorithm integrated with a selection technique to address the multiple-sets split feasibility problem (MSFP). At each iteration, the selection technique formulates a split feasibility subproblem of the MSFP, which is then solved by the CQ algorithm. Under mild conditions, we establish global convergence of the extended CQ algorithm. Furthermore, numerical results affirm the effectiveness and competitiveness of the proposed algorithm.
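    For orientation, a minimal sketch of the classic CQ iteration for a single split feasibility problem, which is the form each selected subproblem takes (our code; `P_C` and `P_Q` are user-supplied projection operators):

```python
import numpy as np

def cq_algorithm(A, P_C, P_Q, x0, iters=1000):
    """x <- P_C(x - gamma * A^T (Ax - P_Q(Ax))), with gamma in (0, 2/||A||^2)."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        x = P_C(x - gamma * (A.T @ (Ax - P_Q(Ax))))
    return x
```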

  • Jing Tian, Shouqiang Du, Yuanyuan Chen
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-053
    Accepted: 2023-12-31

    (Communicated by Chen Ling)


        Traditional power systems face unprecedented challenges in energy, environment, and economy, while the fossil fuels used for traditional thermal power generation are increasingly depleted. In the current context of carbon reduction and environmentally harmonious development, the environment-friendly smart grid is developing rapidly. A strong smart grid is based on a strong grid structure, supported by communication and information platforms, and controlled by intelligent means; it spans power generation, transmission, transformation, distribution, consumption, and dispatching.
        The smart grid is a natural result of economic and technological development: it employs advanced techniques to improve the performance of the electric power system in power utilization, supply quality and reliability. Its foundation comprises data transfer, computation and control techniques among the electricity-providing units. Smart grids feature energy saving, reliability, self-healing, full controllability and asset efficiency, and in recent years the theory and methods of smart grids with the characteristics of quality, reliability, self-healing, interactivity, security and environmental protection have been widely studied.
        There exist two classical models for optimization under uncertainty: stochastic optimization and fuzzy optimization. In stochastic optimization problems, the distribution function of the system parameters must be known; similarly, in fuzzy optimization problems, the membership function of the system parameters is required. In practical problems, however, the distribution function and membership function are not always easy to obtain; in most cases we only know the coefficients of the system at a certain point. The interval optimization model is more practical and easier to handle when parameters vary within intervals, and research on interval optimization has aroused widespread interest and achieved rich theoretical and practical results. Interval optimization methods do not require the precise values of the parameters: they operate on intervals and only need the range of variation, i.e., the lower and upper bounds, which greatly simplifies the preliminary data processing. Interval optimization has been widely studied, especially its optimality conditions; it is divided into single-objective and multi-objective interval optimization, and it is increasingly used in practice in fields such as economic planning, energy development, engineering design and environmental protection. Among existing smart grid models, there is little research on smart grids with controllable supply, so this paper studies that problem. We apply interval optimization theory to the real-time pricing problem of smart grids with the aim of maximizing social welfare. Since the controllable electricity generation in the model varies within an interval in reality, we combine interval optimization with the analysis of the real-time pricing problem based on social welfare maximization, and we add the variation of electricity generation to the objective function of the smart grid model, which thus becomes an interval objective function. We propose a model of a smart grid with controllable supply based on maximizing social welfare; the model is transformed into a real-time pricing problem with interval variation. We transform the interval optimization problem into a real-valued optimization problem and solve it based on the Karush-Kuhn-Tucker (KKT) conditions. The KKT conditions for the smart grid with controllable supply based on social welfare maximization are also given.
        In smart grids there are currently three pricing mechanisms: time-of-use pricing, critical peak pricing, and real-time pricing. Unlike the first two, real-time prices are not pre-set but fluctuate continuously throughout the day, directly reflecting the relationship between the market price and the market cost of electricity. Real-time pricing is an ideal mechanism that can encourage users to consume more wisely and efficiently, and it has therefore become a research hotspot. The KKT conditions are transformed into a nonsmooth system of equations, and we introduce a value function to recast the system as an unconstrained optimization problem. The real-time price of the smart grid with controllable supply can then be obtained from the KKT conditions of the interval optimization problem.
       The Levenberg-Marquardt method is a widely used optimization method, with applications in economics, management, network analysis, optimal design, and mechanical and electronic design. In this paper, the Levenberg-Marquardt method is applied to the transformed problem of the smart grid with controllable supply, and we give its convergence analysis under mild conditions. Finally, numerical experiments show that the Levenberg-Marquardt method can effectively solve the real-time pricing problem: the real-time prices obtained achieve peak-shaving and valley-filling. This research further enriches work on real-time pricing of smart grids based on social welfare maximization.
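    For orientation, a textbook Levenberg-Marquardt sketch for a system $F(x)=0$ (our code; the paper's variant, smoothing and parameter rule may differ):

```python
import numpy as np

def levenberg_marquardt(F, J, x0, mu=1e-2, iters=100, tol=1e-10):
    """Damped Gauss-Newton steps: solve (J^T J + mu*||F||*I) d = -J^T F."""
    x = x0.copy()
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        norm_F = np.linalg.norm(Fx)
        if norm_F < tol:
            break
        d = np.linalg.solve(Jx.T @ Jx + mu * norm_F * np.eye(len(x)), -Jx.T @ Fx)
        if np.linalg.norm(F(x + d)) < norm_F:
            x = x + d                    # accept the step and relax the damping
            mu = max(0.5 * mu, 1e-12)
        else:
            mu *= 4.0                    # reject the step and increase the damping
    return x
```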


  • Huage Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-055
    Accepted: 2023-12-31

    (Communicated by Donghui Li)

         In this paper, we consider a sixth-order paired symmetric Cauchy tensor and its generating vectors. Initially, we investigate the conditions for positive definiteness and positive semidefiniteness of the sixth-order paired symmetric Cauchy tensor. Necessary and sufficient conditions for its positive definiteness are given according to the structural characteristics of a sixth-order paired symmetric Cauchy tensor. Subsequently, we apply the concept of the M-eigenvalue to the sixth-order paired symmetric Cauchy tensor, and further discuss related properties. We give two M-eigenvalue inclusion intervals for a sixth-order paired symmetric Cauchy tensor, which provide two upper bounds for the M-spectral radius. The inclusion relation between them is given. In the end, we provide two numerical examples of eigenvalue inclusion intervals, confirming the inclusion relationship of the intervals.

  • Xin Zhao, Anwa Zhou, Jinyan Fan
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-040
    Accepted: 2023-12-07
    (Communicated by Chen Ling)
          In this paper, we introduce the polynomial semidefinite complementarity problem. We formulate it equivalently as a polynomial optimization problem with both scalar polynomial constraints and polynomial matrix inequality constraints. The solutions of the problem can be computed sequentially, if there are finitely many. Each of them can be obtained by Lasserre's hierarchy of matrix-type semidefinite relaxations. Under suitable assumptions, the asymptotic and finite convergence of this sequence of semidefinite relaxations is also proved. Numerical experiments show that the proposed method is efficient on test problems.
  • Tiange Li, Xiangyu Yang, Hao Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-041
    Accepted: 2023-12-07
    (Communicated by Hiroshi Yabe)
    Abstract: The iteratively reweighted $\ell_1$ (IRL1) algorithm is commonly employed for nonconvex optimization problems, mainly by solving a sequence of convex subproblems. In this paper, we propose an enhanced IRL1 algorithm tailored to structured optimization problems involving nonconvex $\ell_{q,p}$ regularization. The key to its acceleration is a simple yet effective feature screening strategy: an a priori screening test that identifies potentially inactive groups before the subproblem solver starts, together with an efficient posterior Karush-Kuhn-Tucker (KKT) condition check that guarantees an optimal solution even if some screened variables are mistakenly removed. The a priori screening procedure is primarily achieved by exploiting the dual subproblem information during each iteration. Furthermore, we establish a theoretical proof that, within a finite number of IRL1 iterations, the screening test correctly removes all inactive variables. Numerical experiments showcase the significant computational advantages of our algorithm over several state-of-the-art algorithms.
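    For context, a plain (non-group) iteratively reweighted $\ell_1$ loop for $\min \frac{1}{2}\|Ax-b\|^2+\sum_i |x_i|^q$ with $0<q<1$ looks as follows; this is the baseline scheme that the screening strategy accelerates, and all names are ours.

```python
import numpy as np

def irl1(A, b, q=0.5, eps=1e-3, outer=20, inner=100):
    """Outer reweighting loop; each weighted-l1 subproblem is solved by ISTA."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        w = q * (np.abs(x) + eps) ** (q - 1.0)     # weights from the last iterate
        for _ in range(inner):                     # ISTA on the weighted subproblem
            z = x - A.T @ (A @ x - b) / L
            x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
    return x
```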
  • M. Ait Mansour, Z. Chbani, A.-R. Elakri
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-037
    Accepted: 2023-11-22

    (Communicated by Robert Csetnek)


          The purpose of this paper is to study the strong convergence of Moudafi's proximal iterative method for finding a common element of the sets of solutions of a finite family of equilibrium problems in a Hilbert space. More precisely, we suppose that, for $i=1,...,m$, $C_i$ is a closed convex subset of a Hilbert space $H$ and $F_i$ is a real-valued bifunction defined on $C_i\times C_i$. The corresponding system of equilibrium problems consists in finding a point $x$ in $C=\bigcap_{i=1} ^m C_i$ such that:

    (SEP)     $F_{i}{(x,y )}\geq 0$ for all $y\in C_i$ and all $i=1,...,m.$                                       (1)

    Under appropriate condition on the parameters and the data, the sequence $(x_k)$ generated by the following algorithm 

    $x^{k+1}=\alpha_{k}{x^k}+\beta_{k}{g(x^k)}+\gamma_{k}{J_r^{F_{m}}}{J_r^{F_{m-1}}}\dots{J_r^{F_{1}}}(x^k)$

    is proved to be strongly converging to a point $\bar{x}\in S,$ where $S$ is the set of solutions to the system (1), $J_r^{F_{i}}(x)$ is the resolvent of $F_i$ at $x$ with parameter $r$ for $i=1,2\dots,m$, $g:C_m \longrightarrow C_m$ is a contraction and $\alpha_k ,\beta_k ,\gamma_k$ are positive scalars such that $\alpha_k +\beta_k +\gamma_k=1$, $\beta_k \rightarrow 0$, $ \sum \beta_k=+\infty$ and $(\alpha_k )\subset [c,d]$ for some $c,d\in (0,1)$.
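    For orientation, $J_r^{F_i}$ above is the standard resolvent of a bifunction (in the sense of Combettes and Hirstoaga; our restatement):

    $J_r^{F_i}(x)=\{z\in C_i : F_i(z,y)+\frac{1}{r}\langle y-z,\, z-x\rangle\geq 0 \ \text{ for all } y\in C_i\}.$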

         The second goal of this paper is to use the iterative algorithm proposed for the equilibrium system (1) to solve the split common fixed point problem as defined by Censor and Segal in [The split common fixed-point problem for directed operators, J. Convex Anal. 16 (2009) 587–600], i.e., finding a point $\bar{x}$ satisfying

    $\overline{x}\in\bigcap_{i=1}^{m}{Fix}{U_{i}}$   and    $A{\overline{x}}\in\bigcap_{j=1}^{m}{Fix}{T_{j}}$                              (3)

    where $H_1$ and $H_2$ are two Hilbert spaces, $A : H_{1} \longrightarrow H_{2}$ is a bounded linear operator, $(U_{i})_{i=1}^{m}: H_{1} \longrightarrow H_{1}$ are F-hemicontinuous operators and $(T_{i})_{i=1}^{m}: H_{2} \longrightarrow H_{2}$ are M-demicontractive mappings. The concept of M-demicontractiveness is new in the literature; we introduce it in order to develop the relation between problems (1) and (3). Our analysis also shows that the class of $M$-demicontractive mappings lies between the nonexpansive operators and the demicontractive ones. Finally, to illustrate the convergence of our algorithm, we present a numerical example written in MATLAB.


  • Xiuyun Feng, Gang Wang and Yiju Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-039
    Accepted: 2023-11-22
    (Communicated by Donghui Li)
         Sparse tensors play a fundamental role in hypergraph theory and the stability of nonlinear systems.
        In this paper, we establish new bounds on the minimum $H$-eigenvalue of a $Z$-tensor via the digraphs of its majorization matrix and representation matrix. Numerical examples verify that our bounds are more accurate and require fewer calculations than existing results. Based on the lower bound estimates for the minimum $H$-eigenvalue, we provide checkable sufficient conditions for the positive definiteness of $Z$-tensors.
  • Beier Chen, Hui Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-035
    Accepted: 2023-11-05

    (Communicated by Xinwei Liu)

    The proximal gradient descent method, well known for convex composite optimization, can be completely described by the concept of the proximal gradient mapping. In this note, we introduce two new results on the proximal gradient mapping, norm monotonicity and refined descent, with which we extend the recently proposed potential function-based framework from gradient descent to proximal gradient descent.
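    As a small illustration (our sketch, with $g=\lambda\|\cdot\|_1$ as the prototypical nonsmooth part), the proximal gradient mapping underlying the note is:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_mapping(grad_f, x, step, lam):
    """G_step(x) = (x - prox_{step*g}(x - step*grad_f(x))) / step."""
    x_plus = prox_l1(x - step * grad_f(x), step * lam)  # one proximal gradient step
    return (x - x_plus) / step                          # the mapping itself
```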

  • Fenlan Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-031
    Accepted: 2023-10-19

    (Communicated by Fanwen Meng)

         
In this paper, a new approach is presented for exactly solving a linear mixed integer programming problem (P) with scenario constraints,

    where $c$ is an $n$-dimensional column vector; for each scenario, the constraint matrix is an $m\times n$ matrix and $d$ is an $m$-dimensional column vector; and lower and upper bounds on $x$ are given.

         We consider subproblems of (P) obtained by restricting $x$ to sub-boxes of the original bound box.

        Each subproblem is first decomposed into two parts: a main problem (MP) and the feasibility subproblems of the scenario constraints.

           The first part, problem (MP), is obtained by discarding the scenario constraints. The optimal objective value of the continuous relaxation of (MP) provides a lower bound for the subproblem, and two integer points can be obtained by rounding the continuous optimal solution of (MP) up or down along some directions.

          The other part consists of the scenario constraints alone. For a given feasible solution of (MP), we determine whether it is feasible for the scenario constraints by solving separate feasibility subproblems.

         We then develop domain partition techniques for several cases, according to whether an integer point $\hat{x}$ is feasible for each constraint, and thus cut off integer boxes that do not contain any optimal solution.

         Integrating this solution scheme into a branch-and-bound framework, we present a new algorithm for solving problem (P). The proposed method reduces the optimality gap gradually over the iterations and converges in finitely many steps.

         Finally, we report numerical experiments and comparisons with the traditional branch-and-bound method for different sizes of scenario constraints; the results show that the proposed algorithm is very efficient.

  • Xiaoliang Dong, Deren Han
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-028
    Accepted: 2023-10-11
    (Communicated by Lizhi Liao)
    A new three-term conjugate gradient algorithm is developed, in which the search direction satisfies the sufficient descent condition at each iteration under the Wolfe conditions. Different from existing methods, a dynamic two-sided approximating mechanism is proposed, which adaptively adjusts the relative weight between the sufficient descent condition and making full use of the curvature information of the objective function. To some extent, this strategy exploits a Hessian approximation of the objective function and therefore increases the efficiency of the algorithm in practical computation. Under mild conditions, we prove that the presented method converges globally for general objective functions. Numerical results are reported which illustrate that the proposed algorithm is practically encouraging.
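    For context, a well-known three-term direction of this family (a Hestenes-Stiefel-type variant, not the paper's dynamically weighted rule) satisfies the sufficient descent condition exactly, since $g^\top d=-\|g\|^2$ by construction:

```python
import numpy as np

def three_term_direction(g_new, g_old, d_old):
    """d = -g + beta*d_old - theta*y with y = g_new - g_old; then g^T d = -||g||^2."""
    y = g_new - g_old
    denom = max(d_old @ y, 1e-16)             # positive under the Wolfe conditions
    beta = (g_new @ y) / denom                # Hestenes-Stiefel-style coefficient
    theta = (g_new @ d_old) / denom           # third-term coefficient
    return -g_new + beta * d_old - theta * y
```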
  • Weijun Zhou and Li Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-027
    Accepted: 2023-09-21

    (Communicated by Gui-hua Lin)

    A globally and superlinearly convergent BFGS method is introduced to solve general nonlinear equations without computing the exact gradient. Compared with existing Gauss-Newton-based BFGS-type methods, the proposed method does not require conditions such as symmetry on the underlying function. Moreover, it can be suitably adjusted to solve nonlinear least squares problems while still guaranteeing global convergence. Some numerical results are reported to show its efficiency.

  • Jingya Chang, Dongdong Liu and Min Xi
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-025
    Accepted: 2023-08-18
    (Communicated by Guanglu Zhou)
        Clustering and classification are two important tasks in machine learning. Clustering approaches aim to divide a number of unlabeled items into several groups, while classification methods build a classifier from labeled data and use it to classify other data. In practice, a few annotated points are sometimes mixed into the unannotated data set in clustering problems, and taking advantage of this prior label information often enhances clustering performance. Semi-supervised learning completes the learning task based on both labeled and unlabeled data.
        In this paper, we consider semi-supervised clustering problems modeled by hypergraphs. Our task is to cluster the n vertices of the hypergraph into k groups according to the hypergraph structure, while a few category labels are given.

        We employ the multi-way array to generate the clustering costs and use the hypergraph-related tensor to construct a constrained optimization model (1) for the semi-supervised clustering problem. This is a 0-1 integer program. To make the model tractable, we penalize the constraint and add a regularization term to the objective function. From the discrete 0-1 constraint it can be deduced that the column vectors of X are unit vectors and mutually orthogonal, so we relax it to the continuous orthogonality constraint $X^\top X = I$. The optimization model (1) is thus transformed into an orthogonally constrained model

        The model (2) is an optimization problem constrained on the Stiefel manifold. To solve it, we first find a descent direction in the tangent space at the current point, and then employ the polar decomposition to map the resulting point back to the Stiefel manifold.
        The tangent space at a point $X$ is

    $\mathcal{T}_X=\{Z\in\mathbb{R}^{n\times k}: X^\top Z+Z^\top X=0\}.$

    A descent direction $D_k$ built from $G$, the gradient of the function f at the current point $X$, is employed to find a new point $X_k+\tau D_k$ in the tangent space from $X_k$. The next step is to map this point back to the manifold by using a retraction: the projection of $X_k+\tau D_k$ onto the Stiefel manifold is given by the univariate function

    $Y(\tau)=(X_k+\tau D_k)\big((X_k+\tau D_k)^\top(X_k+\tau D_k)\big)^{-1/2}.$
        Given $X_k$ and the descent direction $D_k$, by using the adaptive feasible BB-like method we find a step size $\tau_k$ such that the objective value at $Y(\tau_k)$ is sufficiently smaller than a reference objective function value. The new iteration point $X_{k+1}=Y(\tau_k)$ is then obtained.
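    As a small illustration of the retraction step (our sketch; the paper's descent direction and step-size rule are as described above):

```python
import numpy as np

def tangent_projection(X, G):
    """Project G onto the tangent space {Z : X^T Z + Z^T X = 0} at X."""
    sym = (X.T @ G + G.T @ X) / 2.0
    return G - X @ sym

def polar_retraction(Y):
    """Nearest point on the Stiefel manifold: Y (Y^T Y)^{-1/2}, via the SVD."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt
```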
        Convergence analysis demonstrates that the first-order optimality condition is satisfied at any accumulation point. The sequence generated by our proposed algorithm SSHC either terminates finitely or is infinite; in the latter case, we prove that a subsequence of the gradient measure converges to 0, so the first-order optimality condition holds in the limit.

        We employ the proposed SSHC method to cluster synthetic and real data and compare it with the unsupervised method SHC. Numerical experiments show that our method works well on both kinds of data.

  • Baohua Huang and Changfeng Ma
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-026
    Accepted: 2023-08-18
    (Communicated by Bingsheng He)
        It is well known that the Levenberg-Marquardt (LM) method is widely used for solving nonlinear equations. In this paper, we give an extension of the LM method and propose a nonmonotone LM method with correction, which produces the LM parameter according to the nonmonotone strategy of Grippo, Lampariello and Lucidi. Moreover, both an LM step and a correction step are computed at every iteration. The cubic convergence of the proposed method is proved under the local error bound condition, which is weaker than nonsingularity. Some numerical results confirm the feasibility and effectiveness of the proposed algorithm.
  • Xiao Li, Chunlin Hao, Junyu Gao and Zuping Li
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-024
    Accepted: 2023-08-13

    (Communicated by Chen Ling)


        It is well known that tensor eigenvalues are widely used in signal processing, diffusion tensor imaging, independent component analysis and other fields. Many methods now exist for computing Z-eigenvalues of tensors; nevertheless, methods for large-scale tensors tend to get trapped in locally optimal regions. The feasible trust-region method is effective for computing extreme Z-eigenvalues of large-scale tensors, but it too tends to get trapped in local optima. To overcome this problem, we propose three global optimization strategies based on the feasible trust-region method to improve the success rate of computing the extreme Z-eigenvalue.
        The first one is a multi-initial points algorithm. It is a classical, heuristic and easy method of global optimization, whose idea is to randomly generate multiple initial points on the unit sphere and select the one with the largest objective value (see the sketch below).
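    A minimal multi-start wrapper of this kind might look as follows (our sketch; `local_solver` stands for any local method for the sphere-constrained Z-eigenvalue problem, e.g. the feasible trust-region method):

```python
import numpy as np

def multi_start(local_solver, n, trials=50, seed=0):
    """Run a local solver from random unit starting points; keep the best value."""
    rng = np.random.default_rng(seed)
    best_val, best_x = -np.inf, None
    for _ in range(trials):
        x0 = rng.standard_normal(n)
        x0 /= np.linalg.norm(x0)          # random point on the unit sphere
        x, val = local_solver(x0)         # returns a point and its objective value
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x
```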
        The second one is a simulated annealing algorithm, a representative global optimization algorithm that is easy to implement and robust. Unlike the multi-initial points algorithm, simulated annealing accepts points with relatively poor objective values with a certain probability during the iteration, and this acceptance probability gradually decreases. It can therefore escape local maxima and reach the global maximum. In particular, we list two different simulated annealing algorithms based on different ways of generating new iteration points.

    The third one is an infeasible trust-region algorithm, which constructs an infeasible trust-region subproblem, expanding the search scope with the expectation of improving the success rate. The largest Z-eigenvalue of a tensor $\mathcal{A}$ can be obtained by solving the optimization problem

    max $f(x)=\mathcal{A}x^m$,  s.t. $x^\top x=1$.                                            (1)

        To expand the search scope, we construct a trust-region subproblem (2) with a linearized constraint at the current iteration point.

        If the constraints of subproblem (2) are consistent, the trial step is computed by an interior-point method. Otherwise, a mechanism is introduced into the algorithm that computes the step in two stages: in the first stage, we try to satisfy the linearized constraints, and the total trial step $d_k$ is then obtained by solving a second-stage subproblem.

        The iteration points of subproblem (2) may not be feasible for the sphere constraint. To expand the search scope, jump out of the locally optimal region and find better points, the algorithm may accept a reduction in the objective function f. To prevent moving away from the sphere constraint while the objective decreases, we take the trial step and project it onto the unit sphere.
         We prove the global convergence of the infeasible trust-region algorithm, which converges to a KKT point, a strong stationary point, or a better point that escapes the local region of the original problem (1). The three strategies above serve as preprocessing for the feasible trust-region method; we combine each of them with the feasible trust-region method and compare the results. Numerical experiments show that the success rate of computing the extreme Z-eigenvalue increases greatly with the help of the global optimization strategies.


  • Caifang Wang, Xiaobin Xia and Dan Gao
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-023
    Accepted: 2023-08-08
    (Communicated by Adil Bagirov)


        For the linear imaging problem (1), the EM algorithm can be derived by minimizing the Kullback-Leibler (KL) divergence between the forward-projection data and the measurements b. However, the KL divergence is sensitive to noise arising from small boundary measurements. In this manuscript, a parameter is therefore introduced into the KL divergence, and the difference between the forward-projection data and b is measured by the resulting parameterized KL divergence. The linear imaging problem (1) is then reduced to a constrained minimization problem (3), which is solved by a novel parameterized iteration formula (4) derived from the Karush-Kuhn-Tucker conditions of (3).
        To establish convergence of the iterative method, the original optimization problem is split into two subproblems: first, find the minimizer with respect to $r$; second, find the minimizer with respect to the remaining variables. Two $mn$-dimensional hidden vectors are introduced for this purpose. The nonnegative iteration sequence generated by (4) is shown to be bounded, and the minimizers of the two subproblems are characterized. Combining these properties with the fact that a bounded point sequence must have a convergent subsequence, the convergence of the iterative sequence is obtained.
        To test the performance of the new iterative formula, a fan-beam X-ray CT reconstruction problem with a low-contrast Shepp-Logan phantom is used. Compared with the traditional EM algorithm, the parameterized algorithm achieves a better reconstruction and faster convergence by adjusting the parameter.
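    For reference, the baseline EM update for a nonnegative linear system (the starting point that the parameterized formula (4) modifies; our sketch with a generic nonnegative matrix A):

```python
import numpy as np

def em_iteration(A, b, x0, iters=200, eps=1e-12):
    """Classical EM/Richardson-Lucy update: x <- x * (A^T (b ./ Ax)) / (A^T 1)."""
    x = x0.copy()
    col_sums = A.sum(axis=0)                    # A^T 1
    for _ in range(iters):
        ratio = b / np.maximum(A @ x, eps)      # elementwise b ./ (A x)
        x = x * (A.T @ ratio) / np.maximum(col_sums, eps)
    return x
```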

  • Diana-Elena Maxim
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-010
    Accepted: 2023-07-04

    (Communicated by Michel Théra)

    Abstract: The aim of our paper is to establish optimality conditions on dual spaces for constrained and unconstrained set-valued optimization problems with respect to some new notions of directional Pareto efficiency. The methods we employ are different for each of the two cases. For the constrained problem we use a fuzzy extremal principle, and for the unconstrained one we use the incompatibility between openness and minimality to prove that the limit of a sequence of directional minima for a perturbation of the objective set-valued map F is a critical point for F.