
Just accepted

Accepted, unedited articles published online and citable. The final edited and typeset version of record will appear in the future.
  • Han Yu, Haitao Che, Haibin Chen
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2026-001
    Accepted: 2026-03-23

    (Communicated by Zheng-Hai Huang)

        The tensor split feasibility problem (TSFP), as a high-order extension of the classical split feasibility problem (SFP), has important applications in areas such as information retrieval. Its objective is to find a vector $x \in \mathbb{R}^n$ such that

    $x\in C~~\text{and}~~\mathcal A x^{m-1}\in Q,$

    where $C \subseteq \mathbb{R}^n$ and $Q \subseteq \mathbb{R}^p$ are nonempty closed convex sets, and $\mathcal{A}$ is a real tensor of order $m$ with dimensions $p \times n \times \cdots \times n$. Existing numerical methods, including the projection algorithm and the Levenberg-Marquardt (LM) algorithm, suffer from several limitations: their performance often depends on the initial point lying within a traditional convergence region, and the inner--outer iterative structure may lead to increased computational cost.

        To enhance the efficiency of solving TSFP, we reformulate the problem as the merit function

    $p(x)=\tfrac12\|P_C(x)-x\|^2+\tfrac12\|P_Q(\mathcal{A}x^{m-1})-\mathcal{A}x^{m-1}\|^2,$

    whose gradient is

    $g(x)=(I-P_C)x+J(x)^\top (I-P_Q)\mathcal{A}x^{m-1},\qquad J(x)=(m-1)\mathcal{A}x^{m-2}.$

    This leads to the fixed-point formulation

    $x=P_\Omega(x-\alpha g(x)),$

    where $\Omega$ is a closed convex set containing the TSFP solution set. Building on this fixed-point formulation, we introduce a novel second-order dynamical system derived from the gradient projection method, which yields the following continuous-time algorithm:

    $$\begin{cases}\ddot{x}(t)+\gamma(t)\dot{x}(t)+\lambda(t)\bigl(x(t)-P_\Omega(x(t)-\alpha g(x(t)))\bigr)=0,\\[2mm]x(0)=u_0,\quad \dot{x}(0)=v_0,\end{cases}$$

    where $\alpha>0$, and $\gamma(t)$ and $\lambda(t)$ are Lebesgue measurable functions.

        In the theoretical analysis, we show that the merit function $p(x)$, its gradient $g(x)$, the tensor mappings $\mathcal{A}x^{m}$ and $\mathcal{A}x^{m-1}$, and the Jacobian $J(x)$ are Lipschitz continuous on bounded closed sets, thereby ensuring the well-posedness of the dynamical system. It is further proved that the dynamical system admits a unique solution under mild conditions, and that the corresponding trajectory always converges to the unique solution.

        Numerical experiments comparing the proposed algorithm with up-to-date methods verify its effectiveness and competitiveness. The results show that it consistently outperforms the projection algorithm and the LM algorithm in terms of computational efficiency.
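    As a minimal illustration of the fixed-point iteration $x = P_\Omega(x-\alpha g(x))$ above, the sketch below takes $m=2$ (so $\mathcal{A}x^{m-1}$ reduces to the matrix product $Ax$), boxes for $C$ and $Q$, and $\Omega=\mathbb{R}^n$; these choices, the data, and the step size are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Sketch of the fixed-point iteration x = P_Omega(x - alpha * g(x)) for the
# split feasibility problem. Assumptions (not the paper's setting): m = 2, so
# A x^{m-1} is the matrix product A @ x; C and Q are boxes; Omega = R^n, so
# P_Omega is the identity; the data and step size are arbitrary.

def proj_box(v, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(v, lo, hi)

def merit(x, A, c_lo, c_hi, q_lo, q_hi):
    """p(x) = 1/2 ||P_C(x) - x||^2 + 1/2 ||P_Q(Ax) - Ax||^2."""
    rc = proj_box(x, c_lo, c_hi) - x
    rq = proj_box(A @ x, q_lo, q_hi) - A @ x
    return 0.5 * (rc @ rc) + 0.5 * (rq @ rq)

def grad_merit(x, A, c_lo, c_hi, q_lo, q_hi):
    """g(x) = (I - P_C)x + A^T (I - P_Q)(Ax); J(x) = A when m = 2."""
    return (x - proj_box(x, c_lo, c_hi)
            + A.T @ (A @ x - proj_box(A @ x, q_lo, q_hi)))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
alpha = 1.0 / (1.0 + np.linalg.norm(A, 2) ** 2)     # step below 1/L for a Lipschitz bound L of g
for _ in range(5000):
    x = x - alpha * grad_merit(x, A, -1, 1, -1, 1)  # P_Omega = identity here
print(merit(x, A, -1, 1, -1, 1))
```

    With a step below $1/L$, the iteration monotonically decreases $p$ and drives it toward zero on this feasible toy instance.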

  • Jianchao Bai, Qixuan Sun
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2026-002
    Accepted: 2026-03-23

    (Communicated by Xinmin Yang)

    This note analyzes the linear convergence rate of the algorithm proposed in [Li, Q., Zheng, B., Zheng, Y.T.: An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem. Appl. Math. Lett. 98, 74-80 (2019)] under the assumption that the objective function is strongly convex.


  • Zhonghui Xue, Yazheng Dang, Tiantian Cui
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2026-003
    Accepted: 2026-03-23

    (Communicated by Jie Sun)

        In addressing the challenges of nonconvex, nonsmooth, and nonseparable optimization, this paper introduces a novel symmetric inertial Bregman Alternating Direction Method of Multipliers (SIBADMM). This algorithm extends the symmetric ADMM by incorporating an inertial mechanism and Bregman divergence within the x-subproblem, aiming to accelerate convergence. Asymptotic convergence is established, using the Kurdyka-Łojasiewicz property, under the assumption that the sequence generated by the algorithm is bounded. The practical utility of SIBADMM is demonstrated through its application to various linear regression problems with different regularization terms, thereby showcasing its effectiveness.

  • Huan Gao, Jianyu Xiao, Zhibao Li, Kai Tu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2026-004
    Accepted: 2026-03-23

    (Communicated by Jie Sun)

        Consider the nonconvex and nonsmooth optimization problem

    $\min\limits_{x\in \mathbb{R}^n }\ \psi(x)= \frac{1}{m} \sum \limits_{i=1}^m f_i(x)+r_1(x)-r_2(x),$    (1)

    where for each $ i \in \{1, 2, \ldots, m\} $, the function $f_i: \mathbb{R}^n \to (-\infty, \infty] $ is smooth. The functions $r_1: \mathbb{R}^n \to (-\infty, \infty] $ and $ r_2: \mathbb{R}^n \to (-\infty, \infty] $ are nonsmooth convex functions. In this paper, we propose a stochastic variance-reduced proximal difference-of-convex algorithm (SVRPDCA) for $(1)$, stated as follows.


    Algorithm 1   Stochastic Variance Reduced Proximal Difference-of-Convex Algorithm (SVRPDCA)


    Input: an arbitrary positive integer $S$, an initial point $x_0\in \mathbb{R}^n$; set $q = b = [m^{\frac 1 2}]$, $N = Sq$, $\alpha=\frac{1}{2L}$.

    for  $k=1:N-1$  do

    S1  if $\mod(k,q)==0$, calculate the full gradient $v_k = \nabla f(x_k),$

    S2  else

    S3  Randomly select a subset $\mathcal{M}_k\subseteq \{1,2,\ldots,m\}$ such that $|\mathcal{M}_k|=b$, and compute

    $v_{k}=\frac{1}{b} \sum_{i \in \mathcal{M}_{k}} \bigl(\nabla f_{i}(x_{k})-\nabla f_{i}(x_{k-1})\bigr)+v_{k-1},$

    S4  end if

    S5  Compute $\xi_{k} \in \partial r_{2}(x_{k})$ and update $x_{k+1}$  by

    $x_{k+1}=\arg\min _{x \in \mathbb{R}^{n}}\left\langle x-x_{k}, v_{k}-\xi_{k}\right\rangle+r_{1}(x)+\frac{1}{2 \alpha}\|x-x_{k}\|^{2}.$                             (2)

    end for

    Output: Return $x_{R}$, where $R$ is chosen uniformly at random from $\{1,\cdots, N-1\}$.


        We also establish that the proposed method attains an $\epsilon$-equilibrium point with a gradient complexity bounded by $O(\sqrt{m}\epsilon^{-2} + m)$ under suitable conditions, where $m$ represents the number of data samples. Numerical experiments validate the efficiency and practical relevance of the proposed algorithm.
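    A runnable sketch of the SVRPDCA recursion is given below on a toy instance: $f_i(x)=\tfrac12(a_i^\top x-b_i)^2$, $r_1=\lambda\|x\|_1$ (so update (2) reduces to soft-thresholding), and $r_2=0$ (so $\xi_k=0$). The data, seed, $S$, and the smoothness bound $L$ are illustrative assumptions, not from the paper.

```python
import numpy as np

# Sketch of the SVRPDCA loop on a toy instance: f_i(x) = (1/2)(a_i^T x - b_i)^2,
# r_1 = lam * ||x||_1 (update (2) becomes soft-thresholding), r_2 = 0 (so xi_k = 0).
# The data, seed, S, and the smoothness bound L are illustrative assumptions.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
m, n, lam = 64, 10, 0.1
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:3] = [2.0, -1.0, 1.5]
b_data = A @ x_true + 0.1 * rng.standard_normal(m)

def psi(x):
    """Objective (1): (1/m) sum_i f_i(x) + r_1(x) - r_2(x), with r_2 = 0."""
    return 0.5 * np.mean((A @ x - b_data) ** 2) + lam * np.abs(x).sum()

L = np.sum(A * A, axis=1).max()        # max_i Lipschitz constant of grad f_i
q = batch = int(np.sqrt(m))            # q = b = [m^(1/2)]
alpha, S = 1.0 / (2.0 * L), 50
x_prev = x = np.zeros(n)
v = np.zeros(n)
for k in range(S * q):                 # N = S q iterations
    if k % q == 0:                     # S1: periodic full-gradient refresh
        v = A.T @ (A @ x - b_data) / m
    else:                              # S3: variance-reduced estimator
        idx = rng.choice(m, size=batch, replace=False)
        v = (A[idx].T @ (A[idx] @ x - b_data[idx])
             - A[idx].T @ (A[idx] @ x_prev - b_data[idx])) / batch + v
    # S5 / update (2): proximal step with xi_k = 0
    x_prev, x = x, soft_threshold(x - alpha * v, alpha * lam)
print(psi(x))
```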


  • Xiaoni Chi, Lin Gan, Qian Cheng and Jein-Shan Chen
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2026-005
    Accepted: 2026-03-23

    (Communicated by Akiko Yoshise)

        In this paper, we present a predictor-corrector interior-point algorithm (PC IPA) with new search directions to solve the $P_*(\kappa)$-weighted horizontal linear complementarity problem (WHLCP). The $P_*(\kappa)$-WHLCP includes the monotone WLCP, the $P_*(\kappa)$-horizontal linear complementarity problem (HLCP), the $P_*(\kappa)$-linear complementarity problem (LCP), the monotone LCP, and convex quadratic programming as special cases, and can model a wide range of equilibrium problems in science, engineering, and economic management. The main idea of our PC IPA is to transform the centering equations of the central path by the algebraic equivalent transformation (AET) technique based on a power function with an arbitrary positive integer $q$. After analyzing the effect of different $q$ on the transformed system, we select the power function $\varphi(t) = t^{\frac{5}{2}}$ to obtain the search direction. The feasibility and global convergence of the proposed algorithm are verified under appropriate conditions. Additionally, the iteration bound of our algorithm is comparable to the best-known bounds for such available IPAs. The efficacy of the proposed algorithm is demonstrated through numerical results.

  • Fatemeh Dargahi, Saman Babaie-Kafaki, Zohre Aminifard
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2026-006
    Accepted: 2026-03-23

    (Communicated by Xinwei Liu)

        Motivated by the need to efficiently handle the huge data sets that frequently arise in contemporary models, we focus on modifying a well-known memoryless continuous optimization algorithm as well as improving a classic data compression model. Firstly, we suggest two optimal settings for the parameter of the Dai--Liao conjugate gradient method by making the search direction of the method approach that of the efficient memoryless BFGS quasi-Newton method in an ellipsoid-norm least-squares framework. Then, we modify the optimization model of the nonnegative matrix factorization problem. More precisely, we add penalty terms to the classic least-squares models of the nonnegative matrix factorization subproblems in the popular alternating solution process, as a way to control collinearity between the columns/rows of the factorization elements. We put our theoretical assertions to the test on the CUTEr problems as well as several random nonnegative matrix factorization cases, and assess the outputs in various aspects. The outputs generally show the acceptable impact of our modifications.

  • Mei Lu, Ke Guo
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2026-007
    Accepted: 2026-03-23

    (Communicated by Xinwei Liu)

        The alternating direction method of multipliers (ADMM) is one of the effective methods for solving two-block separable optimization problems with linear constraints, and has been widely applied in image processing, power systems, sparse learning, and other fields. Its essence is the application of the Douglas-Rachford splitting method to the dual problem. Classical ADMM updates the Lagrange multipliers only once per iteration, while symmetric ADMM updates the Lagrange multipliers twice per iteration by introducing an additional multiplier update step, thereby significantly improving the convergence performance and numerical stability of the algorithm. In this paper, we propose a novel symmetric ADMM algorithmic framework for two-block separable nonconvex optimization problems with linear constraints, which introduces two distinct relaxation factors to enhance the flexibility and convergence efficiency of the algorithm. Symmetric ADMM has been comprehensively studied for convex problems. For nonconvex problems, however, proving the monotonicity of the Lagrangian function in the convergence analysis of symmetric ADMM with two different relaxation factors, without introducing Bregman distances or regularization terms, remains a challenging problem.

        In terms of theoretical analysis, we establish the convergence theory of the proposed algorithm in this paper. Notably, our convergence proof does not rely on common technical assumptions such as Bregman distances or regularization terms. Specifically, by constructing a novel auxiliary function and under the mild condition that the Kurdyka–Łojasiewicz (KŁ) inequality is satisfied, we prove that the iterative sequence generated by the algorithm converges to a stationary point of the problem.

        The main contributions of this paper can be summarized as follows: First, we design a symmetric ADMM scheme with dual relaxation factors to solve two-block separable nonconvex and nonsmooth optimization problems with linear constraints, and the proposed method allows for a wider range of parameters, which can better adapt to the structural characteristics of different problems through flexible adjustment of relaxation parameters. Second, we establish a concise convergence analysis framework that does not depend on Bregman distances and regularization terms, reducing the complexity of theoretical analysis; moreover, it can degenerate into the classical ADMM. Finally, we validate the practical application effectiveness of the proposed algorithm through numerical experiments, and the experimental results demonstrate that the algorithm outperforms traditional ADMM and its variants in terms of convergence speed and solution accuracy.
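    The symmetric update pattern with two relaxation factors can be sketched on a toy strongly convex problem; the quadratic objectives, penalty $\beta$, and factors $(r,s)$ below are illustrative assumptions and do not reproduce the paper's nonconvex setting.

```python
import numpy as np

# Sketch of symmetric ADMM with two relaxation factors (r, s) on a toy strongly
# convex problem: min (1/2)||x - p||^2 + (1/2)||y - q||^2  s.t.  x - y = 0,
# whose solution is x = y = (p + q)/2. All parameter values are illustrative.

p = np.array([1.0, 3.0])
q = np.array([-1.0, 1.0])
beta, r, s = 1.0, 0.9, 0.9        # penalty parameter and the two relaxation factors
x = y = lam = np.zeros(2)

for _ in range(200):
    # x-step: argmin_x (1/2)||x - p||^2 - lam^T(x - y) + (beta/2)||x - y||^2
    x = (p + lam + beta * y) / (1.0 + beta)
    lam = lam - r * beta * (x - y)          # first (relaxed) multiplier update
    # y-step: argmin_y of the same augmented Lagrangian in y
    y = (q - lam + beta * x) / (1.0 + beta)
    lam = lam - s * beta * (x - y)          # second (relaxed) multiplier update

print(x, y)
```

    With $r=s=1$ this is the Peaceman-Rachford-type scheme; smaller factors trade per-iteration progress for robustness.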

  • Chen Ling, Erbo Zhao, Ruiqi Xue, Hong Yan
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2026-008
    Accepted: 2026-03-23

    (Communicated by Zheng-Hai Huang)

        Tensor low rank approximation is an important tool in tensor data analysis and processing. In the sense of tensor-tensor product (T-product) derived from general invertible transformation, the best low tubal-rank approximation of third order tensors can be obtained through truncated tensor singular value decomposition (T-SVD). In this paper, we first present two deterministic frequent directions type algorithms for near optimal low tubal-rank approximations of third order tensors. Moreover, we propose a randomized frequent directions algorithm for near optimal low tubal-rank approximations of third order tensors. Corresponding relative error bounds for the presented algorithms are derived. The related numerical examples on third order tensors of color image, grayscale video and synthetic data with larger scale illustrate the favorable performance of the presented methods compared to some existing methods.

  • R. Deb and A. K. Das
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-043
    Accepted: 2025-11-13

    (Communicated by Guanglu Zhou)

    The $H$-tensor is a crucial component of tensor theory. An even-order real symmetric nonsingular $H$-tensor is a positive definite tensor, which has a significant impact on tensor complementarity theory. In this article, we introduce the generalized $S$-Nekrasov tensor, a new subclass of $H$-tensors. We also introduce the concept of the weak Nekrasov tensor and provide a sufficient condition for a weak Nekrasov tensor to be an $H$-tensor. We prove that the solution set of the tensor complementarity problem is nonempty and compact for a real generalized $S$-Nekrasov tensor with positive diagonal entries.

  • Jielan YANG and Shengwei YAO
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-044
    Accepted: 2025-11-13

    (Communicated by Walaa Moursi)

    In this paper, a class of three-dimensional subspace conjugate gradient algorithms based on the cubic regularization model is discussed. The regularization parameter in the cubic regularization model is updated by an interpolation function. According to the criterion of the approximate model, the algorithm adaptively selects either the quadratic approximation or the cubic regularization model to approximate the objective function by adjusting the regularization parameter. The corresponding subspace conjugate gradient algorithm is obtained by minimizing the approximation model of the objective function in the given subspace. The algorithm is globally convergent for general nonconvex objective functions, and numerical experiments imply that the algorithm is robust and efficient.

  • Jiazhen Wei, Wei Bian
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-045
    Accepted: 2025-11-13

    (Communicated by Xinmin Yang)

    Recently, a novel swarm intelligence model, namely the consensus-based optimization (CBO) algorithm, was introduced to deal with global optimization problems. Limited by the conditions of Itô's formula, the convergence analysis of previous CBO finite particle systems mainly focuses on problems with smooth objective functions. With the help of a smoothing method, this paper achieves a breakthrough by proposing an effective CBO algorithm, with theoretical analysis, for finding the global solution of a nonconvex, nonsmooth, and possibly non-Lipschitz continuous minimization problem, which does not rely on the mean-field limit. We show that the proposed algorithm exhibits global consensus and converges to a common state for any initial data. Then, we give a more detailed error estimate on the objective function values along the state of the proposed algorithm towards the global minimum. Finally, some numerical examples are presented to illustrate the appreciable performance of the proposed method in solving nonsmooth, nonconvex minimization problems.

  • Nan Lu, Sanyang Liu and Lixia Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-046
    Accepted: 2025-11-13

    (Communicated by Ting Kei Pong)

    By utilizing $T$-algebras, we introduce the concepts of $w$-$P$ and $w$-uniqueness properties for nonlinear transformations, and investigate some interconnections among these concepts.

    The results above are applied to relaxation transformations and self-adjoint linear transformations. Moreover, we study the finiteness of $w$-solutions for the homogeneous cone complementarity problem.

  • Zexian Liu, Taiyong Song and Hongwei Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-047
    Accepted: 2025-11-13

    (Communicated by Fanwen Meng)

    Subspace minimization conjugate gradient (SMCG) methods are highly efficient iterative schemes for unconstrained optimization and have garnered increasing attention recently. The inertial step strategy is often utilized to speed up iterative methods. By further exploiting the information at the three most recent iterations, we develop a new adaptive inertial step strategy. We design a new update form for the regularization parameter, and extend the search direction of the regularization-model-based SMCG method to the solution of constrained nonlinear equations by incorporating the resulting adaptive inertial step strategy. Based on the resulting search direction and the projection technique, we present an efficient inertial SMCG method based on a regularization model for solving constrained nonlinear monotone equations, where the search direction satisfies the sufficient descent property and the trust region property independently of any line search. The global convergence and improved convergence rate of the proposed method are established under assumptions that do not impose Lipschitz continuity on the underlying mapping. Numerical experiments demonstrate that the proposed method is very promising.

  • Ning Shao, Ying Li, Mingcui Zhang and YinPing Li
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-048
    Accepted: 2025-11-13

    (Communicated by Chen Ling)

    In the fields of computer graphics and robot kinematics, the application of dual quaternions has become a common trend, but in-depth research on dual quaternion matrix theory remains relatively scarce. This paper aims to propose an efficient strategy for solving the dual quaternion matrix equation $AXB=C$. First, we investigate the theoretical framework of the dual quaternion matrix vector operator and combine it with the complex representation of dual quaternion matrices. This approach provides a powerful tool for deriving equivalent forms of dual quaternion matrix equations, enabling us to obtain the general solution of $AXB=C$. Subsequently, we analyze the structure of the $\eta$-Hermitian matrix and propose the $GH$-representation, which significantly simplifies the computation when solving for the $\eta$-Hermitian solution. Finally, we design corresponding algorithms and validate their effectiveness through numerical experiments.
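    The vector-operator idea can be illustrated with ordinary real matrices, where $\mathrm{vec}(AXB)=(B^\top\otimes A)\,\mathrm{vec}(X)$ (column-major vec) turns $AXB=C$ into a linear system in $\mathrm{vec}(X)$; dual quaternion arithmetic and the $GH$-representation are beyond this sketch, and the data are illustrative.

```python
import numpy as np

# vec/Kronecker mechanism for A X B = C with ordinary real matrices:
# vec(A X B) = (B^T kron A) vec(X) under column-major (Fortran-order) vec.
# The sizes and data below are illustrative assumptions.

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
X_true = rng.standard_normal((3, 5))
C = A @ X_true @ B

K = np.kron(B.T, A)                      # coefficient matrix of the linear system
vec_X, *_ = np.linalg.lstsq(K, C.flatten(order="F"), rcond=None)
X = vec_X.reshape(3, 5, order="F")       # undo the column-major vec
print(np.linalg.norm(A @ X @ B - C))
```

    Here $K$ has full column rank (A has full column rank and B full row rank), so the least-squares solve recovers $X$ uniquely.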

  • Chunmei Li, Yuying Gu, Xuefeng Duan
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-049
    Accepted: 2025-11-13

    (Communicated by Chen Ling)

    In this paper, we consider a class of tensor optimization problems

    $\frac{1}{2} \| \hat{\mathcal{X}} \times_{1} A_{1} + \hat{\mathcal{X}} \times_{2} A_{2} + \cdots + \hat{\mathcal{X}} \times_{N} A_{N} - \mathcal{B} \|^{2} = \min_{\mathcal{X} \in \mathbb{R}_{+}^{J_{1} \times J_{2} \times \cdots \times J_{N}}} \frac{1}{2} \| \mathcal{X} \times_{1} A_{1} + \mathcal{X} \times_{2} A_{2} + \cdots + \mathcal{X} \times_{N} A_{N} - \mathcal{B} \|^{2},$

    where the tensor $\mathcal{B} \in  \mathbb{R}^{J_{1} \times J_{2} \times \cdots \times J_{N}}$, and the coefficient matrices  $A_{n} \in \mathbb{R}^{J_{n} \times J_{n}},  n = 1, 2, \cdots , N$. In order to find a nonnegative tensor $\hat{\mathcal{X}} \in  \mathbb{R}_{+}^{J_{1} \times J_{2} \times \cdots \times J_{N}}$, we design a nonmonotone descent stepsize based on the BB stepsize and construct a new gradient projection algorithm to solve this problem. The convergence analysis of the new algorithm is also given. Numerical experiments are performed to illustrate the feasibility and effectiveness of the new algorithm, including tests on synthetic data, solutions to microscopic heat transport problems, and image restoration tasks. Comparisons with some previous methods are also given.
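    A gradient projection step with a BB stepsize can be sketched on a matrix nonnegative least-squares problem, standing in for the tensor problem above; the data, iteration count, and safeguard are illustrative assumptions.

```python
import numpy as np

# Sketch of gradient projection with a Barzilai-Borwein (BB) stepsize on a matrix
# nonnegative least-squares problem min_{x >= 0} (1/2)||Mx - c||^2, used here as
# a stand-in for the tensor problem (data and safeguard are illustrative).

rng = np.random.default_rng(2)
M = rng.standard_normal((30, 12))
c = rng.standard_normal(30)

def obj(x):
    r = M @ x - c
    return 0.5 * (r @ r)

def grad(x):
    return M.T @ (M @ x - c)

x = np.zeros(12)
g = grad(x)
alpha = 1.0 / np.linalg.norm(M, 2) ** 2       # safe first stepsize
for _ in range(300):
    x_new = np.maximum(x - alpha * g, 0.0)    # projection onto the nonnegative orthant
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if s @ y > 1e-12:                         # BB1 stepsize with a positivity safeguard
        alpha = (s @ s) / (s @ y)
    x, g = x_new, g_new
print(obj(x))
```

    The BB step $\alpha_k = s_k^\top s_k / s_k^\top y_k$ makes the objective nonmonotone along the iterates, which is why BB schemes are usually paired with a nonmonotone descent condition as in the abstract.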

  • Jie Shen, Yuan Lu, Yu-Zhu Tian and Yu-Hui Song
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-050
    Accepted: 2025-11-13

    (Communicated by Jie Sun)

    With the increasing complexity of problems in practical fields, traditional numerical algorithms are often inadequate for solving nonsmooth non-Lipschitz optimization problems. We focus on a broad class of nonsmooth non-Lipschitz optimization problems in which the objective function is the sum of a nonsmooth nonconvex function and a non-Lipschitz function. Based on the smoothing approximation technique and projection neural networks, we propose a smooth inertial projection neural network (SIPNN) by introducing inertial terms into the neural network. Under certain conditions, we show that the solution of SIPNN converges to a stationary point of the non-Lipschitz optimization problem. Compared with other methods that employ differential inclusions for solving nonsmooth problems, the smoothing approximation technique effectively overcomes the difficulty of choosing subdifferentials, and the introduction of inertial terms provides additional control that allows a more global exploration for the optimal solution of the original optimization problems. Simulation results are presented to illustrate the performance and effectiveness of the proposed method.

  • Elimhan N. Mahmudov, Shakir Sh. Yusubov, Rza N. Mahmudov
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-038
    Accepted: 2025-09-20

    (Communicated by Robert Csetnek)


      In this paper, a new approach is proposed for solving the considered optimization problem with both differential inclusions (DFIs) and inequality constraints. The problem is reduced to a problem with one DFI defined by the intersection of two set-valued mappings. The optimality conditions for such a problem are expressed as the sum of two locally adjoint mappings (LAMs). In turn, this also requires calculating the argmaximum sets and subdifferentials of the Hamiltonian functions. It is then natural that the LAM can be computed for a set-valued mapping generated by a system of inequality constraints. Finally, we consider some applications in a linear optimal control problem with polyhedral constraints and a constraint given by a convex cone.

  • Mingyu Song, Yanjun Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-039
    Accepted: 2025-09-20

    (Communicated by Jie Sun)

      This study develops a data-driven distributionally robust two-stage stochastic programming model with second-order stochastic dominance (SSD) constraints. Uncertainty is modeled within a 1-Wasserstein ball centered at an empirical distribution. Two cases are analyzed based on how uncertainty affects the second stage. In the first case, where uncertainty impacts only the objective function, the problem can be reformulated as a convex program and further simplified into a tractable conic program. In the second case, where uncertainty influences the constraints, the problem is generally NP-hard, as feasibility verification reduces to a norm maximization problem over a polytope. However, under specific conditions, it remains tractable. Numerical experiments on a supply chain problem compare the proposed method with Sample Average Approximation (SAA). Results show that for large sample sizes, the proposed approach exhibits greater robustness and stability against sample fluctuations.

  • Anwa Zhou, Yangyang Ni, Jinyan Fan
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-041
    Accepted: 2025-09-20

    (Communicated by Chen Ling)

        This paper studies the real multivariate eigenvalue problem for nonsymmetric tensor tuples. We formulate the tensor real multivariate eigenvalue problem equivalently as polynomial optimization. We generate a random objective function that achieves different values at different multivariate eigenvalues. Each of them can be computed by Lasserre's hierarchy of semidefinite relaxations if there are finitely many of them. Under suitable conditions, we prove that the semidefinite relaxation has finite convergence for nonsymmetric tensor tuples. Numerical experiments are presented to show the efficiency of the proposed method.

  • Fuying Jing, Xiangrui Chao
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-042
    Accepted: 2025-09-20

    (Communicated by Jein-Shan Chen)

        This paper investigates the minimal planning horizon required to determine optimal production schedules in dynamic lot-sizing models under batch changeover constraints. The classical dynamic lot-sizing problem assumes the ability to make setup decisions each period independently. However, in many industrial environments such as semiconductor manufacturing, pharmaceutical production, or automotive component assembly, batch-based changeovers are required. This introduces nontrivial temporal coupling between decision periods, rendering classical planning horizon results inapplicable.


        We study a generalized dynamic lot-sizing model where products are subject to joint setup constraints over contiguous planning periods---so-called \emph{batch changeovers}. Our main contribution is the derivation of sufficient conditions under which finite minimal planning horizons exist, allowing for optimal production decisions to be made without requiring infinite forward-looking forecasts.


        Let $T$ denote the length of the planning horizon, and suppose demands $\{d_t\}_{t=1}^T$ are deterministic. Each product incurs a per-unit production cost $c_t$, a holding cost $h_t$, and a fixed batch setup cost $S$. The novelty in our model lies in the constraint that setup decisions apply to batches of periods, i.e., production is constrained to occur only if a batch-wide setup is activated.


        We show that under reasonable assumptions (bounded holding and production costs, stable demand), the optimal policy structure retains certain properties akin to the classical Wagner-Whitin model, such as the existence of \emph{non-speculative production}. However, the batch setup constraint introduces a new layer of complexity: the optimal solution may depend on multi-period patterns that require careful treatment to determine horizon sufficiency.


        To formalize this, we derive a minimal planning horizon bound $T^*$, such that for any $T \geq T^*$, the truncated problem yields the same initial-period decision as the infinite-horizon problem. This is achieved by constructing a cost-to-go function and showing it satisfies monotonicity and submodularity properties under batch constraints.


        Numerical experiments on benchmark instances demonstrate that the proposed minimal planning horizon bound is tight in practice and significantly reduces computational burden without compromising optimality. Moreover, we propose a dynamic programming algorithm tailored to exploit the batch structure, improving runtime efficiency by an order of magnitude compared to standard MILP solvers.


        Our results contribute to the theoretical understanding of planning horizons in complex lot-sizing settings and provide practical tools for industries facing setup-intensive operations with limited forecast visibility.
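    For reference, the classical Wagner-Whitin recursion mentioned above (without the batch-changeover coupling studied in this paper) can be sketched as an $O(T^2)$ dynamic program; the instance below and the constant holding cost are illustrative assumptions.

```python
# Classical Wagner-Whitin dynamic program for uncapacitated lot sizing:
# F[t] = min over the last setup period j of F[j-1] + S + holding cost of
# producing demands d_j..d_t at period j. No batch-changeover constraints here.

def wagner_whitin(d, S, h):
    """O(T^2) DP; d = per-period demands, S = setup cost, h = unit holding cost."""
    T = len(d)
    INF = float("inf")
    F = [0.0] + [INF] * T                  # F[t] = min cost for periods 1..t
    for t in range(1, T + 1):
        for j in range(1, t + 1):          # last setup at period j covers j..t
            hold = h * sum((k - j) * d[k - 1] for k in range(j, t + 1))
            F[t] = min(F[t], F[j - 1] + S + hold)
    return F[T]

print(wagner_whitin([20, 50, 10], S=100, h=1))   # → 170.0 (single setup in period 1)
```

    Batch changeovers would restrict which periods $j$ may open a setup, which is exactly the coupling that invalidates the classical horizon results discussed above.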

  • Huizhen Zhang, Qin Huang, David Rios Insua
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-029
    Accepted: 2025-06-05

    (Communicated by Jie Sun)

      This paper presents a novel multi-objective location-routing problem with time windows for the effective management of multimodal transportation networks at two levels: the upper level addresses multimodal transportation issues, whereas the lower one focuses on the location-routing aspects. In our model, two objectives are included, referring to maximizing customer satisfaction and minimizing total costs, the latter aggregating transportation, transit, and location costs. To solve the problem efficiently, we introduce a new approach based on a squirrel search algorithm using three crossover and four mutation operators designed to handle Pareto optimality in multi-objective optimization. In addition, we present a greedy clustering algorithm combined with the Metropolis criterion to generate high-quality initial solutions and accelerate convergence. We evaluate the performance of the proposed approach using instances of various scales in multimodal transportation networks. Our experimental results suggest the efficiency and scalability of our approach.

  • Ziye Zhang, Ke Su
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-030
    Accepted: 2025-06-05

    (Communicated by Guanglu Zhou)

     In this paper, we focus on mixed-constrained semi-infinite programming and present an adaptive augmented Lagrangian filter method. Continuous infinite inequality constraints are smoothed and transformed into equivalent finite constraints in integral form, which are then combined with the objective function via adaptive parameters. Compared with existing methods, the new approach is a penalty-function-free method that employs a filter, instead of forcing sequences, to make the optimality measure approach zero. Our algorithm also includes a feasibility recovery phase to quickly detect infeasible problems. Global convergence is proved under suitable conditions. Numerical results demonstrate that the proposed method is effective.

  • Huan Gao, Jianyu Xiao, Zhibao Li, Haibin Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-031
    Accepted: 2025-06-05

    (Communicated by Jie Sun)

           In this paper, we consider the 3-block linearly constrained difference-of-convex (DC) optimization problems:

    $\min_{x,y,z}\ \{ \ f(x) + g (y) + h(z) \mid A x +By+Cz =b \ \},$    (1)

    where

    $f(x) = f_1(x)-f_2(x)~ ~\text{and}~~  g(y) = g_1(y)-g_2(y),$

    $h:\mathbb{R}^{n_3}\rightarrow \mathbb{R}$ is a continuously differentiable convex function with Lipschitz continuous gradient, $A\in \mathbb{R}^{m\times n_1}$, $B\in \mathbb{R}^{m\times n_2}$ and $C\in\mathbb{R}^{m\times n_3}$ are given matrices, $b\in \mathbb{R}^{m}$ is a vector, $f_1:\mathbb{R}^{n_1}\rightarrow \mathbb{R}\cup\{+\infty\}$ and $g_1:\mathbb{R}^{n_2}\rightarrow \mathbb{R}\cup\{+\infty\}$ are proper closed convex functions, and $f_2:\mathbb{R}^{n_1}\rightarrow \mathbb{R}\cup\{+\infty\}$ and $g_2:\mathbb{R}^{n_2}\rightarrow \mathbb{R}\cup\{+\infty\}$ are continuous convex functions. We propose a majorized Bregman ADMM to solve the DC problem (1). Compared with the classical Bregman ADMM, the majorized Bregman ADMM only requires solving convex subproblems at each iteration, rather than DC subproblems. We prove that the sequence generated by the proposed method converges to a critical point of the augmented Lagrangian function, under the assumption that the potential function satisfies the Kurdyka-Łojasiewicz (KŁ) property. Preliminary numerical experiments are conducted to support our theoretical analysis.


  • Wenfeng Zhu, Zhou Sheng, Yuehuan Zhu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-032
    Accepted: 2025-06-05

    (Communicated by Lingchen Kong)

      We study a complex Newton-based method with a correction step for homogeneous polynomial optimization over the complex unit sphere. In particular, a constrained stationary point of this problem is precisely a US-eigenpair of the associated symmetric complex tensor.

      We analyze the local quadratic convergence rate of the complex Newton-based method with a correction step, provided that the initial point is sufficiently close to a solution. As two concrete applications, we apply it to computing the US-eigenpairs of symmetric complex tensors and to computing the geometric measure of entanglement of quantum multipartite pure states. We provide several numerical examples of our complex Newton-based method with a correction step, and observe that it is fast and effective on both tasks.

  • Danping Yang, Biao Qu and Jiayi Song
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-033
    Accepted: 2025-06-05

    (Communicated by Fanwen Meng)

    The split feasibility problem (SFP) has many important and wide-ranging applications, such as radiotherapy, image reconstruction, and signal processing. Let $C$ and $Q$ be nonempty closed convex sets in $\Re^{n}$ and $\Re^{m}$, respectively, and let $A$ be an $m \times n$ real matrix. The problem is to find

    $x\in C, ~\mathrm{s.t.}~Ax\in Q,$                                                                             (1)

    if such $x$ exists.

    Recent research on the SFP has been deeply integrated with inertial acceleration methods, and several authors have shown that these techniques can speed up first-order optimization algorithms. However, most of them restrict the range of the inertial factors and cause the sequence $\|x^{k}-z\|$ ($z\in S$) to lose its monotone non-increasing property. The alternating inertial method, expressed as

    $w^{k}=\begin{cases}x^{k},&\text{if ~}k\text{~is even},\\ x^{k}+\theta _{k}( x^{k}-x^{k-1}) ,&{\text{if ~}k\text{~is odd}},\end{cases}$                                                      (2)

    is an appropriate improvement strategy: it retains the acceleration effect of the inertial method while alleviating the oscillation.
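A minimal sketch of scheme (2) plugged into a CQ-type iteration for the SFP (1), on a toy box-constrained instance; the sets $C$, $Q$, the matrix $A$, and all parameter values below are illustrative choices of ours, not taken from the paper:

```python
# Alternating-inertial CQ iteration for the SFP: even steps are plain CQ
# steps, odd steps add the inertial extrapolation of scheme (2).
A = [[2.0, 0.0], [0.0, 1.0]]          # the linear map of the SFP
C_lo, C_hi = [0.0, 0.0], [1.0, 1.0]   # C = [0,1]^2 (a box, so P_C is a clip)
Q_lo, Q_hi = [0.0, 0.0], [1.0, 1.0]   # Q = [0,1]^2

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matvec_T(M, v):   # M^T v
    return [sum(M[i][j] * v[i] for i in range(len(M))) for j in range(len(M[0]))]

def proj_box(v, lo, hi):
    return [min(max(vi, l), h) for vi, l, h in zip(v, lo, hi)]

def cq_step(w, gamma):
    Aw = matvec(A, w)
    r = [a - p for a, p in zip(Aw, proj_box(Aw, Q_lo, Q_hi))]   # (I - P_Q) A w
    g = matvec_T(A, r)                                          # A^T (I - P_Q) A w
    return proj_box([wi - gamma * gi for wi, gi in zip(w, g)], C_lo, C_hi)

def alternating_inertial_cq(x0, theta=0.3, gamma=0.2, iters=50):
    x_prev, x = x0, x0
    for k in range(iters):
        if k % 2 == 0:                # even iteration: no inertia, per scheme (2)
            w = x
        else:                         # odd iteration: inertial extrapolation
            w = [xi + theta * (xi - xpi) for xi, xpi in zip(x, x_prev)]
        x_prev, x = x, cq_step(w, gamma)
    return x

x = alternating_inertial_cq([3.0, -2.0])
Ax = matvec(A, x)
print(x, Ax)  # x ends up in C with Ax in Q
```

Since even iterations apply no inertia, they remain plain CQ steps — this is what restores the monotonicity of the even-indexed subsequence discussed in the abstract.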

    In this paper, we propose three KM-CQ-like alternating inertial algorithms, all of which expand the range of admissible inertial factors. The first version uses projections onto general closed convex sets and gives a preliminary exploration of the acceleration effect of the alternating inertial technique. The second version simplifies the computation by relaxing the projections onto half-spaces. The last version further improves on the previous methods by incorporating a double projection technique.

        Our specific contributions in this paper are summarized as follows.

    (i) The three proposed KM-CQ-like algorithms incorporate alternating inertial steps, which improve the convergence of their non-inertial counterparts, enlarge the admissible range of the inertial factors, and simplify the form of the limiting condition.

    (ii) The second algorithm also adds a relaxation step, which accelerates the convergence of the first algorithm and simplifies the calculation of the projections; it is indeed faster in Example 4.2.

    (iii) The last algorithm uses double projections generated by the half-spaces of the previous and the current iterations. As expected, it converges faster than the second algorithm in Example 4.2.

    (iv) The convergence of the iterative sequences generated by the proposed algorithms is established, together with the monotonicity of the sequence $\|{x}^{2k}-z\|$ ($z\in S$), which alleviates the oscillation of the iterates.

    (v) The performance and advantages of the proposed algorithms are confirmed by two applications: a simple SFP and a signal processing problem. Moreover, the advantages of our algorithms become more pronounced as the dimension increases.

    Under simple parameter settings, these algorithms restore the monotonicity of $\|x^{2k}-z\|$ and converge. Experiments demonstrate the feasibility of each algorithm in practical applications and the effectiveness of the acceleration procedure.

  • Changzhi Wu, Wah June Leong, Hong Seng Sim, Jinlong Yuan
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-034
    Accepted: 2025-06-05

    (Communicated by Kok Lay Teo)

        In this study, we examine the linear quadratic (LQ) optimal control problem for large-scale interconnected systems, with a focus on designing sparse static state-feedback controllers. Traditional LQ control methods often result in dense feedback matrices that require global information sharing among all subsystems. While such dense designs may be optimal in terms of performance, they are often impractical in distributed or resource-constrained environments due to excessive communication and implementation costs. Consequently, this paper seeks to develop a framework that simultaneously promotes sparsity in the controller while maintaining an acceptable level of system performance degradation.

    To achieve this, we formulate a constrained optimization problem whose primary objective is to minimize the number of nonzero entries in the feedback matrix $K$, expressed via the nonconvex and discontinuous $\ell_0$-norm. This sparsity-promoting goal is constrained by two critical system-level considerations: (i) the cost associated with the closed-loop system's response to disturbances must not exceed a predefined threshold above the optimal LQ cost, and (ii) the stability of the closed-loop system must be preserved via a Lyapunov-type equation. The cost of the closed-loop system is defined as the trace of a controllability Gramian $P$, which satisfies the Lyapunov-type equation involving the feedback matrix $K$. The constraint on system performance introduces a tradeoff parameter $\gamma > 0$, which allows the system designer to control the degree of allowable performance loss relative to the optimal centralized LQ controller. This parameter directly impacts the achievable sparsity of the feedback matrix: larger values of $\gamma$ encourage sparser solutions, and hence lower communication cost, at the expense of greater performance loss, and vice versa.

    Given the nonconvexity introduced by the $\ell_0$-norm and the complexity of the Lyapunov-based equality constraint, solving this problem directly is computationally challenging. To address this, we adopt an augmented Lagrangian framework that transforms the constrained optimization into an unconstrained minimization problem. The resulting augmented Lagrangian function comprises three components: the $\ell_0$-norm of the feedback matrix, an indicator function representing the constraint on the cost function, and a quadratic matrix equation associated with the Lyapunov-type equation, involving a matrix of Lagrange multipliers. This formulation separates the objective into two nonsmooth components, corresponding to the sparsity and cost constraints, coupled through a smooth term arising from the system dynamics and the Lyapunov relation. To solve the augmented problem, we propose a Proximal Linearized Minimization (PLM) method tailored to the two-block structure of the augmented objective function. Each iteration alternates between updating $K$ and $P$ using a proximal-gradient step designed for nonsmooth and nonconvex functions. The convergence of our method is analyzed using the Kurdyka-Łojasiewicz (KŁ) inequality, a powerful framework for analyzing nonconvex and nonsmooth optimization algorithms. Under mild regularity conditions, we prove that the PLM algorithm converges to a critical point of the augmented objective function. This analysis guarantees stability and consistency of the proposed framework, making it suitable for control system applications where reliable performance is essential.

        We demonstrate the effectiveness of our approach through a series of numerical experiments on interconnected systems. The results show that our method can identify sparse feedback matrices that achieve comparable closed-loop performance to traditional optimal control designs, but with significantly reduced controller complexity and communication requirements.

  • Yixin Chen, Xi Zhu, Changjun Yu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-036
    Accepted: 2025-06-05

    (Communicated by Jie Sun)

        This paper investigates a class of optimal control problems characterized by piecewise constant time delays, which frequently arise in engineering, biological, and networked systems. These problems pose significant challenges due to the discontinuous nature of the delays and the resulting complexity in computing gradients required for optimization. Within the control parameterization framework, the original infinite-dimensional problem is transformed into a finite-dimensional nonlinear programming problem, enabling the application of gradient-based optimization techniques. However, the traditional variational method for computing gradients of cost and constraint functions becomes increasingly inefficient as the control discretization is refined. Although the co-state method is well known for its computational efficiency in delay-free optimal control problems, its application to systems with piecewise constant time delays has been hindered by the absence of an explicit co-state system. In this work, we rigorously derive the co-state system corresponding to such problems, facilitating efficient and accurate gradient computation. Building on this derivation, we propose a computational framework that significantly accelerates the solution process without compromising accuracy. Numerical experiments demonstrate that the co-state method offers substantial improvements in computational efficiency over the variational approach, particularly in cases involving fine control discretization.

  • Changjun Yu, Peiyu Wang, Di Wu and Yanqin Bai
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-016
    Accepted: 2025-04-21

    (Communicated by Aifan Ling)

    This paper introduces a new numerical method for solving nonlinear optimal control problems. In each iteration, the method linearizes the system and constraints based on the results from the previous iteration. Simultaneously, the objective function is approximated by its second-order Taylor expansion with respect to the control and state vectors, resulting in a linear-quadratic optimal control sub-problem. In solving the sub-problem, both the state and control vectors are approximated using Chebyshev series, where the coefficients of the Chebyshev polynomials are to be optimized. Furthermore, we approximate the known coefficient functions of the dynamic system and constraints using Chebyshev series. By the properties of Chebyshev polynomials, the sub-problem is finally transformed into a quadratic programming problem. We solve the dual problem of this optimization problem since it admits an exact solution. The effectiveness of the proposed method is demonstrated by examining three distinct types of optimal control problems. The numerical results conclusively illustrate that our approach outperforms the conventional iterative Chebyshev approximation method in terms of computational efficiency.
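The Chebyshev-series machinery underlying the method can be sketched in a few lines: interpolate a scalar function at the Chebyshev points and evaluate the truncated series by the three-term recurrence (this is the classical construction only, not the paper's optimal control algorithm; the test function $e^x$ and the degree are our own choices):

```python
# Classical Chebyshev-series approximation on [-1, 1]: coefficients from
# interpolation at the Chebyshev points, evaluation via the T_k recurrence.
import math

def cheb_coeffs(f, N):
    # c_k = (2/N) * sum_j f(cos(theta_j)) * cos(k*theta_j),  theta_j = pi(j+1/2)/N
    thetas = [math.pi * (j + 0.5) / N for j in range(N)]
    fx = [f(math.cos(t)) for t in thetas]
    return [(2.0 / N) * sum(fx[j] * math.cos(k * thetas[j]) for j in range(N))
            for k in range(N)]

def cheb_eval(c, x):
    # f(x) ~ c_0/2 + sum_{k>=1} c_k T_k(x),  with T_{k+1} = 2x T_k - T_{k-1}
    s = 0.5 * c[0]
    t_prev, t = 1.0, x
    for k in range(1, len(c)):
        s += c[k] * t
        t_prev, t = t, 2.0 * x * t - t_prev
    return s

c = cheb_coeffs(math.exp, 12)
err = max(abs(cheb_eval(c, x / 10.0) - math.exp(x / 10.0)) for x in range(-10, 11))
print(err)  # a 12-term series already approximates exp to very high accuracy
```

The rapid decay of the coefficients for smooth functions is what lets the paper replace states and controls by short Chebyshev expansions without losing accuracy.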

  • Anas Mifrani, Dominikus Noll
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-028
    Accepted: 2025-04-21

    (Communicated by Maria Josefa Canovas)

     We propose a vector linear programming formulation for a non-stationary, finite-horizon Markov decision process with vector-valued rewards. Pareto efficient policies are shown to correspond to efficient solutions of the linear program, and vector linear programming theory allows us to fully characterize deterministic efficient policies. An algorithm for enumerating all efficient deterministic policies is presented and then tested numerically in an engineering application.

  • Meijia Yang, Hanxiao Zhang, Zhuoyi Xu, Yan Xu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-011
    Accepted: 2025-02-18

    (Communicated by Xinwei Liu)

     In this paper, we study the trust region subproblem with a linear cut over the complex domain, denoted ${\rm (TL)}$:

    $\begin{eqnarray*}{\rm \left(TL\right)}~&\min_{w\in\Bbb C^n}&F(w)=\frac{1}{2}w^HAw+\mathcal{R}(b^Hw)\\~~~~~~~~~~~&{\rm s.t.}& w^Hw-1\leq 0,\\~~~~~~~~~~~&&\mathcal{R}(c^Hw)-d\leq 0,\end{eqnarray*}$

    where $A\in\Bbb C^{n\times n}$ is a Hermitian matrix, so its eigenvalues are real. We assume that $A\nsucceq 0$ and denote its eigenvalues by $\lambda_1\leq \lambda_2\leq\cdots\leq\lambda_n$, so that at least one of them is negative, i.e., $\lambda_1<0$.

     We also assume that the Slater condition holds, i.e., there exists $\hat{w}$ such that $\hat{w}^H\hat{w}<1$ and $\mathcal{R}(c^H\hat{w})<d$.

        We first reveal the hidden convexity property of {\rm (TL)} by proving that it is equivalent to a convex programming problem:

    $\begin{eqnarray*}{\rm \left(C\right)}~&\min_{w\in\Bbb C^{n}}& P(w)=\frac{1}{2}w^H(A-\lambda_1I)w+\mathcal{R}(b^Hw)+\frac{1}{2}\lambda_1\\~~~~~~~~~~~&{\rm s.t.}& w^Hw-1\leq 0,\\~~~~~~~~~~~&&\mathcal{R}(c^Hw)-d\leq 0.\end{eqnarray*}$

    As an approximation of ${\rm (C)}$, we turn to solve the following convex programming problem

    $\begin{eqnarray*}&{\rm \left(AC\right)}&\min_{w\in\Bbb C^{n}} \frac{1}{2}w^H(A-\tilde{\lambda}_1 I)w+\mathcal{R}(b^Hw)+\frac{1}{2}\tilde{\lambda}_1\\~~~~~~~~~~~&{\rm s.t.}& w\in W:=\left\{w^Hw\leq 1,~\mathcal{R}(c^Hw)\leq d\right\},\end{eqnarray*}$

    which can be further equivalently reformulated as the problem ${\rm (AC^r)}$:

    $\begin{eqnarray*}&{\rm \left(AC^r\right)}&\min_{z\in\Bbb R^{2n}} g(z):=\frac{1}{2}z^T(Q-\tilde{\lambda}_1I)z+q^Tz+\frac{1}{2}\tilde{\lambda}_1\\~~~~~~~~~~~&{\rm s.t.}& z\in S:=\left\{z^Tz-1\leq 0,~h^Tz-d\leq 0\right\}.\end{eqnarray*}$

    Then we present a linear-time (in the number of nonzero entries) approximation scheme for solving ${\rm (TL)}$ based on ${\rm (AC^r)}$. It mainly employs an eigenvalue approximation technique and Nesterov's accelerated gradient method.

    Algorithm 1

    (a) Choose $\theta_0=\theta_{-1}\in(0,1]$, $z_0=z_{-1}\in S$. Let $k:=0$.

    (b) Let 

    $y_k=z_k+\theta_k(\theta_{k-1}^{-1}-1)(z_k-z_{k-1})$
    and $\nabla g_k=\nabla g(y_k)$. Update
\begin{align*}z_{k+1}=\arg \min_{z\in S}~G(z)=z^T\nabla g_k+\rho\|z-y_k\|^2.\tag{1}\end{align*}
    (c) Choose $\theta_{k+1}\in(0,1]$ satisfying
    $\frac{1-\theta_{k+1}}{\theta_{k+1}^2}\leq \frac{1}{\theta_k^2}.$
    If the stopping criterion does not hold, update $k:=k+1$ and go to step (b).
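A runnable sketch of Algorithm 1 on a small convex instance: for brevity we keep only the ball constraint in $S$ (dropping the linear cut $h^Tz\le d$, so that $P_S$ is a simple rescaling), and the matrix $Q-\tilde{\lambda}_1I$, the vector $q$, and the constant $\rho$ are illustrative choices of ours. The subproblem (1) then has the closed-form solution $z_{k+1}=P_S\bigl(y_k-\nabla g_k/(2\rho)\bigr)$:

```python
# Nesterov-type accelerated projected gradient (Algorithm 1) on a 2-D
# convex quadratic over the unit ball; data below are illustrative only.
import math

Qm = [[3.0, 1.0], [1.0, 2.0]]   # stands in for Q - \tilde{\lambda}_1 I (positive definite)
q = [1.0, -1.0]
rho = 4.0                        # chosen so that 2*rho >= Lipschitz constant of grad g

def grad_g(z):
    return [sum(Qm[i][j] * z[j] for j in range(2)) + q[i] for i in range(2)]

def proj_S(z):                   # projection onto the unit ball
    nrm = math.sqrt(sum(zi * zi for zi in z))
    return z if nrm <= 1.0 else [zi / nrm for zi in z]

def algorithm1(z0, iters=1000):
    theta_prev = theta = 1.0     # step (a): theta_0 = theta_{-1} = 1
    z_prev, z = z0[:], z0[:]
    for _ in range(iters):
        # step (b): extrapolate, then solve subproblem (1) in closed form
        y = [zi + theta * (1.0 / theta_prev - 1.0) * (zi - zpi)
             for zi, zpi in zip(z, z_prev)]
        g = grad_g(y)
        z_prev, z = z, proj_S([yi - gi / (2.0 * rho) for yi, gi in zip(y, g)])
        # step (c): theta update satisfying (1 - t+)/t+^2 <= 1/t^2 (with equality)
        theta_prev, theta = theta, (math.sqrt(theta**4 + 4 * theta**2) - theta**2) / 2
    return z

z = algorithm1([1.0, 0.0])
print(z)  # approaches the constrained minimizer (-0.6, 0.8)
```

With $\theta_k$ updated at equality, this is the standard $O(1/k^2)$ accelerated scheme, which is what drives the overall complexity bound quoted below.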

    Then, given parameters $\epsilon>0$ and $\delta>0$, with probability at least $1-\delta$ over the randomization of an approximate eigenvector oracle, we can find an $\epsilon$-approximate solution of ${\rm (TL)}$ in total time $O\left(\frac{N\sqrt{\|A\|}}{\sqrt{\epsilon}}\log\frac{n}{\delta}\right)$. We show the efficiency of the algorithm by comparing it with the CVX solver in the numerical experiments. Figure 1 also shows that, for the same dimension, the required time increases linearly with the density.

    Figure 1: Computational time for different densities with $\epsilon=10^{-6}$

  • Ao Shi, Jun Jiang, Zhao Deng , Yuqiang Feng
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-012
    Accepted: 2025-02-18

    (Communicated by Xinmin Yang)

    Abstract: The Peaceman-Rachford splitting method (PRSM) can effectively solve two-block separable convex optimization problems with linear constraints. However, extending it directly to multi-block separable convex optimization problems lacks convergence guarantees. To address this limitation, we introduce a generalized PRSM with an indefinite proximal term and a substitution step (GPRSM-S). The global convergence and the iteration complexity are analyzed using variational inequality theory under mild assumptions. Finally, numerical experiments on the robust principal component analysis (PCA) problem demonstrate that GPRSM-S is more efficient than previous approaches.

  • Chong-Yang Shao, Yi-Bin Xiao, Zai-Yun Peng
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-013
    Accepted: 2025-02-18

    (Communicated by Xinmin Yang)

         In this paper, we study a history-dependent quasi-variational hemivariational inequality as follows.

         Problem 1.  Find $u \in C(I;\Lambda)$ such that, for all $t \in I$, $u(t) \in K(u(t))$ and

$\langle Au(t),v-u(t)\rangle + \varphi(Su(t),u(t),v)- \varphi(Su(t),u(t),u(t))$

    $+ j^\circ(\gamma u(t),\gamma u(t);\gamma v-\gamma u(t)) \geq \langle f(t),v-u(t)\rangle, ~~\forall v \in K(u(t)).$

    Here $I=[0,+\infty)$ is an infinite time interval, $V$ and $X$ are reflexive Banach spaces, $Y$ is a normed space, $\Lambda$ is a nonempty subset of $V$, $C(I;\Lambda)$ stands for the set of all continuous functions defined on $I$ with values in $\Lambda$, $A: V \to V^*$ represents a nonlinear operator, $\gamma: V \to X$ is a linear continuous operator, $S: C(I;V) \to C(I;Y)$ is a history-dependent operator, $\varphi: Y \times V \times V \to \mathbb{R}$ is convex and lower semicontinuous with respect to its last argument, $j:X \times X \to \mathbb{R}$ is a locally Lipschitz functional with respect to its last argument, $j^\circ$ is the generalized directional derivative (in the sense of Clarke) of $j$, $K: \Lambda \to 2^\Lambda$ is a set-valued mapping and $f \in C(I;V^*)$ is given.

    By applying a fixed point argument for history-dependent operators and the Gronwall inequality, we obtain a unique solvability result for the history-dependent quasi-variational hemivariational inequality in a space of continuous functions. In addition, when the data of the history-dependent quasi-variational hemivariational inequality are perturbed, sufficient conditions are given to guarantee that the solution sequence of the perturbed problems converges to the unique solution of the original problem.


  • Haoming Xia , Ke Guo , Zhonglin Xu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-001
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: The alternating direction method of multipliers (ADMM) has demonstrated its efficiency and well-understood convergence properties when applied to minimization problems where the objective function is the sum of two nonconvex separable functions and the constraint is linear. However, the requirement for global Lipschitz continuity of the gradient of differentiable functions, which is often impractical in nonconvex optimization problems, restricts its applicability across various domains. Recently, a novel Bregman ADMM has been introduced for two-block nonconvex optimization problems with linear constraints. This new Bregman ADMM not only removes the need for global Lipschitz continuity of the gradient, making it suitable for a broader range of practical problems, but also ensures that it can reduce to the classical ADMM in specific cases. Building on this Bregman ADMM, we address multi-block nonconvex separable optimization problems with linear constraints. We demonstrate that any cluster point of the iterative sequence generated by Bregman ADMM is a critical point, provided that the associated function satisfies the Kurdyka-Łojasiewicz inequality. Additionally, we present sufficient conditions to ensure both the convergence and convergence rate of the algorithm.

  • Weimi Zhou, Yong-Jin Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-002
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: Solving for the distributional worst case in a distributionally robust optimization problem is equivalent to finding the projection onto the intersection of a simplex and a single linear inequality constraint. This projection is a key component in the design of efficient first-order algorithms. This paper focuses on developing efficient algorithms for computing it. Based on Lagrangian duality theory, the studied projection can be obtained by solving a univariate nonsmooth equation. We propose an algorithm called LRSA, which combines the Lagrangian duality approach with the secant method to compute this projection; a modified secant method is specifically designed to solve the resulting piecewise linear equation. Additionally, owing to the semismoothness of that equation, the semismooth Newton (SSN) method is a natural choice for solving it. Numerical experiments demonstrate that LRSA outperforms the SSN algorithm and the state-of-the-art optimization solver Gurobi. Moreover, we derive explicit formulas for the generalized HS-Jacobian of the projection, which are essential for designing second-order nonsmooth Newton algorithms.
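The Lagrangian-dual idea behind LRSA can be illustrated on its simplest building block: projecting onto the probability simplex alone already reduces to a univariate piecewise-linear equation in the multiplier, which we solve here by plain bisection (the paper's LRSA additionally handles the extra linear inequality constraint and uses a modified secant method; this toy does neither):

```python
# Projection of y onto the simplex {x >= 0, sum(x) = 1} via its dual:
# find tau solving phi(tau) = sum_i max(y_i - tau, 0) - 1 = 0, then
# recover x_i = max(y_i - tau, 0). phi is decreasing and piecewise linear.

def proj_simplex(y, tol=1e-12):
    lo, hi = min(y) - 1.0 / len(y), max(y)   # bracket: phi(lo) >= 0 >= phi(hi)
    phi = lambda tau: sum(max(yi - tau, 0.0) for yi in y) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    tau = 0.5 * (lo + hi)
    return [max(yi - tau, 0.0) for yi in y]

x = proj_simplex([0.9, 0.8, -0.5])
print(x)  # entries are nonnegative and sum to 1
```

Replacing the bisection by a secant-type update on the same piecewise linear $\phi$ is precisely the kind of refinement the paper develops for the harder two-constraint projection.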

  • Jibiao Yuan, Juan Gao, Dongmei Ren, Jie Li, Xinwei Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-003
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: We investigate the following distributed convex optimization problem:

    $\min\limits_{\tilde{s} \in {\mathbb{R}^n}} f( \tilde{s}) = \sum\limits_{i = 1}^m {{f_i}( \tilde{s} )}$                                                                               (1)

    where $\tilde{s} \in {\mathbb{R}^n}$ is the global decision variable and the local objective function ${f_i}:{\mathbb{R}^n} \to \mathbb{R}$ belongs to agent $i$. All agents are nested in a strongly connected directed graph. The goal of $m$ agents is collaboratively to find the global minimizer of the problem (1) through local communication with their neighbors. Assuming that each agent $i$ has a local copy $s^i$ of the global decision variable $\tilde{s}$, the problem (1) can be equivalently written as:

    $\min\limits_{s \in {\mathbb{R}^{nm}}} f( s ) = \sum\limits_{i = 1}^m {{f_i}( {{s^i}} )}$                                                                       (2)

    ${\rm{s.t.}} \ {s^i} = {s^j}, \ \forall i,j = 1, \ldots ,m,$

    where $s = {[ {(s^1)^{\mathsf{T}},(s^2)^{\mathsf{T}}, \ldots ,(s^m)^{\mathsf{T}}} ]^{\mathsf{T}}} \in {\mathbb{R}^{nm}}$. We propose a unified and accelerated version of distributed gradient methods to solve the problem (2). 

      Since the agents exchange information over a directed graph, we consider a row-stochastic weight matrix $W = [ {{w_{ij}}} ] \in {\mathbb{R}^{m \times m}}$ associated with the graph, which satisfies the following conditions:

    $w_{ij} = \begin{cases} >0, & j \in \mathcal{N}^i_{in} \cup \{i\}, \\ 0, & \text{otherwise}, \end{cases} \qquad \sum\limits_{j = 1}^m w_{ij} = 1, \ \forall i \in \mathcal{V},$

    where $\mathcal{N}^i_{in}$ is the set of in-neighbors of agent $i$. Define the matrix $L = [ l_{ij}] = I_m - W$ and a matrix $B  = [ b_{jl}] \in {\mathbb{R}^{m\times m}}$ with $B1_m = b1_m$ for some $b \in \mathbb{R}$.
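These conditions are straightforward to realize in code: if each agent assigns, say, uniform weights to itself and its in-neighbors, then $W$ is row-stochastic by construction using only local information, and $L\mathbf{1}_m = 0$ follows (a minimal sketch on a hypothetical 3-agent directed graph; the uniform weighting is one admissible choice, not mandated by the paper):

```python
# Build a row-stochastic W from a directed graph given by in-neighbor lists,
# then L = I - W. Row-stochasticity needs only local (in-neighbor) information.
in_neighbors = {0: [2], 1: [0], 2: [0, 1]}   # edges j -> i; strongly connected
m = len(in_neighbors)

W = [[0.0] * m for _ in range(m)]
for i, nbrs in in_neighbors.items():
    support = nbrs + [i]                     # w_ij > 0 iff j is an in-neighbor or j = i
    for j in support:
        W[i][j] = 1.0 / len(support)         # uniform weights over the support

L = [[(1.0 if i == j else 0.0) - W[i][j] for j in range(m)] for i in range(m)]

row_sums_W = [sum(row) for row in W]
row_sums_L = [sum(row) for row in L]
print(row_sums_W, row_sums_L)  # W rows sum to 1, so every row of L sums to 0
```

This is the practical advantage emphasized below: no agent needs to know its out-degree to build its own row of $W$.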

      With the help of the row-stochastic weight matrix and Nesterov's momentum acceleration technique, we propose a unified and accelerated version of distributed gradient methods (UADM). In our algorithm, each agent $i \in \mathcal{V}$ stores four variables $s_k^i \in {\mathbb{R}^n}$, $x_k^i \in {\mathbb{R}^n}$, $y_k^i \in {\mathbb{R}^m}$, $u_k^i \in {\mathbb{R}^n}$. For $k \geq 0$ and all $i \in \mathcal{V}$, UADM is initialized with $x_0^i = s_0^i \in {\mathbb{R}^n}$, $y_0^i = {e_i}\in {\mathbb{R}^m}$, $u_0^i = 0_n$ and updates its variables as follows:

    $x_{k + 1}^i = \sum\limits_{j = 1}^m {{w_{ij}}s_k^j}  - {\alpha _i}\left( {\frac{{\nabla {f_i}( {s_k^i})}}{{{{[{y_k^i}]}_i}}} + u_k^i} \right)$                                                  (3a)

    $s_{k + 1}^i = x_{k + 1}^i + \beta_i ( {x_{k + 1}^i - x_k^i})$                                                           (3b)

    $y_{k + 1}^i = \sum\limits_{j = 1}^m {{w_{ij}}y_k^j}$                                                                    (3c)

    $u_{k + 1}^i = u_k^i - \sum\limits_{j = 1}^m {{l_{ij}}} \left( { \frac{{\nabla {f_j}({s_k^j})}}{{{{[ {y_k^j} ]}_j}}}  + u_k^j - \sum\limits_{l = 1}^m {{b_{jl}}s_k^l} }\right)$                                         (3d)

    where $\alpha_i > 0$ is the local step-size and $ 0 \leq \beta_i< 1 $ is the local momentum coefficient.

     When the parameters $\beta_i, b_{jl}$ take different values, UADM includes the accelerated method FROZEN as a special case and yields new accelerated methods such as accelerated versions of FROST and Li-Row. In contrast to methods using column-stochastic weight matrices, UADM is based on row-stochastic weight matrices, so it does not require each agent to know its out-degree and is easier to implement in a distributed setting, because each agent can assign appropriate weights to the information received from its in-neighbors.

     If the step-sizes and the momentum coefficients satisfy certain upper bounds, we prove that UADM converges linearly for smooth and strongly convex problems. This convergence result unifies those of distributed exact first-order algorithms, Li-Row, FROST and FROZEN.

    Numerical experiments on distributed quadratic programming show that UADM significantly outperforms the non-accelerated algorithm Li-Row as the condition number increases. Experiments on distributed logistic regression problems demonstrate that our algorithm achieves accelerated convergence in comparison with several existing distributed algorithms.

  • Xiaoli Huang, Yuelin Gao, Xiaohua Ma, Xia Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-004
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: This paper investigates an effective branch-and-bound algorithm for solving the generalized linear fractional multiplicative programming (GLFMP) problem. Initially, leveraging the structure of GLFMP, some variables are introduced to transform it into an equivalent problem. Subsequently, bidirectional linear relaxation is applied to the constraint functions of the equivalent problem, resulting in a linear relaxation problem, which is embedded into the branch-and-bound framework. Furthermore, incorporating a tailored region reduction technique, we propose an output-space branch-and-bound optimization algorithm. Additionally, the convergence and complexity of the algorithm are analyzed. Finally, the feasibility and effectiveness of the algorithm are validated using GLFMP instances ranging from specific to general cases.

         This paper considers the following generalized linear fractional multiplicative programming problem:

    GLFMP: $\begin{align*}\left\{\begin{aligned}&\min \;\; f(x)=\sum_{i=1}^p\gamma_i\,\prod_{j=1}^{N_i}(f_{ij}(x))^{\sigma_{ij}}\\&\mbox{s.t. } x\in X=\{x\in\mathbb{R}^n|Ax\leq b,x\geq 0\},\end{aligned}\right.\end{align*}$

    where $f_{ij}(x)=\frac{n_{ij}^{\top}x+\alpha_{ij}}{d_{ij}^{\top}x+\beta_{ij}}$, $p$ and $N_{i}$ are natural numbers, $n_{ij}, d_{ij}\in \mathbb{R}^{n}$, $\alpha_{ij}, \beta_{ij}, \sigma_{ij}\in \mathbb{R}$, $A\in \mathbb{R}^{m\times n}$, and $b\in \mathbb{R}^{m}$. $X$ is a nonempty, bounded and closed set. Assume that $\min\{n_{ij}^{\top}x+\alpha_{ij}, d_{ij}^{\top}x+\beta_{ij}\}>0$ holds for any $x\in X$. Furthermore, when $\sigma_{ij}<0$, $\left(f_{ij}(x)\right)^{\sigma_{ij}}=\left(\frac{1}{f_{ij}(x)}\right)^{-\sigma_{ij}}$. Thus, for each $i\in I=\{1,...,p\}$ and $j\in J_{i}=\{1,...,N_{i}\}$, assuming $\sigma_{ij}>0$ incurs no loss of generality. If ``min" in GLFMP is replaced by ``max", then

    $\min_{x\in X}\sum_{i=1}^{p}\gamma_{i}\prod_{j=1}^{N_{i}}\left(f_{ij}(x)\right)^{\sigma_{ij}}=-\max_{x\in X}\sum_{i=1}^{p}(-\gamma_{i})\prod_{j=1}^{N_{i}} \left(f_{ij}(x)\right)^{\sigma_{ij}},$

    which indicates that the maximization version of GLFMP can be transformed back into the form of GLFMP.

     An increasing number of scholars and experts are developing a keen interest in GLFMP and its variants. One reason is that GLFMP and its variants have strong practical applicability in many areas. On the other hand, their complex structures make them non-convex and typically NP-hard problems with multiple local optima rather than a single global one, which significantly hinders the search for a global optimum. Nevertheless, researchers have risen to the challenge and proposed numerous optimization algorithms.

     The main purpose of this article is to design an output-space branch-and-bound global optimization algorithm for the general form of GLFMP. To this end, we first introduce some variables and transform GLFMP through two reformulations to obtain its equivalent problem (EGLFMP). Subsequently, we approximate the constraints to obtain the linear relaxation problem of EGLFMP. Furthermore, to remove as much as possible the rectangular regions that do not contain a global optimal solution, a region reduction technique is devised based on the structural characteristics of the equivalent and relaxation problems, expediting the convergence of the algorithm. Integrating the linear relaxation programming, the region reduction technique, and the branching strategy into the branch-and-bound framework yields an output-space branch-and-bound algorithm distinct from existing ones. The algorithm can solve not only GLFMP but also the special variants of GLFMP mentioned earlier. Finally, we analyze the convergence of the proposed algorithm and estimate the maximum number of iterations in the worst case. Extensive numerical results demonstrate the performance of the algorithm.


  • Longhui Liu, Congying Han, Tiande Guo
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-005
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: In this paper, we concentrate on a broad class of large-scale nonconvex and nonsmooth optimization problems. We first propose a novel inertial stochastic Bregman proximal alternating linearized minimization algorithm (TiSBPALM), which employs variance-reduced stochastic gradient estimators. Subsequently, under the assumption that the objective function satisfies the Kurdyka-Łojasiewicz property and certain conditions on the parameters are imposed, we prove that the sequence generated by our algorithm converges to a critical point in expectation. Additionally, we provide the convergence rate for the iteration sequence. Finally, we conduct numerical experiments on sparse nonnegative matrix factorization and blind image-deblurring to verify the effectiveness of our proposed algorithms.

  • Zihao Jia , Youlin Shang , Yan Feng , Yufei Ren
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-023
    Accepted: 2024-10-12

    (Communicated by Xinmin Yang)

    Abstract: A large number of problems in the social and natural sciences can be cast as global optimization problems, which arise widely in economic models, finance, network transportation, databases, integrated circuit design, image processing, chemical engineering design and control, molecular biology, environmental engineering, and so on. At present, methods for solving global optimization problems fall into two categories: deterministic algorithms and stochastic algorithms. Stochastic algorithms, such as genetic algorithms and simulated annealing, describe the iterative process by point sequences generated by a probabilistic mechanism. Deterministic algorithms exploit analytic properties of the problem to generate deterministic finite (or infinite) point sequences that converge to a global optimal solution; examples include the interval method, D.C. programming algorithms, the branch-and-bound method, auxiliary function methods and the integral level-set method. Among them, the filled function method is a deterministic method that performs a global search. Renpu Ge first proposed the filled function method [1]. The main idea is to use a filled function to escape the current local minimizer and find a better one. The method consists of two phases, minimization and filling. In the minimization phase, a local minimization algorithm is used to obtain a local minimizer of the original problem; in the filling phase, the filled function is minimized in order to leave this minimizer. The two phases alternate until a global minimizer is found.

         We discuss the following global optimization problem:

    $\min f(x), \quad \text{s.t.}\ x \in \mathbb{R}^n. \qquad\qquad\qquad\qquad\qquad\qquad\qquad (1)$

    Ge showed that if a function satisfies the filled function definition and the filling property, then it can be used to solve problem (1). The first-generation filled function given in [1] is as follows:

    $G(x,x_k^*,r,\rho)=\frac{1}{r+f(x)}\exp\left(-\frac{\|x-x_k^*\|^2}{\rho^2}\right),$                                                   (2)

    where $r$ and $\rho$ are given parameters. Selecting suitable parameters in formula (2) is quite difficult, which poses a challenge for devising new filled function methods.

    The shortcomings of the filled function (2) have been discussed in depth; they are mainly related to the two parameters $r$ and $\rho$. As Gao observed, Ge's filled function contains the exponential term $\exp(-\frac{\|x-x_k^*\|^2}{\rho^2})$, so its graph is almost flat whenever $\|x-x_k^*\|^2$ is large or $\rho$ is small. This makes it difficult for a computer to discern changes in the function values.
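    Gao's observation is easy to verify numerically. In the one-dimensional sketch below (the objective, the minimizer, and the parameter values are illustrative only), a small $\rho$ makes the exponential term underflow a short distance from $x_k^*$, so the computed values of $G$ are exactly zero and carry no usable gradient information:

```python
import math

def G(x, x_star, r, rho, f):
    """Ge's filled function (2), specialized to one dimension."""
    return math.exp(-((x - x_star) ** 2) / rho ** 2) / (r + f(x))

f = lambda x: x * x          # illustrative objective with minimizer x* = 0
x_star, r, rho = 0.0, 1.0, 0.1

# A short distance from x* the exponential underflows to zero, so G is
# numerically flat and a gradient-based search receives no information.
vals = [G(d, x_star, r, rho, f) for d in (3.0, 4.0, 5.0)]
print(vals)                  # all three values underflow to 0.0
```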

        In this paper, a new parameter-free filled function is constructed. It satisfies the definition of a filled function and has excellent properties. A corresponding algorithm is then designed based on the new filled function. Numerical experiments and comparisons on several test problems demonstrate the feasibility and effectiveness of the algorithm.
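    The two alternating phases of the filled function method can be sketched in one dimension using Ge's formula (2). Everything below is a hypothetical illustration, not the paper's algorithm: the double-well objective, the crude derivative-free descent, the starting shift, and the parameter choices $r=1$, $\rho=2$ are all chosen only so the example runs deterministically.

```python
import math

def f(x):                          # double-well toy objective; global minimizer near x ≈ -1.04
    return (x * x - 1.0) ** 2 + 0.3 * x

def descend(func, x, step=1e-3, iters=20000):
    """Crude derivative-free descent: step downhill until locally flat (illustration only)."""
    for _ in range(iters):
        fx = func(x)
        if func(x - step) < fx:
            x -= step
        elif func(x + step) < fx:
            x += step
        else:
            break
    return x

def G(x, x_star, r=1.0, rho=2.0):  # Ge's filled function (2), one-dimensional
    return math.exp(-((x - x_star) ** 2) / rho ** 2) / (r + f(x))

x1 = descend(f, 1.2)                          # minimization phase: local minimizer near x ≈ 0.96
x2 = descend(lambda x: G(x, x1), x1 - 0.1)    # filling phase: minimize G from a shifted point
x3 = descend(f, x2)                           # minimization phase again: a better minimizer
assert f(x3) < f(x1)
```

In this example the first phase stops at the shallow local minimizer on the right; minimizing $G$ carries the iterate across the barrier into the lower basin, and the second local minimization reaches the better minimizer on the left.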



  • Qiao Chen, Liping Tang, Ke Zhang, Xinmin Yang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-024
    Accepted: 2024-10-12

    (Communicated by Kok Lay Teo)

    Abstract: Global optimization problems are a common issue in numerous fields, such as molecular biology, economics and finance, environmental engineering, and industrial manufacturing. If the problem is a convex program, a local minimization algorithm can find its global optimal solution. Unfortunately, in most cases these problems are nonconvex optimization problems with multiple minimum points, which raises a scientific question: how to find a better minimizer starting from a known local minimizer of the objective function.

        Traditional nonlinear programming algorithms, such as quasi-Newton methods, cannot directly resolve this difficulty, so finding the global minimizer of a nonconvex objective function remains a great challenge. In recent years, many studies have focused on global optimization theory and algorithms. In general, global optimization methods fall into two categories: stochastic methods and deterministic methods. Stochastic methods, including evolutionary algorithms, simulated annealing, and particle swarm optimization, are inspired by physical principles or natural evolution and escape local minima through probability-based mechanisms. Although stochastic methods have been applied effectively to many intricate problems, issues such as convergence analysis and parameter selection remain unresolved. Deterministic methods, for example the filled function method, the tunnelling method, and covering methods, can converge faster and often find a better minimizer than the current one quickly. Compared with other deterministic methods, the filled function method is more popular because it is relatively easy to implement. Its basic idea is to construct an auxiliary function at the current local minimizer of the objective function; by minimizing the auxiliary function, the current basin is quickly escaped and a better local minimizer is reached. This process repeats until no better local minimizer can be found.

        The first filled function, introduced by Ge, is as follows:

    $P(x,x_{1}^*, r, \rho)=\frac{1}{r+f(x)}\exp\left(-\frac{\|x-x_{1}^{\ast}\|^{2}}{\rho^{2}}\right),$

    where the parameters $r$ and $\rho$ must be chosen appropriately. However, Ge's filled function has shortcomings. Because there are so many restrictions on $r$ and $\rho$, it is difficult to select them so that $P(x,x_{1}^*, r, \rho)$ satisfies the definition of a filled function. Constructing a filled function with good properties is one of the keys to the success of a filled function algorithm, and researchers have therefore done much beneficial work on enriching the forms of filled functions. These filled functions possess several merits, yet they also exhibit drawbacks that necessitate further refinement and improvement. The adjustment of parameters for some two-parameter or single-parameter filled functions remains difficult. Nonsmooth, nondifferentiable, or even discontinuous filled functions cannot be minimized by efficient local optimization procedures based on gradient information, which decreases the efficiency of the filled function algorithm. To address the difficulty of adjusting parameters, some scholars have proposed parameter-free filled functions. However, some parameter-free filled functions carry no information about the objective function when $f(x)\geq f(x_{1}^{*})$, which may make it difficult to find the global minimum point. In addition, without adjustable parameters, some functions proposed in the literature may fail to satisfy the definition of a filled function.

        Based on the above considerations, a new one-parameter filled function is proposed in this paper. The main contributions of this work are summarized below:

        (i) The new filled function is continuously differentiable, and its gradient information can be used in the local minimization procedure. This simplifies the minimization process and makes the algorithm more efficient and convenient.

        (ii) Our proposed filled function always contains information about the objective function.

        (iii) The new filled function has an easily adjustable parameter that can be tuned to the specific problem, enabling the algorithm to adapt to the problem and avoiding issues similar to those encountered in the literature.

        We then analyze the theoretical properties of the new filled function. These properties ensure the convergence of the minimization process applied to it, so the minimization iterations of the new filled function algorithm run successfully over the whole region. Without these properties, the filled function may, like some in the literature, have no local minimizer within the entire feasible region. Supported by this thorough theoretical analysis, we design a filled function method that is both easy to implement and efficient. A series of rigorous numerical experiments verifies the effectiveness and reliability of the new algorithm; in comparative tests with two other filled function methods, our algorithm demonstrates significant advantages.
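    The paper's new filled function is not reproduced in this abstract, but contribution (i) — exploiting gradient information in the local minimization procedure — can be illustrated on the smooth filled function of Ge above. The sketch below derives its one-dimensional analytic derivative and checks it against a central finite difference; the objective, evaluation point, and parameter values are illustrative assumptions only.

```python
import math

def f(x):  return (x * x - 1.0) ** 2          # smooth toy objective
def fp(x): return 4.0 * x * (x * x - 1.0)     # its derivative

def G(x, xs, r=1.0, rho=2.0):                 # Ge-style filled function, 1-D
    return math.exp(-((x - xs) ** 2) / rho ** 2) / (r + f(x))

def Gp(x, xs, r=1.0, rho=2.0):
    """Analytic derivative of G via product/chain rule:
    G'(x) = G(x) * ( -2(x - xs)/rho^2 - f'(x)/(r + f(x)) )."""
    return G(x, xs, r, rho) * (-2.0 * (x - xs) / rho ** 2 - fp(x) / (r + f(x)))

# Verify the analytic gradient against a central finite difference.
xs, x, h = 1.0, 0.4, 1e-6
num = (G(x + h, xs) - G(x - h, xs)) / (2.0 * h)
assert math.isclose(Gp(x, xs), num, rel_tol=1e-5)
```

Having such an analytic gradient is what allows efficient gradient-based local solvers to be used in the minimization phase, which nonsmooth or discontinuous filled functions rule out.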

  • Feifei Shen, Liu Yang, Jiaxin Yang, Xiaojiao Tong
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-016
    Accepted: 2024-09-04
    (Communicated by Guihua Lin)
    Abstract: With the development of financial investment risk management, utility-based shortfall risk measures have received increasing attention for their potential to quantify the risk of large tail losses more effectively than conditional value at risk. In portfolio selection, transaction costs incurred by changes in the investment proportions of risky assets have a significant impact on the investment strategy and the return. In this paper, we discuss a distributionally robust portfolio selection problem with shortfall risk measures and investigate the impact of transaction costs on portfolio selection. We first construct distributionally robust shortfall risk (DRSR) portfolio models without and with transaction costs. We then build moment ambiguity sets from empirical data for the unknown probability distributions of the random variables and discuss the convergence of the optimal values and solutions of the models as the sample size increases. Under the condition that the loss function is piecewise affine, we transform the models into equivalent semi-definite programming problems. Numerical tests illustrate the validity of our models. Our main contributions can be summarized as follows:
         (i) We construct the DRSR portfolio selection models without and with transaction costs based on the moment ambiguity set, and we analyze the convergence of the optimal values and solutions of the models, with moment ambiguity sets built from empirical data, as the sample size increases.
         (ii) We use a linearization method to convert the V-shaped transaction cost function into inequality constraints, which overcomes the nonsmoothness of the transaction cost function. We then transform the DRSR portfolio selection models into computable equivalent semi-definite programming (SDP) problems using moment theory and duality theory.


        (iii) Based on actual stock data, the influence of different parameters (such as the transaction cost rate, risk level, ambiguity set parameters, and sample sizes) on the optimal solutions and optimal values of the models is analyzed, verifying the validity and convergence of the new models.
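    The linearization in (ii) is the standard epigraph reformulation of an absolute value: for cost rate $c_i$ and current weight $x_i^0$, the cost $c_i|x_i - x_i^0|$ is replaced by an auxiliary variable $t_i$ constrained by $t_i \ge c_i(x_i - x_i^0)$ and $t_i \ge -c_i(x_i - x_i^0)$; since the objective pushes $t_i$ down, it equals the true cost at the optimum. A minimal sketch, with hypothetical portfolio weights and cost rates:

```python
# Linearization of a V-shaped transaction cost c_i * |x_i - x0_i|:
# introduce t_i with t_i >= c_i*(x_i - x0_i) and t_i >= -c_i*(x_i - x0_i).
def linearized_cost(x, x0, c):
    # Tightest feasible t_i for each asset: the max of the two linear pieces,
    # which is what a minimizing solver would return.
    return [max(ci * (xi - x0i), -ci * (xi - x0i))
            for xi, x0i, ci in zip(x, x0, c)]

x0 = [0.30, 0.50, 0.20]        # current portfolio weights (hypothetical)
x  = [0.25, 0.55, 0.20]        # new portfolio weights (hypothetical)
c  = [0.01, 0.01, 0.02]        # proportional cost rates (hypothetical)

t = linearized_cost(x, x0, c)
# Each t_i reproduces the nonsmooth cost c_i * |x_i - x0_i| exactly.
assert all(abs(ti - ci * abs(xi - x0i)) < 1e-12
           for ti, xi, x0i, ci in zip(t, x, x0, c))
```

Because both pieces are affine in $x_i$ and $t_i$, the reformulated constraints fit directly into the SDP models mentioned above.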