
Just accepted

Accepted, unedited articles published online and citable. The final edited and typeset version of record will appear in the future.
  • Elimhan N. Mahmudov, Shakir Sh. Yusubov, Rza N. Mahmudov
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-038
    Accepted: 2025-09-20

    (Communicated by Robert Csetnek)


          In this paper, a new approach to solving the considered optimization problem is proposed for both differential inclusions (DFIs) and inequality constraints. The problem is reduced to a problem with a single DFI defined by the intersection of two set-valued mappings. The optimality conditions for such a problem are expressed as the sum of two locally adjoint mappings (LAMs). In turn, this also requires calculating the argmaximum sets and subdifferentials of Hamiltonian functions. It is then natural that the LAM can be computed for the set-valued mapping generated by a system of inequality constraints. Finally, we consider applications to a linear optimal control problem with polyhedral constraints and a constraint given by a convex cone.

  • Mingyu Song, Yanjun Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-039
    Accepted: 2025-09-20

    (Communicated by Jie Sun)

          This study develops a data-driven distributionally robust two-stage stochastic programming model with second-order stochastic dominance (SSD) constraints. Uncertainty is modeled within a 1-Wasserstein ball centered at an empirical distribution. Two cases are analyzed based on how uncertainty affects the second stage. In the first case, where uncertainty impacts only the objective function, the problem can be reformulated as a convex program and further simplified into a tractable conic program. In the second case, where uncertainty influences constraints, the problem is generally NP-hard, as feasibility verification reduces to a norm maximization problem over a polytope. However, under specific conditions, it remains tractable. Numerical experiments on a supply chain problem compare the proposed method with Sample Average Approximation (SAA). Results show that for large sample sizes, the proposed approach exhibits greater robustness and stability against sample fluctuations.
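
    For reference, the 1-Wasserstein ambiguity set of radius $\varepsilon$ centered at the empirical distribution $\hat{P}_N$ is commonly written as follows; this is only the standard template, since the abstract does not specify the paper's exact norm or radius choice:

    $\mathcal{B}_{\varepsilon}(\hat{P}_N)=\{Q: W_1(Q,\hat{P}_N)\leq\varepsilon\},\qquad W_1(P,Q)=\inf_{\pi\in\Pi(P,Q)}\int \|\xi-\xi'\|\,\mathrm{d}\pi(\xi,\xi'),$

    where $\Pi(P,Q)$ denotes the set of joint distributions with marginals $P$ and $Q$.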

  • Daniel Mimouni, Welington de Oliveira, Gregorio M. Sempere
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-040
    Accepted: 2025-09-20

    (Communicated by Michel Théra)

    (Dedicated to Professor R. Tyrrell Rockafellar on the occasion of his 90th birthday)


        This work presents two optimization methods to compute, subject to constraints, a Wasserstein barycenter (WB) of finitely many empirical probability measures.

   The new measure, referred to as the constrained Wasserstein barycenter, extends the applicability of the standard WB to settings with prescribed geometric or statistical constraints. Our first approach is an extension of the Method of Averaged Marginals (Mimouni et al., 2024) to compute WBs subject to convex constraints. In the nonconvex setting, we propose an optimization model whose necessary optimality conditions are written as a linkage problem with non-elicitable monotonicity. To solve such a linkage problem, we combine the Progressive Decoupling Algorithm (Rockafellar, 2019) with Difference-of-Convex programming techniques. We establish the mathematical properties of our approaches and evaluate their numerical performance in two applications, demonstrating both their computational efficiency and the practical relevance of constrained Wasserstein barycenters.

  • Anwa Zhou, Yangyang Ni, Jinyan Fan
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-041
    Accepted: 2025-09-20

    (Communicated by Chen Ling)

        This paper studies the real multivariate eigenvalue problem for nonsymmetric tensor tuples. We formulate the tensor real multivariate eigenvalue problem equivalently as a polynomial optimization problem. We generate a random objective function that achieves different values at different multivariate eigenvalues. Each of them can be computed by Lasserre's hierarchy of semidefinite relaxations if there are finitely many of them. Under suitable conditions, we prove that the semidefinite relaxation has finite convergence for nonsymmetric tensor tuples. Numerical experiments are presented to show the efficiency of the proposed method.

  • Fuying Jing, Xiangrui Chao
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-042
    Accepted: 2025-09-20

    (Communicated by Jein-Shan Chen)

        This paper investigates the minimal planning horizon required to determine optimal production schedules in dynamic lot-sizing models under batch changeover constraints. The classical dynamic lot-sizing problem assumes the ability to make setup decisions each period independently. However, in many industrial environments such as semiconductor manufacturing, pharmaceutical production, or automotive component assembly, batch-based changeovers are required. This introduces nontrivial temporal coupling between decision periods, rendering classical planning horizon results inapplicable.


        We study a generalized dynamic lot-sizing model where products are subject to joint setup constraints over contiguous planning periods---so-called \emph{batch changeovers}. Our main contribution is the derivation of sufficient conditions under which finite minimal planning horizons exist, allowing for optimal production decisions to be made without requiring infinite forward-looking forecasts.


        Let $T$ denote the length of the planning horizon, and suppose demands $\{d_t\}_{t=1}^T$ are deterministic. Each product incurs a per-unit production cost $c_t$, a holding cost $h_t$, and a fixed batch setup cost $S$. The novelty in our model lies in the constraint that setup decisions apply to batches of periods, i.e., production is constrained to occur only if a batch-wide setup is activated.
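
    For concreteness, a schematic single-product version of this cost structure, under additional assumptions not stated in the abstract (inventory balance $I_t=I_{t-1}+x_t-d_t$ with $I_0=0$, a binary variable $y_b$ activating the setup of batch $b$, and a big-$M$ linking constraint), would read

    $\min\ \sum_{t=1}^{T}\big(c_t x_t + h_t I_t\big) + S\sum_{b} y_b \quad \text{s.t.}\quad I_t = I_{t-1}+x_t-d_t,\quad x_t \le M\, y_{b(t)},\quad x_t, I_t \ge 0,\quad y_b\in\{0,1\},$

    where $b(t)$ denotes the batch containing period $t$; the paper's precise batch-coupling constraints are not reproduced here.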


        We show that under reasonable assumptions (bounded holding and production costs, stable demand), the optimal policy structure retains certain properties akin to the classical Wagner-Whitin model, such as the existence of \emph{non-speculative production}. However, the batch setup constraint introduces a new layer of complexity: the optimal solution may depend on multi-period patterns that require careful treatment to determine horizon sufficiency.


        To formalize this, we derive a minimal planning horizon bound $T^*$, such that for any $T \geq T^*$, the truncated problem yields the same initial-period decision as the infinite-horizon problem. This is achieved by constructing a cost-to-go function and showing it satisfies monotonicity and submodularity properties under batch constraints.


        Numerical experiments on benchmark instances demonstrate that the proposed minimal planning horizon bound is tight in practice and significantly reduces computational burden without compromising optimality. Moreover, we propose a dynamic programming algorithm tailored to exploit the batch structure, improving runtime efficiency by an order of magnitude compared to standard MILP solvers.


        Our results contribute to the theoretical understanding of planning horizons in complex lot-sizing settings and provide practical tools for industries facing setup-intensive operations with limited forecast visibility.

  • Huizhen Zhang, Qin Huang, David Rios Insua
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-029
    Accepted: 2025-06-05

    (Communicated by Jie Sun)

          This paper presents a novel multi-objective location-routing problem with time windows for the effective management of multimodal transportation networks at two levels: the upper level addresses multimodal transportation issues, whereas the lower one focuses on the location-routing aspects. Our model includes two objectives, maximizing customer satisfaction and minimizing total costs, the latter aggregating transportation, transit, and location costs. To solve the problem efficiently, we introduce a new approach based on a squirrel search algorithm with three crossover and four mutation operators designed to handle Pareto optimality in multi-objective optimization. In addition, we present a greedy clustering algorithm combined with the Metropolis criterion to generate high-quality initial solutions and accelerate convergence. We evaluate the performance of the proposed approach on instances of various scales in multimodal transportation networks. Our experimental results suggest the efficiency and scalability of our approach.

  • Ziye Zhang, Ke Su
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-030
    Accepted: 2025-06-05

    (Communicated by Guanglu Zhou)

         In this paper, we focus on mixed-constrained semi-infinite programming and present an adaptive augmented Lagrangian filter method. Continuous infinite inequality constraints are smoothed and transformed into equivalent finite constraints in integral form, which are then combined with the objective function via adaptive parameters. Compared with existing methods, the new approach is a penalty-function-free method that employs a filter instead of forcing sequences to drive the optimality measure to zero. Our algorithm also includes a feasibility recovery phase to quickly detect infeasible problems. Global convergence is proved under suitable conditions. Numerical results demonstrate that the proposed method is effective.

  • Huan Gao, Jianyu Xiao, Zhibao Li, Haibin Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-031
    Accepted: 2025-06-05

    (Communicated by Jie Sun)

           In this paper, we consider the following 3-block linearly constrained difference-of-convex (DC) optimization problem:

    $\min_{x,y,z}\ \{ \ f(x) + g (y) + h(z) \mid A x +By+Cz =b \ \},$                                                       (1)

    where

    $f(x) = f_1(x)-f_2(x)~ ~\text{and}~~  g(y) = g_1(y)-g_2(y),$

    $h:\mathbb{R}^{n_3}\rightarrow \mathbb{R}$ is a continuously differentiable convex function with Lipschitz continuous gradient, $A\in \mathbb{R}^{m\times n_1}$, $B\in \mathbb{R}^{m\times n_2}$ and $C\in\mathbb{R}^{m\times n_3}$ are given matrices, $b\in \mathbb{R}^{m}$ is a given vector, $f_1:\mathbb{R}^{n_1}\rightarrow \mathbb{R}\cup\{+\infty\}$ and $g_1:\mathbb{R}^{n_2}\rightarrow \mathbb{R}\cup\{+\infty\}$ are proper closed convex functions, and $f_2:\mathbb{R}^{n_1}\rightarrow \mathbb{R}\cup\{+\infty\}$ and $g_2:\mathbb{R}^{n_2}\rightarrow \mathbb{R}\cup\{+\infty\}$ are continuous convex functions. We propose a majorized Bregman ADMM to solve the DC problem (1). Compared with the classical Bregman ADMM, the majorized Bregman ADMM only requires solving convex subproblems at each iteration, rather than DC subproblems. We prove that the sequence generated by the proposed method converges to a critical point of the augmented Lagrangian function, under the assumption that the potential function satisfies the Kurdyka-Łojasiewicz (KŁ) property. Preliminary numerical experiments are conducted to support our theoretical analysis.
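
    For orientation, the augmented Lagrangian associated with (1), on which the majorized Bregman ADMM operates, takes the standard form (with multiplier $\lambda\in\mathbb{R}^{m}$ and penalty parameter $\rho>0$):

    $L_{\rho}(x,y,z;\lambda)= f(x)+g(y)+h(z)+\langle\lambda,\ Ax+By+Cz-b\rangle+\frac{\rho}{2}\|Ax+By+Cz-b\|^2.$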


  • Wenfeng Zhu, Zhou Sheng, Yuehuan Zhu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-032
    Accepted: 2025-06-05

    (Communicated by Lingchen Kong)

          We study a complex Newton-based method with a correction step for homogeneous polynomial optimization on the complex unit sphere. In particular, its constrained stationary point meets the definition of a US-eigenpair of the corresponding symmetric complex tensor.

          We analyze the local quadratic convergence rate of the complex Newton-based method with a correction step, provided that the initial point is sufficiently close. As two concrete applications, we apply it to computing the US-eigenpairs of symmetric complex tensors and to computing the geometric measure of entanglement of quantum multipartite pure states. We provide several numerical examples of our complex Newton-based method with a correction step, and observe that it is fast and effective on these two tasks.

  • Danping Yang, Biao Qu and Jiayi Song
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-033
    Accepted: 2025-06-05

    (Communicated by Fanwen Meng)

        The split feasibility problem (SFP) has many important and wide-ranging applications, such as radiotherapy, image reconstruction, and signal processing. Let $C$ and $Q$ be nonempty closed convex sets in $\Re^{n}$ and $\Re^{m}$, respectively, and let $A$ be an $m \times n$ real matrix; the problem is to find

    $x\in C, ~\mathrm{s.t.}~Ax\in Q,$                                                                             (1)

    if such $x$ exists.

        Recent research on the SFP has been deeply integrated with inertial acceleration methods, and some scholars have discovered that these techniques can speed up first-order optimization algorithms. However, most of them limit the range of inertial factors and cause the sequence $\|x^{k}-z\|\ (z\in S)$ to no longer be monotonically non-increasing. The alternating inertial method, expressed as

    $w^{k}=\begin{cases}x^{k},&\text{if ~}k\text{~is even},\\ x^{k}+\theta _{k}( x^{k}-x^{k-1}) ,&{\text{if ~}k\text{~is odd}},\end{cases}$                                                      (2)

    is an appropriate improvement strategy, which retains the acceleration effect of the inertial method while alleviating the degree of oscillation.
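
    As an illustration only, a minimal sketch of one classical CQ iteration combined with the alternating inertial step (2) is given below (Python); the projections proj_C and proj_Q, the step size gamma, and the inertial factor theta are placeholders, and the paper's KM-CQ-like algorithms differ in their relaxation and double-projection details.

    import numpy as np

    def alternating_inertial_cq(x0, A, proj_C, proj_Q, gamma, theta, iters=100):
        """Illustrative CQ iteration with alternating inertia (not the paper's exact algorithms)."""
        x_prev, x = x0.copy(), x0.copy()
        for k in range(iters):
            # alternating inertial step (2): extrapolate only on odd iterations
            w = x + theta * (x - x_prev) if k % 2 == 1 else x
            # classical CQ step: gradient of 0.5*||Aw - P_Q(Aw)||^2, then projection onto C
            Aw = A @ w
            grad = A.T @ (Aw - proj_Q(Aw))
            x_prev, x = x, proj_C(w - gamma * grad)
        return x

    # usage sketch with placeholder sets: C = nonnegative orthant, Q = box [0, 1]^m
    # A = np.random.randn(20, 50)
    # x = alternating_inertial_cq(np.zeros(50), A,
    #                             proj_C=lambda v: np.maximum(v, 0.0),
    #                             proj_Q=lambda v: np.clip(v, 0.0, 1.0),
    #                             gamma=1.0 / np.linalg.norm(A, 2) ** 2, theta=0.5)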

        In this paper, we propose three KM-CQ-like alternating inertial algorithms that all expand the range of inertial factors. The first version uses projections onto general closed convex sets, giving a preliminary exploration of the acceleration effect of the alternating inertial technique. The second version simplifies the computation by relaxing the projections to half-spaces. The last version further improves the previous methods by incorporating a double projection technique.

        Our specific contributions in this paper are summarized as follows.

        (i) The proposed three KM-CQ-like algorithms incorporate alternating inertial steps, which improves the convergence of their counterparts without inertial steps, enlarges the admissible range of inertial factors, and simplifies the form of the limiting condition.

        (ii) The second algorithm also adds a relaxation step that accelerates the convergence of the first algorithm and simplifies the calculation of the projections. As can be seen in Example 4.2, it is indeed faster.

        (iii) The last algorithm uses double projections generated by the half-spaces in the previous iteration and the current iteration. As expected, it converges faster than the second algorithm in Example 4.2.

        (iv) The convergence of the iterative sequences generated by the proposed algorithms is established, together with the monotonicity of the sequence $\|{x}^{2k}-z\|\ (z\in S)$, which alleviates the oscillation of the iterative points.

        (v) The performance and advantages of the proposed algorithms are confirmed by two applications, a simple SFP and a signal processing problem. Moreover, the advantages of our algorithms become more pronounced as the dimension increases.

        Under simple parameter settings, these algorithms restore the monotonicity of $\|x^{2k}-z\|$ and achieve the convergence of the algorithm. Experiments demonstrate the feasibility of each algorithm in practical applications and the effectiveness of the acceleration procedure.

  • Changzhi Wu, Wah June Leong, Hong Seng Sim, Jinlong Yuan
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-034
    Accepted: 2025-06-05

    (Communicated by Kok Lay Teo)

        In this study, we examine the linear quadratic (LQ) optimal control problem for large-scale interconnected systems, with a focus on designing sparse static state-feedback controllers. Traditional LQ control methods often result in dense feedback matrices that require global information sharing among all subsystems. While such dense designs may be optimal in terms of performance, they are often impractical in distributed or resource-constrained environments due to excessive communication and implementation costs. Consequently, this paper seeks to develop a framework that simultaneously promotes sparsity in the controller while maintaining an acceptable level of system performance degradation.

        To achieve this, we formulate a constrained optimization problem where the primary objective is to minimize the number of nonzero entries in the feedback matrix $K$, expressed via the nonconvex and discontinuous $\ell_0-$norm. This sparsity-promoting goal is constrained by two critical system-level considerations: (i) the cost associated with the closed-loop system's response to disturbances must not exceed a predefined threshold above the optimal LQ cost, and (ii) the stability of the closed-loop system must be preserved via a Lyapunov-type equation. The cost of the closed-loop system is defined as the trace of a controllability Gramian $P$, which satisfies the Lyapunov-type equation involving the feedback matrix $K$. The constraint on system performance introduces a tradeoff parameter $\gamma > 0$, which allows the system designer to control the degree of allowable performance loss relative to the optimal centralized LQ controller. This parameter directly impacts the achievable sparsity of the feedback matrix: larger values of $\gamma$ encourage sparser solutions at the expense of greater performance loss, and vice versa.

        Given the nonconvexity introduced by the $\ell_0-$norm and the complexity of the Lyapunov-based equality constraint, solving this problem directly is computationally challenging. To address this, we adopt an augmented Lagrangian framework that transforms the constrained optimization into an unconstrained minimization problem. The resulting augmented Lagrangian function comprises three components, namely the $\ell_0-$norm of the feedback matrix, an indicator function representing the constraint on the cost function, and a matrix quadratic equation associated with the Lyapunov-type equation, involving a matrix of Lagrange multipliers. This formulation separates the objective into two nonsmooth components, corresponding to the sparsity and cost constraints, coupled through a smooth term arising from the system dynamics and Lyapunov relation. To solve the augmented problem, we propose a Proximal Linearized Minimization (PLM) method tailored to the two-block structure of the augmented objective function. Each iteration alternates between updating $K$ and $P$ using a proximal-gradient step designed for nonsmooth and nonconvex functions. The convergence of our method is analyzed using the Kurdyka-Łojasiewicz (KŁ) inequality, a powerful framework for analyzing nonconvex and nonsmooth optimization algorithms. Under mild regularity conditions, we prove that the PLM algorithm converges to a critical point of the augmented objective function. This analysis guarantees stability and consistency of the proposed framework, making it suitable for control system applications where reliable performance is essential.
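
    As a reference point for the $K$-update in such a scheme, the proximal step of the $\ell_0-$norm is the classical hard-thresholding operator; written entrywise for a matrix $V$ and parameter $\lambda>0$ (a standard fact, independent of the paper's specific PLM parameters),

    $\left[\operatorname{prox}_{\lambda\|\cdot\|_0}(V)\right]_{ij}=\begin{cases} V_{ij}, & |V_{ij}|>\sqrt{2\lambda},\\ 0, & |V_{ij}|<\sqrt{2\lambda},\end{cases}$

    with either value admissible when $|V_{ij}|=\sqrt{2\lambda}$.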

        We demonstrate the effectiveness of our approach through a series of numerical experiments on interconnected systems. The results show that our method can identify sparse feedback matrices that achieve comparable closed-loop performance to traditional optimal control designs, but with significantly reduced controller complexity and communication requirements.

  • Huimin Li, Yuya Yamakawa, Ellen H. Fukuda, Nobuo Yamashita
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-035
    Accepted: 2025-06-05

    (Communicated by Liqun Qi)

    Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday

        Most numerical methods developed for solving nonlinear programming problems are designed to find points that satisfy certain optimality conditions. While the Karush-Kuhn-Tucker conditions are well known, they become invalid when constraint qualifications (CQs) are not met. Recent advances in sequential optimality conditions address this limitation in both the first- and second-order cases, providing genuine optimality guarantees at local optima even when CQs do not hold. However, in the recent literature, some second-order sequential optimality conditions still require restrictive conditions on the constraints.

        In this paper, we propose a new strong second-order sequential optimality condition without CQs. We also show that a penalty-type method and an augmented Lagrangian method generate points satisfying these new optimality conditions.

  • Yixin Chen, Xi Zhu, Changjun Yu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-036
    Accepted: 2025-06-05

    (Communicated by Jie Sun)

        This paper investigates a class of optimal control problems characterized by piecewise constant time delays, which frequently arise in engineering, biological, and networked systems. These problems pose significant challenges due to the discontinuous nature of the delays and the resulting complexity in computing gradients required for optimization. Within the control parameterization framework, the original infinite-dimensional problem is transformed into a finite-dimensional nonlinear programming problem, enabling the application of gradient-based optimization techniques. However, the traditional variational method for computing gradients of cost and constraint functions becomes increasingly inefficient as the control discretization is refined. Although the co-state method is well known for its computational efficiency in delay-free optimal control problems, its application to systems with piecewise constant time delays has been hindered by the absence of an explicit co-state system. In this work, we rigorously derive the co-state system corresponding to such problems, facilitating efficient and accurate gradient computation. Building on this derivation, we propose a computational framework that significantly accelerates the solution process without compromising accuracy. Numerical experiments demonstrate that the co-state method offers substantial improvements in computational efficiency over the variational approach, particularly in cases involving fine control discretization.

  • Peiran Yu, Liaoyuan Zeng, Ting Kei Pong
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-037
    Accepted: 2025-06-05

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)

        We incorporate an iteratively reweighted strategy in the manifold proximal point algorithm (ManPPA) in [1] to solve an enhanced sparsity inducing model for identifying sparse yet nonzero vectors in a given subspace. We establish the global convergence of the whole sequence generated by our algorithm by assuming the Kurdyka-Łojasiewicz (KŁ) properties of suitable potential functions. We also study how the KŁ exponents of the different potential functions are related. More importantly, when our enhanced model and algorithm reduce, respectively, to the model and ManPPA with constant stepsize considered in [1], we show that the sequence generated converges linearly as long as the optimal value of the model is positive, and converges finitely when the limit of the sequence lies in a set of weak sharp minima. Our results improve [2,Theorem 2.4], which asserts local quadratic convergence in the presence of weak sharp minima when the constant stepsize is small.

  • Li-Zhi Liao
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-014
    Accepted: 2025-04-21
    (Communicated by Liqun Qi)
    This paper is dedicated to Professor Terry Rockafellar in celebration of his 90th birthday.

            In this paper, an optimal control approach is proposed to compute Newton's direction in $O(n)$ flops for certain large scale optimization problems, where $n$ is the number of variables. Consequently, these optimization problems can be solved in $O(n)$ flops by two optimal control algorithms (including Newton's method) which have local quadratic convergence. The essence of our methodology is to convert the unconstrained optimization problem into a discrete-time optimal control problem. Our discussion begins with the introduction of a new framework to describe and identify the structure of a general function with a large number of variables. Based on the new framework, a sufficient condition is established so that for any large scale unconstrained optimization problem satisfying this sufficient condition, its Newton's direction can be computed in $O(n)$ flops. Our application of this sufficient condition on a collection of large scale unconstrained optimization problems in the literature indicates that the majority of these problems are amenable to this approach.

  • Xiaoxue Jiang, Jian Chen, Jinjie Liu, Xinmin Yang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-015
    Accepted: 2025-04-21

    (Communicated by Lingchen Kong)

    This paper is dedicated to Professor Terry Rockafellar in celebration of his 90th birthday

         Multiobjective optimization is an important field of research closely related to applications, including statistics, engineering, management science, finance, and others. In this paper, we consider the following multiobjective optimization problem:

    $\min_{x\in\mathbb{R}^n} F(x)=\big(F_1(x),F_2(x),\dots,F_m(x)\big)^\text{T},$                                                  (1)

    where $F_{j}: \mathbb{R}^{n} \rightarrow \mathbb{R}, j=1,...,m$ are continuously differentiable functions.
         Solution strategies play a pivotal role in the application of multiobjective optimization problem (1). Over the past two decades, multiobjective gradient descent methods have garnered increasing attention and are generally categorized into two classes. One class is the line search type algorithm, which first determines a descent direction and then finds a step size along that direction. In multiobjective line search type Newton algorithms, if the objective functions are nonconvex, the direction obtained by solving the Newton subproblem is not necessarily a descent direction. To address this issue, researchers have begun to study multiobjective trust region algorithms that simultaneously determine both the direction and the step size.
        Multiobjective trust region algorithms typically consist of two crucial components: one involves constructing appropriate model functions to design subproblems, while the other focuses on designing the trust region ratio as the criterion for accepting the trial step and updating the trust region radius. For the first component, existing multiobjective Newton trust region algorithms can address the issue where the direction obtained by solving the Newton subproblem is not necessarily a descent direction due to the non-convexity of the objective functions. However, the solution of their subproblems is computationally expensive. Although the multiobjective quasi-Newton trust region algorithm improves the computational efficiency of the multiobjective Newton trust region algorithm, it requires updating a symmetric positive definite matrix at each iteration, which also leads to high computational costs for solving the subproblem when the variable dimension is large. If a common symmetric positive definite matrix is used to approximate the Hessian matrices of all objective functions for designing the model function, there is an additional issue of insufficient approximation accuracy. Therefore, is it possible to use a new method to effectively approximate the Hessian matrix of each objective function, one that does not involve a complicated matrix updating scheme, so as to improve the solving efficiency of the multiobjective (quasi-)Newton trust region algorithm, reduce the computational cost, and balance the approximation accuracy?
        In addition to model and subproblem construction, the method for updating the trust region radius is also crucial in multiobjective trust region algorithms. In existing multiobjective trust region algorithms, the radius of the trust region is determined by the trust region ratio $\rho_{j}^{k}$, which represents the ratio of the actual descent in the objective function to the predicted descent. This ratio serves two purposes: first, to determine whether to accept the trial step; and second, to update the trust region radius for the next iteration, that is, the $k$-step model function is used to update the trust region radius $\Delta^{k+1}$. In single-objective optimization, some scholars use the information obtained during the optimization process to adjust the accuracy of the objective function calculation and propose a new strategy for updating the trust region radius, which can effectively improve the efficiency of the algorithm; that is, the $(k+1)$-step model function is used to update the trust region radius $\Delta^{k+1}$. We extend this idea to design a multiobjective trust region algorithm.

    Based on the above considerations, a multiobjective Barzilai-Borwein retrospective trust region method is proposed in this paper. The main contributions of this work are summarized below:

        (i) From the perspective of approximation accuracy, by introducing the Barzilai-Borwein criterion and utilizing the approximate curvature information of the objective functions to construct the model function and the trust region subproblem, the need for expensive second-order derivative computations is reduced while maintaining a balance in approximation accuracy (see the sketch after this list).

        (ii) By using the general trust region ratio as the sole criterion for accepting trial steps, the computation of the multiobjective retrospective trust region ratio is achieved, thereby facilitating the update method for adjusting the trust region radius.

        (iii) A merit function under the trust region constraint is defined, and the convergence rate of the algorithm is established.
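
    For reference, the Barzilai-Borwein criterion mentioned in (i) replaces each Hessian by a scalar multiple of the identity, $B_k^j=(\alpha_k^j)^{-1}I$, using the classical step sizes below; how the paper combines them across objectives is not detailed in this abstract.

    $\alpha_k^{\mathrm{BB1}}=\frac{s_{k-1}^{\mathsf T}s_{k-1}}{s_{k-1}^{\mathsf T}y_{k-1}},\qquad \alpha_k^{\mathrm{BB2}}=\frac{s_{k-1}^{\mathsf T}y_{k-1}}{y_{k-1}^{\mathsf T}y_{k-1}},\qquad s_{k-1}=x_k-x_{k-1},\quad y_{k-1}=\nabla F_j(x_k)-\nabla F_j(x_{k-1}).$
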
        Next, we conduct a theoretical analysis of the proposed multiobjective Barzilai-Borwein retrospective trust region algorithm. Under some reasonable assumptions, we establish the convergence of the algorithm. Furthermore, by utilizing the merit function under the trust region constraint, we obtain the convergence rate of the algorithm. Finally, numerical experimental results demonstrate the feasibility and effectiveness of the proposed algorithm. It is worth mentioning that, in comparison with algorithms without the Barzilai-Borwein criterion or without the retrospective strategy, the multiobjective Barzilai-Borwein retrospective trust region algorithm exhibits more significant advantages, which highlights its superiority.
  • Changjun Yu, Peiyu Wang, Di Wu and Yanqin Bai
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-016
    Accepted: 2025-04-21

    (Communicated by Aifan Ling)

        This paper introduces a new numerical method for solving nonlinear optimal control problems. In each iteration, the method linearizes the system and constraints based on the results from the previous iteration. Simultaneously, the objective function is approximated by its second-order Taylor expansion with respect to the control and state vectors, resulting in a linear-quadratic optimal control sub-problem. In solving the sub-problem, both the state and control vectors are approximated using Chebyshev series, where the coefficients of the Chebyshev polynomials are to be optimized. Furthermore, we approximate the known coefficient functions of the dynamic system and constraints using Chebyshev series. By the properties of Chebyshev polynomials, the sub-problem is finally transformed into a quadratic programming problem. We solve the dual problem of this optimization problem since it admits an exact solution. The effectiveness of the proposed method is demonstrated by examining three distinct types of optimal control problems. The numerical results conclusively illustrate that our approach outperforms the conventional iterative Chebyshev approximation method in terms of computation efficiency.
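
    As a small illustration of the Chebyshev-series representation underlying this scheme (not the paper's solver), one can approximate a trajectory on $[-1,1]$ by a finite Chebyshev expansion and work with its coefficients; the trajectory below is a placeholder.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    t = np.cos(np.pi * np.arange(33) / 32)      # Chebyshev points on [-1, 1]
    u = np.exp(-t) * np.sin(3 * t)              # placeholder control profile

    coeffs = C.chebfit(t, u, deg=10)            # coefficients play the role of decision variables
    u_hat = C.chebval(t, coeffs)                # evaluate the truncated Chebyshev series
    print("max abs error:", np.max(np.abs(u - u_hat)))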

  • Anas Mifrani, Dominikus Noll
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-028
    Accepted: 2025-04-21

    (Communicated by Maria Josefa Canovas)

         We propose a vector linear programming formulation for a non-stationary, finite-horizon Markov decision process with vector-valued rewards. Pareto efficient policies are shown to correspond to efficient solutions of the linear program, and vector linear programming theory allows us to fully characterize deterministic efficient policies. An algorithm for enumerating all efficient deterministic policies is presented and then tested numerically in an engineering application.

  • Jiaxiang Li, Xuxing Chen, Shiqian Ma, Mingyi Hong
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-009
    Accepted: 2025-02-18

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)

    Abstract: Existing decentralized algorithms usually require knowledge of problem parameters for updating local iterates. For example, the hyperparameters (such as learning rate) usually require the knowledge of Lipschitz constant of the global gradient or topological information of the communication networks, which are usually not accessible in practice. In this paper, we propose D-NASA, the first algorithm for decentralized nonconvex stochastic optimization that requires no prior knowledge of any problem parameters. We show that D-NASA has the optimal rate of convergence for nonconvex objectives under very mild conditions and enjoys the linear-speedup effect, i.e. the computation becomes faster as the number of nodes in the system increases. Extensive numerical experiments are conducted to support our findings.

  • Shaoze Li, Junhao Wu, Cheng Lu, Zhibin Deng, Shu-Cherng Fang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-010
    Accepted: 2025-02-18

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)

        Convex separable quadratic optimization problems arise in numerous practical applications. These problems take the form:

    $\begin{align*}\min _{\boldsymbol{y} \in \mathbb{R}^n} f(\boldsymbol{y}) & =\boldsymbol{y}^T \Delta \boldsymbol{y}+\boldsymbol{\alpha}^T \boldsymbol{y} \\ \text { s.t. } g_{i}(\boldsymbol{y})& = \boldsymbol{y}^T \Theta_i \boldsymbol{y} +\boldsymbol{\beta}_i ^T \boldsymbol{y} +\sigma_i \leq 0,~ i=1,\dots,m, \label{SQPQC}\tag{SQPQC}\\\boldsymbol{y} & \in[\boldsymbol{l}, \boldsymbol{u}],\end{align*}$

    where $\Delta$ and $\Theta_i$ are $n \times n$ diagonal matrices.

        Building on an iterative resolution scheme for the KKT system, we develop an efficient method for solving problem (SQPQC). We begin by applying the dual method to problem (SQPQC) with a single quadratic constraint, exploiting some distinct properties of this subclass, such as the differentiability of the dual function and the uniqueness of the optimal solution of the dual problem. Leveraging these properties, we then develop a highly efficient algorithm for the general case of multiple quadratic constraints, and we show that the proposed approach leads to a dual coordinate ascent algorithm. Finally, we prove the convergence of the proposed algorithm and demonstrate, through extensive numerical experiments, that it outperforms the Gurobi solver, particularly for large-scale convex separable quadratic programming problems.



  • Meijia Yang, Hanxiao Zhang, Zhuoyi Xu, Yan Xu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-011
    Accepted: 2025-02-18

    (Communicated by Xinwei Liu)

         In this paper, we study the trust region subproblem with a linear cut over the complex domain, denoted ${\rm (TL)}$:

    $\begin{eqnarray*}{\rm \left(TL\right)}~&\min_{w\in\Bbb C^n}&F(w)=\frac{1}{2}w^HAw+\mathcal{R}(b^Hw)\\~~~~~~~~~~~&{\rm s.t.}& w^Hw-1\leq 0,\\~~~~~~~~~~~&&\mathcal{R}(c^Hw)-d\leq 0,\end{eqnarray*}$

    where $A\in\Bbb C^{n\times n}$ is a Hermitian matrix, whose eigenvalues are real numbers. We assume that $A\nsucceq 0$ and denote its eigenvalues by $\lambda_1\leq \lambda_2\leq...\leq\lambda_n$, where at least one of them is negative, i.e., $\lambda_1<0$.

         We also assume that the Slater condition holds, i.e., there exists $\hat{w}$ such that $\hat{w}^H\hat{w}<1$ and $\mathcal{R}(c^H\hat{w})<d$.

        We first reveal the hidden convexity property of {\rm (TL)} by proving that it is equivalent to a convex programming problem:

    $\begin{eqnarray*}{\rm \left(C\right)}~&\min_{w\in\Bbb C^{n}}& P(w)=\frac{1}{2}w^H(A-\lambda_1I)w+\mathcal{R}(b^Hw)+\frac{1}{2}\lambda_1\\~~~~~~~~~~~&{\rm s.t.}& w^Hw-1\leq 0,\\~~~~~~~~~~~&&\mathcal{R}(c^Hw)-d\leq 0.\end{eqnarray*}$

    As an approximation of ${\rm (C)}$, we turn to solve the following convex programming problem

    $\begin{eqnarray*}&{\rm \left(AC\right)}&\min_{w\in\Bbb C^{n}} \frac{1}{2}w^H(A-\tilde{\lambda}_1 I)w+\mathcal{R}(b^Hw)+\frac{1}{2}\tilde{\lambda}_1\\~~~~~~~~~~~&{\rm s.t.}& w\in W:=\left\{w^Hw\leq 1,~\mathcal{R}(c^Hw)\leq d\right\},\end{eqnarray*}$

    which can be further equivalently reformulated as the problem ${\rm (AC^r)}$:

    $\begin{eqnarray*}&{\rm \left(AC^r\right)}&\min_{z\in\Bbb R^{2n}} g(z):=\frac{1}{2}z^T(Q-\tilde{\lambda}_1I)z+q^Tz+\frac{1}{2}\tilde{\lambda}_1\\~~~~~~~~~~~&{\rm s.t.}& z\in S:=\left\{z^Tz-1\leq 0,~h^Tz-d\leq 0\right\}.\end{eqnarray*}$

        Then we present a linear-time (in the number of nonzero entries) approximation scheme for solving ${\rm (TL)}$ based on ${\rm (AC^r)}$. It mainly employs an eigenvalue approximation technique and Nesterov's accelerated gradient descent algorithm.

    Algorithm 1

    (a) Choose $\theta_0=\theta_{-1}\in(0,1]$, $z_0=z_{-1}\in S$. Let $k:=0$.

    (b) Let 

    $y_k=z_k+\theta_k(\theta_{k-1}^{-1}-1)(z_k-z_{k-1})$
    and $\nabla g_k=\nabla g(y_k)$. Update
    $z_{k+1}=\arg \min_{z\in S}~G(z)=z^T\nabla g_k+\rho\|z-y_k\|^2.$                                                       (1)
    (c) Choose $\theta_{k+1}\in(0,1]$ satisfying
    $\frac{1-\theta_{k+1}}{\theta_{k+1}^2}\leq \frac{1}{\theta_k^2}.$
    If the stopping criterion does not hold, update $k:=k+1$ and go to step (b).
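
    A minimal numerical sketch of steps (a)-(c) in Python is given below; the projection onto $S$ is computed here by Dykstra's alternating projections, which is only one convenient choice and not necessarily the authors' implementation, and the inputs $Q$, $q$, $\tilde{\lambda}_1$, $\rho$, and the iteration budget are placeholders.

    import numpy as np

    def proj_ball(v):                       # projection onto {z : ||z|| <= 1}
        n = np.linalg.norm(v)
        return v if n <= 1.0 else v / n

    def proj_halfspace(v, h, d):            # projection onto {z : h^T z <= d}
        s = h @ v - d
        return v if s <= 0.0 else v - (s / (h @ h)) * h

    def proj_S(v, h, d, iters=50):          # Dykstra's alternating projections onto the intersection
        x, p, q = v.copy(), np.zeros_like(v), np.zeros_like(v)
        for _ in range(iters):
            y = proj_ball(x + p)
            p = x + p - y
            x = proj_halfspace(y + q, h, d)
            q = y + q - x
        return x

    def algorithm1(Q, q, lam1_tilde, h, d, rho, iters=500):
        """Sketch of steps (a)-(c): accelerated projected gradient for (AC^r), Q assumed symmetric."""
        n = Q.shape[0]
        z = z_prev = np.zeros(n)            # step (a); 0 is feasible when d >= 0 (placeholder start)
        theta = theta_prev = 1.0
        for _ in range(iters):
            y = z + theta * (1.0 / theta_prev - 1.0) * (z - z_prev)      # step (b)
            grad = (Q - lam1_tilde * np.eye(n)) @ y + q                  # gradient of g at y
            z_prev, z = z, proj_S(y - grad / (2.0 * rho), h, d)          # update (1)
            theta_prev, theta = theta, (np.sqrt(theta**4 + 4 * theta**2) - theta**2) / 2.0  # step (c)
        return z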

        Then, given parameters $\epsilon>0$ and $\delta>0$, with probability at least $1-\delta$ over the randomization of an approximate eigenvector oracle, we can find an $\epsilon$-approximation solution of ${\rm (TL)}$ in total time $O\left(\frac{N\sqrt{\|A\|}}{\sqrt{\epsilon}}\log\frac{n}{\delta}\right)$. We show the efficiency of the algorithm by comparing it with the CVX solver in the numerical experiments. We can also see from Figure 1 that, for the same dimension, the required time increases linearly with density.

    Figure 1: Computational time for different densities with $\epsilon=10^{-6}$

  • Ao Shi, Jun Jiang, Zhao Deng, Yuqiang Feng
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-012
    Accepted: 2025-02-18

    (Communicated by Xinmin Yang)

    Abstract: The Peaceman-Rachford splitting method (PRSM) can effectively solve two-block separable convex optimization problems with linear constraints. However, extending it directly to multi-block separable convex optimization problems lacks convergence guarantees. To address this limitation, we introduce a generalized PRSM with an indefinite proximal term and a substitution step (GPRSM-S). The global convergence and the iteration complexity are analyzed by using variational inequality theory under mild assumptions. Finally, numerical experiments on the robust Principal Component Analysis (PCA) problem demonstrate that GPRSM-S has higher efficiency compared with previous approaches.

  • Chong-Yang Shao, Yi-Bin Xiao, Zai-Yun Peng
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-013
    Accepted: 2025-02-18

    (Communicated by Xinmin Yang)

         In this paper, we study a history-dependent quasi-variational hemivariational inequality as follows.

         Problem 1.  Find $u \in C(I;\Lambda)$ such that, for all $t \in I$, $u(t) \in K(u(t))$ and

    $\langle Au(t),v-u(t)\rangle + \varphi(Su(t),u(t),v)- \varphi(Su(t),u(t),u(t))$

    $+ j^\circ(\gamma u(t),\gamma u(t);\gamma v-\gamma u(t)) \geq \langle f(t),v-u(t)\rangle, ~~\forall v \in K(u(t)).$

        Here $I=[0,+\infty)$ is an infinite time interval, $V$ and $X$ are reflexive Banach spaces, $Y$ is a normed space, $\Lambda$ is a nonempty subset of $V$, $C(I;\Lambda)$ stands for the set of all continuous functions defined on $I$ with values in $\Lambda$, $A: V \to V^*$ represents a nonlinear operator, $\gamma: V \to X$ is a linear continuous operator, $S: C(I;V) \to C(I;Y)$ is a history-dependent operator, $\varphi: Y \times V \times V \to \mathbb{R}$ is convex and lower semicontinuous with respect to its last argument, $j:X \times X \to \mathbb{R}$ is a locally Lipschitz functional with respect to its last argument, $j^\circ$ is the generalized directional derivative (in the sense of Clarke) of $j$, $K: \Lambda \to 2^\Lambda$ is a set-valued mapping, and $f \in C(I;V^*)$ is given.

        By applying a fixed point argument about history-dependent operators and the Gronwall inequality, we obtain a unique solvability result to the history-dependent quasi-variational hemivariational inequality in a space of continuous functions. In addition, when the data of the history-dependent quasi-variational hemivariational inequality are perturbed, sufficient conditions are given to guarantee that the solution sequence of the perturbed problem converges to the unique solution of the original problem.


  • Haoming Xia, Ke Guo, Zhonglin Xu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-001
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: The alternating direction method of multipliers (ADMM) has demonstrated its efficiency and well-understood convergence properties when applied to minimization problems where the objective function is the sum of two nonconvex separable functions and the constraint is linear. However, the requirement for global Lipschitz continuity of the gradient of differentiable functions, which is often impractical in nonconvex optimization problems, restricts its applicability across various domains. Recently, a novel Bregman ADMM has been introduced for two-block nonconvex optimization problems with linear constraints. This new Bregman ADMM not only removes the need for global Lipschitz continuity of the gradient, making it suitable for a broader range of practical problems, but also ensures that it can reduce to the classical ADMM in specific cases. Building on this Bregman ADMM, we address multi-block nonconvex separable optimization problems with linear constraints. We demonstrate that any cluster point of the iterative sequence generated by Bregman ADMM is a critical point, provided that the associated function satisfies the Kurdyka-Łojasiewicz inequality. Additionally, we present sufficient conditions to ensure both the convergence and convergence rate of the algorithm.

  • Weimi Zhou, Yong-Jin Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-002
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: Solving for the distributional worst case in the distributionally robust optimization problem is equivalent to finding the projection onto the intersection of the simplex and a single linear inequality constraint. This projection is a key component in the design of efficient first-order algorithms. This paper focuses on developing efficient algorithms for computing the projection onto the intersection of the simplex and a single linear inequality constraint. Based on Lagrangian duality theory, the studied projection can be obtained by solving a univariate nonsmooth equation. We employ an algorithm called LRSA, which leverages the Lagrangian duality approach and the secant method to compute this projection. In this algorithm, a modified secant method is specifically designed to solve the piecewise linear equation. Additionally, due to the semismoothness of the resulting equation, the semismooth Newton (SSN) method is a natural choice for solving it. Numerical experiments demonstrate that LRSA outperforms the SSN algorithm and the state-of-the-art optimization solver Gurobi. Moreover, we derive explicit formulas for the generalized HS-Jacobian of the projection, which are essential for designing second-order nonsmooth Newton algorithms.
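
    As background, the projection onto the simplex alone already reduces to a univariate piecewise-linear equation in the dual multiplier; the classical sorting-based routine below (Python) illustrates this building block, while the additional linear inequality handled by LRSA and SSN is omitted.

    import numpy as np

    def proj_simplex(v):
        """Euclidean projection of v onto {x >= 0, sum(x) = 1} (classical sort-based routine)."""
        u = np.sort(v)[::-1]                                   # sort in decreasing order
        css = np.cumsum(u)
        rho = np.nonzero(u - (css - 1.0) / np.arange(1, v.size + 1) > 0)[0][-1]
        theta = (css[rho] - 1.0) / (rho + 1.0)                 # root of the piecewise-linear dual equation
        return np.maximum(v - theta, 0.0)

    # usage: proj_simplex(np.array([0.3, 1.2, -0.5, 0.8])) is nonnegative and sums to 1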

  • Jibiao Yuan, Juan Gao, Dongmei Ren, Jie Li, Xinwei Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-003
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: We investigate the following distributed convex optimization problem:

    $ {\min }\limits_{\tilde{s} \in {\mathbb{R}^n}} f( \tilde{s}) = \sum\limits_{i = 1}^m {{f_i}( \tilde{s} )}$                                                                               (1)

    where $\tilde{s} \in {\mathbb{R}^n}$ is the global decision variable and the local objective function ${f_i}:{\mathbb{R}^n} \to \mathbb{R}$ belongs to agent $i$. All agents are connected by a strongly connected directed graph. The goal of the $m$ agents is to collaboratively find the global minimizer of problem (1) through local communication with their neighbors. Assuming that each agent $i$ has a local copy $s^i$ of the global decision variable $\tilde{s}$, the problem (1) can be equivalently written as:

    ${\min }\limits_{s \in {\mathbb{R}^{nm}}} f( s ) = \sum\limits_{i = 1}^m {{f_i}( {{s^i}} )}$                                                                       (2)

    ${\rm{s.t.}} \ {s^i} = {s^j}, \ \forall i,j = 1, \ldots ,m,$

    where $s = {[ {(s^1)^{\mathsf{T}},(s^2)^{\mathsf{T}}, \ldots ,(s^m)^{\mathsf{T}}} ]^{\mathsf{T}}} \in {\mathbb{R}^{nm}}$. We propose a unified and accelerated version of distributed gradient methods to solve the problem (2). 

          Since the agents exchange the information over a directed graph, we consider the row-stochastic weight matrix $W = [ {{w_{ij}}} ] \in {\mathbb{R}^{m \times m}}$ associated with the graph, which satisfies the following conditions:

    $w_{ij} = \begin{cases} >0, & j \in \mathcal{N}^i_{in} \cup \{i\}, \\ 0, & \text{otherwise}, \end{cases} \qquad \sum\limits_{j = 1}^m w_{ij} = 1, \ \forall i \in \mathcal{V},$

    where $\mathcal{N}^i_{in}$ is the set of in-neighbors of agent $i$. Define the matrix $L = [ l_{ij}] = I_m - W$ and the matrix $B = [ b_{jl}] \in {\mathbb{R}^{m\times m}}$ with $B1_m = b1_m$ for some $b \in \mathbb{R}$.

          With the help of the row-stochastic weight matrix and Nesterov's momentum acceleration technique, we propose a unified and accelerated version of distributed gradient methods (UADM). In our algorithm, each agent $i \in \mathcal{V}$ stores four variables $s_k^i \in {\mathbb{R}^n},x_k^i \in {\mathbb{R}^n},y_k^i \in {\mathbb{R}^m},u_k^i \in {\mathbb{R}^n}$. For $k \geq 0$ and all $i \in \mathcal{V}$, UADM is initialized with $x_0^i = s_0^i \in {\mathbb{R}^n},y_0^i = {e_i}\in {\mathbb{R}^m},u_0^i = 0_n$ and updates its variables as follows:

    $x_{k + 1}^i = \sum\limits_{j = 1}^m {{w_{ij}}s_k^j}  - {\alpha _i}\left( {\frac{{\nabla {f_i}( {s_k^i})}}{{{{[{y_k^i}]}_i}}} + u_k^i} \right)$                                                  (3a)

    $s_{k + 1}^i = x_{k + 1}^i + \beta_i ( {x_{k + 1}^i - x_k^i})$                                                           (3b)

    $y_{k + 1}^i = \sum\limits_{j = 1}^m {{w_{ij}}y_k^j}$                                                                    (3c)

    $u_{k + 1}^i = u_k^i - \sum\limits_{j = 1}^m {{l_{ij}}} \left( { \frac{{\nabla {f_j}({s_k^j})}}{{{{[ {y_k^j} ]}_j}}}  + u_k^j - \sum\limits_{l = 1}^m {{b_{jl}}s_k^l} }\right)$                                         (3d)

    where $\alpha_i > 0$ is the local step-size and $ 0 \leq \beta_i< 1 $ is the local momentum coefficient.
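
    For experimentation, the updates (3a)-(3d) can be transcribed directly into a centralized, vectorized simulation as below (Python); the gradients, weight matrices, and parameters are placeholders, and this is not the authors' distributed implementation.

    import numpy as np

    def uadm_step(S, X, Y, U, W, B, grads, alpha, beta):
        """One UADM iteration; row i of S, X, U holds agent i's vectors, Y is the m-by-m auxiliary matrix."""
        m = W.shape[0]
        L = np.eye(m) - W                                            # L = I_m - W
        G = np.stack([grads[i](S[i]) / Y[i, i] for i in range(m)])   # gradient of f_i rescaled by [y_k^i]_i
        X_new = W @ S - alpha[:, None] * (G + U)                     # (3a)
        S_new = X_new + beta[:, None] * (X_new - X)                  # (3b)
        Y_new = W @ Y                                                # (3c)
        U_new = U - L @ (G + U - B @ S)                              # (3d)
        return S_new, X_new, Y_new, U_new

    # initialization sketch: X = S = S0, Y = np.eye(m) (so y_0^i = e_i), U = np.zeros((m, n))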

         When the parameters $\beta_i, b_{jl}$ in UADM take different values, UADM includes the accelerated method FROZEN as a special case and yields some new accelerated methods, such as accelerated versions of FROST and Li-Row. In contrast to methods using column-stochastic weight matrices, UADM, based on row-stochastic weight matrices, does not require each agent to know its out-degree and is easier to implement in a distributed setting, because each agent can assign appropriate weights to the information received from its in-neighbors.

         If the step-sizes and the momentum coefficients satisfy some upper bounds, we prove that UADM has a linear convergence rate for smooth and strongly convex problems. The convergence result of UADM unifies the convergence results of distributed exact first-order algorithms, Li-Row, FROST and FROZEN.

        Numerical experiments on distributed quadratic programming show that UADM significantly outperforms the nonaccelerated algorithm (Li-Row) as the condition number increases. Further numerical experiments on distributed logistic regression problems demonstrate that our algorithm achieves accelerated convergence in comparison with some existing distributed algorithms.

  • Xiaoli Huang, Yuelin Gao, Xiaohua Ma, Xia Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-004
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: This paper investigates an effective branch-and-bound algorithm for solving the generalized linear fractional multiplicative programming (GLFMP) problem. Initially, leveraging the structure of GLFMP, some variables are introduced to transform it into an equivalent problem. Subsequently, bidirectional linear relaxation is applied to the constraint functions of the equivalent problem, resulting in its linear relaxation problem, which is embedded into the branch-and-bound framework. Furthermore, incorporating a tailored region reduction technique, we propose an output space branch-and-bound optimization algorithm. Additionally, the convergence and complexity of the algorithm are analyzed. Finally, the feasibility and effectiveness of the algorithm are validated using GLFMP instances ranging from specific to general cases.

         This paper considers the following generalized linear fractional multiplicative programming problem:

    GLFMP: $\begin{align*}\left\{\begin{aligned}&\min \;\; f(x)=\sum_{i=1}^p\gamma_i\,\prod_{j=1}^{N_i}(f_{ij}(x))^{\sigma_{ij}}\\&\mbox{s.t. } x\in X=\{x\in\mathbb{R}^n|Ax\leq b,x\geq 0\},\end{aligned}\right.\end{align*}$

    where $f_{ij}(x)=\frac{n_{ij}^{\top}x+\alpha_{ij}}{d_{ij}^{\top}x+\beta_{ij}}$, $p$ and $N_{i}$ are natural numbers, $n_{ij}, d_{ij}\in \mathbb{R}^{n}$, $\alpha_{ij}, \beta_{ij}, \sigma_{ij}\in \mathbb{R}$, $A\in \mathbb{R}^{m\times n}$, and $b\in \mathbb{R}^{m}$. $X$ is a non-empty, bounded and closed set. Assume that for any $x\in X$, $\min\{n_{ij}^{\top}x+\alpha_{ij}, d_{ij}^{\top}x+\beta_{ij}\}>0$ holds. Furthermore, when $\sigma_{ij}<0$, $\left(f_{ij}(x)\right)^{\sigma_{ij}}=\left(\frac{1}{f_{ij}(x)}\right)^{-\sigma_{ij}}$. Thus, for each $i\in I=\{1,...,p\}$ and $j\in J_{i}=\{1,...,N_{i}\}$, assuming $\sigma_{ij}>0$ involves no loss of generality. If "min" in GLFMP is replaced by "max", then

    $\min_{x\in X}\sum_{i=1}^{p}\gamma_{i}\prod_{j=1}^{N_{i}}\left(f_{ij}(x)\right)^{\sigma_{ij}}=-\max_{x\in X}\sum_{i=1}^{p}(-\gamma_{i})\prod_{j=1}^{N_{i}} \left(f_{ij}(x)\right)^{\sigma_{ij}},$

    which indicates that the maximization version of GLFMP can be transformed into the form of GLFMP.

         An increasing number of scholars and experts are developing a keen interest in GLFMP and its various variants. One reason is that GLFMP and its variants have strong practical applicability in many areas. On the other hand, their complex structures make them non-convex and often NP-hard, with multiple local optima that are not global solutions. This significantly hinders the search for a global optimum. Nevertheless, researchers have risen to the challenge and proposed numerous optimization algorithms.

         The main purpose of this article is to design an output space branch-and-bound global optimization algorithm for the general form of GLFMP. In pursuit of this goal, we first introduce some variables and transform GLFMP through two resets to obtain its equivalent problem (EGLFMP). Subsequently, we approximate the constraints to obtain the linear relaxation problem of EGLFMP. Furthermore, in order to remove, to a greater extent, rectangular regions that do not contain the global optimal solution, a region reduction technique is devised based on the structural characteristics of the equivalent problem and the relaxation problem, aiming to expedite the convergence of the algorithm. Integrating the linear relaxation programming, the region reduction technique, and the branching strategy into the branch-and-bound framework yields an output space branch-and-bound algorithm distinct from existing algorithms. This algorithm can solve not only GLFMP but also the special variants of GLFMP mentioned earlier. Finally, we conduct a convergence analysis of the proposed algorithm and estimate the maximum number of iterations in the worst case. Numerous numerical experimental results demonstrate the performance of the algorithm.


  • Longhui Liu, Congying Han, Tiande Guo
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-005
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: In this paper, we concentrate on a broad class of large-scale nonconvex and nonsmooth optimization problems. We first propose a novel inertial stochastic Bregman proximal alternating linearized minimization algorithm (TiSBPALM), which employs variance-reduced stochastic gradient estimators. Subsequently, under the assumption that the objective function satisfies the Kurdyka-Łojasiewicz property and certain conditions on the parameters are imposed, we prove that the sequence generated by our algorithm converges to a critical point in expectation. Additionally, we provide the convergence rate for the iteration sequence. Finally, we conduct numerical experiments on sparse nonnegative matrix factorization and blind image-deblurring to verify the effectiveness of our proposed algorithms.

  • Jian Liu, Ziheng Su, Huifu Xu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-006
    Accepted: 2025-02-17

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)


    Abstract: Inspired by the recent work of Shapiro et al., we propose a Bayesian distributionally robust Nash equilibrium (BDRNE) model where each player lacks complete information on the true probability distribution of the underlying exogenous uncertainty represented by a random variable and subsequently determines the optimal decision by solving a Bayesian distributionally robust optimization (BDRO) problem under the Nash conjecture. Unlike most of the DRO models in the literature, the BDRO model assumes (a) the true unknown distribution of the random variable can be approximated by a randomized parametric family of distributions, (b) the average of the worst-case expected value of the objective function with respect to the posterior distribution of the parameter, instead of the worst-case expected value of the objective function, is considered in each player's decision making, and (c) the posterior distribution of the parameter is updated as more and more sampling information of the random variable is gathered. Under some moderate conditions, we demonstrate the existence of a BDRNE and derive asymptotic convergence of the equilibrium as the sample size increases. Moreover, we propose to solve the BDRNE problem by a Gauss-Seidel-type iterative method in the case when the ambiguity set of each player is constructed via Kullback-Leibler (KL) divergence. Finally, we apply the BDRNE model to a price competition problem under multinomial logit demand. The preliminary numerical test results show that the proposed model and computational scheme perform well.

  • Yu-Lin Chang, Jein-Shan Chen, Chu-Chin Hu, Wei-Cheng Hu, Ching-Yu Yang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-007
    Accepted: 2025-02-17

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)

    Abstract: Mean inequalities on the second-order cone have been studied as an extension of those on the SPD cone. In this article, we turn our attention to non-symmetric cones. Specifically, we investigate two types of decompositions associated with circular cones and establish the corresponding mean inequalities. These inequalities are building blocks for further study of circular cone optimization. We also find that, under the condition $0<\theta<\frac{\pi}{4}$, some inequalities fail to hold if a different decomposition is applied, and accordingly we raise a conjecture.
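    For reference, the circular cone with half-aperture angle $\theta\in(0,\frac{\pi}{2})$ is commonly defined as

    $\mathcal{L}_\theta := \{x=(x_1,x_2)\in\mathbb{R}\times\mathbb{R}^{n-1} : \|x_2\|\le x_1\tan\theta\},$

    which reduces to the second-order cone when $\theta=\frac{\pi}{4}$; the two decompositions studied in the paper are taken with respect to such cones.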

  • Alexander J. Zaslavski
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-008
    Accepted: 2025-02-17

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)

    Abstract: In this work, using a subgradient algorithm, we study a minimization problem with a convex objective function on a domain that is the solution set of a common fixed point problem with a countable family of quasi-nonexpansive mappings. Our algorithm generates a sequence of iterates which are approximate solutions of the corresponding fixed point problem. Moreover, either this sequence converges to a solution of our minimization problem or it is strictly Fejér monotone with respect to the solution set of the common fixed point problem.
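    Methods of this type typically interleave subgradient steps on the objective with applications of the mappings. The sketch below is a generic schematic under that assumption, not the specific iteration analyzed in the paper; subgrad, mappings and steps are hypothetical placeholders.

        import numpy as np

        def subgradient_over_fixed_points(subgrad, mappings, x0, steps):
            # Schematic iteration: a subgradient step on the objective, followed by an
            # application of one quasi-nonexpansive mapping from the (countable) family,
            # chosen cyclically here.
            x = np.asarray(x0, dtype=float)
            for k, alpha in enumerate(steps):
                g = np.asarray(subgrad(x), dtype=float)   # g in the subdifferential of f at x
                x = x - alpha * g                         # subgradient step
                T = mappings[k % len(mappings)]           # pick a mapping T_i
                x = np.asarray(T(x), dtype=float)         # move toward its fixed point set
            return x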

  • Zihao Jia, Youlin Shang, Yan Feng, Yufei Ren
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-023
    Accepted: 2024-10-12

    (Communicated by Xinmin Yang)

    Abstract: A large number of problems in the social and natural sciences can be formulated as global optimization problems, which arise widely in economic models, finance, network transportation, databases, integrated circuit design, image processing, chemical engineering design and control, molecular biology, environmental engineering, and so on. At present, the methods for solving global optimization problems can be divided into two categories, namely deterministic algorithms and stochastic algorithms. Stochastic algorithms, such as genetic algorithms and simulated annealing, describe the iterative process by a sequence of points generated by a probabilistic mechanism. Deterministic algorithms use the analytic properties of the problem to generate deterministic finite (or infinite) sequences of points that converge to a global optimal solution; examples include the interval method, D.C. programming algorithms, the branch-and-bound method, auxiliary function methods and the integral level set method. Among them, the filled function method is a deterministic method that searches for a global solution. Renpu Ge was the first to propose the filled function method [1]. The main idea is to use the filled function to escape the current local minimizer in order to find a better one. The filled function method consists of two phases, local minimization and filling. In the minimization phase, a local minimization algorithm is used to obtain a local minimizer of the original problem. In the filling phase, the filled function constructed at this minimizer is minimized, and the two phases alternate until a global minimizer is found.

         The following global optimization problem will be discussed

    min $f(x)$,

    s.t.  $x \in \mathbb{R}^n$.                                                                           (1)

    Ge states that if a function satisfies the filled function definition and the filling property, then it can be used to solve problem (1). The first-generation filled function given in [1] is as follows:

    $G(x,x_k^*,r,\rho)=\frac1{r+f(x)}\exp(-\frac{\|x-x_k^*\|^2}{\rho^2})$,                                                   (2)

    where $r$ and $\rho$ are given parameters. Selecting the parameters in formula (2) is quite difficult, and this difficulty motivates the search for new filled function methods. The filled function (2) also has some shortcomings, which have been discussed in depth and are mainly related to the two parameters $r$ and $\rho$. According to Gao, Ge's filled function contains the exponential term $\exp(-\frac{\|x-x_k^*\|^2}{\rho^2})$, so its graph is almost flat when $\|x-x_k^*\|^2$ is large or $\rho$ is small, which makes it difficult for a computer to discern changes in the function values.
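    The flattening effect can be seen by evaluating (2) directly. In the minimal sketch below the objective f is a hypothetical placeholder (not a test problem from the paper); the printed values merely illustrate how quickly the exponential factor decays when $\rho$ shrinks or the distance from $x_k^*$ grows.

        import numpy as np

        def f(x):
            # hypothetical smooth objective, used only to make G evaluable
            return float(np.sum(x**2) + 2.0 * np.sin(3.0 * np.sum(x)))

        def G(x, x_star, r, rho):
            # Ge's filled function (2): 1/(r + f(x)) * exp(-||x - x*||^2 / rho^2)
            return np.exp(-np.linalg.norm(x - x_star)**2 / rho**2) / (r + f(x))

        x_star = np.zeros(2)                      # current local minimizer (placeholder)
        for rho in (10.0, 1.0, 0.1):              # smaller rho flattens G away from x*
            vals = [G(np.array([t, 0.0]), x_star, r=1.0, rho=rho) for t in (1.0, 2.0, 4.0)]
            print(rho, ["%.3e" % v for v in vals])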

        In this paper, a new parameter-free filled function is constructed. It satisfies the definition of a filled function and possesses good properties. A corresponding algorithm is then designed based on the new filled function. Numerical experiments and comparisons on several test problems are reported, which demonstrate the feasibility and effectiveness of the algorithm.



  • Qiao Chen, Liping Tang, Ke Zhang, Xinmin Yang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-024
    Accepted: 2024-10-12

    (Communicated by Kok Lay Teo)

    Abstract: Global optimization problems are a common issue in numerous fields, such as molecular biology, economics and finance, environmental engineering, and industrial manufacturing. If a global optimization problem is a convex program, a local minimization algorithm can be used to find its global optimal solution. Unfortunately, in most cases these problems are non-convex optimization problems with multiple minimizers, which raises a scientific question: how to find a better minimizer starting from a known local minimizer of the objective function.

        The traditional nonlinear programming algorithms, such as quasi-Newton methods, cannot directly resolve this difficulty. Therefore, it is a great challenge to find the global minimizer of a non-convex objective function. In recent years, many studies have focused on global optimization theory and algorithms. In general, global optimization methods can be divided into two categories: stochastic methods and deterministic methods. Stochastic methods for global optimization include evolutionary algorithms, simulated annealing, particle swarm optimization, and so on. These methods are inspired by physical principles or natural evolution and escape local minima by a probability-based approach. Although stochastic methods have been applied effectively to many intricate problems, some issues remain unsolved, such as convergence analysis and parameter selection. Deterministic methods, for example the filled function method, the tunnelling method and covering methods, can converge faster and often quickly find a minimizer better than the current one. Compared with other deterministic methods, the filled function method is popular because it is relatively easy to implement. The basic idea of the filled function method is to construct an auxiliary function at the current local minimizer of the objective function; by minimizing the auxiliary function, the current local minimizer is escaped quickly and a better one is reached. This process is repeated until no better local minimizer can be found.
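        The following is a minimal schematic of that two-phase loop, not the algorithm proposed in this paper: filled(x, x_star) stands for an arbitrary filled function built at the current minimizer, and restarts is a hypothetical set of small perturbation directions used to start its minimization.

            import numpy as np
            from scipy.optimize import minimize

            def filled_function_method(f, x0, filled, restarts, max_cycles=20, tol=1e-8):
                # Schematic two-phase filled function loop (a sketch under the assumptions above).
                x_star = minimize(f, x0).x                       # phase 1: local minimizer of f
                for _ in range(max_cycles):
                    improved = False
                    for d in restarts:                           # perturbed starts near x_star
                        # phase 2: minimize the filled function to leave the current basin
                        y0 = minimize(lambda x: filled(x, x_star), x_star + d).x
                        y_star = minimize(f, y0).x               # phase 1 again from the new point
                        if f(y_star) < f(x_star) - tol:
                            x_star, improved = y_star, True
                            break
                    if not improved:                             # no better minimizer found: stop
                        return x_star
                return x_star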

        The first filled function, introduced by Ge, is as follows:

    $P(x,x_{1}^*, r, \rho)=\frac{1}{r+F(x)}\text{exp}(-\frac{\|x-x_{1}^{\ast}\|^{2}}{\rho^{2}}),$

    where the parameters $r$ and $\rho$ need to be chosen appropriately. However, Ge's filled function has some shortcomings. Because there are many restrictions on the parameters $r$ and $\rho$, it is difficult to select them so that $P(x,x_{1}^*, r, \rho)$ satisfies the definition of the filled function. Constructing a filled function with good properties is one of the keys to the success of a filled function algorithm, and researchers have therefore done a great deal of useful work on enriching the forms of filled functions. These filled functions indeed possess several merits, yet they also exhibit drawbacks that necessitate further refinement and improvement. The adjustment of parameters for some two-parameter or single-parameter filled functions is still difficult. Non-smooth, non-differentiable or even discontinuous filled functions cannot be minimized by efficient gradient-based local optimization procedures, which decreases the efficiency of the filled function algorithm. To address the difficulty of adjusting parameters, some scholars have proposed parameter-free filled functions. However, some parameter-free filled functions do not carry any information about the objective function when $f(x)\geq f(x_{1}^{*})$, which may lead to difficulties in finding the global minimizer. In addition, without adjustable parameters, some functions proposed in the literature may fail to satisfy the definition of the filled function.

        Based on the above considerations, a new one-parameter filled function is proposed in this paper. The main contributions of this work are summarized below:

        (i) The new filled function is continuously differentiable and its gradient information can be used in local minimization procedure. Thus, the realization of the minimization process is simplified, and the algorithm is more efficient and convenient.

          (ii)  In any case, our proposed filled function always contains information about the objective function.

         (iii) The new filled function has an easily adjustable parameter that can be tuned according to the specific problem, enabling the algorithm to adapt to the problem and avoiding issues similar to those encountered in the literature.

        Next, we conduct a theoretical analysis of the properties of the new filled function. These properties ensure the convergence of the minimization process applied to the proposed filled function, so the minimization iterations of the new filled function algorithm run successfully over the whole region. Without these properties, the filled function may behave similarly to those in the literature that have no local minimizer within the entire feasible region. Furthermore, supported by this theoretical analysis, we design a filled function method that is easy to implement and efficient. Through a series of numerical experiments, we verify the effectiveness and reliability of the new algorithm. In comparative tests with two other filled function methods, our algorithm demonstrates clear advantages, and the experimental results confirm its superiority.


  • Feifei Shen, Liu Yang, Jiaxin Yang, Xiaojiao Tong
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-016
    Accepted: 2024-09-04
    (Communicated by Guihua Lin)
    Abstract: With the development of financial investment risk management, utility-based shortfall risk measures have received increasing attention for their potential to quantify the risk of large tail losses more effectively than conditional value at risk. For portfolio selection, transaction costs incurred by changes of the investment proportions in risky assets have a significant impact on the investment strategy and the return. In this paper, we discuss a distributionally robust portfolio selection problem with shortfall risk measures and investigate the impact of transaction costs on portfolio selection. We first construct distributionally robust shortfall risk (DRSR) portfolio models without and with transaction costs. We then construct moment ambiguity sets, based on empirical data, for the unknown probability distributions of the random variables and discuss the convergence of the optimal values and solutions of the models as the sample size increases. Under the condition that the loss function is piecewise affine, we transform the models into equivalent semi-definite programming problems. Numerical tests illustrate the validity of our models. Our main contributions can be summarized as follows:
         (i) We construct the DRSR portfolio selection models without and with transaction costs based on the moment ambiguity set, and we analyze the convergence of the optimal values and solutions of the models, whose moment ambiguity sets are built from empirical data, as the sample size increases.
         (ii) We use a linearization method to convert the V-shaped transaction cost function into inequality constraints, which overcomes the nonsmoothness of the transaction cost function (see the sketch after this list). We then transform the DRSR portfolio selection models into computable equivalent semi-definite programming (SDP) problems by using moment theory and duality theory.


        (iii) Based on actual stock data, the influence of different parameters (such as the transaction cost rate, risk level, ambiguity set parameters and sample sizes) on the optimal solutions and optimal values of the models is analyzed, and the validity and convergence of the new models are verified.
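        For illustration, writing the V-shaped cost of asset $i$ as $c_i\,|x_i-x_i^0|$ (the notation here is generic, not necessarily that of the paper), the standard linearization introduces auxiliary variables $y_i$ and replaces each cost term by $y_i$ together with the constraints

        $y_i \ge c_i\,(x_i-x_i^0), \qquad y_i \ge -c_i\,(x_i-x_i^0),$

        so that $y_i \ge c_i\,|x_i-x_i^0|$ holds with equality at optimality whenever the $y_i$ are penalized (for example, they appear with positive weight in the objective or tighten a budget constraint). This removes the nonsmooth absolute value while keeping the model linear in $(x,y)$.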


  • Annamaria Barbagallo
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-017
    Accepted: 2024-09-04
    (Communicated by Andreas Fischer)
    Abstract: In this paper, a link-based emission pollution permit system for dynamic transportation network equilibrium problems is considered. The governing equilibrium conditions for this model are given. Then, their time-dependent variational formulation is obtained. Thanks to that, existence and continuity results for equilibrium distributions are established. Finally, the existence of Lagrange multipliers is proved, and the behavior of the model is analyzed.
  • Danni Xu, Suxiang He, Haibin Chen
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-018
    Accepted: 2024-09-04
    (Communicated by Zheng-Hai Huang)
    Abstract: In this paper, we consider the bi-quadratic optimization problem with a partially symmetric fourth-order tensor. When the objective function is concave, it has the same optimal value as a corresponding multilinear optimization problem. Then, the relation between the augmented bi-quadratic optimization problem and the original problem is established by restricting the domain of the solution to a unit sphere. A fast alternating proximal gradient method is proposed to find an approximate optimal value of the augmented bi-quadratic optimization problem. Furthermore, we present an enhanced block improvement method to obtain an approximate optimal value of the original problem. Finally, we perform convergence analysis and numerical experiments to show the efficiency of the method.
  • Jianchao Bai, Yang Chen, Yuxue Ma
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-019
    Accepted: 2024-09-04
    (Communicated by Xinmin Yang)
    Abstract: In this paper, we develop an Inexact Relaxed Augmented Lagrangian Method (IR-ALM) for solving a class of convex optimization problems. Flexible relative error criteria are designed for approximately solving the resulting subproblem, and a relaxation step is exploited to accelerate its convergence numerically. By a unified variational analysis, we establish the global convergence of this IR-ALM and its sublinear convergence rate in terms of the primal iterative residual, the objective function value gap and the constraint violation, respectively. Experimental results on image restoration problems with different types of images show that the proposed method is effective.
  • Qiao Zhu, Liping Tang, Yibin Xiao
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-020
    Accepted: 2024-09-04
    (Communicated by Xinmin Yang)
    Abstract: The present paper proposes a new piecewise convexification method to approximate the globally efficient solution set of non-convex multi-objective optimization problems (NC-MOP) with box constraints. Initially, we utilize the α-based Branch-and-Bound (αBB) method to formulate the piecewise convexification problem for NC-MOP, which consists of a series of convex relaxation optimization problems. Subsequently, we introduce a box classification strategy to define the efficient solution set of the piecewise convexification problem and demonstrate that it effectively approximates the globally efficient solution set of NC-MOP. Finally, a new two-stage piecewise convexification algorithm combining αBB and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) is proposed, which incorporates novel techniques such as box partitioning, deletion methods, and termination tests. Moreover, the effectiveness of our approach compared to a competing method is validated through experimental results.
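    For context, on a box $[x^L, x^U]$ the αBB method builds convex underestimators of the standard form

    $\check{f}(x) = f(x) + \sum_{i=1}^{n} \alpha_i\,(x_i^L - x_i)(x_i^U - x_i),$

    where the $\alpha_i > 0$ are chosen large enough (for instance from bounds on the Hessian of $f$) to make $\check{f}$ convex; this is the generic construction underlying αBB-based convexification, stated here only for orientation rather than as the paper's exact relaxation.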
  • Xianjun Long, Xiaoting Wang, Gaoxi Li
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-021
    Accepted: 2024-09-04
    (Communicated by Donghui Li)
    Abstract: In this paper, we study a class of fractional optimization problems whose numerator is the sum of a nonsmooth convex function and a twice differentiable convex function, and whose denominator is a twice differentiable concave function. We propose an inexact proximal quasi-Newton algorithm with line search for solving this problem. Under moderate conditions, we prove that the sequences generated by the proposed algorithm converge to optimal solutions. In addition, the inexact proximal quasi-Newton algorithm exhibits an O(1/k) global convergence rate, where k denotes the iteration counter. Finally, two numerical experiments illustrate the effectiveness and competitiveness of the proposed algorithm.
  • Yazheng Dang, Jingwen Zhang, Jinglei Lu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-022
    Accepted: 2024-09-04
    (Communicated by Mabel Chou)
    Abstract: The symmetric alternating direction method of multipliers (ADMM) for nonconvex separable optimization has been widely studied. However, the symmetric ADMM may fail to converge when the problem is nonconvex, nonsmooth and nonseparable. This paper proposes a proximal symmetric ADMM for nonconvex nonseparable optimization, which employs a linearization technique in each subproblem and introduces a relaxation factor in the Lagrangian multiplier update step. In addition, when the potential function satisfies the Kurdyka-Łojasiewicz (KL) property, the convergence of the algorithm is established. Finally, numerical experiments verify the effectiveness of the proposed method.
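    For orientation, a generic symmetric ADMM for $\min_{x,y} f(x)+g(y)$ subject to $Ax+By=b$, with augmented Lagrangian $L_\beta(x,y,\lambda)=f(x)+g(y)-\lambda^{\top}(Ax+By-b)+\frac{\beta}{2}\|Ax+By-b\|^2$, performs two multiplier updates per cycle:

    $x^{k+1}\in\arg\min_x L_\beta(x,y^k,\lambda^k),$
    $\lambda^{k+\frac12}=\lambda^k-r\beta\,(Ax^{k+1}+By^k-b),$
    $y^{k+1}\in\arg\min_y L_\beta(x^{k+1},y,\lambda^{k+\frac12}),$
    $\lambda^{k+1}=\lambda^{k+\frac12}-s\beta\,(Ax^{k+1}+By^{k+1}-b),$

    with relaxation factors $r,s$. This is only the generic template; the method of the paper additionally linearizes the subproblems with proximal terms and handles a nonseparable coupling in the objective.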