
Just accepted

Accepted, unedited articles are published online and are citable. The final edited and typeset version of record will appear in the future.
  • Jiaxiang Li, Xuxing Chen, Shiqian Ma, Mingyi Hong
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-009
    Accepted: 2025-02-18

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)

    Abstract: Existing decentralized algorithms usually require knowledge of problem parameters to update local iterates. For example, the hyperparameters (such as the learning rate) usually require knowledge of the Lipschitz constant of the global gradient or topological information about the communication network, which is usually not accessible in practice. In this paper, we propose D-NASA, the first algorithm for decentralized nonconvex stochastic optimization that requires no prior knowledge of any problem parameters. We show that D-NASA has the optimal rate of convergence for nonconvex objectives under very mild conditions and enjoys the linear-speedup effect, i.e., the computation becomes faster as the number of nodes in the system increases. Extensive numerical experiments are conducted to support our findings.

  • Shaoze Li, Junhao Wu, Cheng Lu, Zhibin Deng, Shu-Cherng Fang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-010
    Accepted: 2025-02-18

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)

        Convex separable quadratic optimization problems arise in numerous practical applications. These problems take the form:

    $\begin{align*}\min _{\boldsymbol{y} \in \mathbb{R}^n} f(\boldsymbol{y}) & =\boldsymbol{y}^T \Delta \boldsymbol{y}+\boldsymbol{\alpha}^T \boldsymbol{y} \\ \text { s.t. } g_{i}(\boldsymbol{y})& = \boldsymbol{y}^T \Theta_i \boldsymbol{y} +\boldsymbol{\beta}_i ^T \boldsymbol{y} +\sigma_i \leq 0,~ i=1,\dots,m, \label{SQPQC}\tag{SQPQC}\\\boldsymbol{y} & \in[\boldsymbol{l}, \boldsymbol{u}],\end{align*}$

    where $\Delta$ and $\Theta_i$ are $n \times n$ diagonal matrices.

    Building on an iterative resolution scheme for the KKT system, we develop an efficient method for solving problem (SQPQC). We begin by applying the dual method to problem (SQPQC) with a single quadratic constraint, exploiting some distinct properties of this subclass, such as the differentiability of the dual function and the uniqueness of the optimal solution of the dual problem. Leveraging these properties, we then develop a highly efficient algorithm for the general case of multiple quadratic constraints, and we show that the proposed approach leads to a dual coordinate ascent algorithm. Finally, we prove the convergence of the proposed algorithm and demonstrate, through extensive numerical experiments, that it outperforms the Gurobi solver, particularly for large-scale convex separable quadratic programming problems.
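    To make the single-constraint dual step concrete, the sketch below (our illustration, not the paper's algorithm) exploits the fact that, for a fixed multiplier $\mu\geq 0$, the Lagrangian separates coordinatewise over the box; the multiplier is then adjusted until the constraint value of the induced primal point reaches zero. Plain bisection stands in for the more refined resolution of the dual function used in the paper, and strict convexity of every coordinate term is assumed.

```python
import numpy as np

def primal_of_mu(mu, delta, alpha, theta, beta, sigma, l, u):
    """Coordinatewise minimizer of the Lagrangian over the box [l, u]:
    sum_i (delta_i + mu*theta_i) y_i^2 + (alpha_i + mu*beta_i) y_i.
    Assumes delta_i + mu*theta_i > 0 (our simplifying assumption)."""
    y = -(alpha + mu * beta) / (2.0 * (delta + mu * theta))
    return np.clip(y, l, u)

def constraint_value(y, theta, beta, sigma):
    return y @ (theta * y) + beta @ y + sigma

def single_constraint_dual(delta, alpha, theta, beta, sigma, l, u, mu_max=1e8, tol=1e-10):
    """min y^T diag(delta) y + alpha^T y  s.t.  y^T diag(theta) y + beta^T y + sigma <= 0, l <= y <= u."""
    y0 = primal_of_mu(0.0, delta, alpha, theta, beta, sigma, l, u)
    if constraint_value(y0, theta, beta, sigma) <= 0.0:
        return y0, 0.0                       # constraint inactive: mu = 0 is optimal
    lo, hi = 0.0, mu_max                     # the constraint value is nonincreasing in mu
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        y = primal_of_mu(mid, delta, alpha, theta, beta, sigma, l, u)
        if constraint_value(y, theta, beta, sigma) > 0.0:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    return primal_of_mu(mu, delta, alpha, theta, beta, sigma, l, u), mu
```

    In the multi-constraint case, cycling through such single-multiplier updates is what gives the dual coordinate ascent interpretation mentioned above.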



  • Meijia Yang, Hanxiao Zhang, Zhuoyi Xu, Yan Xu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-011
    Accepted: 2025-02-18

    (Communicated by Xinwei Liu)

     In this paper, we study the trust region subproblem with a linear cut over the complex domain, denoted ${\rm (TL)}$:

    $\begin{eqnarray*}{\rm \left(TL\right)}~&\min_{w\in\Bbb C^n}&F(w)=\frac{1}{2}w^HAw+\mathcal{R}(b^Hw)\\~~~~~~~~~~~&{\rm s.t.}& w^Hw-1\leq 0,\\~~~~~~~~~~~&&\mathcal{R}(c^Hw)-d\leq 0,\end{eqnarray*}$

    where $A\in\Bbb C^{n\times n}$ is a Hermitian matrix, whose eigenvalues are real. We assume that $A\nsucceq 0$ and denote its eigenvalues by $\lambda_1\leq \lambda_2\leq\cdots\leq\lambda_n$, so that at least one of them is negative, i.e., $\lambda_1<0$.

     We also assume that the Slater condition holds, i.e., there exists $\hat{w}$ such that $\hat{w}^H\hat{w}<1$ and $\mathcal{R}(c^H\hat{w})<d$.

    We first reveal the hidden convexity of ${\rm (TL)}$ by proving that it is equivalent to a convex programming problem:

    $\begin{eqnarray*}{\rm \left(C\right)}~&\min_{w\in\Bbb C^{n}}& P(w)=\frac{1}{2}w^H(A-\lambda_1I)w+\mathcal{R}(b^Hw)+\frac{1}{2}\lambda_1\\~~~~~~~~~~~&{\rm s.t.}& w^Hw-1\leq 0,\\~~~~~~~~~~~&&\mathcal{R}(c^Hw)-d\leq 0.\end{eqnarray*}$

    As an approximation of ${\rm (C)}$, we turn to solving the following convex programming problem

    $\begin{eqnarray*}&{\rm \left(AC\right)}&\min_{w\in\Bbb C^{n}} \frac{1}{2}w^H(A-\tilde{\lambda}_1 I)w+\mathcal{R}(b^Hw)+\frac{1}{2}\tilde{\lambda}_1\\~~~~~~~~~~~&{\rm s.t.}& w\in W:=\left\{w^Hw\leq 1,~\mathcal{R}(c^Hw)\leq d\right\},\end{eqnarray*}$

    which can be further equivalently reformulated as the problem ${\rm (AC^r)}$:

    $\begin{eqnarray*}&{\rm \left(AC^r\right)}&\min_{z\in\Bbb R^{2n}} g(z):=\frac{1}{2}z^T(Q-\tilde{\lambda}_1I)z+q^Tz+\frac{1}{2}\tilde{\lambda}_1\\~~~~~~~~~~~&{\rm s.t.}& z\in S:=\left\{z^Tz-1\leq 0,~h^Tz-d\leq 0\right\}.\end{eqnarray*}$

    Then we present a linear-time approximation scheme (in terms of the number of nonzero entries) for solving ${\rm (TL)}$ based on ${\rm (AC^r)}$. It mainly employs an eigenvalue approximation technique and Nesterov's accelerated gradient descent algorithm.

    Algorithm 1

    (a) Choose $\theta_0=\theta_{-1}\in(0,1]$, $z_0=z_{-1}\in S$. Let $k:=0$.

    (b) Let 

    $y_k=z_k+\theta_k(\theta_{k-1}^{-1}-1)(z_k-z_{k-1})$
    and $\nabla g_k=\nabla g(y_k)$. Update
    \begin{align*}z_{k+1}=\arg \min_{z\in S}~G(z)=z^T\nabla g_k+\rho\|z-y_k\|^2.\label{1}\tag{1}\end{align*}
    (c) Choose $\theta_{k+1}\in(0,1]$ satisfying
    $\frac{1-\theta_{k+1}}{\theta_{k+1}^2}\leq \frac{1}{\theta_k^2}.$
    If the stopping criterion does not hold, update $k:=k+1$ and go to step (b).
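    For concreteness, here is a minimal NumPy sketch of steps (a)-(c) applied to ${\rm (AC^r)}$ (our illustration only; $\rho$ is assumed to be at least half the gradient Lipschitz constant of $g$ over $S$, and the closed-form projection onto $S$ is a standard ball/halfspace case analysis that we supply, not a step stated above):

```python
import numpy as np

def proj_S(z, h, d):
    """Projection onto S = {z : z^T z <= 1, h^T z <= d} (our closed-form case analysis,
    valid when S has an interior point, as guaranteed by the Slater condition)."""
    hh = h @ h
    if z @ z <= 1.0 and h @ z <= d:
        return z
    p = z - max(h @ z - d, 0.0) / hh * h        # projection onto the halfspace
    if p @ p <= 1.0:
        return p
    p = z / np.linalg.norm(z)                   # projection onto the unit ball
    if h @ p <= d:
        return p
    # otherwise the projection lies on the "ring" {||z|| = 1, h^T z = d}
    c = (d / hh) * h
    r = np.sqrt(max(1.0 - d ** 2 / hh, 0.0))
    w = z - (h @ z - d) / hh * h                # projection onto the hyperplane
    return c + r * (w - c) / np.linalg.norm(w - c)

def accelerated_pg(Q, q, h, d, lam1_tilde, rho, iters=500):
    """Steps (a)-(c) for (AC^r): g(z) = 0.5 z^T (Q - lam1_tilde I) z + q^T z + 0.5 lam1_tilde."""
    n = Q.shape[0]
    grad = lambda z: (Q - lam1_tilde * np.eye(n)) @ z + q
    z_prev = z = np.zeros(n)                    # 0 is feasible whenever d >= 0 (illustrative choice)
    theta_prev = theta = 1.0
    for _ in range(iters):
        y = z + theta * (1.0 / theta_prev - 1.0) * (z - z_prev)
        z_prev, z = z, proj_S(y - grad(y) / (2.0 * rho), h, d)   # update (1)
        # theta_{k+1} chosen so that (1 - theta_{k+1}) / theta_{k+1}^2 = 1 / theta_k^2
        theta_prev, theta = theta, (np.sqrt(theta ** 4 + 4.0 * theta ** 2) - theta ** 2) / 2.0
    return z
```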

    Then, given parameters $\epsilon>0$ and $\delta>0$, with probability at least $1-\delta$ over the randomization of an approximate eigenvector oracle, we can find an $\epsilon$-approximate solution of ${\rm (TL)}$ in total time $O\left(\frac{N\sqrt{\|A\|}}{\sqrt{\epsilon}}\log\frac{n}{\delta}\right)$. We show the efficiency of the algorithm by comparing it with the CVX solver in the numerical experiments. We can also see from Figure 1 that, for a fixed dimension, the required time increases linearly with the density.

    Figure 1: Computational time for different densities with $\epsilon=10^{-6}$

  • Ao Shi, Jun Jiang, Zhao Deng, Yuqiang Feng
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-012
    Accepted: 2025-02-18

    (Communicated by Xinmin Yang)

    Abstract: The Peaceman-Rachford splitting method (PRSM) can effectively solve two-block separable convex optimization problems with linear constraints. However, extending it directly to multi-block separable convex optimization problems lacks convergence guarantees. To address this limitation, we introduce a generalized PRSM with an indefinite proximal term and a substitution step (GPRSM-S). The global convergence and the iteration complexity are analyzed by using variational inequality theory under mild assumptions. Finally, numerical experiments on the robust Principal Component Analysis (PCA) problem demonstrate that GPRSM-S has higher efficiency compared to previous approaches.

  • Chong-Yang Shao, Yi-Bin Xiao, Zai-Yun Peng
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-013
    Accepted: 2025-02-18

    (Communicated by Xinmin Yang)

         In this paper, we study a history-dependent quasi-variational hemivariational inequality as follows.

         Problem 1.  Find $u \in C(I;\Lambda)$ such that, for all $t \in I$, $u(t) \in K(u(t))$ and

    $\langle Au(t),v-u(t)\rangle + \varphi(Su(t),u(t),v)- \varphi(Su(t),u(t),u(t))$

    $+ j^\circ(\gamma u(t),\gamma u(t);\gamma v-\gamma u(t)) \geq \langle f(t),v-u(t)\rangle, ~~\forall v \in K(u(t)).$

    Here $I=[0,+\infty)$ is an infinite time interval, $V$ and $X$ are reflexive Banach spaces, $Y$ is a normed space, $\Lambda$ is a nonempty subset of $V$, $C(I;\Lambda)$ stands for the set of all continuous functions defined on $I$ with values in $\Lambda$, $A: V \to V^*$ represents a nonlinear operator, $\gamma: V \to X$ is a linear continuous operator, $S: C(I;V) \to C(I;Y)$ is a history-dependent operator, $\varphi: Y \times V \times V \to \mathbb{R}$ is convex and lower semicontinuous with respect to its last argument, $j:X \times X \to \mathbb{R}$ is a locally Lipschitz functional with respect to its last argument, $j^\circ$ is the generalized directional derivative (in the sense of Clarke) of $j$, $K: \Lambda \to 2^\Lambda$ is a set-valued mapping and $f \in C(I;V^*)$ is given.

    By applying a fixed point argument for history-dependent operators and the Gronwall inequality, we obtain a unique solvability result for the history-dependent quasi-variational hemivariational inequality in a space of continuous functions. In addition, when the data of the history-dependent quasi-variational hemivariational inequality are perturbed, sufficient conditions are given to guarantee that the solution sequence of the perturbed problem converges to the unique solution of the original problem.


  • Haoming Xia, Ke Guo, Zhonglin Xu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-001
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: The alternating direction method of multipliers (ADMM) has demonstrated its efficiency and well-understood convergence properties when applied to minimization problems where the objective function is the sum of two nonconvex separable functions and the constraint is linear. However, the requirement for global Lipschitz continuity of the gradient of differentiable functions, which is often impractical in nonconvex optimization problems, restricts its applicability across various domains. Recently, a novel Bregman ADMM has been introduced for two-block nonconvex optimization problems with linear constraints. This new Bregman ADMM not only removes the need for global Lipschitz continuity of the gradient, making it suitable for a broader range of practical problems, but also ensures that it can reduce to the classical ADMM in specific cases. Building on this Bregman ADMM, we address multi-block nonconvex separable optimization problems with linear constraints. We demonstrate that any cluster point of the iterative sequence generated by Bregman ADMM is a critical point, provided that the associated function satisfies the Kurdyka-Łojasiewicz inequality. Additionally, we present sufficient conditions to ensure both the convergence and convergence rate of the algorithm.

  • Weimi Zhou, Yong-Jin Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-002
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: Solving for the distributional worst case in a distributionally robust optimization problem is equivalent to finding the projection onto the intersection of the simplex and a single linear inequality constraint. This projection is a key component in the design of efficient first-order algorithms. This paper focuses on developing efficient algorithms for computing the projection onto the intersection of the simplex and a single linear inequality constraint. Based on Lagrangian duality theory, the studied projection can be obtained by solving a univariate nonsmooth equation. We employ an algorithm called LRSA, which leverages the Lagrangian duality approach and the secant method to compute this projection. In this algorithm, a modified secant method is specifically designed to solve the piecewise linear equation. Additionally, due to the semismoothness of the resulting equation, the semismooth Newton (SSN) method is a natural choice for solving it. Numerical experiments demonstrate that LRSA outperforms the SSN algorithm and the state-of-the-art optimization solver Gurobi. Moreover, we derive explicit formulas for the generalized HS-Jacobian of the projection, which are essential for designing second-order nonsmooth Newton algorithms.
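    As an illustration of the Lagrangian-duality viewpoint (our sketch, not the LRSA method itself), already for the plain simplex the projection reduces to a univariate piecewise-linear equation in the multiplier; below, simple bisection stands in for the modified secant step, and the additional linear inequality handled by LRSA is omitted:

```python
import numpy as np

def proj_simplex(y, tol=1e-12):
    """Projection of y onto {x : x >= 0, sum(x) = 1} via root-finding on the multiplier tau:
    x(tau) = max(y - tau, 0), and phi(tau) = sum(x(tau)) - 1 is piecewise linear and nonincreasing."""
    phi = lambda tau: np.maximum(y - tau, 0.0).sum() - 1.0
    lo, hi = y.min() - 1.0, y.max()          # phi(lo) >= 0 and phi(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return np.maximum(y - 0.5 * (lo + hi), 0.0)

# Example: proj_simplex(np.array([0.8, 1.4, -0.3])) returns a nonnegative vector summing to 1.
```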

  • Jibiao Yuan, Juan Gao, Dongmei Ren, Jie Li, Xinwei Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-003
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: We investigate the following distributed convex optimization problem:

    $ {\min }\limits_{\tilde{s} \in {\mathbb{R}^n}} f( \tilde{s}) = \sum\limits_{i = 1}^m {{f_i}( \tilde{s} )}$                                                                               (1)

    where $\tilde{s} \in {\mathbb{R}^n}$ is the global decision variable and the local objective function ${f_i}:{\mathbb{R}^n} \to \mathbb{R}$ belongs to agent $i$. All agents are embedded in a strongly connected directed graph. The goal of the $m$ agents is to collaboratively find the global minimizer of problem (1) through local communication with their neighbors. Assuming that each agent $i$ has a local copy $s^i$ of the global decision variable $\tilde{s}$, problem (1) can be equivalently written as:

    ${\min }\limits_{s \in {\mathbb{R}^{nm}}} f( s ) = \sum\limits_{i = 1}^m {{f_i}( {{s^i}} )}$                                                                       (2)

    ${\rm{s.t.}} \ {s^i} = {s^j}, \ \forall i,j = 1, \ldots ,m,$

    where $s = {[ {(s^1)^{\mathsf{T}},(s^2)^{\mathsf{T}}, \ldots ,(s^m)^{\mathsf{T}}} ]^{\mathsf{T}}} \in {\mathbb{R}^{nm}}$. We propose a unified and accelerated version of distributed gradient methods to solve the problem (2). 

          Since the agents exchange information over a directed graph, we consider the row-stochastic weight matrix $W = [ {{w_{ij}}} ] \in {\mathbb{R}^{m \times m}}$ associated with the graph, which satisfies the following conditions:

    $w_{ij} = \begin{cases} >0, & j \in \mathcal{N}^i_{in} \cup \{i\}, \\ 0, & \text{otherwise}, \end{cases} \qquad \sum\limits_{j = 1}^m w_{ij} = 1, \ \forall i \in \mathcal{V},$

    where $\mathcal{N}^i_{in}$ is the in-degree set of agent $i$. Define the matrix $L = [ l_{ij}] = I_m - W$ and the matrix $B  = [ b_{jl}] \in {\mathbb{R}^{m\times m}}$ with $B1_m = b1_m$ for some $b \in \mathbb{R}$.

          With the help of the row-stochastic weight matrix and Nesterov's momentum acceleration technique, we propose a unified and accelerated version of distributed gradient methods (UADM). In our algorithm, each agent $i \in \mathcal{V}$ stores four variables $s_k^i \in {\mathbb{R}^n},x_k^i \in {\mathbb{R}^n},y_k^i \in {\mathbb{R}^m},u_k^i \in {\mathbb{R}^n}$. For $k \geq 0$ and all $i \in \mathcal{V}$, UADM is initialized with $x_0^i = s_0^i \in {\mathbb{R}^n},y_0^i = {e_i}\in {\mathbb{R}^m},u_0^i = 0_n$ and updates its variables as follows:

    $x_{k + 1}^i = \sum\limits_{j = 1}^m {{w_{ij}}s_k^j}  - {\alpha _i}\left( {\frac{{\nabla {f_i}( {s_k^i})}}{{{{[{y_k^i}]}_i}}} + u_k^i} \right)$                                                  (3a)

    $s_{k + 1}^i = x_{k + 1}^i + \beta_i ( {x_{k + 1}^i - x_k^i})$                                                           (3b)

    $y_{k + 1}^i = \sum\limits_{j = 1}^m {{w_{ij}}y_k^j}$                                                                    (3c)

    $u_{k + 1}^i = u_k^i - \sum\limits_{j = 1}^m {{l_{ij}}} \left( { \frac{{\nabla {f_j}({s_k^j})}}{{{{[ {y_k^j} ]}_j}}}  + u_k^j - \sum\limits_{l = 1}^m {{b_{jl}}s_k^l} }\right)$                                         (3d)

    where $\alpha_i > 0$ is the local step-size and $ 0 \leq \beta_i< 1 $ is the local momentum coefficient.
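    For concreteness, a direct NumPy transcription of the updates (3a)-(3d) is given below (our illustration only; the gradient oracles, weight matrices, step-sizes and momentum coefficients are placeholders, not the values required by the convergence theory):

```python
import numpy as np

def uadm(grads, W, B, alpha, beta, s0, iters=200):
    """One-to-one transcription of updates (3a)-(3d).
    grads[i](s) returns the gradient of f_i at s; W is row-stochastic, L = I_m - W,
    and B satisfies B 1_m = b 1_m for some scalar b."""
    m, n = s0.shape
    Lmat = np.eye(m) - W
    S, X = s0.copy(), s0.copy()   # row i holds s_k^i and x_k^i
    Y = np.eye(m)                 # row i holds y_k^i, initialized at e_i
    U = np.zeros((m, n))          # row i holds u_k^i, initialized at 0_n
    for _ in range(iters):
        G = np.array([grads[i](S[i]) / Y[i, i] for i in range(m)])  # rescaled local gradients
        X_new = W @ S - alpha[:, None] * (G + U)                    # (3a)
        S_new = X_new + beta[:, None] * (X_new - X)                 # (3b)
        Y_new = W @ Y                                               # (3c)
        U_new = U - Lmat @ (G + U - B @ S)                          # (3d)
        S, X, Y, U = S_new, X_new, Y_new, U_new
    return S
```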

         When the parameters $\beta_i, b_{jl}$ in UADM take different values, UADM includes the accelerated method FROZEN as a special case and yields some new accelerated methods, such as accelerated versions of FROST and Li-Row. In contrast to methods using column-stochastic weight matrices, UADM, being based on row-stochastic weight matrices, does not require each agent to know its out-degree and is easier to implement in a distributed setting, because each agent can assign appropriate weights to the information received from its in-neighbors.

         If the step-sizes and the momentum coefficients satisfy some upper bounds, we prove that UADM has a linear convergence rate for smooth and strongly convex problems. The convergence result of UADM unifies the convergence results of distributed exact first-order algorithms, Li-Row, FROST and FROZEN.

        Some numerical experiments on distributed quadratic programming show that UADM significantly outperforms the non-accelerated algorithm (Li-Row) as the condition number increases. Some numerical experiments on distributed logistic regression problems demonstrate that our algorithm can achieve accelerated convergence in comparison with some existing distributed algorithms.

  • Xiaoli Huang, Yuelin Gao, Xiaohua Ma, Xia Liu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-004
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: This paper investigates an effective branch-and-bound algorithm for solving the generalized linear fractional multiplicative programming (GLFMP) problem. Initially, leveraging the structure of GLFMP, some variables are introduced to transform it into an equivalent problem. Subsequently, bidirectional linear relaxation is applied to the constraint functions of the equivalent problem, resulting in its linear relaxation problem, which is embedded into the branch-and-bound framework. Furthermore, incorporating a tailored region reduction technique, we propose an output space branch-and-bound optimization algorithm. Additionally, the convergence and complexity of the algorithm are analyzed. Finally, the feasibility and effectiveness of the algorithm are validated using GLFMP instances ranging from specific to general cases.

         This paper considers the following generalized linear fractional multiplicative programming problem:

    GLFMP: $\begin{align*}\left\{\begin{aligned}&\min \;\; f(x)=\sum_{i=1}^p\gamma_i\,\prod_{j=1}^{N_i}(f_{ij}(x))^{\sigma_{ij}}\\&\mbox{s.t. } x\in X=\{x\in\mathbb{R}^n|Ax\leq b,x\geq 0\},\end{aligned}\right.\end{align*}$

    where $f_{ij}(x)=\frac{n_{ij}^{\top}x+\alpha_{ij}}{d_{ij}^{\top}x+\beta_{ij}}$, $p$ and $N_{i}$ are natural numbers, $n_{ij}, d_{ij}\in \mathbb{R}^{n}$, $\alpha_{ij}, \beta_{ij}, \sigma_{ij}\in \mathbb{R}$, $A\in \mathbb{R}^{m\times n}$, $b\in \mathbb{R}^{m}$, and $X$ is a non-empty, bounded and closed set. Assume that for any $x\in X$, $\min\{n_{ij}^{\top}x+\alpha_{ij}, d_{ij}^{\top}x+\beta_{ij}\}>0$ holds. Furthermore, when $\sigma_{ij}<0$, $\left(f_{ij}(x)\right)^{\sigma_{ij}}=\left(\frac{1}{f_{ij}(x)}\right)^{-\sigma_{ij}}$. Thus, for each $i\in I=\{1,...,p\}$ and $j\in J_{i}=\{1,...,N_{i}\}$, assuming $\sigma_{ij}>0$ entails no loss of generality. If ``min" in GLFMP is replaced by ``max", then

    $\min_{x\in X}\sum_{i=1}^{p}\gamma_{i}\prod_{j=1}^{N_{i}}\left(f_{ij}(x)\right)^{\sigma_{ij}}=-\max_{x\in X}\sum_{i=1}^{p}(-\gamma_{i})\prod_{j=1}^{N_{i}} \left(f_{ij}(x)\right)^{\sigma_{ij}},$

    which indicates that the maximization version of GLFMP can be transformed into the form of GLFMP.

         An increasing number of scholars and experts are developing a keen interest in GLFMP and its variants. One reason is that GLFMP and its variants have strong practical applications in many areas. On the other hand, the complex structure of GLFMP and its variants makes them non-convex problems, which are often NP-hard, with multiple local optima that need not be global. This significantly hinders the search for a global optimum. Nevertheless, researchers have risen to the challenge and proposed numerous optimization algorithms.

         The main purpose of this article is to design an output space branch-and-bound global optimization algorithm for the general form of GLFMP. In pursuit of this goal, we first introduce some variables and transform GLFMP through two resets to obtain its equivalent problem (EGLFMP). Subsequently, we approximate the constraints to obtain the linear relaxation problem of EGLFMP. Furthermore, in order to remove, to a greater extent, rectangular regions that do not contain a global optimal solution, a region reduction technique is devised based on the structural characteristics of the equivalent problem and the relaxation problem, aiming to speed up the convergence of the algorithm. Integrating the linear relaxation programming, the region reduction technique, and the branching strategy into the branch-and-bound framework yields an output space branch-and-bound algorithm distinct from existing algorithms. This algorithm can solve not only GLFMP but also the special variants of GLFMP mentioned earlier. Finally, we conduct a convergence analysis of the proposed algorithm and estimate the maximum number of iterations in the worst case. Numerous numerical experimental results are used to demonstrate the performance of the algorithm.


  • Longhui Liu, Congying Han, Tiande Guo
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-005
    Accepted: 2025-02-17

    (Communicated by Xinmin Yang)

    Abstract: In this paper, we concentrate on a broad class of large-scale nonconvex and nonsmooth optimization problems. We first propose a novel inertial stochastic Bregman proximal alternating linearized minimization algorithm (TiSBPALM), which employs variance-reduced stochastic gradient estimators. Subsequently, under the assumption that the objective function satisfies the Kurdyka-Łojasiewicz property and certain conditions on the parameters are imposed, we prove that the sequence generated by our algorithm converges to a critical point in expectation. Additionally, we provide the convergence rate for the iteration sequence. Finally, we conduct numerical experiments on sparse nonnegative matrix factorization and blind image-deblurring to verify the effectiveness of our proposed algorithms.

  • Jian Liu, Ziheng Su, Huifu Xu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-006
    Accepted: 2025-02-17

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)


    Abstract: Inspired by the recent work of Shapiro et al., we propose a Bayesian distributionally robust Nash equilibrium (BDRNE) model where each player lacks complete information on the true probability distribution of the underlying exogenous uncertainty represented by a random variable and subsequently determines the optimal decision by solving a Bayesian distributionally robust optimization (BDRO) problem under the Nash conjecture. Unlike most of the DRO models in the literature, the BDRO model assumes that (a) the true unknown distribution of the random variable can be approximated by a randomized parametric family of distributions, (b) the average of the worst-case expected value of the objective function with respect to the posterior distribution of the parameter, instead of the worst-case expected value of the objective function, is considered in each player's decision making, and (c) the posterior distribution of the parameter is updated as more and more sampling information of the random variable is gathered. Under some moderate conditions, we demonstrate the existence of a BDRNE and derive asymptotic convergence of the equilibrium as the sample size increases. Moreover, we propose to solve the BDRNE problem by a Gauss-Seidel-type iterative method in the case where the ambiguity set of each player is constructed via the Kullback-Leibler (KL) divergence. Finally, we apply the BDRNE model to a price competition problem under multinomial logit demand. The preliminary numerical test results show that the proposed model and computational scheme perform well.

  • Yu-Lin Chang, Jein-Shan Chen, Chu-Chin Hu, Wei-Cheng Hu, Ching-Yu Yang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-007
    Accepted: 2025-02-17

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)

    Abstract: Mean inequalities on the second-order cone have been studied as an extension of those on the SPD cone. In this article, we turn our attention to non-symmetric cones. In fact, we investigate two types of decompositions associated with circular cones and establish their own mean inequalities. These inequalities are building blocks for further study of circular cone optimization. We also find that, under the condition $0<\theta<\frac{\pi}{4}$, some inequalities cannot hold if a different decomposition is applied, and correspondingly we raise a conjecture.

  • Alexander J. Zaslavski
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2025-008
    Accepted: 2025-02-17

    (Communicated by Liqun Qi)

    (Dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday)

    Abstract: In this work, using a subgradient algorithm, we study a minimization problem with a convex objective function on a domain which is the solution set of a common fixed point problem with a countable family of quasi-nonexpansive mappings. Our algorithm generates a sequence of iterates which are approximate solutions of the corresponding fixed point problem. Additionally, either this sequence converges to a solution of our minimization problem or the sequence is strictly Fejér monotone with respect to the solution set of the common fixed point problem.

  • Zihao Jia, Youlin Shang, Yan Feng, Yufei Ren
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-023
    Accepted: 2024-10-12

    (Communicated by Xinmin Yang)

    Abstract: A large number of problems in the social and natural sciences can be cast as global optimization problems, which are widely found in economic models, finance, network transportation, databases, integrated circuit design, image processing, chemical engineering design and control, molecular biology, environmental engineering, etc. At present, the methods for solving global optimization problems can be divided into two categories, namely deterministic algorithms and stochastic algorithms. Stochastic algorithms, such as genetic algorithms and simulated annealing, describe the iterative process through a sequence of points generated by a probabilistic mechanism. Deterministic algorithms use the analytic properties of the problem to generate deterministic finite (or infinite) sequences of points that converge to a global optimal solution, such as the interval method, D.C. programming algorithms, the branch and bound method, auxiliary function methods and the integral level set method. Among them, the filled function method is a deterministic method that searches for a global solution of the optimization problem. Renpu Ge took the lead in proposing the filled function method [1]. The main idea is to use the filled function to jump out of the current local minimizer and find a better one. The filled function method consists of two phases, a local minimization phase and a filling phase. In the minimization phase, a local minimization algorithm is used to obtain a local minimizer of the original problem. In the filling phase, the filled function constructed at this minimizer is minimized in order to escape it, and the two phases alternate until a global minimizer is found.

         The following global optimization problem will be discussed

    min $f(x)$,

    s.t.  $x \in \mathbb{R}^n$.                                                                           (1)

    Ge states that if a function satisfies the filled function definition and the filling property, then it can be used to solve problem (1). The first-generation filled function given in [1] is as follows:

    $G(x,x_k^*,r,\rho)=\frac1{r+f(x)}\exp(-\frac{\|x-x_k^*\|^2}{\rho^2})$,                                                   (2)

    where $r$ and $\rho$ are given parameters. The selection of the parameters in formula (2) is quite difficult, which is a challenge when coming up with new filled function methods. The filled function in equation (2) has some shortcomings.

    These shortcomings have been discussed in depth and are mainly related to the two parameters $r$ and $\rho$ contained in the function. According to Gao, Ge's filled function contains the exponential term $\exp(-\frac{\|x-x_k^*\|^2}{\rho^2})$, so the graph of the filled function is almost flat when $\|x-x_k^*\|^2$ is large or $\rho$ is small. This makes it difficult for the computer to discern changes in function values.
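    A minimal sketch of formula (2) that makes this flatness visible numerically (our illustration; the two-dimensional test function and the sample points are arbitrary choices):

```python
import numpy as np

def ge_filled_function(x, x_star, r, rho, f):
    """Ge's first-generation filled function (2): G = exp(-||x - x*||^2 / rho^2) / (r + f(x))."""
    return np.exp(-np.linalg.norm(x - x_star) ** 2 / rho ** 2) / (r + f(x))

# Illustration of the flatness discussed above: far from x*, or for small rho, the exponential
# term (nearly) underflows, so G barely changes and gradient information becomes useless.
f = lambda x: (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2       # toy objective with minimizers (+-1, 0)
x_star = np.array([1.0, 0.0])                           # a local minimizer of the toy objective
a, b = np.array([3.0, 0.0]), np.array([3.1, 0.0])       # two nearby points away from x*
for rho in (10.0, 1.0, 0.1):
    ga = ge_filled_function(a, x_star, r=1.0, rho=rho, f=f)
    gb = ge_filled_function(b, x_star, r=1.0, rho=rho, f=f)
    print(rho, abs(ga - gb))                            # the gap collapses as rho decreases
```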

    In this paper, a new parameter-free filled function is constructed. It satisfies the definition of a filled function and has excellent properties. Then, the corresponding algorithm is designed according to the new filled function. Numerical experiments and comparisons on several test problems are presented, which exhibit the feasibility and effectiveness of the algorithm.



  • Qiao Chen, Liping Tang, Ke Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-024
    Accepted: 2024-10-12

    (Communicated by Xinmin Yang)

    Abstract: Global optimization problems are a common issue in numerous fields, such as molecular biology, economics and finance, environmental engineering, and industrial manufacturing. If a global optimization problem is a convex program, we can use a local minimization algorithm to find its global optimal solution. Unfortunately, in most cases these problems are non-convex optimization problems with multiple minimum points, which raises a scientific question: how to find a better minimizer from a known local minimizer of the objective function.

    Traditional nonlinear programming algorithms, such as quasi-Newton methods, cannot directly solve this puzzle. Therefore, it is a great challenge to find the global minimizer of a non-convex objective function. In recent years, many studies have focused on global optimization theory and algorithms. In general, global optimization methods can be divided into two categories: stochastic methods and deterministic methods. The stochastic methods for global optimization include evolutionary algorithms, simulated annealing, and particle swarm optimization, among others. These methods are inspired by physical principles or natural evolution, and jump out of a local minimum by a probability-based approach. Although the stochastic methods have been effectively applied to solve many intricate problems, some issues remain unsolved, such as convergence analysis and parameter selection. The deterministic methods, for example the filled function method, the tunnelling method and the covering method, can converge faster and often quickly find a better minimizer than the current one. Compared with other deterministic methods, the filled function method is a more popular global optimization method because it is relatively easy to implement. The basic idea of the filled function is to construct an auxiliary function at the current local minimizer of the objective function. By minimizing the auxiliary function, the current local minimizer is quickly escaped and a better local minimizer is reached. This process repeats until a better local minimizer cannot be found.
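    To fix ideas, a generic two-phase skeleton of a filled function method is sketched below (our illustration; the auxiliary function `filled` is left abstract, since the paper's new one-parameter filled function is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_method(f, filled, x0, n_restarts=20, step=0.5, seed=0):
    """Generic two-phase loop: (1) local minimization of f, (2) minimization of the
    auxiliary (filled) function built at the current local minimizer to escape it.
    `filled(x, x_star)` is any auxiliary function satisfying a filled-function definition."""
    rng = np.random.default_rng(seed)
    x_star = minimize(f, x0, method="BFGS").x               # phase 1: local minimization
    while True:
        improved = False
        for _ in range(n_restarts):                          # try several directions around x_star
            x_init = x_star + step * rng.standard_normal(x_star.size)
            y = minimize(lambda x: filled(x, x_star), x_init, method="BFGS").x   # phase 2: fill
            z = minimize(f, y, method="BFGS").x              # phase 1 again from the escape point
            if f(z) < f(x_star) - 1e-8:
                x_star, improved = z, True
                break
        if not improved:                                     # no better minimizer found: stop
            return x_star
```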

    The first filled function, introduced by Ge, is as follows:

    $P(x,x_{1}^*, r, \rho)=\frac{1}{r+F(x)}\text{exp}(-\frac{\|x-x_{1}^{\ast}\|^{2}}{\rho^{2}}),$

    where the parameters $r$ and $\rho$ need to be chosen appropriately. However, Ge's filled function has some shortcomings. Because there are too many restrictions on the parameters $r$ and $\rho$, it is difficult to select them properly so that $P(x,x_{1}^*, r, \rho)$ satisfies the definition of the filled function. Constructing a filled function with good properties is one of the keys to the success of a filled function algorithm. Therefore, researchers have done a lot of beneficial work on enriching the forms of filled functions. These filled functions indeed possess several merits, yet they also exhibit drawbacks that necessitate further refinement and improvement. The adjustment of parameters for some two-parameter or single-parameter filled functions is still difficult. Non-smooth, non-differentiable or even discontinuous filled functions cannot be minimized by efficient local optimization procedures based on gradient information, which decreases the efficiency of the filled function algorithm. To address the difficulties in adjusting parameters, some scholars have proposed parameter-free filled functions. However, some parameter-free filled functions do not carry any information about the objective function when $f(x)\geq f(x_{1}^{*})$, which may lead to difficulties in finding the global minimum point. In addition, without the adjustment of parameters, some functions proposed in the literature may fail to satisfy the definition of the filled function.

        Based on the above considerations, a new one-parameter filled function is proposed in this paper. The main contributions of this work are summarized below:

        (i) The new filled function is continuously differentiable and its gradient information can be used in local minimization procedure. Thus, the realization of the minimization process is simplified, and the algorithm is more efficient and convenient.

          (ii)  In any case, our proposed filled function always contains information about the objective function.

         (iii)  The new filled function has an easily adjustable parameter that can be tuned according to the specific problem, enabling the algorithm to adapt to the problem and avoiding issues similar to those encountered in the literature.

    Next, we conduct a theoretical analysis of the properties of the new filled function. These properties ensure the convergence of the minimization process of the proposed filled function, so that the minimization iterations of the new filled function algorithm run successfully over the whole region. Without these properties, the filled function may behave similarly to some functions in the literature, having no local minima within the entire feasible region. Furthermore, supported by this theoretical analysis, we design a filled function method that is not only easy to implement but also highly efficient. Through a series of numerical experiments, we verify the effectiveness and reliability of the new algorithm. In comparative tests with two other filled function methods, our algorithm demonstrates clear advantages.


  • Feifei Shen, Liu Yang, Jiaxin Yang, Xiaojiao Tong
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-016
    Accepted: 2024-09-04
    (Communicated by Guihua Lin)
    Abstract: With the development of financial investment risk management, utility-based shortfall risk measures have received increasing attention for their potential to quantify the risk of large tail losses more effectively than conditional value at risk. For portfolio selection, transaction costs incurred by changes of investment proportions on risky assets have a significant impact on the investment strategy and the return. In this paper, we discuss a distributionally robust portfolio selection problem with shortfall risk measures and investigate the impact of transaction costs on portfolio selection. We first construct the distributionally robust shortfall risk (DRSR) portfolio models without and with transaction costs. Then we construct moment ambiguity sets based on empirical data for the unknown probability distribution functions of the random variables and discuss the convergence of the optimal values and solutions of the models as the sample size increases. Under the condition that the loss function is a piecewise affine function, we transform the models into equivalent semi-definite programming problems. Numerical tests illustrate the validity of our models. Our main contributions can be summarized as follows:
         (i) We construct the DRSR portfolio selection models without and with transaction costs based on the moment ambiguity set, and discuss the convergence of the optimal values and solutions of the models with moment ambiguity sets based on empirical data as the sample size increases.
         (ii) We use a linearization method to convert the V-shaped transaction cost function into inequality constraints, and this method overcomes the nonsmoothness of the transaction cost function. Then we transform the DRSR portfolio selection models into computable equivalent semi-definite programming (SDP) problems by using moment theory and duality theory.


        (iii) Based on actual stock data, the influence of different parameters (such as the transaction cost rate, risk level, ambiguity set parameters, sample sizes, etc.) on the optimal solution and optimal value of the models is analyzed. The validity and convergence of the new models are verified.


  • Annamaria Barbagallo
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-017
    Accepted: 2024-09-04
    (Communicated by Andreas Fischer )
    Abstract: In this paper, a link-based emission pollution permit system for dynamic transportation network equilibrium problems is considered. The governing equilibrium conditions for this model are given. Then, their time-dependent variational formulation is obtained. Thanks to that, existence and continuity results for equilibrium distributions are established. Finally, the existence of Lagrange multipliers is proved, and the behavior of the model is analyzed.
  • Danni Xu, Suxiang He, Haibin Chen
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-018
    Accepted: 2024-09-04
    (Communicated by Zheng-Hai Huang)
    Abstract: In this paper, we consider the bi-quadratic optimization problem with a partially symmetric fourth-order tensor. When the objective function is concave, it has the same optimal value as a multilinear optimization problem. Then, the relation between the augmented bi-quadratic optimization problem and the original problem is established by limiting the domain of the solution to a unit sphere. A fast alternating proximal gradient method is proposed to find the approximate optimal value of the augmented bi-quadratic optimization problem. Furthermore, we present an enhanced block improvement method to obtain the approximate optimal value of the original problem. Finally, we perform convergence analysis and numerical experiments on the method to show its efficiency.
  • Jianchao Bai, Yang Chen, Yuxue Ma
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-019
    Accepted: 2024-09-04
    (Communicated by Xinmin Yang)
    Abstract: In this paper, we develop an Inexact Relaxed Augmented Lagrangian Method (IR-ALM) for solving a class of convex optimization problems. Flexible relative error criteria are designed for approximately solving the resulting subproblem, and a relaxation step is exploited to accelerate its convergence numerically. By a unified variational analysis, we establish the global convergence of this IR-ALM and its sublinear convergence rate in terms of the primal iterative residual, the objective function value gap and constraint violation, respectively. Experimental results on testing the image restoration problem with different types of images show that the proposed method is effective.
  • Qiao Zhu, Liping Tang, Yibin Xiao
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-020
    Accepted: 2024-09-04
    (Communicated by Xinmin Yang)
    Abstract: The present paper proposes a new piecewise convexification method to approximate the globally efficient solution set of non-convex multi-objective optimization problems (NC-MOP) with box constraints. Initially, we utilize the α-based Branch-and-Bound (αBB) method to formulate the piecewise convexification problem for NC-MOP, which consists of a series of convex relaxation optimization problems. Subsequently, we introduce a box classification strategy to define the efficient solution set of the piecewise convexification problem and demonstrate its effective approximation of the globally efficient solution set of NC-MOP. Finally, a new two-stage piecewise convexification algorithm combining αBB and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) is proposed, which incorporates novel techniques such as box partitioning, deletion methods, and termination tests. Moreover, the effectiveness of our approach compared to a competitor is validated through experimental results.
  • Xianjun Long, Xiaoting Wang, Gaoxi Li
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-021
    Accepted: 2024-09-04
    (Communicated by Donghui Li)
    Abstract: In this paper, we study a class of fractional optimization problems whose numerator is the sum of a nonsmooth convex function and a twice differentiable convex function, and whose denominator is a twice differentiable concave function. We propose an inexact proximal quasi-Newton algorithm with line search for solving this problem. Under moderate conditions, we prove that the sequences generated by the proposed algorithm converge to optimal solutions. In addition, the inexact proximal quasi-Newton algorithm exhibits an O(1/k) global convergence rate, where k denotes the iteration counter. Finally, two numerical experiments illustrate the effectiveness and competitiveness of the proposed algorithm.
  • Yazheng Dang, Jingwen Zhang, Jinglei Lu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-022
    Accepted: 2024-09-04
    (Communicated by Mabel Chou)
    Abstract: The symmetric alternating direction method of multipliers (ADMM) for solving nonconvex separable optimization problems has been widely studied. However, the symmetric ADMM may fail to converge when the optimization problem is nonconvex, nonsmooth and nonseparable. This paper proposes a proximal symmetric ADMM for nonconvex nonseparable optimization, which employs a linearization technique in each subproblem and introduces a relaxation factor in the Lagrangian multiplier update step. In addition, when the potential function satisfies the Kurdyka-Łojasiewicz (KL) property, the convergence of the algorithm is established. Finally, numerical experiments verify the effectiveness of the proposed method.
  • Ying Gao, Yunfei Qu, Chunfeng Cui, Deren Han
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-010
    Accepted: 2024-03-11

    (Communicated by Zheng-Hai Huang)

    Overfitting is a common phenomenon in machine learning, wherein models fit the samples in the training set well but have poor generalization ability on the test set. Regularization is devoted to tackling this problem by imposing a penalty on the complexity or smoothness of the model. However, the performance of regularization is usually circumscribed by the lack of correlation with the data samples, which restricts its potential efficiency for many practical models. In this paper, pursuing the seminal work by Zhu et al. (LDMNet), we develop a coupled tensor norm regularization. It can be customized to models with small-sized structural samples. The main idea of this regularization, which is built upon the empirical manifold observation that input data and output features have a low-dimensional structure, is an alternative representation of low-dimensionality. Concretely, the coupled tensor norm regularization is a low-rank approximation of the coupled tensor rank function. Related theoretical properties are presented, and we further test this regularization for multinomial logistic regression and deep neural networks through theoretical algorithm analysis and numerical experiments. Numerical simulations on real datasets demonstrate the compelling performance of the proposed regularization.

  • Yu You, Jirui Ma
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-011
    Accepted: 2024-03-11

    (Communicated by Guihua Lin)

        We consider the optimization problem of minimizing a smooth and convex function. Based on the accelerated coordinate descent method (ACDM) using probabilities $L_i^{1/2}[\sum_{k=1}^n L_k^{1/2}]^{-1}$ for non-uniform sampling (Nesterov Yu. et al., SIAM J. Optim., 110–123, 2017 [3]), we propose an adaptive accelerated coordinate descent method (AACDM) with the same probability distribution determined by $\{L_i\}$ as in ACDM.

    In [1, 3], the step sizes of their algorithms are fixed and determined by the (global) parameters $\{L_i\}$. Note that this may not be preferable for practical applications where the (local) parameter values differ from their global counterparts to some extent. This implies that methods which can adapt to the local parameters might improve the performance in practice. Motivated by this, in this paper we study the adaptive ACDM, which still requires the (global) Lipschitz constants for non-uniform sampling as a prior, while the (local) coordinate Lipschitz constants are determined by backtracking (not necessarily monotone) to achieve better performance. Both the strongly convex and non-strongly convex cases are discussed in this paper.

    The non-monotone backtracking line search is included in our adaptive scheme; it performs better (compared with the monotone one) for applications whose local coordinate Lipschitz constants oscillate along the trajectory or become smaller when approaching the tail. The adaptive ACDM is indeed not a monotone method, meaning that the sequence of function values it produces is not necessarily nonincreasing. Since the monotone approach can be used to improve numerical stability (see monotone FISTA in [2]), we also propose a monotone version of the adaptive ACDM.
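    A minimal sketch of the two ingredients discussed above, namely non-uniform sampling with probabilities $L_i^{1/2}[\sum_{k=1}^n L_k^{1/2}]^{-1}$ and a (non-monotone) backtracking estimate of the local coordinate Lipschitz constant, shown for a plain coordinate descent step rather than the full accelerated recursion (our illustration only):

```python
import numpy as np

def coordinate_descent_backtracking(f, grad_i, L, x0, iters=1000, seed=0):
    """f: objective; grad_i(x, i): i-th partial derivative; L: global coordinate Lipschitz constants.
    Sampling uses p_i ~ sqrt(L_i); the step uses a local estimate M_i refined by backtracking."""
    rng = np.random.default_rng(seed)
    p = np.sqrt(L) / np.sqrt(L).sum()          # non-uniform sampling distribution
    M = L.copy() / 4.0                         # optimistic local estimates (allowed to grow back)
    x = x0.astype(float).copy()
    for _ in range(iters):
        i = rng.choice(len(x), p=p)
        g = grad_i(x, i)
        while True:                            # backtracking on the i-th coordinate Lipschitz constant
            x_trial = x.copy()
            x_trial[i] -= g / M[i]
            if f(x_trial) <= f(x) - g * g / (2.0 * M[i]) + 1e-15:
                break
            M[i] *= 2.0
        x = x_trial
        M[i] = max(M[i] / 2.0, 1e-12)          # non-monotone: let the estimate decrease again
    return x
```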

        Numerical results on some classic problems show the efficiency of the adaptive scheme.


  • Hao Wu, Liping Wang, Hongchao Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-013
    Accepted: 2024-03-11

    (Communicated by Xinwei Liu)

        This paper introduces a novel conjugate gradient method that exploits the m-th order Taylor expansion of the objective function and cubic Hermite interpolation conditions. We derive a set of modified secant equations with enhanced accuracy in approximating the Hessian matrix of the objective function. Additionally, we develop a modified Wolfe line search to address the limitations of the conventional constraint imposed on modified secant equations while ensuring the fulfillment of the curvature condition. Consequently, an improved spectral conjugate gradient algorithm is proposed based on the modified secant equation and Wolfe line search. Under standard assumptions, the algorithm is proven to be globally convergent for minimizing general nonconvex functions. Numerical results are provided to demonstrate the effectiveness of this new proposed algorithm.       

  • Hui Zhang, Xin Liu, Naihua Xiu
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-014
    Accepted: 2024-03-11

    (Communicated by Zheng-Hai Huang)

        Recently, many new challenges in Compressed Sensing (CS) arose. In this paper, we present a new algorithm for solving CS with block sparse constraints (BSC) in complex fields. Firstly, based on block sparsity characteristics, we propose a new model to deal with CS with BSC and analyze the properties of the functions involved in this model. We then present a new  -stationary point and analyze corresponding first-order sufficient and necessary conditions. We further develop a block Newton hard-thresholding pursuit (BNHTP) algorithm for efficiently solving CS with BSC. Finally, numerical experiments demonstrate that the BNHTP algorithm has superior performance in terms of recovery accuracy and time.

  • A. Rikouane, M. Laghdir, M.B. Moustaid
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-007
    Accepted: 2024-01-28

    (Communicated by Nobuo Yamashita)

     In this paper, in the absence of any constraint qualifications, we develop sequential necessary and sufficient optimality conditions for a constrained multiobjective fractional programming problem, characterizing a Henig proper efficient solution in terms of the $\epsilon$-subdifferentials and the subdifferentials of the functions. This is achieved by employing a sequential Henig subdifferential calculus rule for the sum of $m\ (m\geq 2)$ proper convex vector-valued mappings with a composition of two convex vector-valued mappings. In order to present an example illustrating our results, we establish the classical optimality conditions under the Moreau-Rockafellar qualification condition. Our results are presented in the setting of reflexive Banach spaces in order to avoid the use of nets.

  • Yixuan Chen, Haitao Che, Haibin Chen
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-009
    Accepted: 2024-01-28

    (Communicated by Guanglu Zhou)

        Consider the linear equation

    $Kx = y,$                                                                                    (1)

    where $x\in\mathbb{R}^n $ is the original signal or image, $K:\mathbb{R}^n\rightarrow \mathbb{R}^m$ is a degradation operator, and $y\in\mathbb{R}^m$ is the observed signal or image. It has some important applications to signal processing, non-destructive testing, etc. The typical $\ell_1$-norm sparse regularization method does not have beneficial stability for ill-posed problems with a large condition number. What is more, in many applications, the solution $x$ of problem (1) is not really sparse. To overcome the shortcomings of typical sparsity regularization, some researchers have focused on the elastic-net regularization problem, that is,

    $\min_{x\in \mathbb{R}^n} J(x)=\frac{1}{2}\|Kx-y\|^2+\alpha\|x\|_{1}+\frac{\beta}{2}\|x\|^2,$                                                           (2)

    where $\alpha>0, \beta>0, $ $\|\cdot\|$ represents the $\ell_2$-norm and $\|\cdot\|_1$ represents the $\ell_1$-norm. As a type of multi-parameter Tikhonov regularization, elastic-net regularization is more applicable to ill-conditioned problems. Statistically speaking, elastic-net regularization has superior stability compared with the typical regularization. There is an additional $\ell_2$-norm term in elastic-net regularization compared to the classical $\ell_1$-norm sparsity regularization. Elastic-net regularization has the advantages of both $\ell_2$-regularization and $\ell_1$-regularization.

    Even though there exist various approaches to solve the elastic-net regularization problem, which mainly focus on the convergence of the algorithm, not much work has been carried out on the convergence time of the algorithm. Therefore, it is necessary to devise a method to estimate the convergence time. For (2), we can relax the problem to the following constrained optimization problem

    min $g(x)=\frac{1}{2} \|Kx-y\|^2+\frac{\beta}{2}\|x\|^2$

    s.t.  $x\in\Theta,$                                                                                                        (3)

    where $\Theta=\{x\in\mathbb{R}^n|\|x\|_{1}\leq r\}, r>0,$ and transform the problem (3) into the fixed point equation form

    $x=P_\Theta(x-\lambda K^T(Kx-y)-\lambda\beta x),$                                                          (4)

    where the stepsize $\lambda>0$. Obviously, the solution of problem (1) can be obtained by taking the solution of problem (3) when $\beta\rightarrow0$.

    Based on (4), we propose the following new continuous-time projection algorithm for solving the elastic-net regularization problem:

    $\dot{x}(t)=-\varphi(x)(x(t)-z(x(t))),$                                                                       (5)

    where

    $\varphi(x)=\left\{\begin{array}{l}\frac{k_1}{\|z(x(t))-x(t)\|^{1-\alpha_1}}+\frac{k_2}{\|z(x(t))-x(t)\|^{1-\alpha_2}}+\frac{k_3}{\|z(x(t))-x(t)\|},\mathrm{if}~ x\in\mathbb{R}^n\setminus \{x\in \mathbb{R}^n:x=z(x)\},\\0,~~~~~~~~~\mathrm{otherwise}, \end{array}\right. $

    $z(x(t))=P_\Theta(x(t)-\lambda(K^TKx(t)-K^Ty)-\lambda\beta x(t)),$ $k_1>0, k_2>0, k_3>0,\alpha_1\in(0,1),  \alpha_2>1, \lambda>0$ and $\beta>0$.
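    For orientation, here is a minimal NumPy sketch of the discrete fixed-point view (4) that underlies the dynamics (5) (our illustration; the gain $\varphi$, the fixed-time analysis and the settling-time bound are not reproduced), using the standard sort-based projection onto the $\ell_1$-ball $\Theta$:

```python
import numpy as np

def proj_l1_ball(v, r):
    """Euclidean projection onto {x : ||x||_1 <= r} (standard sort-based routine)."""
    if np.abs(v).sum() <= r:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, v.size + 1) > css - r)[0][-1]
    tau = (css[k] - r) / (k + 1)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def elastic_net_fixed_point(K, y, r, beta, iters=2000):
    """Iterate x <- P_Theta(x - lam*(K^T(Kx - y) + beta*x)), i.e. the fixed-point equation (4)."""
    L = np.linalg.norm(K, 2) ** 2 + beta              # L = ||K||^2 + beta, as in the text
    lam = beta / L ** 2                               # any lam in (0, 2*beta/L^2)
    x = np.zeros(K.shape[1])
    for _ in range(iters):
        x = proj_l1_ball(x - lam * (K.T @ (K @ x - y) + beta * x), r)
    return x
```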

    The equivalence between the equilibrium points of (5) and the solutions of (4) is established. Moreover, we prove that for every $\beta>0$ and $\lambda\in(0,\frac{2\beta}{L^2})$, if $x$ is an equilibrium point of (5), then for all $t\in[0,+\infty)$ the solution of (5) exists uniquely for any given initial value. Furthermore, the convergence time of the proposed method can be estimated by the settling time

    $T(x(0))\leq T_{\max}=\frac{1}{d_1(1-p_1)}+\frac{1}{d_2(p_2-1)},$

    where $k_1>0, k_2>0, k_3>0, \alpha_1\in(0,1), \alpha_2>1, L=\|K\|^2+\beta, \lambda\in(0,\frac{2\beta}{L^2}), \Phi=\sqrt{\frac{1}{1-\lambda^2 L^2+2\lambda\beta}}\in (0,1),$ $p_1:=\frac{1+\alpha_1}{2},$ $p_2:=\frac{1+\alpha_2}{2},$ $d_1=k_1(1-\Phi)\frac{2^{p_1}}{(\frac{4\beta}{4\beta-\lambda L^2})^{1-\alpha_1}}>0,$ $d_2=2^{p_1}(1-\Phi)^{\alpha_2}k_2>0. $

    To validate the convergence and effectiveness of the proposed algorithm, we apply it to some numerical examples in signal processing and image restoration. Compared with state-of-the-art methods in the literature, such as the CAPPA method, the THZF method and the PNN algorithm, our algorithm is more effective in terms of the mean squared error (MSE) in signal processing. Furthermore, our algorithm shows better performance with respect to the signal-to-noise ratio (SNR), the structural similarity index (SSIM) and the relative error (RE) in image restoration.

  • Zhibao Li, Deren Han, Guangming Zhou and Huan Gao
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2024-003
    Accepted: 2024-01-15

    (Communicated by Jie Sun)

    In this paper, we propose an extended CQ algorithm integrated with a selection technique to address the multiple-sets split feasibility problem (MSFP). At each iteration, the selection technique is employed to formulate a split feasibility subproblem of the MSFP, which is subsequently solved by means of the CQ algorithm. Under mild conditions, we establish global convergence results for the extended CQ algorithm. Furthermore, we provide numerical results which affirm the effectiveness and competitiveness of our proposed algorithm.
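    For readers unfamiliar with the CQ step on which each subproblem relies, the classical two-set CQ iteration is sketched below (our illustration, with a box and a Euclidean ball as stand-in sets; the selection technique for the MSFP itself is not sketched):

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=500):
    """Classical CQ iteration for the split feasibility problem: find x in C with Ax in Q.
    x_{k+1} = P_C(x_k - gamma * A^T (I - P_Q) A x_k), with gamma in (0, 2/||A||^2)."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.astype(float).copy()
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

# Example stand-ins: C a box and Q the unit Euclidean ball.
proj_C = lambda x: np.clip(x, -1.0, 1.0)
proj_Q = lambda z: z if np.linalg.norm(z) <= 1.0 else z / np.linalg.norm(z)
# Usage: x = cq_algorithm(A, proj_C, proj_Q, x0)
```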

  • Jing Tian, Shouqiang Du, Yuanyuan Chen
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-053
    Accepted: 2023-12-31

    (Communicated by Chen Ling)


    Traditional power systems face unprecedented challenges in energy, environment, and economy. At the same time, the fossil fuels used for traditional thermal power generation are increasingly depleted. Currently, in the context of carbon reduction and harmonious development of the environment, the environment-friendly smart grid is developing rapidly. A strong smart grid is based on a strong grid structure, supported by communication and information platforms, and controlled by intelligent means. It includes the various links of power generation, transmission, transformation, distribution, consumption, and dispatching in the smart grid.
    The smart grid is the inevitable result of economic and technological development; it employs advanced techniques to improve the performance of the electric power system in power utilization, power supply quality and reliability. The foundation of the smart grid comprises data transfer, computation and control techniques among several electricity-providing units. The smart grid has excellent features of energy saving, reliability, self-healing, full controllability and asset efficiency. So in recent years, the theory and methods of smart grids with the characteristics of high quality, reliability, self-healing, interaction, security and environmental protection have been studied.
    There exist two models for solving uncertain optimization systems: stochastic optimization and fuzzy optimization. In stochastic optimization problems, we need to know the distribution function of the system parameters. Similarly, in fuzzy optimization problems, the membership function of the system parameters is required to be known. However, in practical problems, the distribution function and the membership function are not always easy to obtain. In most cases, we only know the coefficients of the system at a certain point. The interval optimization model is more practical and easy to handle when there exist changes within an interval. Research on interval optimization has aroused widespread interest and achieved rich theoretical and practical results. In interval optimization methods, it is not necessary to know the specific numerical values of the parameters. They operate in the form of intervals and only need the range of variation, i.e., the upper and lower bounds, without precise numerical values. This greatly simplifies the preliminary data processing work. Interval optimization has been widely studied, especially its optimality conditions. Interval optimization is divided into single-objective interval optimization and multi-objective interval optimization. Interval optimization is increasingly used in practice, in fields such as economic planning, energy development, engineering design and environmental protection. Among the existing smart grid models, there is little research on smart grids with controllable supply, so this paper aims to study the problem of a smart grid with controllable supply. Therefore, interval optimization theory is applied to the real-time pricing problem of the smart grid for maximizing social welfare. The intelligent control of electricity generation in the model exhibits interval changes in reality, so we combine interval optimization with the analysis of the real-time pricing problem of the smart grid based on social welfare maximization, and add the change in electricity generation to the objective function of the smart grid model, which becomes an interval objective function. We propose a model of the smart grid with controllable supply based on maximizing social welfare. The model is transformed into a real-time pricing problem for the smart grid with interval changes. We transform the interval optimization problem into a real-valued optimization problem and solve it based on the Karush-Kuhn-Tucker (KKT) conditions. The KKT conditions for the smart grid with controllable supply based on social welfare maximization are also given.
        In the smart grid, there are currently three pricing mechanisms: time-of-use pricing, critical peak pricing, and real-time pricing. Unlike the first two, real-time prices are not pre-set but fluctuate continuously throughout the day, directly reflecting the relationship between the market price and the cost of electricity; real-time pricing is therefore an ideal mechanism for encouraging users to consume electricity more wisely and efficiently, and it has become a current research hotspot. In our approach, the KKT conditions are transformed into a nonsmooth system of equations, and a value function is introduced to recast this system as an unconstrained optimization problem. The real-time price of the smart grid with controllable supply is then obtained from the KKT conditions of the interval optimization problem.
       The Levenberg-Marquardt method is a widely used optimization method, with applications in economics, management, network analysis, optimal design, and mechanical and electronic engineering. In this paper, the Levenberg-Marquardt method is applied to solve the transformed problem of the smart grid with controllable supply, and its convergence is analyzed under mild conditions. Finally, numerical experiments show that the method effectively solves the real-time pricing problem of the smart grid; the resulting real-time prices achieve the effect of peak shaving and valley filling. This study further enriches research on real-time pricing of the smart grid based on social welfare maximization.
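    As an illustration of the computational step described above (not taken from the paper), the following minimal Python sketch applies a Levenberg-Marquardt iteration to a generic residual system F(z) = 0, such as one arising from a smoothed reformulation of KKT conditions; the Jacobian is approximated by finite differences, and the function names and the toy system are ours.

    import numpy as np

    def levenberg_marquardt(F, z0, mu0=1e-2, tol=1e-8, max_iter=100):
        """Minimal Levenberg-Marquardt sketch for a residual system F(z) = 0.

        The Jacobian is approximated by forward differences (e.g. for a smoothed
        equation reformulation); the damping mu is increased when a step fails
        to reduce ||F|| and decreased otherwise.
        """
        z = np.asarray(z0, dtype=float)
        mu = mu0
        for _ in range(max_iter):
            Fz = F(z)
            if np.linalg.norm(Fz) < tol:
                break
            # Forward-difference Jacobian approximation.
            n = z.size
            J = np.empty((Fz.size, n))
            h = 1e-7
            for j in range(n):
                e = np.zeros(n); e[j] = h
                J[:, j] = (F(z + e) - Fz) / h
            # LM step: solve (J^T J + mu I) d = -J^T F.
            A = J.T @ J + mu * np.eye(n)
            d = np.linalg.solve(A, -J.T @ Fz)
            if np.linalg.norm(F(z + d)) < np.linalg.norm(Fz):
                z, mu = z + d, max(mu * 0.5, 1e-12)   # accept step, relax damping
            else:
                mu *= 2.0                              # reject step, increase damping
        return z

    # Toy usage: a small nonlinear system with a root near (1, 1).
    if __name__ == "__main__":
        F = lambda z: np.array([z[0]**2 + z[1]**2 - 2.0, z[0] - z[1]])
        print(levenberg_marquardt(F, np.array([2.0, 0.5])))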


  • Huage Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-055
    Accepted: 2023-12-31

    (Communicated by Donghui Li)

         In this paper, we consider a sixth-order paired symmetric Cauchy tensor and its generating vectors. Initially, we investigate the conditions for positive definiteness and positive semidefiniteness of the sixth-order paired symmetric Cauchy tensor. Necessary and sufficient conditions for its positive definiteness are given according to the structural characteristics of a sixth-order paired symmetric Cauchy tensor. Subsequently, we apply the concept of the M-eigenvalue to the sixth-order paired symmetric Cauchy tensor, and further discuss related properties. We give two M-eigenvalue inclusion intervals for a sixth-order paired symmetric Cauchy tensor, which provide two upper bounds for the M-spectral radius. The inclusion relation between them is given. In the end, we provide two numerical examples of eigenvalue inclusion intervals, confirming the inclusion relationship of the intervals.

  • Xin Zhao, Anwa Zhou, Jinyan Fan
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-040
    Accepted: 2023-12-07
    (Communicated by Chen Ling)
          In this paper, we introduce the polynomial semidefinite complementarity problem. We formulate it equivalently as a polynomial optimization problem with both scalar polynomial constraints and polynomial matrix inequality constraints. If the problem has finitely many solutions, they can be computed sequentially, each obtained by Lasserre's hierarchy of matrix-type semidefinite relaxations. Under suitable assumptions, the asymptotic and finite convergence of this sequence of semidefinite relaxations is also proved. Numerical experiments show that the proposed method is efficient for test problems.
  • Tiange Li, Xiangyu Yang, Hao Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-041
    Accepted: 2023-12-07
    (Communicated by Hiroshi Yabe)
    Abstract: The iteratively reweighted $\ell_1$ (IRL1) algorithm is commonly employed for addressing nonconvex optimization problems mainly through solving a sequence of convex subproblems. In this paper, we propose an enhanced IRL1 algorithm tailored for addressing structured optimization problems involving nonconvex $\ell_{q,p}$ regularization. The key to its acceleration lies in a simple yet effective feature screening strategy. The strategy we propose involves an a priori screening test capable of identifying potential inactive groups before commencing the subproblem solver and also incorporates an efficient posterior Karush-Kuhn-Tucker (KKT) condition check procedure to ensure an optimal solution even if some screened variables are mistakenly removed. The a priori screening procedure is primarily achieved by exploiting the dual subproblem information during each iteration. Furthermore, we establish a theoretical proof that, within a finite number of IRL1 iterations, the screening test correctly removes all inactive variables. Numerical experiments showcase the significant computational advantages of our algorithm over several state-of-the-art algorithms.
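    To make the screening idea concrete, here is a minimal Python sketch (ours, not the paper's algorithm) of an IRL1 loop for a scalar $\ell_p$ penalty rather than the group $\ell_{q,p}$ penalty of the paper, with a heuristic correlation-based a priori screening test and a posterior KKT check on the screened coordinates; the subproblem solver, thresholds, and names are all illustrative.

    import numpy as np

    def soft(v, t):
        """Componentwise soft-thresholding operator."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def weighted_lasso_ista(A, b, lam_w, x0, active, n_iter=500):
        """Solve min 0.5||Ax-b||^2 + sum_i lam_w[i]|x_i| by ISTA,
        keeping coordinates outside `active` fixed at zero."""
        x = x0.copy()
        x[~active] = 0.0
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            x_new = soft(x - grad / L, lam_w / L)
            x_new[~active] = 0.0               # screened coordinates stay at zero
            x = x_new
        return x

    def irl1_with_screening(A, b, lam, p=0.5, eps=1e-3, outer=20, margin=0.9):
        """Hypothetical IRL1 sketch with a simple a priori screening test and a
        posterior KKT check, for min 0.5||Ax-b||^2 + lam*sum_i |x_i|^p."""
        n = A.shape[1]
        x = np.zeros(n)
        for _ in range(outer):
            # Reweighting step of IRL1 for the l_p penalty.
            w = p * (np.abs(x) + eps) ** (p - 1.0)
            lam_w = lam * w
            # A priori screening: drop coordinates whose current correlation is
            # safely below the weighted threshold (heuristic rule, not a certificate).
            corr = np.abs(A.T @ (b - A @ x))
            active = corr >= margin * lam_w
            active |= np.abs(x) > 0            # never screen currently nonzero coordinates
            x = weighted_lasso_ista(A, b, lam_w, x, active)
            # Posterior KKT check on screened coordinates of the convex subproblem.
            resid_corr = np.abs(A.T @ (b - A @ x))
            violated = (~active) & (resid_corr > lam_w + 1e-8)
            if violated.any():                 # re-solve with the violators restored
                x = weighted_lasso_ista(A, b, lam_w, x, active | violated)
        return x

    # Toy usage on a small sparse recovery instance.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100); x_true[:3] = [2.0, -1.5, 1.0]
        b = A @ x_true + 0.01 * rng.standard_normal(40)
        print(np.nonzero(irl1_with_screening(A, b, lam=0.5))[0])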
  • M. Ait Mansour, Z. Chbani, A.-R. Elakri
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-037
    Accepted: 2023-11-22

    (Communicated by Robert Csetnek)


          The purpose of this paper is to study the strong convergence of Moudafi's proximal iterative method for finding a common element of the sets of solutions of a finite family of equilibrium problems in a Hilbert space. More precisely, we suppose that, for $i=1,...,m$, $C_i$ is a closed convex subset of a Hilbert space $H$ and $(F_i)_{i=1,...,m}$ is a family of real-valued bifunctions such that $F_i$ is defined on $C_i\times C_i$ for each $i=1,2,\dots,m.$ The corresponding system of equilibrium problems aims at finding a point $x$ in $C=\bigcap_{i=1} ^m C_i$ such that:

    (SEP)     $F_{i}(x,y)\geq 0$ for all $y\in C_i$ and all $i=1,\dots,m.$                                       (1)

    Under appropriate conditions on the parameters and the data, the sequence $(x^k)$ generated by the following algorithm

    $x^{k+1}=\alpha_{k}{x^k}+\beta_{k}{g(x^k)}+\gamma_{k}{J_r^{F_{m}}}{J_r^{F_{m-1}}}\dots{J_r^{F_{1}}}(x^k)$

    is proved to converge strongly to a point $\bar{x}\in S,$ where $S$ is the set of solutions to the system (1), $J_r^{F_{i}}(x)$ is the resolvent of $F_i$ at $x$ with parameter $r$ for $i=1,2,\dots,m$, $g:C_m \longrightarrow C_m$ is a contraction, and $\alpha_k ,\beta_k ,\gamma_k$ are positive scalars such that $\alpha_k +\beta_k +\gamma_k=1$, $\beta_k \rightarrow 0$, $\sum \beta_k=+\infty$ and $(\alpha_k )\subset [c,d]$ for some $c,d\in (0,1)$.
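    For reference, $J_r^{F_i}$ here is presumably the standard resolvent of the bifunction $F_i$ in the sense of Combettes and Hirstoaga; under the usual monotonicity assumptions on $F_i$, it is given by

    $J_r^{F_i}(x)=\Big\{z\in C_i \,:\, F_i(z,y)+\frac{1}{r}\langle y-z,\, z-x\rangle \geq 0 \ \text{ for all } y\in C_i\Big\},\qquad r>0,$

    and is then single-valued and firmly nonexpansive.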

         The second goal of this paper is to use the iterative algorithm proposed for the equilibrium system (1) to solve the split common fixed point problem as defined by Censor and Segal in [The split common fixed-point problem for directed operators, J. Convex Anal. 16 (2009) 587–600], i.e., finding a point $\bar{x}$ satisfying

    $\overline{x}\in\bigcap_{i=1}^{m}\mathrm{Fix}\,U_{i}$   and    $A\overline{x}\in\bigcap_{j=1}^{m}\mathrm{Fix}\,T_{j}$                              (3)

    where $H_1$ and $H_2$ are two Hilbert spaces, $A : H_{1} \longrightarrow H_{2}$ is a bounded linear operator, $(U_{i})_{i=1}^{m}: H_{1} \longrightarrow H_{1}$ are F-hemicontinuous operators and $(T_{j})_{j=1}^{m}: H_{2} \longrightarrow H_{2}$ are M-demicontractive mappings. The concept of M-demicontractiveness is new in the literature; we introduce it in order to relate problems (1) and (3). Our analysis also shows that the class of $M$-demicontractive mappings lies between nonexpansive operators and demicontractive ones. Finally, to illustrate the convergence of our algorithm, we present a numerical example written in MATLAB.


  • Xiuyun Feng, Gang Wang and Yiju Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-039
    Accepted: 2023-11-22
    (Communicated by Donghui Li)
         Sparse tensors play a fundamental role in hypergraph theory and in the stability analysis of nonlinear systems.
        In this paper, we establish new bounds for the minimum $H$-eigenvalue of a $Z$-tensor via the digraphs of its majorization matrix and its representation matrix. Numerical examples are provided to verify that our conclusions are more accurate and require fewer calculations than existing results. Based on the lower bound estimations for the minimum $H$-eigenvalue, we provide some checkable sufficient conditions for the positive definiteness of $Z$-tensors.
  • Beier Chen, Hui Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-035
    Accepted: 2023-11-05

    (Communicated by Xinwei Liu)

    The proximal gradient descent method, well known for convex composite optimization, can be completely described by the concept of the proximal gradient mapping. In this note, we introduce two new properties of the proximal gradient mapping, namely norm monotonicity and refined descent, with which we are able to extend the recently proposed potential function-based framework from gradient descent to proximal gradient descent.
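    For concreteness, here is a minimal Python sketch (ours) of proximal gradient descent written entirely through the proximal gradient mapping $G_t(x) = \big(x - \mathrm{prox}_{tg}(x - t\nabla f(x))\big)/t$, using an $\ell_1$ regularizer as the example of $g$; all names are illustrative.

    import numpy as np

    def prox_l1(v, t):
        """Proximal operator of t*||.||_1 (soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_grad_mapping(x, grad_f, prox_g, t):
        """Proximal gradient mapping G_t(x) = (x - prox_{t g}(x - t*grad_f(x))) / t."""
        return (x - prox_g(x - t * grad_f(x), t)) / t

    def proximal_gradient_descent(grad_f, prox_g, x0, t, n_iter=200):
        """Proximal gradient descent expressed through the mapping:
        x_{k+1} = x_k - t * G_t(x_k), equivalent to the usual prox step."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            x = x - t * prox_grad_mapping(x, grad_f, prox_g, t)
        return x

    # Toy usage: lasso-type problem 0.5||Ax - b||^2 + lam*||x||_1.
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.standard_normal((30, 10))
        b = rng.standard_normal(30)
        lam = 0.5
        grad_f = lambda x: A.T @ (A @ x - b)
        prox_g = lambda v, t: prox_l1(v, lam * t)
        t = 1.0 / np.linalg.norm(A, 2) ** 2    # step size 1/L
        print(proximal_gradient_descent(grad_f, prox_g, np.zeros(10), t))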

  • Fenlan Wang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-031
    Accepted: 2023-10-19

    (Communicated by Fanwen Meng)

         
In this paper, a new approach is presented for exactly solving a linear mixed integer programming problem (P) with scenario constraints.

    In problem (P), $c$ is an $n$-dimensional column vector, each scenario is described by a constraint matrix of appropriate size, $d$ is an $m$-dimensional column vector, and the lower and upper bounds of $x$ are given.

         Consider the subproblem of problem (P) restricted to a given integer box.


        We first decompose this subproblem into two parts: a main problem (MP) and the feasibility subproblems associated with the scenario constraints.

           The first part, problem (MP), is obtained by discarding the scenario constraints.

    The optimal objective value of the continuous relaxation of (MP) provides a lower bound for the subproblem. Moreover, two integer points can be obtained by rounding the continuous optimal solution of (MP) up or down along certain directions.

          The second part consists of the scenario constraints only. For a given feasible solution of (MP), we determine whether it is feasible for the scenario constraints by solving a set of feasibility subproblems.

         We then develop domain partition techniques for several cases, according to whether an integer point $\hat{x}$ is feasible for each constraint, and thus cut off integer boxes that contain no optimal solution of the subproblem.

         Integrating this solution scheme into a branch-and-bound framework, we present a new algorithm for solving problem (P). The proposed method gradually reduces the optimality gap over the iterations and converges in a finite number of steps.

         Finally, we report numerical experiments and comparisons with the traditional branch-and-bound method for different numbers of scenario constraints; the results show that the proposed algorithm is very efficient.
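    As a small illustration of the lower-bounding step only (our sketch, not the paper's scheme), the following Python code solves the continuous relaxation of a main problem of the assumed form min c^T x s.t. Ax <= d, l <= x <= u, x integer, and rounds the relaxed solution up and down; the rounded points must still be checked against the scenario constraints, and the rounding directions used in the paper differ.

    import numpy as np
    from scipy.optimize import linprog

    def mp_lower_bound_and_rounding(c, A, d, lower, upper):
        """Continuous relaxation of a main problem of the form
            min c^T x  s.t.  A x <= d,  lower <= x <= upper,  x integer,
        giving a lower bound, plus floor/ceil roundings of the relaxed solution.
        (Illustrative only; the rounded points may be infeasible and must be checked.)"""
        res = linprog(c, A_ub=A, b_ub=d, bounds=list(zip(lower, upper)), method="highs")
        if not res.success:
            return None, None, None
        x_rel = res.x
        lower_bound = res.fun                      # lower bound from the relaxation
        x_floor = np.clip(np.floor(x_rel), lower, upper)
        x_ceil = np.clip(np.ceil(x_rel), lower, upper)
        return lower_bound, x_floor, x_ceil

    # Toy usage with a 2-variable instance.
    if __name__ == "__main__":
        c = np.array([-3.0, -2.0])
        A = np.array([[1.0, 1.0]]); d = np.array([3.5])
        lb, xf, xc = mp_lower_bound_and_rounding(c, A, d, [0, 0], [3, 3])
        print(lb, xf, xc)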

  • Xiaoliang Dong, Deren Han
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-028
    Accepted: 2023-10-11
    (Communicated by Lizhi Liao)
    A new three-term conjugate gradient algorithm is developed, in which the search direction satisfies the sufficient descent condition at each iteration under the Wolfe conditions. Different from existing methods, a dynamical two-side approximating mechanism is proposed, which adaptively adjusts the relative weight between satisfying the sufficient descent condition and making full use of the curvature information of the objective function. To some extent, this strategy exploits a Hessian approximation of the objective function and therefore increases the efficiency of the algorithm in practical computation. Under mild conditions, we prove that the presented method converges globally for general objective functions. Numerical results are reported, which illustrate that the proposed algorithm is practically encouraging.
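    For readers unfamiliar with three-term conjugate gradient directions, here is a generic Python sketch (ours, not the algorithm of this paper) using a classical HS-based three-term direction that satisfies the sufficient descent condition automatically, combined with a Wolfe line search from scipy; all names and safeguards are illustrative.

    import numpy as np
    from scipy.optimize import line_search

    def three_term_cg(f, grad, x0, tol=1e-6, max_iter=500):
        """Generic three-term CG sketch.  The direction
        d_{k+1} = -g_{k+1} + (g_{k+1}^T y_k / d_k^T y_k) d_k - (g_{k+1}^T d_k / d_k^T y_k) y_k
        satisfies g_{k+1}^T d_{k+1} = -||g_{k+1}||^2, i.e. sufficient descent."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # Wolfe line search along d.
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:
                alpha = 1e-4                    # crude fallback if the search fails
            x_new = x + alpha * d
            g_new = grad(x_new)
            y = g_new - g
            dy = d @ y
            if abs(dy) < 1e-12:                 # safeguard: restart with steepest descent
                d_new = -g_new
            else:
                d_new = -g_new + (g_new @ y / dy) * d - (g_new @ d / dy) * y
            x, g, d = x_new, g_new, d_new
        return x

    # Toy usage: Rosenbrock function.
    if __name__ == "__main__":
        f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                                   200*(x[1] - x[0]**2)])
        print(three_term_cg(f, grad, np.array([-1.2, 1.0])))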
  • Weijun Zhou and Li Zhang
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-027
    Accepted: 2023-09-21

    (Communicated by Gui-hua Lin)

    A globally and superlinearly convergent BFGS method is introduced to solve general nonlinear equations without computing the exact gradient. Compared with existing Gauss-Newton-based BFGS type methods, the proposed method does not require conditions such as symmetry on the underlying function. Moreover, it can be suitably adjusted to solve nonlinear least squares problems and still guarantee global convergence. Some numerical results are reported to show its efficiency.
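    As background only (not the method of this paper), the following Python sketch illustrates the baseline idea of solving nonlinear equations F(x) = 0 without an exact Jacobian by applying BFGS to the least-squares merit function 0.5*||F(x)||^2 with finite-difference gradients; the paper's Gauss-Newton-based BFGS variant refines this idea, and the toy system below is ours.

    import numpy as np
    from scipy.optimize import minimize

    def solve_equations_via_bfgs(F, x0):
        """Generic illustration: solve F(x) = 0 by applying BFGS to the merit
        function 0.5*||F(x)||^2, with the gradient approximated by finite
        differences, so no exact Jacobian is required."""
        merit = lambda x: 0.5 * np.dot(F(x), F(x))
        res = minimize(merit, x0, method="BFGS", options={"gtol": 1e-10})
        return res.x, merit(res.x)

    # Toy usage: a nonsymmetric 2x2 nonlinear system with solution (0, 1).
    if __name__ == "__main__":
        F = lambda x: np.array([np.exp(x[0]) - 1.0 + 2.0 * (x[1] - 1.0),
                                x[0] * x[1] + x[1] - 1.0])
        x_star, merit_val = solve_equations_via_bfgs(F, np.array([0.5, 0.5]))
        print(x_star, merit_val)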

  • Jingya Chang, Dongdong Liu and Min Xi
    Pacific Journal of Optimization. https://doi.org/10.61208/pjo-2023-025
    Accepted: 2023-08-18
    (Communicated by Guanglu Zhou)
        Clustering and classification are two important tasks in machine learning. Clustering approaches aim to divide a number of items without labels into several groups, while classification methods build a classifier from labeled data and use it to classify other data. In practice, a few annotated points are sometimes mixed into the unannotated data set in clustering problems, and taking advantage of this prior label information often enhances clustering performance. Semi-supervised learning completes the learning task based on both the labeled and unlabeled data.
        In this paper, we consider semi-supervised clustering problems that are modeled by hypergraphs. Our task is to cluster the n vertices of the hypergraph into k groups according to the hypergraph structure, while a few categorization labels are given.

        We employ a multiway array to generate the clustering costs and use the hypergraph-related tensor to construct a constrained optimization model

    for the semi-supervised clustering problem. This is a 0-1 integer program. To make the model tractable, we penalize the constraint and add a regularization term to the objective function. From the discrete 0-1 constraint it can be deduced that the column vectors of X are unit vectors and mutually orthogonal, so we relax it to the continuous orthogonality constraint $X^TX=I$. The optimization model (1) is then transformed into an orthogonally constrained model

        The model (2) is an optimization problem constrained to the Stiefel manifold. To solve it, we first find a descent direction in the tangent space at the current point, and then employ the polar decomposition to map the descent direction to the Stiefel manifold.
        The tangent space at a point X on the Stiefel manifold is $\{Z : X^TZ + Z^TX = 0\}$.

    A descent direction, built from the gradient G of the function f at the current point X, is employed to find a new point in the tangent space at $X_k$. The next step is to map this point back to the manifold by using a retraction, defined through a univariate function whose value is the projection of the updated point onto the Stiefel manifold.
        Given the current point and the descent direction, a step size is found by the adaptive feasible BB-like method so that the objective value at the trial point does not exceed a reference objective function value, and the new iteration point on the manifold is obtained.
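    As a concrete illustration of the two manifold operations used above (our sketch; the clustering objective and the BB step size are omitted), the following Python code projects a Euclidean gradient onto the tangent space of the Stiefel manifold and retracts a step back onto the manifold via the polar decomposition; all names and the toy objective are illustrative.

    import numpy as np

    def tangent_project(X, G):
        """Project G onto the tangent space of the Stiefel manifold at X:
        P_X(G) = G - X * sym(X^T G), where sym(M) = (M + M^T)/2."""
        XtG = X.T @ G
        return G - X @ ((XtG + XtG.T) / 2.0)

    def polar_retraction(X, Z):
        """Polar-decomposition retraction: the nearest point with orthonormal
        columns to X + Z, computed as U V^T from the thin SVD of X + Z."""
        U, _, Vt = np.linalg.svd(X + Z, full_matrices=False)
        return U @ Vt

    # Toy usage: one retracted gradient step for f(X) = 0.5*||X - B||_F^2 on the
    # manifold of 5 x 2 matrices with orthonormal columns.
    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        B = rng.standard_normal((5, 2))
        X = np.linalg.qr(rng.standard_normal((5, 2)))[0]   # a feasible starting point
        G = X - B                                          # Euclidean gradient of f
        step = 0.5
        X_new = polar_retraction(X, -step * tangent_project(X, G))
        print(np.round(X_new.T @ X_new, 6))                # ~ identity: still on the manifold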
        Convergence analysis demonstrates that the first-order optimality condition is satisfied at any accumulation point. The sequence generated by our proposed algorithm SSHC either terminates finitely or is infinite; in the latter case, we prove that a subsequence of the optimality residuals converges to 0, so that the first-order optimality condition holds in the limit.

        We employ the proposed SSHC method to cluster synthetic and real data, and compare its results with those of the unsupervised method SHC. Numerical experiments show that our method works well on both synthetic and real data.