01 March 2025, Volume 21 Issue 1
    

  • Diana-Elena Maxim
    Pacific Journal of Optimization. 2025, 21(1): 1-16. https://doi.org/10.61208/pjo-2023-010

    (Communicated by Michel Théra)

    Abstract: The aim of our paper is to establish optimality conditions on dual spaces for constrained and unconstrained set-valued optimization problems with respect to some new notions of directional Pareto efficiency. The methods we employ are different for each of the two cases. For the constrained problem we use a fuzzy extremal principle, and for the unconstrained one we use the incompatibility between openness and minimality to prove that the limit of a sequence of directional minima for a perturbation of the objective set-valued map F is a critical point for F.

  • Caifang Wang, Xiaobin Xia and Dan Gao
    Pacific Journal of Optimization. 2025, 21(1): 17-29. https://doi.org/10.61208/pjo-2023-023
    (Communicated by Adil Bagirov)


        For the linear imaging problem

    the EM algorithm can be derived by minimizing the Kullback-Leibler (KL) divergence between the forward-problem data and the measurements b. However, the KL divergence is sensitive to noise arising from small boundary measurements. Thus, in this manuscript a parameter is introduced into the KL divergence, and the difference between the forward data and b is measured by the parameterized KL divergence

    Then the linear imaging problem (1) is reduced to a constrained minimization problem

    and solved by a novel parameterized iteration formula

    which is derived from the Karush-Kuhn-Tucker condition of (3).
    To obtain the convergence properties of the iterative method, the original optimization problem is split into two optimization subproblems
        1. Find minimizer of  with respect to r,

        2. Find minimizer of   with respect to ,


    where  and  are two mn-dimensional hidden vectors with entries , and . Then the iteration sequence generated by (4) has the following properties:
         The nonnegative sequence is bounded by

    In particular, if , then .

         In the first subproblem, the minimizer with respect to r under the constraint  is

        In the second subproblem, with the other variables fixed, the following results hold

    Combining this with the fact that a bounded sequence of points has a convergent subsequence, the convergence of the iterative sequence is obtained.
        To test the performance of the new iterative formula, a fan-beam X-ray CT reconstruction problem with a low-contrast Shepp-Logan phantom is used. Compared with the traditional EM algorithm,
    the parameterized algorithm achieves a relatively better reconstruction and faster convergence speed by adjusting the parameter .
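As a reference point, the classical (unparameterized) EM iteration that the proposed formula generalizes can be sketched as follows; the matrix A, data b, and the iteration counts here are illustrative, and the paper's parameterized update is not reproduced:

```python
import numpy as np

def em_iteration(A, b, x0, n_iter=2000, eps=1e-12):
    """Classical EM (MLEM) iteration for the linear imaging problem A x ~ b.

    Minimizes the KL divergence between the forward data A x and the
    measurements b over x >= 0.  The paper's parameterized variant modifies
    this update; the plain version is shown only as the baseline it extends.
    """
    x = np.asarray(x0, float).copy()
    col_sums = A.sum(axis=0)                     # normalization: A^T 1
    for _ in range(n_iter):
        Ax = A @ x
        ratio = b / np.maximum(Ax, eps)          # b_i / (A x)_i, guarded against 0
        x *= (A.T @ ratio) / np.maximum(col_sums, eps)
    return x
```

Each update is multiplicative, so nonnegativity of the iterates is preserved automatically, which is why EM is popular for emission and transmission tomography.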

  • Xiao Li, Chunlin Hao, Junyu Gao and Zuping Li
    Pacific Journal of Optimization. 2025, 21(1): 31-50. https://doi.org/10.61208/pjo-2023-024

    (Communicated by Chen Ling)


        It is well known that tensor eigenvalues are widely used in signal processing, diffusion tensor imaging, independent component analysis and other fields. Many methods now exist for computing Z-eigenvalues of tensors. Nevertheless, methods for computing Z-eigenvalues of large-scale tensors tend to fall into a local optimal region. The feasible trust-region method is an effective method for computing the extreme Z-eigenvalue of large-scale tensors, but it, too, tends to get trapped in local optima. To overcome this problem, we propose three global optimization strategies based on the feasible trust-region method to improve the success rate of computing the extreme Z-eigenvalue.
        The first one is a multi-initial-points algorithm. It is a classical, heuristic and simple global optimization method, whose idea is to randomly generate multiple initial points on the unit sphere and select the initial point with the largest objective value.
        The second one is a simulated annealing algorithm, also a representative global optimization algorithm, with the advantages of easy implementation and strong robustness. Unlike the multi-initial-points algorithm, simulated annealing accepts points with relatively poor objective value with a certain probability during the iteration, and this acceptance probability gradually decreases. Therefore, it can escape from local maxima and reach the global maximum. In particular, we present two different simulated annealing algorithms based on different ways of generating new iteration points.

    The third one is an infeasible trust-region algorithm, which constructs an infeasible trust-region subproblem, expanding the search scope with the expectation of improving the success rate. The largest Z-eigenvalue of a tensor can be obtained by solving the following optimization problem

        To expand the search scope, we construct the following trust-region subproblem with linear constraint at the current iteration point


    where

        For solving subproblem (2), if the constraints of the subproblem are consistent, the trial step is computed by an interior-point method. Otherwise, if the constraints are inconsistent, a mechanism is introduced into the algorithm that computes the step in two stages. In the first stage, we try to satisfy the linearized constraints, that is,


    here . Then the total trial step d_k can be obtained by solving

        Each iteration point of subproblem (2) may be infeasible. To expand the search scope, escape the local optimal region and find better points, the algorithm can accept a reduction in the objective function f. To prevent moving away from the sphere constraint while the objective function descends, the constraints must remain feasible when a decrease in the objective function is accepted. Hence, we take a trial step and project it onto the unit sphere.
         We establish the global convergence of the infeasible trust-region algorithm, which converges to a KKT point, a strong stationary point, or a better point outside the local region of the original problem (1). The above three algorithms serve as preprocessing strategies for the feasible trust-region method. We combine each of the three preprocessing strategies with the feasible trust-region method and conduct comparisons. Numerical results show that the success rate of computing the extreme Z-eigenvalue is greatly increased with the help of the global optimization strategies.
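A minimal sketch of the multi-initial-points strategy, using the shifted symmetric higher-order power method (SS-HOPM) as a stand-in local solver in place of the feasible trust-region method; the shift alpha, the iteration counts, and the extra coordinate-direction starts are illustrative choices, not the paper's:

```python
import numpy as np

def z_eigpair_sshopm(A, x0, alpha=10.0, n_iter=500, tol=1e-12):
    """SS-HOPM for a symmetric 3rd-order tensor A: seeks a Z-eigenpair,
    i.e. A x^2 = lam * x with ||x|| = 1.  alpha is an illustrative shift."""
    x = np.asarray(x0, float)
    x = x / np.linalg.norm(x)
    for _ in range(n_iter):
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)   # (A x^2)_i = sum_jk a_ijk x_j x_k
        y = Ax2 + alpha * x                      # positive shift promotes monotone ascent
        x_new = y / np.linalg.norm(y)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)   # lam = A x^3 at the final unit vector
    return lam, x

def largest_z_eig_multistart(A, n_starts=20, seed=0):
    """Multi-initial-points strategy: run the local solver from random points on
    the unit sphere (plus the coordinate directions) and keep the best value."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    starts = list(np.eye(n)) + [rng.standard_normal(n) for _ in range(n_starts)]
    best_lam, best_x = -np.inf, None
    for x0 in starts:
        lam, x = z_eigpair_sshopm(A, x0)
        if lam > best_lam:
            best_lam, best_x = lam, x
    return best_lam, best_x
```

For a diagonal symmetric tensor with diagonal entries d_i, the Z-eigenpairs include the coordinate directions with eigenvalues d_i, so the largest Z-eigenvalue is max d_i, which makes a convenient sanity check for the multi-start wrapper.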


  • Jingya Chang, Dongdong Liu and Min Xi
    Pacific Journal of Optimization. 2025, 21(1): 51-62. https://doi.org/10.61208/pjo-2023-025
    (Communicated by Guanglu Zhou)
        Clustering and classification are two important tasks in machine learning. Clustering approaches aim to divide a number of unlabeled items into several groups, while classification methods build a classifier with the help of labeled data and use it to classify other data. In practice, a few annotated points are sometimes mixed into the unannotated data set in clustering problems, and taking advantage of this prior label information often enhances the clustering performance. Semi-supervised learning completes the learning task based on both the labeled and unlabeled data.
        In this paper, we consider semi-supervised clustering problems that are modeled by hypergraphs. Our task is to cluster the n vertices of the hypergraph into k groups according to the hypergraph structure, while a few category labels are given.

        We employ the multi-way array to generate the clustering costs and use the hypergraph-related tensor to construct the constrained optimization model

    for the semi-supervised clustering problem. This is a 0-1 integer program. To make the model tractable, we penalize the constraint and add a regularization term  to the objective function. From the discrete 0-1 constraint, it can be deduced that the column vectors of X are unit vectors orthogonal to each other. Thus, we relax it to the continuous constraint . Then the optimization model (1) is transformed into an orthogonally constrained model

        Model (2) is an optimization problem constrained to the Stiefel manifold . To solve it, we first find a descent direction in the tangent space at the current point, and then employ the polar decomposition to map the descent direction to the Stiefel manifold.
        The tangent space at a point  is

    The direction  is employed to find a new point in the tangent space, where G is the gradient of the function f at the current point X. In the iteration process, we find

    in the tangent space from X_k. The next step is to map it to the manifold by using a retraction. Consider the univariate function

    with its domain , which is the projection of onto the Stiefel manifold.
        Given  and the descent direction , by using the adaptive feasible BB-like method, we find a step size  such that

    where is a reference objective function value. The new iteration point

    is obtained.
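The polar-decomposition retraction described above can be sketched as follows; this is a generic sketch of the standard construction, not the paper's code:

```python
import numpy as np

def polar_retraction(X, D):
    """Map a direction D at X back onto the Stiefel manifold {X : X^T X = I}.

    The polar retraction returns the orthogonal polar factor of X + D,
    computed here via the thin SVD: if X + D = U S V^T, the retraction
    is U V^T, which always has orthonormal columns.
    """
    U, _, Vt = np.linalg.svd(X + D, full_matrices=False)
    return U @ Vt
```

A useful property is that the retraction of the zero direction returns X itself, so small tangent steps stay close to the current iterate.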
        Convergence analysis demonstrates that the first-order optimality condition is satisfied at any accumulation point. The sequence generated by our proposed algorithm SSHC either terminates with  or is infinite. We prove that when the iteration is infinite, a subsequence of  converges to 0, which means

        We employ the proposed SSHC method to cluster synthetic and real data, and compare the results of SSHC method with an unsupervised method SHC. Numerical experiments show that our method works well on synthetic and real data.

  • Baohua Huang and Changfeng Ma
    Pacific Journal of Optimization. 2025, 21(1): 63-87. https://doi.org/10.61208/pjo-2023-026
    (Communicated by Bingsheng He)
        It is well known that the Levenberg-Marquardt (LM) method is widely used for solving nonlinear equations. In this paper, we give an extension of the LM method and propose a nonmonotone LM method with correction, which produces the LM parameter according to the new nonmonotone strategy of Grippo, Lampariello and Lucidi. Moreover, both an LM step and a correction step are computed at every iteration of the proposed method. The cubic convergence of the proposed method is proved under the local error bound condition, which is weaker than nonsingularity. Some numerical results confirm the feasibility and effectiveness of the proposed algorithm.
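For orientation, the plain LM iteration that the nonmonotone method with correction extends can be sketched as follows; the parameter rule lam_k = mu * ||F(x_k)||^delta is a common choice shown for illustration, not the paper's nonmonotone rule, and mu, delta are illustrative values:

```python
import numpy as np

def levenberg_marquardt(F, J, x0, mu=1e-3, delta=2.0, n_iter=50, tol=1e-10):
    """Plain LM iteration for the nonlinear equations F(x) = 0.

    At each step the regularized normal equations
        (J^T J + lam * I) d = -J^T F,   lam = mu * ||F(x)||^delta,
    are solved; the paper's method adds a correction step and a
    nonmonotone update of the LM parameter on top of this baseline.
    """
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        Fx, Jx = F(x), J(x)
        if np.linalg.norm(Fx) < tol:
            break
        lam = mu * np.linalg.norm(Fx) ** delta   # LM parameter from the residual
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ Fx)
        x = x + d
    return x
```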
  • Weijun Zhou and Li Zhang
    Pacific Journal of Optimization. 2025, 21(1): 89-106. https://doi.org/10.61208/pjo-2023-027

    (Communicated by Gui-hua Lin)

    A globally and superlinearly convergent BFGS method is introduced to solve general nonlinear equations without computing exact gradients. Compared with existing Gauss-Newton-based BFGS-type methods, the proposed method does not require conditions such as symmetry on the underlying function. Moreover, it can be suitably adjusted to solve nonlinear least squares problems while still guaranteeing global convergence. Some numerical results are reported to show its efficiency.

  • Xiaoliang Dong, Deren Han
    Pacific Journal of Optimization. 2025, 21(1): 107-123. https://doi.org/10.61208/pjo-2023-028
    (Communicated by Lizhi Liao)
    A new three-term conjugate gradient algorithm is developed, in which the search direction satisfies the sufficient descent condition at each iteration under the Wolfe conditions. Unlike existing methods, a dynamic two-sided approximating mechanism is proposed, which adaptively adjusts the relative weight between satisfying the sufficient descent condition and making full use of curvature information of the objective function. To some extent, this strategy exploits the Hessian approximation of the objective function and therefore increases the efficiency of the algorithm in practical computation. Under mild conditions, we prove that the presented method converges globally for general objective functions. Numerical results are reported, which show that the proposed algorithm is practically encouraging.
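A generic three-term CG direction with sufficient descent built in can be sketched as follows; this is a classical three-term PRP-type scheme with Armijo backtracking standing in for the Wolfe conditions, not the paper's dynamic two-sided mechanism:

```python
import numpy as np

def three_term_cg(f, grad, x0, n_iter=1000, tol=1e-8):
    """Classical three-term PRP-type conjugate gradient sketch.

    The direction d_{k+1} = -g_{k+1} + beta_k d_k - theta_k y_k is chosen so
    that g_{k+1}^T d_{k+1} = -||g_{k+1}||^2, i.e. sufficient descent holds by
    construction, independently of the line search.
    """
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        gnorm2 = g @ g
        if np.sqrt(gnorm2) < tol:
            break
        t = 1.0                                   # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        beta = (g_new @ y) / gnorm2               # PRP-type scalar
        theta = (g_new @ d) / gnorm2
        d = -g_new + beta * d - theta * y         # three-term direction
        x, g = x_new, g_new
    return x
```

Expanding g_{k+1}^T d_{k+1} shows the beta and theta terms cancel exactly, which is the point of the third term: descent never has to be enforced by restarting.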
  • Fenlan Wang
    Pacific Journal of Optimization. 2025, 21(1): 125-136. https://doi.org/10.61208/pjo-2023-031

    (Communicated by Fanwen Meng)

         
In this paper, a new approach is presented for exactly solving the following linear mixed-integer programming problem (P) with scenario constraints.

    where c is an n-dimensional column vector,  and for each scenario ,  is an  matrix, d is an m-dimensional column vector, and  and  are the lower and upper bounds of x, respectively.

         Consider the following subproblem:

    where  and  and .

        We first decompose the subproblem  into two parts: the main problem (MP) and the feasibility subproblems  for the scenario constraints.

           The first part is the following problem (MP), obtained by discarding the scenario constraints of :

    The optimal objective value of the continuous relaxation of problem (MP) provides a lower bound for problem . We can also obtain two integer points by rounding the continuous optimal solution of (MP) up or down along some directions.

          The other part consists only of the scenario constraints. For a given feasible solution of (MP), we determine whether it is feasible for the scenario constraints by solving the following subproblems:

         We then develop domain partition techniques for several cases, according to whether an integer point x̂ is feasible for each constraint, and thus cut off some integer boxes that do not contain any optimal solution of  from .

         Integrating this solution scheme into a branch-and-bound framework, we present a new algorithm for solving problem (P). The proposed method reduces the optimality gap gradually over the iterations and converges in finitely many steps.

         Finally, we report numerical experiments comparing the proposed algorithm with the traditional branch-and-bound method for different sizes of scenario constraints. The comparison shows that the proposed algorithm is very efficient.
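The overall relax-bound-partition pattern can be sketched with a schematic box-partition branch-and-bound; the bounding and splitting rules below are deliberately simple illustrative stand-ins for the paper's LP-relaxation bounds, rounding, and domain partition techniques, and the names c, A, d, lo, hi are assumptions:

```python
import heapq
import itertools
import numpy as np

def box_lower_bound(c, lo, hi):
    """Smallest value of c @ x over the box [lo, hi], ignoring the constraints."""
    return np.where(c > 0, c * lo, c * hi).sum()

def branch_and_bound(c, A, d, lo, hi):
    """Minimize c @ x subject to A @ x <= d, x integer in the box [lo, hi].

    Schematic branch-and-bound over integer boxes: bound each box by ignoring
    the constraints, prune boxes that cannot beat the incumbent or that
    interval arithmetic proves infeasible, and split the widest edge.
    """
    best_val, best_x = np.inf, None
    tie = itertools.count()                  # heap tie-breaker so arrays are never compared
    heap = [(box_lower_bound(c, lo, hi), next(tie), lo.copy(), hi.copy())]
    while heap:
        bound, _, blo, bhi = heapq.heappop(heap)
        if bound >= best_val:
            continue                         # box cannot contain a better point
        if np.all(blo == bhi):               # single integer point: test feasibility
            if np.all(A @ blo <= d):
                best_val, best_x = bound, blo.copy()
            continue
        row_min = np.where(A > 0, A * blo, A * bhi).sum(axis=1)
        if np.any(row_min > d):
            continue                         # no point of this box is feasible
        j = int(np.argmax(bhi - blo))        # split the widest edge
        mid = (blo[j] + bhi[j]) // 2
        for new_lo, new_hi in ((blo[j], mid), (mid + 1, bhi[j])):
            nlo, nhi = blo.copy(), bhi.copy()
            nlo[j], nhi[j] = new_lo, new_hi
            heapq.heappush(heap, (box_lower_bound(c, nlo, nhi), next(tie), nlo, nhi))
    return best_val, best_x
```

The pruning steps mirror the text: a lower bound per box plays the role of the relaxation bound, and discarding provably infeasible or dominated boxes plays the role of cutting off integer boxes that contain no optimal solution.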

  • Beier Chen, Hui Zhang
    Pacific Journal of Optimization. 2025, 21(1): 137-153. https://doi.org/10.61208/pjo-2023-035

    (Communicated by Xinwei Liu)

    The proximal gradient descent method, well known for convex composite optimization, can be completely described by the concept of the proximal gradient mapping. In this note, we introduce two new results on the proximal gradient mapping, norm monotonicity and refined descent, with which we are able to extend the recently proposed potential function-based framework from gradient descent to proximal gradient descent.
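A minimal sketch of the proximal gradient mapping and iteration discussed in the note, with the l1-norm prox as an illustrative choice of g (the note itself treats general convex composite objectives):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, the illustrative choice of g here."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iter=100):
    """Proximal gradient descent for min f(x) + g(x), with f smooth.

    The proximal gradient mapping at x with step t is
        G_t(x) = (x - prox_{t g}(x - t * grad_f(x))) / t,
    and one iteration is x <- x - t * G_t(x), i.e. the single line below.
    """
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x
```

When g = 0 the prox is the identity and the scheme reduces to plain gradient descent, which is exactly the special case the potential-function framework previously covered.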

  • M. Ait Mansour, Z. Chbani, A.-R. Elakri
    Pacific Journal of Optimization. 2025, 21(1): 155-178. https://doi.org/10.61208/pjo-2023-037

    (Communicated by Robert Csetnek)


          The purpose of this paper is to study the strong convergence of Moudafi's proximal iterative method for finding a common element of the solution sets of a finite family of equilibrium problems in a Hilbert space. More precisely, we suppose that, for $i=1,\dots,m$, $C_i$ is a closed convex subset of a Hilbert space $H$ and $(F_i)_{i=1,\dots,m}$ is a sequence of real-valued bifunctions such that $F_i$ is defined on $C_i\times C_i$ for each $i=1,2,\dots,m.$ The corresponding system of equilibrium problems consists in finding a point $x$ in $C=\bigcap_{i=1}^m C_i$ such that:

    (SEP)     $F_{i}{(x,y )}\geq 0$ for all $y\in C_i$ and all $i=1,...,m.$                                       (1)

    Under appropriate conditions on the parameters and the data, the sequence $(x^k)$ generated by the following algorithm

    $x^{k+1}=\alpha_{k}{x^k}+\beta_{k}{g(x^k)}+\gamma_{k}{J_r^{F_{m}}}{J_r^{F_{m-1}}}\dots{J_r^{F_{1}}}(x^k)$

    is proved to converge strongly to a point $\bar{x}\in S,$ where $S$ is the set of solutions to the system (1), $J_r^{F_{i}}(x)$ is the resolvent of $F_i$ at $x$ with parameter $r$ for $i=1,2,\dots,m$, $g:C_m \longrightarrow C_m$ is a contraction and $\alpha_k ,\beta_k ,\gamma_k$ are positive scalars such that $\alpha_k +\beta_k +\gamma_k=1$, $\beta_k \rightarrow 0$, $ \sum \beta_k=+\infty$ and $(\alpha_k )\subset [c,d]$ for some $c,d\in (0,1)$.

         The second goal of this paper is to use the iterative algorithm proposed for the equilibrium system (1) to solve the split common fixed point problem defined by Censor and Segal in [The split common fixed-point problem for directed operators, J. Convex Anal. 16 (2009) 587–600], i.e., finding a point $\bar{x}$ satisfying

    $\overline{x}\in\bigcap_{i=1}^{m}{Fix}{U_{i}}$   and    $A{\overline{x}}\in\bigcap_{j=1}^{m}{Fix}{T_{j}}$                              (3)

    where $H_1$ and $H_2$ are two Hilbert spaces, $A : H_{1} \longrightarrow H_{2}$ is a bounded linear operator, $(U_{i})_{i=1}^{m}: H_{1} \longrightarrow H_{1}$ are F-hemicontinuous operators and $(T_{i})_{i=1}^{m}: H_{2} \longrightarrow H_{2}$ are M-demicontractive mappings. Note that the concept of M-demicontractiveness is new in the literature; we introduced it in order to develop the relation between problems (1) and (3). Our analysis also shows that the class of $M$-demicontractive mappings lies between the nonexpansive operators and the demicontractive ones. Finally, to illustrate the convergence of our algorithm, we present a numerical example implemented in MATLAB.
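For a concrete illustration of the iteration, assume the special case $F_i(x,y)=\langle B_i x, y-x\rangle$ on $C_i = H = \mathbb{R}^n$ with $B_i$ positive definite, so each resolvent has the closed form $J_r(x)=(I+rB_i)^{-1}x$ and the common equilibrium point is $0$; the parameter sequences below are illustrative choices satisfying the stated conditions, not the paper's:

```python
import numpy as np

def make_resolvent(B, r):
    """Resolvent of the bifunction F(x, y) = <B x, y - x> on H = R^n.

    For this affine monotone F the resolvent with parameter r > 0 reduces to
    J_r(x) = (I + r B)^{-1} x (an illustrative special case; general
    bifunctions have no closed-form resolvent).
    """
    M = np.linalg.inv(np.eye(B.shape[0]) + r * B)
    return lambda x: M @ x

def viscosity_iteration(resolvents, g, x0, n_iter=2000):
    """Sketch of x^{k+1} = a_k x^k + b_k g(x^k) + c_k J_m ... J_1(x^k).

    Illustrative parameters: a_k = 1/2 stays in [c, d] in (0, 1),
    b_k = 1/(k+3) tends to 0 with divergent sum, c_k = 1 - a_k - b_k.
    """
    x = np.asarray(x0, float)
    for k in range(n_iter):
        a, b = 0.5, 1.0 / (k + 3)
        c = 1.0 - a - b
        y = x
        for J in resolvents:          # composition J_m ... J_1 applied to x^k
            y = J(y)
        x = a * x + b * g(x) + c * y
    return x
```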