25 July 2024, Volume 20 Issue 3
    

  • Satoru Fujishige and Sachin B. Patkar
    Pacific Journal of Optimization. 2024, 20(3): 389-401. https://doi.org/10.61208/pjo-2023-019
    Dedicated to Prof. Masao Fukushima on the occasion of his 75th birthday


    (Communicated by Liqun Qi)

    For a signed ring family $\mathcal{F}$ (closed with respect to the reduced union and intersection) and for a bisubmodular function $f\colon \mathcal{F} \to \mathbb{R}$ with $(\emptyset,\emptyset) \in \mathcal{F}$ and $f(\emptyset,\emptyset) = 0$, the bisubmodular polyhedron associated with $f$ is given by

    $P(f) = \{\, x \in \mathbb{R}^V \mid x(X,Y) \le f(X,Y) \ \text{for all}\ (X,Y) \in \mathcal{F} \,\}$,

    where $x(X,Y) = x(X) - x(Y)$. We define the convolution of a bisubmodular function $f$ and a special bisubmodular function, called a box-bisubmodular function, determined by upper and lower bound vectors $u$ and $l$. We show that the convolution is a bisubmodular function, too. The bisubmodular polyhedron associated with the convolution is shown to be the intersection of $P(f)$ and the box determined by the upper and lower bound vectors $u$ and $l$. This also generalizes some known min-max results on bisubmodular functions and ordinary submodular functions. Moreover, we consider the Dilworth truncation of bisubmodular functions, which generalizes the Dilworth truncation of submodular functions.
  • Kevin Huang and Shuzhong Zhang
    Pacific Journal of Optimization. 2024, 20(3): 403-428. https://doi.org/10.61208/pjo-2023-021
    Dedicated to Prof. Masao Fukushima on the occasion of his 75th birthday


    (Communicated by Liqun Qi)

        In this paper, we discuss variational inequality (VI) problems without monotonicity from the perspective of convergence of projection-type algorithms. In particular, we identify existing conditions, as well as present new conditions, that are sufficient to guarantee convergence. The first half of the paper focuses on the case where a Minty solution exists (also known as the Minty condition), which is a common assumption in recent developments on non-monotone VIs. The second half explores alternative sufficient conditions, different from existing ones such as monotonicity or the Minty condition, using an algorithm-based approach. Through examples and convergence analysis, we show that these conditions characterize different classes of VI problems on which the algorithms are guaranteed to converge.
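As an illustration of the projection-type algorithms the paper analyzes, here is a minimal sketch of the classical extragradient method on a box-constrained VI. The operator, step size, and iteration count below are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def extragradient(F, proj, x0, step, iters):
    """Korpelevich's extragradient method for VI(F, K):
    find x* in K with <F(x*), y - x*> >= 0 for all y in K.
    Converges for monotone Lipschitz F when step < 1/L."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj(x - step * F(x))   # predictor (extrapolation) step
        x = proj(x - step * F(y))   # corrector step
    return x

# Illustrative monotone (but not strongly monotone) operator: a rotation
# field F(x) = Ax with A skew-symmetric; the unique solution on the box
# [-1, 1]^2 is x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
proj = lambda v: np.clip(v, -1.0, 1.0)
x_star = extragradient(F, proj, [0.8, -0.5], step=0.1, iters=3000)
```

Note that plain projected-gradient iteration diverges on this rotation field; the predictor step is what makes the method converge for merely monotone operators.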
  • Chuanhao Guo, Pengfei Ma and Kok Lay Teo
    Pacific Journal of Optimization. 2024, 20(3): 429-440. https://doi.org/10.61208/pjo-2023-022
    Dedicated to Prof. Masao Fukushima on the occasion of his 75th birthday


    (Communicated by Liqun Qi)

        In this paper, we consider a class of copositive programming problems with a continuous objective function. First, we show that these copositive programming problems can be transformed equivalently into semi-infinite programming problems through the use of a key lemma. Then, exploiting the structural characteristics of the transformed problems, a new discretization method is proposed to solve them. Moreover, under some mild conditions, we show that the proposed method terminates in a finite number of iterations, either producing a feasible solution whose cost function value is better than or equal to the predicted optimal cost function value of the original problem, or confirming that the original problem does not admit a feasible solution. Finally, two numerical examples are solved using the proposed method. The results obtained demonstrate the applicability of the proposed algorithm.

    Note: We are pleased to dedicate this paper to Prof. Masao Fukushima. I first met Masao in 1999, when I was the Head of the Department of Applied Mathematics and Chair Professor of Applied Mathematics at the Hong Kong Polytechnic University. Masao is a very distinguished and respected scholar with a high international reputation. His work on optimization has inspired the careers of many researchers. On a personal level, we have organized many conferences together over the years. He is a good friend. This note was written by Kok Lay Teo.
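To make the semi-infinite connection concrete: copositivity of a matrix M is itself a semi-infinite constraint (x^T M x >= 0 for every x in the standard simplex, an infinite index set), and the crudest version of a discretization method checks it on a finite grid. The sketch below illustrates only this naive idea, not the paper's refined algorithm.

```python
import itertools
import numpy as np

def simplex_grid(n, k):
    """All points of the standard simplex in R^n whose coordinates are
    multiples of 1/k (a finite discretization of the index set)."""
    pts = []
    for c in itertools.combinations(range(n + k - 1), n - 1):
        parts = np.diff([-1, *c, n + k - 1]) - 1   # stars-and-bars split
        pts.append(np.array(parts) / k)
    return pts

def copositive_on_grid(M, k=30):
    """Check the semi-infinite constraint x^T M x >= 0 for all x in the
    simplex on a finite grid only (a necessary condition for copositivity)."""
    return all(x @ M @ x >= -1e-12 for x in simplex_grid(M.shape[0], k))

# x^T M x = 2*x1*x2 >= 0 on the nonnegative orthant: copositive.
assert copositive_on_grid(np.array([[0.0, 1.0], [1.0, 0.0]]))
# x^T M x = -2*x1*x2 < 0 at x = (1/2, 1/2): not copositive.
assert not copositive_on_grid(np.array([[0.0, -1.0], [-1.0, 0.0]]))
```

A practical discretization method refines the grid adaptively (adding the most violated index points) rather than fixing it in advance.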

  • Qingguo Bai, Hongling Lu and Fanwen Meng
    Pacific Journal of Optimization. 2024, 20(3): 441-459. https://doi.org/10.61208/pjo-2023-029
    (Communicated by Jie Sun)
    This paper studies the single-product single-period newsvendor problem with sustainability investment, where the manufacturer determines its optimal production quantity and sustainability level under a cap-and-trade policy. Specifically, the proposed model addresses the trade-off between the expected profit and its variance under the mean-variance framework. The demand under consideration is stochastic and dependent on the sustainability level. By taking into account two different forms of the underlying demand, namely additive and multiplicative, we prove the existence and uniqueness of the optimal solution of the corresponding optimization models under some mild conditions. Further, we conduct sensitivity analysis on the optimal expected profit and its variance with respect to the degree of risk aversion. Preliminary numerical experiments are conducted to illustrate the theoretical results and managerial insights.
    This work is dedicated to Prof Fukushima to honor his outstanding contributions in optimization on his 75th birthday.
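For context, the classical risk-neutral newsvendor (the special case with no sustainability investment and no variance term) has a closed-form solution via the critical fractile; a minimal sketch with illustrative parameter values of ours:

```python
def newsvendor_quantity(price, cost, salvage, demand_quantile):
    """Classical risk-neutral newsvendor: the optimal order quantity is the
    critical-fractile quantile of the demand distribution,
    q* = F^{-1}((p - c) / (p - s))."""
    fractile = (price - cost) / (price - salvage)
    return demand_quantile(fractile)

# Demand uniform on [0, 100]: F^{-1}(t) = 100 t.
q = newsvendor_quantity(price=10.0, cost=4.0, salvage=1.0,
                        demand_quantile=lambda t: 100.0 * t)
# Critical fractile (10 - 4)/(10 - 1) = 2/3, so q* = 200/3.
```

The paper's models add the sustainability decision and the variance term, so the optimal quantity no longer reduces to a single quantile.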
  • C. T. Kelley
    Pacific Journal of Optimization. 2024, 20(3): 461-474. https://doi.org/10.61208/pjo-2023-030
    (Communicated by Liqun Qi)
    We describe a three-precision variant of Newton's method for nonlinear equations. We evaluate the nonlinear residual in double precision, store the Jacobian matrix in single precision, and solve the equation for the Newton step with iterative refinement using a factorization in half precision. We analyze the method as an inexact Newton method. This analysis shows that, except for very poorly conditioned Jacobians, the number of nonlinear iterations needed is the same as one would get if one stored and factored the Jacobian in double precision. In many ill-conditioned cases one can use the low-precision factorization as a preconditioner for a GMRES iteration. That approach can recover fast convergence of the nonlinear iteration. We present an example to illustrate the results.
    Dedication: To Masao Fukushima on his 75th birthday, with thanks for his many contributions to optimization.
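A rough sketch of the multi-precision structure, using NumPy floats as stand-ins: the residual is evaluated in float64, while the Jacobian is stored and solved in float32 (emulating the low-precision factorization; true half precision and the GMRES-preconditioned variant are omitted here).

```python
import numpy as np

def mixed_precision_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Inexact Newton sketch: double-precision residual, single-precision
    Jacobian and step solve, with one pass of iterative refinement carried
    out in double precision.  (In a real implementation, the low-precision
    factorization of J would be computed once per step and reused.)"""
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(max_iter):
        r = F(x)                                   # double-precision residual
        if np.linalg.norm(r) < tol:
            break
        J_lo = J(x).astype(np.float32)             # low-precision Jacobian
        s = np.linalg.solve(J_lo, r.astype(np.float32)).astype(np.float64)
        d = r - J(x) @ s                           # refinement residual (double)
        s += np.linalg.solve(J_lo, d.astype(np.float32)).astype(np.float64)
        x = x - s
    return x

# Test system: x0^2 + x1^2 = 1, x0 - x1 = 0  ->  x* = (1, 1)/sqrt(2).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = mixed_precision_newton(F, J, np.array([2.0, 1.0]))
```

Because the refined step is accurate well beyond single precision for this well-conditioned Jacobian, the nonlinear iteration behaves like full double-precision Newton, which is the paper's main point.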
  • Yu-Wei Li, Gui-Hua Lin, Xide Zhu
    Pacific Journal of Optimization. 2024, 20(3): 475-488. https://doi.org/10.61208/pjo-2023-032

    Dedicated to Professor Masao Fukushima on the occasion of his 75th birthday

      This paper focuses on a new approach based on lower-level Wolfe and Mond-Weir duality for bilevel programs, which gives two new single-level reformulations, called WDP and MDP, respectively. Different from the popular MPCC (i.e., mathematical program with complementarity constraints) approach, both WDP and MDP may satisfy the Mangasarian-Fromovitz constraint qualification at their feasible points. This paper explores whether these new reformulations satisfy other constraint qualifications, such as the Abadie CQ and the Guignard CQ. In particular, sufficient conditions ensuring that the Abadie CQ and the Guignard CQ hold for WDP and MDP are derived.
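For orientation, the lower-level Wolfe dual underlying a WDP-type reformulation can be sketched as follows (a standard textbook form; the paper's precise construction may differ):

```latex
% Lower-level problem, for fixed x (f, g convex and differentiable in y):
%   min_y  f(x,y)   s.t.  g(x,y) <= 0.
% Its Wolfe dual:
\max_{z,\lambda}\ f(x,z) + \lambda^{\top} g(x,z)
\quad \text{s.t.} \quad \nabla_z f(x,z) + \nabla_z g(x,z)\,\lambda = 0,
\quad \lambda \ge 0.
% A WDP-type reformulation replaces "y solves the lower-level problem" by
% lower-level feasibility g(x,y) <= 0, dual feasibility of (z,\lambda), and
% the weak-duality inequality  f(x,y) <= f(x,z) + \lambda^{\top} g(x,z),
% which together force y to be lower-level optimal.
```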


  • Vo Minh Tam, Jein-Shan Chen
    Pacific Journal of Optimization. 2024, 20(3): 489-512. https://doi.org/10.61208/pjo-2023-033
    (Communicated by Liqun Qi)
    The aim of this paper is to investigate the difference gap (for brevity, D-gap) functions and global error bounds for an abstract class of elliptic variational-hemivariational inequalities (for brevity, EVHIs). Based on the optimality condition for the underlying minimization problem, the regularized gap function for EVHIs is proposed under some suitable conditions. Accordingly, we establish the D-gap functions for EVHIs in terms of these regularized gap functions. Furthermore, we provide global error bounds for EVHIs by virtue of the regularized gap functions and the D-gap functions. An application to a contact mechanics problem is given to illustrate our main results.
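A concrete finite-dimensional instance of the gap-function machinery, for a box-constrained VI rather than the paper's elliptic variational-hemivariational setting: Fukushima's regularized gap function has a closed-form maximizer over a convex set with cheap projection, and the D-gap function is the difference of two of them.

```python
import numpy as np

def regularized_gap(F, proj, x, alpha):
    """Fukushima's regularized gap function for VI(F, K):
    g_alpha(x) = max_{y in K} <F(x), x - y> - (alpha/2)||x - y||^2,
    whose maximizer has the closed form y = proj(x - F(x)/alpha)."""
    y = proj(x - F(x) / alpha)
    return F(x) @ (x - y) - 0.5 * alpha * np.dot(x - y, x - y)

def d_gap(F, proj, x, alpha, beta):
    """D-gap function g_alpha - g_beta (0 < alpha < beta): nonnegative on
    the whole space and zero exactly at the solutions of the VI."""
    return (regularized_gap(F, proj, x, alpha)
            - regularized_gap(F, proj, x, beta))

# Example: F(x) = x - a on the box [0,1]^2 has the solution x* = proj(a).
proj01 = lambda v: np.clip(v, 0.0, 1.0)
F = lambda v: v - np.array([2.0, 0.5])
gap_at_solution = d_gap(F, proj01, np.array([1.0, 0.5]), 0.5, 2.0)
```

Unlike the regularized gap function, the D-gap function needs no constraint `x in K`, which is what makes it useful for unconstrained error-bound reformulations.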
  • Xin-Wei Liu, Yu-Hong Dai, Ya-Kui Huang
    Pacific Journal of Optimization. 2024, 20(3): 513-535. https://doi.org/10.61208/pjo-2023-034

    (Communicated by Liqun Qi)

    We present a primal-dual majorization-minimization method for solving large-scale linear programming problems. A smooth barrier augmented Lagrangian (SBAL) function with strict convexity for the dual linear programming is derived. The majorization-minimization approach is naturally introduced to exploit the smoothness and convexity of the SBAL function. Our method depends only on a factorization of a constant matrix independent of iterations and does not need any computation of step sizes, and thus can be expected to be particularly appropriate for large-scale linear programming. The method shares some properties with first-order methods for convex and linear programming problems, but its convergence analysis is built on the differentiability and convexity of our SBAL function. The global convergence is analyzed without requiring a priori that either the primal or the dual linear programming be feasible. Under strict complementarity conditions on the solution, our method is proved to be globally linearly convergent, and a new iteration-complexity result is given.


    This is dedicated to Professor Masao Fukushima for his 75th birthday. The first author would like to thank Professor Fukushima again for giving him the opportunity to work for six months at Kyoto University in 2004.



  • Christian Kanzow, Theresa Lechner
    Pacific Journal of Optimization. 2024, 20(3): 537-568. https://doi.org/10.61208/pjo-2023-036
    (Communicated by Liqun Qi)
    Dedicated to Masao Fukushima on the occasion of his 75th birthday.
    Optimization problems with composite functions consist of an objective function which is the sum of a smooth and a (convex) nonsmooth term. This particular structure is exploited by the class of proximal gradient methods and some of their generalizations like proximal Newton and quasi-Newton methods. In this paper, we propose a regularized proximal quasi-Newton method whose main features are: (a) the method is globally convergent to stationary points, (b) the globalization is controlled by a regularization parameter, no line search is required, (c) the method can be implemented very efficiently based on a simple observation which combines recent ideas for the computation of quasi-Newton proximity operators and compact representations of limited-memory quasi-Newton updates. Numerical examples for the solution of convex and nonconvex composite optimization problems indicate that the method outperforms several existing methods.
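The basic proximal gradient step that these proximal (quasi-)Newton variants generalize, sketched on the l1-regularized least-squares problem (an illustrative instance of the smooth-plus-nonsmooth structure, not the paper's method):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1 (the prox of the nonsmooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, x0, step, iters=500):
    """Proximal gradient method for the composite problem
    min_x (1/2)||Ax - b||^2 + lam*||x||_1: a gradient step on the smooth
    part followed by the prox of the nonsmooth part."""
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                    # gradient of smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

# With A = I the solution is soft_threshold(b, lam): here (2, 0).
x = proximal_gradient(np.eye(2), np.array([3.0, 0.1]), 1.0,
                      np.zeros(2), step=1.0)
```

A proximal quasi-Newton method replaces the scalar step by a metric built from curvature estimates; the regularized variant in the paper additionally controls globalization through a regularization parameter instead of a line search.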
  • Chenjian Pan, Hongjin He and Chen Ling
    Pacific Journal of Optimization. 2024, 20(3): 569-588. https://doi.org/10.61208/pjo-2023-038
    (Communicated by Liqun Qi)

        Linear regression is one of the most popular tools for data analysis. However, many existing works are limited to vector/matrix based models, which are often less efficient in dealing with high-dimensional data. In this paper, we aim to develop a transformed low-rank tensor regression model and apply it to data analysis. Unlike many tensor regression models, we represent the testing sample as a linear combination of all training samples under the transformed T-product (i.e., tensor-tensor product). Moreover, we propose an easily implementable ADMM-based algorithm for solving the underlying optimization model. A series of numerical experiments demonstrate that our approach works well on color face classification and traffic flow datasets.
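For reference, the classical DFT-based t-product (the "transformed" T-product in the paper replaces the DFT by other invertible transforms) can be sketched as:

```python
import numpy as np

def t_product(A, B):
    """Tensor-tensor product (t-product) of A (n1 x n2 x n3) and
    B (n2 x n4 x n3): DFT along the third (tubal) axis, frontal-slice-wise
    matrix products in the transform domain, then inverse DFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # slice-wise matmul
    return np.real(np.fft.ifft(Ch, axis=2))

def identity_tensor(n, n3):
    """Identity for the t-product: first frontal slice I, the rest zero."""
    I = np.zeros((n, n, n3))
    I[:, :, 0] = np.eye(n)
    return I
```

Working in the transform domain turns the t-product into independent per-slice matrix products, which is also what makes the slice-wise SVD (and hence low-rank structure) cheap to exploit.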


  • Huynh Van Ngai, Dao Ngoc Han
    Pacific Journal of Optimization. 2024, 20(3): 589-600. https://doi.org/10.61208/pjo-2024-015
    (Communicated by Xinmin Yang)
    In the papers [1, 2], Polyak established a convexity principle, stating that the image of a small ball under a $C^{1,1}$-smooth mapping between Hilbert spaces is convex. This convexity principle has some interesting applications in optimization and control theory. In this note, we give an extension of this principle to weakly convex multifunctions and its applications to local programming.