(Communicated by Liqun Qi)
For a signed ring family $\mathcal{F} \subseteq 3^V$ (closed with respect to the reduced union and intersection) and for a bisubmodular function $f \colon \mathcal{F} \to \mathbb{R}$ with $(\emptyset,\emptyset) \in \mathcal{F}$ and $f(\emptyset,\emptyset) = 0$, the bisubmodular polyhedron associated with $f$ is given by
$$\mathrm{P}(f) = \{\, x \in \mathbb{R}^V \mid x(X,Y) \le f(X,Y) \ \ \forall (X,Y) \in \mathcal{F} \,\},$$
where $x(X,Y) = \sum_{v \in X} x(v) - \sum_{v \in Y} x(v)$. We define the convolution of a bisubmodular function $f$ and a special bisubmodular function called a box-bisubmodular function determined by
Note: We are pleased to dedicate this paper to Prof. Masao Fukushima. I first met Masao in 1999, when I was the Head of the Department of Applied Mathematics and Chair Professor of Applied Mathematics at the Hong Kong Polytechnic University. Masao is a very distinguished and respected scholar with a high international reputation, and his work on optimization has inspired the careers of many researchers. On a personal level, we have organized many conferences together over the years. He is a good friend. This note was written by Kok Lay Teo.
Dedicated to Professor Masao Fukushima on the occasion of his 75th birthday
This paper focuses on a new approach based on lower-level Wolfe and Mond-Weir duality for bilevel programs, which gives two new single-level reformulations, called WDP and MDP, respectively. Unlike the popular MPCC (i.e., mathematical program with complementarity constraints) approach, both WDP and MDP may satisfy the Mangasarian-Fromovitz constraint qualification at their feasible points. This paper explores whether these new reformulations satisfy other constraint qualifications, such as the Abadie CQ and the Guignard CQ. In particular, some sufficient conditions are derived to ensure that the Abadie CQ and the Guignard CQ hold for WDP and MDP.
(Communicated by Liqun Qi)
Abstract: We present a primal-dual majorization-minimization method for solving large-scale linear programming problems. A smooth barrier augmented Lagrangian (SBAL) function, strictly convex for the dual linear program, is derived. The majorization-minimization approach is introduced naturally, based on the smoothness and convexity of the SBAL function. Our method depends only on a factorization of a constant matrix, independent of the iterations, and requires no step-size computation, so it can be expected to be particularly suitable for large-scale linear programming. The method shares some properties with first-order methods for convex and linear programming problems, but its convergence analysis rests on the differentiability and convexity of the SBAL function. Global convergence is established without requiring a priori that either the primal or the dual linear program be feasible. Under strict complementarity conditions at the solution, our method is proved to be globally linearly convergent, and a new iteration-complexity result is given.
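To illustrate the abstract's point that a majorization-minimization scheme can avoid step-size computation, here is a generic sketch (not the authors' SBAL method; the function names and the least-squares example are illustrative assumptions): for an $L$-smooth convex objective, each iteration minimizes the quadratic upper bound $f(x_k) + \nabla f(x_k)^\top (x - x_k) + \tfrac{L}{2}\|x - x_k\|^2$, whose minimizer is a fixed-step gradient update with no line search.

```python
import numpy as np

def mm_minimize(grad, L, x0, iters=500):
    """Generic majorization-minimization for an L-smooth convex function.

    Each step minimizes the quadratic majorizer built at the current
    iterate; its closed-form minimizer is x - grad(x)/L, so no step-size
    search is ever performed.
    """
    x = x0.copy()
    for _ in range(iters):
        x = x - grad(x) / L
    return x

# Illustrative use: minimize ||Ax - b||^2, whose gradient is 2 A^T (Ax - b)
# and whose smoothness constant is L = 2 * lambda_max(A^T A).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
grad = lambda x: 2.0 * A.T @ (A @ x - b)
L = 2.0 * np.linalg.norm(A.T @ A, 2)
x_star = mm_minimize(grad, L, np.zeros(2))
```

Because the majorizer is a global upper bound that touches the objective at the current iterate, each surrogate minimization decreases the true objective, which is the monotonicity property MM methods rely on.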
This paper is dedicated to Professor Masao Fukushima on the occasion of his 75th birthday. The first author would like to thank Professor Fukushima again for giving him the opportunity to work for six months at Kyoto University in 2004.
Linear regression is one of the most popular tools for data analysis. However, many existing works are limited to vector- or matrix-based models, which are often less efficient in dealing with high-dimensional data. In this paper, we develop a transformed low-rank tensor regression model and apply it to data analysis. Unlike many tensor regression models, we represent a testing sample as a linear combination of all training samples under the transformed T-product (i.e., tensor-tensor product). Moreover, we propose an easily implementable ADMM-based algorithm for solving the underlying optimization model. A series of numerical experiments demonstrates that our approach works well on color face classification and traffic flow datasets.
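For readers unfamiliar with the T-product mentioned above, a minimal sketch may help. The version below is the classical DFT-based t-product of third-order tensors (FFT along the third mode, slice-wise matrix products, inverse FFT); the transformed T-product in the paper replaces the DFT by a general invertible transform, which is not reproduced here, and the function name `t_product` is ours.

```python
import numpy as np

def t_product(A, B):
    """Classical t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3).

    Computed by taking the FFT along the third mode, multiplying the
    frontal slices frequency-by-frequency, and inverting the FFT.
    """
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    # Real inputs yield a real t-product; discard numerical imaginary parts.
    return np.real(np.fft.ifft(Cf, axis=2))
```

As a sanity check, the identity tensor (identity matrix in the first frontal slice, zeros elsewhere) acts as a multiplicative identity under this product.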