(Communicated by Donghui Li)
In this paper, we consider a sixth-order paired symmetric Cauchy tensor and its generating vectors. We first investigate the positive definiteness and positive semidefiniteness of the sixth-order paired symmetric Cauchy tensor, and give necessary and sufficient conditions for its positive definiteness based on the structural characteristics of the tensor. We then apply the concept of the M-eigenvalue to the sixth-order paired symmetric Cauchy tensor and discuss related properties. We give two M-eigenvalue inclusion intervals for a sixth-order paired symmetric Cauchy tensor, which provide two upper bounds for the M-spectral radius, and we establish the inclusion relation between the two intervals. Finally, we provide two numerical examples of the eigenvalue inclusion intervals, confirming their inclusion relationship.
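To make these objects concrete, the following minimal Python sketch builds a small sixth-order paired symmetric Cauchy-type tensor and estimates its M-spectral radius by random search over the unit spheres. The entry rule $1/(c_{i_1}+c_{i_2}+c_{i_3}+d_{j_1}+d_{j_2}+d_{j_3})$ with generating vectors c and d, and the random-search estimate, are illustrative assumptions rather than the paper's exact construction.

import numpy as np

def cauchy_tensor(c, d):
    # Hypothetical entry rule, assumed for illustration: the (i1,j1,i2,j2,i3,j3)
    # entry is 1/(c[i1]+c[i2]+c[i3]+d[j1]+d[j2]+d[j3]).
    n, m = len(c), len(d)
    A = np.zeros((n, m, n, m, n, m))
    for i1, j1, i2, j2, i3, j3 in np.ndindex(n, m, n, m, n, m):
        A[i1, j1, i2, j2, i3, j3] = 1.0 / (c[i1] + c[i2] + c[i3]
                                           + d[j1] + d[j2] + d[j3])
    return A

def m_form(A, x, y):
    # Evaluate the M-form A x y x y x y, whose stationary values on the unit
    # spheres are the M-eigenvalues.
    return np.einsum('iajbkc,i,a,j,b,k,c->', A, x, y, x, y, x, y)

def m_spectral_radius_lower_bound(A, trials=2000, seed=0):
    # Random search over unit vectors: returns a lower bound on the
    # M-spectral radius (the largest |M-eigenvalue|).
    rng = np.random.default_rng(seed)
    n, m = A.shape[0], A.shape[1]
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(n); x /= np.linalg.norm(x)
        y = rng.standard_normal(m); y /= np.linalg.norm(y)
        best = max(best, abs(m_form(A, x, y)))
    return best

c = np.array([1.0, 2.0, 3.0])   # generating vectors (positive entries)
d = np.array([0.5, 1.5])
print(m_spectral_radius_lower_bound(cauchy_tensor(c, d)))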
(Communicated by Jie Sun)
In this paper, we propose an extended CQ algorithm integrated with a selection technique to address the multiple-sets split feasibility problem (MSFP). At each iteration, the selection technique is employed to formulate a split feasibility subproblem of the MSFP, which is then solved by means of the CQ algorithm. Under mild conditions, we establish global convergence of the extended CQ algorithm. Furthermore, we provide numerical results which affirm the effectiveness and competitiveness of the proposed algorithm.
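For orientation, the classical CQ iteration for a single split feasibility pair $(C, Q)$, i.e., find $x \in C$ with $Ax \in Q$, reads $x^{k+1} = P_C\big(x^k - \gamma A^{\top}(Ax^k - P_Q(Ax^k))\big)$ with $\gamma \in (0, 2/\|A\|^2)$. The sketch below runs this iteration with a box $C$ and a Euclidean ball $Q$ chosen purely so the code executes; the paper's selection technique, which picks the subproblem at each iteration of the MSFP, is not reproduced here.

import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=500):
    # Classical CQ iteration: x <- P_C(x - gamma * A^T (Ax - P_Q(Ax))),
    # with step gamma in (0, 2/||A||^2).
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy sets chosen only so the sketch runs: C is the box [-1, 1]^n,
# Q is the Euclidean unit ball.
proj_C = lambda x: np.clip(x, -1.0, 1.0)
proj_Q = lambda y: y / max(1.0, np.linalg.norm(y))

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))
x = cq_algorithm(A, proj_C, proj_Q, x0=rng.standard_normal(6))
print(np.linalg.norm(A @ x))   # should be <= 1 (up to tolerance)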
(Communicated by Nobuo Yamashita)
In this paper, in the absence of any constraint qualification, we develop sequential necessary and sufficient optimality conditions for a constrained multiobjective fractional programming problem, characterizing a Henig proper efficient solution in terms of the $\epsilon$-subdifferentials and the subdifferentials of the functions involved. This is achieved by employing a sequential Henig subdifferential calculus rule for the sum of $m\ (m\geq 2)$ proper convex vector-valued mappings with a composition of two convex vector-valued mappings. In order to present an example illustrating our results, we establish the classical optimality conditions under the Moreau-Rockafellar qualification condition. Our results are presented in the setting of reflexive Banach spaces in order to avoid the use of nets.
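For the reader's convenience, the two standard objects named above are, for a proper convex function $f$ on a reflexive Banach space $X$ with dual $X^*$,
\[
\partial_{\epsilon} f(\bar{x}) = \{ x^{*} \in X^{*} : f(x) \geq f(\bar{x}) + \langle x^{*}, x - \bar{x} \rangle - \epsilon \ \text{ for all } x \in X \}, \qquad \epsilon \geq 0,
\]
while the Moreau-Rockafellar qualification condition (for instance, one of the convex functions being continuous at some point of $\operatorname{dom} f \cap \operatorname{dom} g$) guarantees the exact sum rule
\[
\partial (f + g)(\bar{x}) = \partial f(\bar{x}) + \partial g(\bar{x}).
\]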
(Communicated by Zheng-Hai Huang)
Overfitting is a common phenomenon in machine learning, wherein a model fits the training samples almost perfectly but generalizes poorly on the test set. Regularization tackles this problem by imposing a penalty on the complexity or smoothness of the model. However, the performance of regularization is usually limited by its lack of correlation with the data samples, which restricts its efficiency for many practical models. In this paper, following the seminal work of Zhu et al. (LDMNet), we develop a coupled tensor norm regularization that can be customized to models with small-sized structural samples. The main idea of this regularization, built upon the empirical observation that input data and output features lie on a low-dimensional manifold, is an alternative representation of low-dimensionality. Concretely, the coupled tensor norm regularization is a low-rank approximation of the coupled tensor rank function. We present related theoretical properties, and we further test this regularization on multinomial logistic regression and deep neural networks through theoretical algorithm analysis and numerical experiments. Numerical simulations on real datasets demonstrate the compelling performance of the proposed regularization.
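As a rough illustration of a low-rank surrogate in this spirit, the sketch below stacks Gram matrices of the input data and output features into one coupled tensor and sums the nuclear norms of its unfoldings, a standard convex proxy for tensor rank. Both the stacking and the choice of surrogate are assumptions made here for illustration; the paper's coupled tensor norm may be defined differently.

import numpy as np

def coupled_tensor_norm(X, F):
    # Stack Gram matrices of inputs X (batch x d_in) and output features
    # F (batch x d_out) into one coupled 3-way tensor, then sum the nuclear
    # norms of its mode unfoldings -- a standard convex proxy for tensor rank.
    T = np.stack([X @ X.T, F @ F.T])            # shape (2, batch, batch)
    total = 0.0
    for mode in range(T.ndim):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        total += np.linalg.norm(unfolding, 'nuc')
    return total

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 8))                # input data
F = rng.standard_normal((20, 4))                # output features
lam = 0.1
# penalized objective: data_loss(...) + lam * coupled_tensor_norm(X, F)
print(lam * coupled_tensor_norm(X, F))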
(Communicated by Guihua Lin)
We consider the optimization problem of minimizing a smooth and convex function. Based on the accelerated coordinate descent method (ACDM) with non-uniform sampling probabilities $L_i^{1/2}\big[\sum_{k=1}^n L_k^{1/2}\big]^{-1}$ (Nesterov Yu. et al., SIAM J. Optim., 110–123, 2017 [3]), we propose an adaptive accelerated coordinate descent method (AACDM) with the same probability distribution determined by $\{L_i\}$ as in ACDM.
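The sampling rule is easy to state in code: the minimal sketch below runs plain (non-accelerated) coordinate descent with the probabilities $p_i = L_i^{1/2}/\sum_{k=1}^n L_k^{1/2}$ on a quadratic test problem; the quadratic objective and the non-accelerated step are placeholders so the sketch runs, while AACDM itself additionally carries Nesterov-type acceleration and the adaptive step sizes discussed next.

import numpy as np

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
Q = Q.T @ Q + np.eye(n)                     # f(x) = 0.5 * x^T Q x, convex
L = np.diag(Q).copy()                       # coordinate Lipschitz constants
p = np.sqrt(L) / np.sqrt(L).sum()           # p_i = L_i^{1/2} / sum_k L_k^{1/2}

x = rng.standard_normal(n)
for _ in range(5000):
    i = rng.choice(n, p=p)                  # non-uniform coordinate sampling
    x[i] -= (Q[i] @ x) / L[i]               # step 1/L_i along coordinate i
print(0.5 * x @ Q @ x)                      # approaches the minimum value 0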
In [1, 3], the step sizes of the algorithms are fixed and determined by the (global) parameters $\{L_i\}$. This may not be preferable in practical applications where the (local) parameter values differ appreciably from their global counterparts, which suggests that methods adaptive to the local parameters might perform better in practice. Motivated by this, in this paper we study the adaptive ACDM, which still requires the (global) Lipschitz constants for non-uniform sampling as a prior, while the (local) coordinate Lipschitz constants are determined by backtracking (not necessarily monotone) to achieve better performance. Both the strongly convex and non-strongly convex cases are discussed in this paper.
The non-monotone backtracking line search is included in our adaptive scheme; it performs better than the monotone one for applications whose local coordinate Lipschitz constants oscillate along the trajectory or become smaller when approaching the tail. The adaptive ACDM is not a monotone method, meaning that the sequence of function values it produces is not necessarily nonincreasing. Since a monotone approach can improve numerical stability (see monotone FISTA in [2]), we also propose a monotone version of the adaptive ACDM.
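A hypothetical sketch of such a backtracking step for one coordinate follows; the acceptance test (the standard coordinate sufficient-decrease inequality) and the shrink/grow factors are illustrative assumptions, with the initial shrink being what makes the search non-monotone.

def backtrack_coordinate(f, x, i, g_i, L_loc, shrink=0.5, grow=2.0):
    # Non-monotone start: shrink the previous estimate so the scheme can
    # track locally smaller curvature along the trajectory.
    L_loc = max(shrink * L_loc, 1e-12)
    while True:
        x_trial = x.copy()
        x_trial[i] -= g_i / L_loc
        # Standard coordinate sufficient-decrease test; grow L_loc on failure.
        if f(x_trial) <= f(x) - g_i ** 2 / (2.0 * L_loc):
            return x_trial, L_loc
        L_loc *= grow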
Numerical results on some classic problems show the efficiency of the adaptive scheme.
(Communicated by Xinwei Liu)
This paper introduces a novel conjugate gradient method that exploits the $m$-th order Taylor expansion of the objective function and cubic Hermite interpolation conditions. We derive a set of modified secant equations with enhanced accuracy in approximating the Hessian of the objective function. Additionally, we develop a modified Wolfe line search that addresses the limitations of the conventional constraint imposed on modified secant equations while ensuring that the curvature condition is fulfilled. An improved spectral conjugate gradient algorithm is then proposed based on the modified secant equations and the modified Wolfe line search. Under standard assumptions, the algorithm is proven to be globally convergent for minimizing general nonconvex functions. Numerical results demonstrate the effectiveness of the proposed algorithm.
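For context, here are the classical forms of the ingredients named above, not the paper's refined versions: the standard secant equation and one widely used modified secant equation obtained from higher-order Taylor information (the form of Zhang, Deng and Chen),
\[
B_{k+1} s_k = y_k, \qquad B_{k+1} s_k = \bar{y}_k := y_k + \frac{\theta_k}{s_k^{\top} u_k}\, u_k, \quad \theta_k = 6(f_k - f_{k+1}) + 3(g_k + g_{k+1})^{\top} s_k,
\]
where $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, and $u_k$ is any vector with $s_k^{\top} u_k \neq 0$; and the standard Wolfe conditions, which require, for $0 < c_1 < c_2 < 1$,
\[
f(x_k + \alpha_k d_k) \leq f(x_k) + c_1 \alpha_k g_k^{\top} d_k, \qquad g(x_k + \alpha_k d_k)^{\top} d_k \geq c_2\, g_k^{\top} d_k.
\]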