(Communicated by Michel Théra)
Abstract: The aim of our paper is to establish optimality conditions on dual spaces for constrained and unconstrained set-valued optimization problems with respect to some new notions of directional Pareto efficiency. The methods we employ are different for each of the two cases. For the constrained problem we use a fuzzy extremal principle, and for the unconstrained one we use the incompatibility between openness and minimality to prove that the limit of a sequence of directional minima for a perturbation of the objective set-valued map F is a critical point for F.
Consider the linear imaging problem (1). It is reduced to a constrained minimization problem and solved by a novel parameterized iteration formula derived from the Karush-Kuhn-Tucker conditions of (3).
To establish the convergence of the iterative method, the original optimization problem is split into two subproblems:
1. find the minimizer with respect to r;
2. find the minimizer with respect to the remaining variable.
In the first subproblem, the minimizer with respect to r under the constraint admits an explicit expression.
Combining this with the fact that a bounded sequence of points has a convergent subsequence, the convergence of the iterative sequence is obtained.
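The two-subproblem splitting and the monotone-value argument can be sketched as a generic alternating-minimization loop; the quadratic objective and the closed-form subproblem solvers below are illustrative assumptions, not the paper's actual formulas.

```python
# Generic alternating minimization sketch: minimize f(r, s) over r and s in
# turn.  The quadratic below is a stand-in for the paper's (unstated) model.

def f(r, s):
    return (r - 2.0) ** 2 + (s - r) ** 2   # illustrative smooth objective

def argmin_r(s):
    # Subproblem 1: exact minimizer of f(., s), available in closed form here.
    return (2.0 + s) / 2.0

def argmin_s(r):
    # Subproblem 2: exact minimizer of f(r, .).
    return r

def alternate(r=0.0, s=0.0, iters=60):
    for _ in range(iters):
        r = argmin_r(s)   # solve subproblem 1
        s = argmin_s(r)   # solve subproblem 2
    return r, s

r, s = alternate()   # converges to the joint minimizer r = s = 2
```

Each sweep cannot increase the objective, so the value sequence is monotone and bounded below; boundedness of the iterates then yields the convergent subsequence used in the argument above.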
To test the performance of the new iterative formula, a fan-beam X-ray CT reconstruction problem with the low-contrast Shepp-Logan phantom is used. Compared with the traditional EM algorithm, the parameterized algorithm achieves better reconstruction quality and faster convergence by adjusting the parameter.
(Communicated by Chen Ling)
The third is an infeasible trust-region algorithm, which constructs an infeasible trust-region subproblem, expanding the search scope with the expectation of improving the success rate. The largest Z-eigenvalue of a tensor can be obtained by solving the following optimization problem
To expand the search scope, we construct the following trust-region subproblem with linear constraint at the current iteration point ,
where
To solve subproblem (2): if the constraints of the subproblem are consistent, the trial step is computed by an interior-point method; if they are inconsistent, a mechanism is introduced into the algorithm that computes the step in two stages. In the first stage, we try to satisfy the linearized constraints, that is,
Then the total trial step dk is obtained by solving
Each iteration point of subproblem (2) may be infeasible. To expand the search scope, escape the local optimal region, and find better points, the algorithm can accept a reduction in the objective function f. To prevent drifting away from the sphere constraint while the objective function decreases, feasibility must be restored whenever such a decrease is accepted; hence, we take the trial step and project it onto the unit sphere.
We demonstrate the global convergence of the infeasible trust-region algorithm, which can converge to a KKT point, a strong stationary point, or a better point that escapes the local region of the original problem (1). The three algorithms above serve as preprocessing strategies for the feasible trust-region method. We combine the three preprocessing strategies with the feasible trust-region method and compare them. Numerical results show that the success rate of computing the extreme Z-eigenvalue increases greatly with the help of the global optimization strategy.
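For comparison with the sphere-constrained formulation above, a Z-eigenpair of a small symmetric tensor can be approximated with the classical shifted symmetric higher-order power method (SS-HOPM); this is a standard baseline, not the trust-region algorithms of the paper, and the diagonal test tensor below is an illustrative assumption.

```python
def tensor_apply(a, x):
    # (A x^2)_i = sum_{j,k} a[i][j][k] * x[j] * x[k] for a 3rd-order tensor.
    n = len(x)
    return [sum(a[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n))
            for i in range(n)]

def ss_hopm(a, x, shift=1.0, iters=200):
    # Shifted symmetric higher-order power method: a classical baseline.
    for _ in range(iters):
        v = [axi + shift * xi for axi, xi in zip(tensor_apply(a, x), x)]
        nrm = sum(vi * vi for vi in v) ** 0.5
        x = [vi / nrm for vi in v]          # keep the iterate on the sphere
    lam = sum(xi * axi for xi, axi in zip(x, tensor_apply(a, x)))
    return x, lam

# Diagonal symmetric 3rd-order tensor with diagonal entries 2 and 1;
# its largest Z-eigenvalue is 2, attained at the first unit vector.
n = 2
a = [[[0.0] * n for _ in range(n)] for _ in range(n)]
a[0][0][0] = 2.0
a[1][1][1] = 1.0
x, lam = ss_hopm(a, [0.8, 0.6])
```

Because the method only follows local ascent on the sphere, it can stall at a local maximizer, which is exactly the failure mode the preprocessing strategies above are designed to mitigate.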
We employ the multi-way array to generate the clustering costs and use the hypergraph-related tensor to construct the constrained optimization model for the semi-supervised clustering problem. This is a 0-1 integer program. To make this model tractable, we penalize the constraint and add a regularization term
to the objective function. From the discrete 0-1 constraint it can be deduced that the column vectors of X are unit vectors and orthogonal to each other. Thus, we relax it to a continuous constraint. The optimization model (1) is then transformed into an orthogonally constrained model
Model (2) is an optimization problem constrained to the Stiefel manifold. To solve it, we first find a descent direction in the tangent space at the current point, and then employ the polar decomposition to map the descent direction back to the Stiefel manifold.
The tangent space at a point is
The direction is employed to find a new point in the tangent space, where G is the gradient of f at the current point X. In the iteration process, we find a point in the tangent space from Xk. The next step is to map it to the manifold by using a retraction. Consider the univariate function
with its domain , which is the projection of
onto the Stiefel manifold.
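For the special case of a single column (p = 1), the Stiefel manifold reduces to the unit sphere and the polar-decomposition retraction has a closed form, namely normalization of X + tD; the sketch below covers only this rank-one case (the general case requires an SVD-based polar factor).

```python
import math

def polar_retract_rank1(x, d, t):
    # On St(n, 1) (the unit sphere) the polar retraction is normalization.
    y = [xi + t * di for xi, di in zip(x, d)]
    nrm = math.sqrt(sum(yi * yi for yi in y))
    return [yi / nrm for yi in y]

x = [1.0, 0.0, 0.0]
d = [0.0, 1.0, 0.0]                 # tangent direction: <x, d> = 0
y = polar_retract_rank1(x, d, 1.0)  # lands back on the unit sphere
```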
Given the current point and the descent direction, we use the adaptive feasible BB-like method to find a step size such that the objective decreases relative to a reference objective function value; this defines the new iteration point.
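The BB-like step-size rule can be sketched in a simplified unconstrained setting; the toy quadratic objective, the plain BB1 formula, and the omission of the feasibility safeguards and the nonmonotone reference value are all simplifying assumptions relative to the method described above.

```python
def grad(x):
    # Gradient of the stand-in objective f(x) = 0.5*(x1^2 + 10*x2^2).
    return [x[0], 10.0 * x[1]]

def bb_descent(x, iters=200):
    g = grad(x)
    tau = 1.0                                     # initial step size
    for _ in range(iters):
        x_new = [xi - tau * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy > 1e-12:
            tau = sum(si * si for si in s) / sy   # BB1 step: <s,s>/<s,y>
        x, g = x_new, g_new
    return x

x = bb_descent([5.0, 1.0])   # iterates approach the minimizer (0, 0)
```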
We employ the proposed SSHC method to cluster synthetic and real data, and compare its results with those of the unsupervised SHC method. Numerical experiments show that our method works well on both synthetic and real data.
(Communicated by Gui-hua Lin)
A globally and superlinearly convergent BFGS method is introduced to solve general nonlinear equations without computing exact gradients. Compared with existing Gauss-Newton-based BFGS-type methods, the proposed method does not require conditions such as symmetry on the underlying function. Moreover, it can be suitably adjusted to solve nonlinear least squares problems and still guarantee global convergence. Some numerical results are reported to show its efficiency.
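As a rough illustration of solving nonlinear equations with a quasi-Newton method that avoids exact derivative information, the sketch below runs a standard BFGS iteration on the least-squares merit function 0.5*||F(x)||^2 with finite-difference gradients; the toy system, the line search, and the update safeguard are assumptions, not the paper's exact scheme.

```python
def F(x):
    # Toy nonlinear system with a root at (1, 2); stands in for a general F.
    return [x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0]

def phi(x):
    # Least-squares merit function 0.5 * ||F(x)||^2.
    return 0.5 * sum(fi * fi for fi in F(x))

def num_grad(x, h=1e-6):
    # Central-difference gradient of phi: no exact derivatives are used.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((phi(xp) - phi(xm)) / (2.0 * h))
    return g

def bfgs(x, iters=100):
    n = len(x)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # inverse Hessian
    g = num_grad(x)
    for _ in range(iters):
        d = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        t, gd = 1.0, sum(gi * di for gi, di in zip(g, d))
        for _ in range(40):                       # Armijo backtracking
            if phi([xi + t * di for xi, di in zip(x, d)]) <= phi(x) + 1e-4 * t * gd:
                break
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = num_grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy > 1e-12:                            # standard BFGS inverse update
            rho = 1.0 / sy
            Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
            yHy = sum(yi * Hyi for yi, Hyi in zip(y, Hy))
            for i in range(n):
                for j in range(n):
                    H[i][j] += (rho * rho * yHy + rho) * s[i] * s[j] \
                               - rho * (Hy[i] * s[j] + s[i] * Hy[j])
        x, g = x_new, g_new
    return x

x = bfgs([1.5, 1.5])   # drives the residual ||F(x)|| toward zero
```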
(Communicated by Fanwen Meng)
In this paper, a new approach is presented for exactly solving the following linear mixed-integer programming problem (P) with scenario constraints,
where c is an n-dimensional column vector and, for each scenario, the constraint matrix and the m-dimensional column vector d are given; the lower and upper bounds of x are also given.
Consider the following subproblem:
where the corresponding data are defined accordingly.
The first part is the following problem (MP), obtained by discarding the scenario constraints:
The optimal objective value of the continuous relaxation of problem (MP) provides a lower bound for the problem. We can also obtain two integer points by rounding the continuous optimal solution of (MP) up or down along some directions.
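The rounding step can be sketched as follows: each coordinate of the continuous optimum is rounded up or down, the resulting candidates are screened for feasibility, and the best feasible one yields an upper bound. All data below (objective, constraint, relaxation optimum) are illustrative assumptions.

```python
from itertools import product
from math import floor, ceil

def obj(c, x):
    return sum(ci * xi for ci, xi in zip(c, x))

def integer_candidates(x_relaxed):
    # Round each coordinate of the continuous optimum down or up.
    choices = [(floor(xi), ceil(xi)) for xi in x_relaxed]
    return sorted(set(product(*choices)))

def feasible(x, A, d):
    # Feasibility for illustrative constraints A x <= d.
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= di
               for row, di in zip(A, d))

c = [3.0, 2.0]                 # illustrative objective: minimize c^T x
A, d = [[1.0, 1.0]], [3.5]     # illustrative constraint x1 + x2 <= 3.5
x_star = [1.25, 1.5]           # pretend optimum of the continuous relaxation
cands = [x for x in integer_candidates(x_star) if feasible(x, A, d)]
best = min(cands, key=lambda x: obj(c, x))   # feasible point: an upper bound
```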
The other part consists only of the scenario constraints. For a given feasible solution of (MP), we determine whether it is feasible for the scenario constraints by solving the following subproblems:
We then develop domain-partition techniques for several cases, according to whether the integer point x̂ is feasible for each constraint, and thus cut off integer boxes that contain no optimal solution.
Integrating this solution scheme into a branch-and-bound framework, we present a new algorithm for solving problem (P). The proposed method reduces the optimality gap gradually over the iterations and converges in finitely many steps.
Finally, we report numerical experiments comparing the proposed algorithm with the traditional branch-and-bound method for different sizes of scenario constraints; the results show that the proposed algorithm is very efficient.
(Communicated by Xinwei Liu)
The proximal gradient descent method, well known for convex composite optimization, can be completely described by the concept of the proximal gradient mapping. In this note, we introduce two new results on the proximal gradient mapping, norm monotonicity and refined descent, with which we extend the recently proposed potential-function-based framework from gradient descent to proximal gradient descent.
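For a concrete instance, when the nonsmooth part is an l1 penalty the proximal gradient mapping reduces to componentwise soft-thresholding; the sketch below applies it to a separable toy problem (the quadratic smooth part and all parameters are illustrative assumptions).

```python
def soft_threshold(v, t):
    # Proximal mapping of t*||.||_1, applied componentwise.
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def prox_grad(x, step=0.5, lam=0.1, iters=200):
    # Minimize 0.5*||x - b||^2 + lam*||x||_1 (an illustrative composite).
    b = [1.0, -0.05]
    for _ in range(iters):
        g = [xi - bi for xi, bi in zip(x, b)]           # gradient of f
        y = [xi - step * gi for xi, gi in zip(x, g)]    # forward step
        x = soft_threshold(y, step * lam)               # proximal step
    return x

x = prox_grad([0.0, 0.0])   # converges to the shrinkage of b: (0.9, 0.0)
```

Each iteration is exactly one evaluation of the proximal gradient mapping, which is why properties of that mapping alone, such as the norm monotonicity and refined descent studied here, suffice to analyze the whole method.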
(Communicated by Robert Csetnek)
The purpose of this paper is to study the strong convergence of Moudafi's proximal iterative method for finding a common element of the solution sets of a finite family of equilibrium problems in a Hilbert space. More precisely, we suppose that, for $i=1,\dots,m$, $C_i$ is a closed convex subset of a Hilbert space $H$ and $F_i$ is a real-valued bifunction defined on $C_i\times C_i$. The corresponding system of equilibrium problems consists in finding a point $x$ in $C=\bigcap_{i=1}^m C_i$ such that
(SEP) $F_{i}(x,y)\geq 0$ for all $y\in C_i$ and all $i=1,\dots,m.$ (1)
Under appropriate condition on the parameters and the data, the sequence $(x_k)$ generated by the following algorithm
$x^{k+1}=\alpha_{k}{x^k}+\beta_{k}{g(x^k)}+\gamma_{k}{J_r^{F_{m}}}{J_r^{F_{m-1}}}\dots{J_r^{F_{1}}}(x^k)$
is proved to converge strongly to a point $\bar{x}\in S,$ where $S$ is the set of solutions to the system (1), $J_r^{F_{i}}(x)$ is the resolvent of $F_i$ at $x$ with parameter $r$ for $i=1,2,\dots,m$, $g:C_m \longrightarrow C_m$ is a contraction and $\alpha_k ,\beta_k ,\gamma_k$ are positive scalars such that $\alpha_k +\beta_k +\gamma_k=1$, $\beta_k \rightarrow 0$, $\sum \beta_k=+\infty$ and $(\alpha_k )\subset [c,d]$ for some $c,d\in (0,1)$.
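The shape of the iteration can be sketched in one dimension, with the resolvent composition replaced by a composition of metric projections onto intervals (a stand-in, since projections are the resolvents of the corresponding indicator-type equilibrium problems); the parameter sequences below follow the stated conditions.

```python
def proj(x, lo, hi):
    # Metric projection onto the interval [lo, hi].
    return min(max(x, lo), hi)

def g(x):
    # A contraction with factor 1/2.
    return 0.5 * x

def iterate(x, steps=20000):
    for k in range(steps):
        alpha = 0.5                    # (alpha_k) stays in [c, d] in (0, 1)
        beta = 1.0 / (k + 2)           # beta_k -> 0 and sum beta_k = infinity
        gamma = 1.0 - alpha - beta
        T = proj(proj(x, 0.0, 2.0), 1.0, 3.0)  # resolvent-composition stand-in
        x = alpha * x + beta * g(x) + gamma * T
    return x

x = iterate(5.0)   # approaches 1, the point of [1, 2] selected by g
```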
The second goal of this paper is to use the iterative algorithm proposed for the equilibrium system (1) to solve the split common fixed point problem as defined by Censor and Segal in [The split common fixed-point problem for directed operators, J. Convex Anal. 16 (2009), 587–600], i.e., finding a point $\bar{x}$ satisfying
$\overline{x}\in\bigcap_{i=1}^{m}{Fix}{U_{i}}$ and $A{\overline{x}}\in\bigcap_{j=1}^{m}{Fix}{T_{j}}$ (3)
where $H_1$ and $H_2$ are two Hilbert spaces, $A : H_{1} \longrightarrow H_{2}$ is a bounded linear operator, $(U_{i})_{i=1}^{m}: H_{1} \longrightarrow H_{1}$ are F-hemicontinuous operators and $(T_{i})_{i=1}^{m}: H_{2} \longrightarrow H_{2}$ are M-demicontractive mappings. The concept of M-demicontractiveness is new in the literature; we introduce it in order to develop the relation between problems (1) and (3). Our analysis also shows that the class of $M$-demicontractive mappings lies between the nonexpansive operators and the demicontractive ones. Finally, to illustrate the convergence of our algorithm, we present a numerical example implemented in MATLAB.