Accepted: 2024-01-28
(Communicated by Guanglu Zhou)
Consider the linear equation
$Kx = y,$ (1)
where $x\in\mathbb{R}^n$ is the original signal or image, $K:\mathbb{R}^n\rightarrow \mathbb{R}^m$ is a degradation operator, and $y\in\mathbb{R}^m$ is the observed signal or image. Such equations have important applications in signal processing, non-destructive testing, and related fields. The classical $\ell_1$-norm sparse regularization method lacks stability for ill-posed problems with large condition numbers. Moreover, in many applications the solution $x$ of problem (1) is not truly sparse. To overcome these shortcomings of classical sparsity regularization, some researchers have focused on the elastic-net regularization problem, that is,
$\min_{x\in \mathbb{R}^n} J(x)=\frac{1}{2}\|Kx-y\|^2+\alpha\|x\|_{1}+\frac{\beta}{2}\|x\|^2,$ (2)
where $\alpha>0$, $\beta>0$, $\|\cdot\|$ denotes the $\ell_2$-norm, and $\|\cdot\|_1$ denotes the $\ell_1$-norm. As a form of multi-parameter Tikhonov regularization, elastic-net regularization is better suited to ill-conditioned problems. Statistically, it is more stable than classical regularization. Compared with classical $\ell_1$-norm sparsity regularization, elastic-net regularization contains an additional $\ell_2$-norm term, and thus combines the advantages of both $\ell_2$-regularization and $\ell_1$-regularization.
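For concreteness, the objective in (2) can be evaluated directly. The sketch below assumes NumPy; the function name and the toy data in the usage example are illustrative, not from the paper:

```python
import numpy as np

def elastic_net_objective(K, y, x, alpha, beta):
    """J(x) = 0.5*||Kx - y||^2 + alpha*||x||_1 + 0.5*beta*||x||^2, as in (2)."""
    res = K @ x - y
    return 0.5 * res @ res + alpha * np.abs(x).sum() + 0.5 * beta * x @ x
```

For instance, with $K=I$, $y=x=(1,0)^T$, $\alpha=1$ and $\beta=2$, the data-fit term vanishes and $J(x)=1+1=2$.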
Although various approaches exist for solving the elastic-net regularization problem, they mainly focus on the convergence of the algorithm; little work has been devoted to its convergence time. It is therefore necessary to devise a method for estimating the convergence time. For (2), we can relax the problem to the following constrained optimization problem
min $g(x)=\frac{1}{2} \|Kx-y\|^2+\frac{\beta}{2}\|x\|^2$
s.t. $x\in\Theta,$ (3)
where $\Theta=\{x\in\mathbb{R}^n|\|x\|_{1}\leq r\}, r>0,$ and transform the problem (3) into the fixed point equation form
$x=P_\Theta(x-\lambda K^T(Kx-y)-\lambda\beta x),$ (4)
where the stepsize $\lambda>0$. The solution of problem (1) can then be obtained from the solution of problem (3) by letting $\beta\rightarrow0$.
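Evaluating the fixed-point map in (4) requires the Euclidean projection onto the $\ell_1$-ball $\Theta$. A minimal sketch follows; the sort-and-threshold projection is a standard technique rather than part of the paper, and all function names are illustrative:

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of v onto {x : ||x||_1 <= r}, via the standard
    sort-and-threshold method."""
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]           # sorted magnitudes, descending
    cssv = np.cumsum(u)
    ks = np.arange(1, v.size + 1)
    rho = np.nonzero(u - (cssv - r) / ks > 0)[0][-1]
    theta = (cssv[rho] - r) / (rho + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def fixed_point_map(x, K, y, lam, beta, r):
    """The map z(x) = P_Theta(x - lam*K^T(Kx - y) - lam*beta*x) from (4)."""
    return project_l1_ball(x - lam * K.T @ (K @ x - y) - lam * beta * x, r)
```

A solution of (3) is then exactly a point satisfying `fixed_point_map(x, ...) == x`.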
Based on (4), we propose the following new continuous-time projection algorithm for solving the elastic-net regularization
$\dot{x}(t)=-\varphi(x)(x(t)-z(x(t))),$ (5)
where
$\varphi(x)=\left\{\begin{array}{ll}\frac{k_1}{\|z(x(t))-x(t)\|^{1-\alpha_1}}+\frac{k_2}{\|z(x(t))-x(t)\|^{1-\alpha_2}}+\frac{k_3}{\|z(x(t))-x(t)\|}, & \mathrm{if}~ x\neq z(x),\\0, & \mathrm{otherwise}, \end{array}\right. $
$z(x(t))=P_\Theta(x(t)-\lambda(K^TKx(t)-K^Ty)-\lambda\beta x(t)),$ $k_1>0, k_2>0, k_3>0,\alpha_1\in(0,1), \alpha_2>1, \lambda>0$ and $\beta>0$.
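The dynamics (5) can be simulated by a simple forward-Euler discretization. The sketch below is only an illustration: the step size, gains, and stopping tolerance are assumptions, and the $\ell_1$-ball projection uses the standard sort-and-threshold method rather than anything specified in the paper:

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection onto {x : ||x||_1 <= r} (sort-and-threshold)."""
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    ks = np.arange(1, v.size + 1)
    rho = np.nonzero(u - (cssv - r) / ks > 0)[0][-1]
    theta = (cssv[rho] - r) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def simulate(K, y, x0, r, lam, beta, k1=1.0, k2=1.0, k3=0.1,
             a1=0.5, a2=2.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of x'(t) = -phi(x)(x(t) - z(x(t))) from (5)."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        z = project_l1_ball(x - lam * K.T @ (K @ x - y) - lam * beta * x, r)
        n = np.linalg.norm(z - x)
        if n < 1e-12:            # at a (numerical) equilibrium, phi(x) = 0
            break
        phi = k1 / n**(1 - a1) + k2 / n**(1 - a2) + k3 / n
        x = x - dt * phi * (x - z)
    return x
```

For example, with $K=I$, $y=(2,0)^T$, $r=1$, $\beta=0.1$ and $\lambda=0.1\in(0,\frac{2\beta}{L^2})$, the trajectory should settle near the equilibrium $(1,0)^T$, which is a fixed point of (4) for these data.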
The equivalence between the equilibrium points of (5) and the solutions of (4) is established. Moreover, we prove that for every $\beta>0$ and $\lambda\in(0,\frac{2\beta}{L^2})$, the solution of (5) exists and is unique on $[0,+\infty)$ for any given initial value. Furthermore, the convergence time of the proposed method can be estimated by the settling time
$T(x(0))\leq T_{\max}=\frac{1}{d_1(1-p_1)}+\frac{1}{d_2(p_2-1)},$
where $k_1>0, k_2>0, k_3>0, \alpha_1\in(0,1), \alpha_2>1, L=\|K\|^2+\beta, \lambda\in(0,\frac{2\beta}{L^2}), \Phi=\sqrt{\frac{1}{1-\lambda^2 L^2+2\lambda\beta}}\in (0,1),$ $p_1:=\frac{1+\alpha_1}{2},$ $p_2:=\frac{1+\alpha_2}{2},$ $d_1=k_1(1-\Phi)\frac{2^{p_1}}{(\frac{4\beta}{4\beta-\lambda L^2})^{1-\alpha_1}}>0,$ $d_2=2^{p_1}(1-\Phi)^{\alpha_2}k_2>0. $
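Once the problem data are fixed, the bound $T_{\max}$ can be evaluated numerically. The sketch below transcribes the constants above term by term; interpreting $\|K\|$ as the spectral norm and the demo parameter values are assumptions:

```python
import numpy as np

def settling_time_bound(K, lam, beta, k1, k2, a1, a2):
    """Evaluate T_max = 1/(d1*(1 - p1)) + 1/(d2*(p2 - 1)) from the constants
    defined above (spectral norm assumed for ||K||)."""
    L = np.linalg.norm(K, 2) ** 2 + beta          # L = ||K||^2 + beta
    assert 0.0 < lam < 2.0 * beta / L**2, "stepsize condition violated"
    Phi = np.sqrt(1.0 / (1.0 - lam**2 * L**2 + 2.0 * lam * beta))
    p1, p2 = (1.0 + a1) / 2.0, (1.0 + a2) / 2.0
    d1 = k1 * (1.0 - Phi) * 2.0**p1 \
        / (4.0 * beta / (4.0 * beta - lam * L**2))**(1.0 - a1)
    d2 = 2.0**p1 * (1.0 - Phi)**a2 * k2
    return 1.0 / (d1 * (1.0 - p1)) + 1.0 / (d2 * (p2 - 1.0))
```

Since $\alpha_1\in(0,1)$ gives $p_1<1$ and $\alpha_2>1$ gives $p_2>1$, both summands are positive and finite, so the bound is a genuine fixed-time estimate independent of the initial value.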
To validate the convergence and effectiveness of the proposed algorithm, we apply it to numerical examples in signal processing and image restoration. Compared with state-of-the-art methods from the literature, such as the CAPPA method, the THZF method, and the PNN algorithm, our algorithm achieves a lower mean squared error (MSE) in signal processing. Furthermore, it performs better in terms of the signal-to-noise ratio (SNR), the structural similarity index (SSIM), and the relative error (RE) in image restoration.