Wuhan Univ. J. Nat. Sci. Volume 28, Number 1, February 2023, Pages 53-60
DOI: https://doi.org/10.1051/wujns/2023281053
Published online 17 March 2023

© Wuhan University 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

The goal of image denoising is to estimate the latent image $x \in \mathbb{R}^{m \times n}$ from its degraded observation $y \in \mathbb{R}^{m \times n}$, which is typically an ill-posed inverse problem. To tackle the ill-posedness, regularization methods based on various image priors are incorporated into the denoising process, usually by minimizing an objective function of the form

$$\hat{x} = \mathop{\arg\min}_{x} \|y - x\|_2^2 + \lambda R(x) \tag{1}$$

where $x$ and $y$ represent the original image and the degraded image, respectively. The first term is the data fidelity term, the second is the regularization term, and $\lambda$ is a regularization parameter that balances the two. Over the past decades, a variety of image denoising methods have been proposed, such as total variation[1,2], sparse representation[3-5], non-local self-similarity[6,7], and neural network models[8-10].

Nowadays, sparse representation has become a mainstream direction in image processing. In 2006, Aharon et al[11] pioneered the K-SVD (K-singular value decomposition) algorithm, which updates the dictionary atoms and the corresponding coefficients under a minimum-error criterion through K rounds of singular value decomposition. In standard dictionary learning and sparse coding, each block is treated as an independent individual, although blocks are generally correlated with each other; this correlation is usually ignored. Mairal et al[12] assumed that the sparse coefficients of similar blocks share the same support and combined the image redundancy information to improve K-SVD denoising performance by learning simultaneous sparse coding (LSSC). Dong et al[13] trained principal component analysis (PCA) dictionaries by clustering all image blocks, and for each patch group the most appropriate local PCA dictionary was selected to encode it; the resulting model, called non-local centralized sparse representation (NCSR), introduced the concept of "coding residuals". Zhang et al[14] formally proposed the concept of the "group" as the basic processing unit of sparse coding, and proposed a group-based sparse representation (GSR) model that extends mutually independent patch sparsity to group sparsity, exploiting both the non-local similarity and the local sparsity of images. Zha et al[15] proposed a group sparse residual constraint model with non-local priors (GSRC-NLP) and used soft thresholding to achieve image denoising; it essentially combines the NCSR method and the GSR model, upgrading patch-based sparse coding residuals to group-based sparse residuals.

Other than the popular $\ell_1$ norm, there are many regularization functions that promote sparsity. Bai et al[16] proposed an adaptive correction term to alleviate the over-constraint of the $\ell_1$ norm. The $\ell_1/\ell_2$ regularization term is also used to improve sparsity and can alleviate the over-constraint of the $\ell_1$ norm to some extent[17]. In addition, many non-convex regularization terms have been proposed, such as the $\ell_{1/2}$ norm[18], the $\ell_p$ norm[19], and the logarithmic regularization term[20].

In this paper, we study the ratio of the $\ell_1$ and $\ell_2$ norms, denoted $\ell_1/\ell_2$, to promote the sparsity of the group sparse residual. The contributions of the paper are summarized as follows:

First, the $\ell_1/\ell_2$ regularization term is used to reduce the group sparse residual, and a group sparse residual constraint model with $\ell_1/\ell_2$ minimization is proposed. The proposed model is solved by the alternating direction method of multipliers (ADMM) algorithm.

Second, experimental results on image denoising show that the proposed scheme is feasible and outperforms many state-of-the-art methods in both objective and perceptual quality.

1 Background and Related Work

1.1 Group Sparse Representation

In the framework of the GSR method[14], an image $x$ of size $N$ is divided into $n$ overlapped patches $x_i$ of size $b \times b$, each denoted by a vector $x_i \in \mathbb{R}^{b^2}$. Then, for each patch $x_i$, $m$ similar patches are selected from a $W \times W$ search window by the K-nearest neighbor (KNN) algorithm[21] to form a set $S_i$. All patches in $S_i$ are stacked into a matrix $X_i \in \mathbb{R}^{b^2 \times m}$, i.e., $X_i = \{x_{i,1}, x_{i,2}, \dots, x_{i,m}\}$. This matrix $X_i$, consisting of patches with similar structures, is called a patch group, where $x_{i,j}$ denotes the $j$-th patch in the $i$-th patch group.
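As an illustration, the grouping step can be sketched in NumPy. The exhaustive window search, the stride of $b$ between reference patches, and the default sizes below are simplifications for this sketch, not the authors' implementation:

```python
import numpy as np

def block_matching(img, b=7, m=10, window=25):
    """For each reference patch (stride b for brevity), stack its m nearest
    patches, searched in a window x window neighborhood, into a b^2 x m group."""
    H, W = img.shape
    half, groups = window // 2, []
    for r in range(0, H - b + 1, b):
        for c in range(0, W - b + 1, b):
            ref = img[r:r + b, c:c + b].ravel()
            cands, dists = [], []
            for rr in range(max(0, r - half), min(H - b, r + half) + 1):
                for cc in range(max(0, c - half), min(W - b, c + half) + 1):
                    p = img[rr:rr + b, cc:cc + b].ravel()
                    cands.append(p)
                    dists.append(float(np.sum((p - ref) ** 2)))
            idx = np.argsort(dists)[:m]          # m nearest neighbors (KNN)
            groups.append(np.stack([cands[i] for i in idx], axis=1))
    return groups
```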

Given a dictionary $D_i$, often learned from each group (such as a PCA dictionary[22]), each group $X_i$ can be sparsely represented by solving the following $\ell_1$-norm minimization problem:

$$\hat{A}_i = \mathop{\arg\min}_{A_i} \left\{ \frac{1}{2}\|Y_i - D_i A_i\|_F^2 + \lambda \|A_i\|_1 \right\} \tag{2}$$

where $A_i$ represents the group sparse coefficient of each group $X_i$, i.e., $X_i = D_i A_i$, and $D_i$ is an adaptive PCA dictionary learned from each patch group.
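When $D_i$ has orthonormal columns, as for the PCA dictionaries used here, Eq. (2) decouples element-wise and is solved by soft thresholding the transform coefficients. A minimal NumPy sketch (function names are illustrative, not from the authors' code):

```python
import numpy as np

def pca_dictionary(Y):
    """Learn an orthogonal PCA dictionary from one noisy patch group Y (b^2 x m)."""
    Yc = Y - Y.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Yc, full_matrices=False)  # columns = principal axes
    return U

def group_sparse_code(Y, D, lam):
    """Solve Eq. (2) when D has orthonormal columns: soft-threshold D^T Y."""
    A = D.T @ Y
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)
```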

1.2 Construction of the Group Sparse Coefficient Residual $A_i - B_i$

In Ref.[15], the authors pointed out that traditional GSR models fail to faithfully estimate the sparsity of each group due to the degradation of the observed image, and they showed that the quality of image restoration largely depends on the group sparsity residual, defined as the difference between each degraded group sparse coefficient $A_i$ and the corresponding original group sparse coefficient $B_i$:

$$R_i = A_i - B_i \tag{3}$$

They therefore proposed the group sparsity residual constraint model to boost the accuracy of the group sparse coefficient $A_i$, which can be written as

$$\hat{A}_i = \mathop{\arg\min}_{A_i} \left\{ \frac{1}{2}\|Y_i - D_i A_i\|_F^2 + \lambda \|A_i - B_i\|_1 \right\} \tag{4}$$

where $B_i = \{\beta_1, \beta_2, \dots, \beta_m\}$ is the group sparse coefficient of each group of the original image. As the original image $x$ is not available in image denoising, the non-local self-similarity prior is used to estimate $B_i$. For each group of $m$ non-local similar patches, a good estimate of $\beta_k$ can be computed as the weighted average of the vectors $\alpha_j$ in $A_i$:

$$\beta_k = \sum_{j=1}^{m} w_j \alpha_j \tag{5}$$

where $A_i = D_i^{\mathrm{T}} Y_i$, and $\beta_k$ and $\alpha_j$ denote the $k$-th and $j$-th vectors of $B_i$ and $A_i$, respectively. $\beta_k$ is then copied $m$ times to estimate $B_i$, giving $B_i = \{\beta_k, \beta_k, \dots, \beta_k\}$. In Eq. (5), $w_j$ is set inversely proportional to the distance between the target patch $y_i$ and its similar patch $y_{i,j}$, i.e.,

$$w_j = \frac{1}{L} \exp\left(-\|y_i - y_{i,j}\|_2^2 / h\right)$$

where $h$ is a predefined constant and $L$ is a normalization factor[23].
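A compact NumPy sketch of this estimate (assuming, for illustration, that the first column of the group is the target patch):

```python
import numpy as np

def estimate_B(A, Y, h=40.0):
    """Estimate B_i via Eq. (5): a weighted average of the columns of A_i, with
    weights exp(-||y_i - y_{i,j}||^2 / h) / L, where L normalizes the weights."""
    d2 = np.sum((Y - Y[:, :1]) ** 2, axis=0)   # squared distances to target patch
    w = np.exp(-d2 / h)
    w /= w.sum()                               # L is the normalization factor
    beta = A @ w                               # Eq. (5): weighted average
    return np.tile(beta[:, None], (1, A.shape[1]))  # copy beta m times
```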

1.3 $\ell_1/\ell_2$ Minimization

In Ref.[17], the authors employed the $\ell_1/\ell_2$ norm to promote sparsity and proposed the following model for sparse recovery:

$$\min_{x \in \mathbb{R}^n} \frac{\|x\|_1}{\|x\|_2}, \quad \text{s.t. } Ax = b \tag{6}$$

Two main advantages of $\ell_1/\ell_2$ minimization are scale invariance and the absence of tuning parameters. As the ratio of $\ell_1$ and $\ell_2$ is non-convex and non-linear, solving model (6) and analyzing its convergence are theoretically difficult. The authors of Ref.[17] applied ADMM to minimize model (6) and verified the convergence empirically by plotting residual errors and objective functions. They also applied the $\ell_1/\ell_2$ norm to the image gradient for the MRI reconstruction problem.

2 The GSRC-$\ell_1/\ell_2$ Model for Image Denoising

In this section, we employ the ratio of the $\ell_1$ and $\ell_2$ norms as the regularizer to promote the sparsity of the group sparse residual mentioned before. The proposed model is

$$\hat{A}_i = \mathop{\arg\min}_{A_i} \left\{ \frac{1}{2\lambda_0}\|y - D\alpha\|_2^2 + \sum_{i=1}^{n} \frac{\|A_i - B_i\|_1}{\|A_i - B_i\|_2} \right\} \tag{7}$$

where $D$ represents the dictionary and $\alpha$ is the sparse coefficient.

As mentioned before, the patch group is the unit of the proposed sparse representation. Suppose $x, y \in \mathbb{R}^N$; the corresponding $n$ patch groups obtained by block matching are denoted by $X_i, Y_i \in \mathbb{R}^{b^2 \times m}$. Let $x = D\alpha$; then Eq. (7) can be rewritten as

$$\hat{A}_i = \mathop{\arg\min}_{A_i} \left\{ \frac{1}{2\lambda_0}\|y - x\|_2^2 + \sum_{i=1}^{n} \frac{\|A_i - B_i\|_1}{\|A_i - B_i\|_2} \right\} \tag{8}$$

By Theorem 1 of Ref.[15], the following equation holds with very high probability at each iteration:

$$\frac{1}{N}\|x - y\|_2^2 = \frac{1}{K}\sum_{i=1}^{n}\|X_i - Y_i\|_F^2 \tag{9}$$

Based on Eq. (9), minimization problem (8) is equivalent to

$$\hat{A}_i = \mathop{\arg\min}_{A_i} \sum_{i=1}^{n} \left\{ \frac{1}{2\lambda}\|Y_i - X_i\|_F^2 + \frac{\|A_i - B_i\|_1}{\|A_i - B_i\|_2} \right\} \tag{10}$$

where $\lambda = \lambda_0 K / N$ and $K = b^2 \times m \times n$. Let $X_i = D_i A_i$ and $Y_i = D_i S_i$, where $D_i$ is a PCA-based dictionary learned from each group $Y_i$ and is therefore orthogonal. By utilizing ADMM and introducing two auxiliary variables $P_i = A_i - B_i$ and $Q_i = A_i - B_i$, Eq. (10) can be transformed into the equivalent constrained form:

$$\begin{cases} \mathop{\arg\min}_{A_i, P_i, Q_i} \displaystyle\sum_{i=1}^{n} \left( \dfrac{1}{2\lambda}\|A_i - S_i\|_F^2 + \dfrac{\|P_i\|_1}{\|Q_i\|_2} \right) \\ \text{s.t. } P_i = A_i - B_i,\ Q_i = A_i - B_i \end{cases} \tag{11}$$

The augmented Lagrangian function plays a central role in the ADMM method. For problem (11), it can be written as

$$L_{\rho_1, \rho_2}(A_i, P_i, Q_i, \mu_1, \mu_2) = \frac{\|P_i\|_1}{\|Q_i\|_2} + \frac{1}{2\lambda}\|A_i - S_i\|_F^2 + \langle \mu_1, P_i - A_i + B_i \rangle + \frac{\rho_1}{2}\|P_i - A_i + B_i\|_F^2 + \langle \mu_2, Q_i - A_i + B_i \rangle + \frac{\rho_2}{2}\|Q_i - A_i + B_i\|_F^2 \tag{12}$$

Then, the ADMM iteration goes as follows

$$A_i^{k+1} = \mathop{\arg\min}_{A_i} \frac{1}{2\lambda}\|A_i - S_i\|_F^2 + \frac{\rho_1}{2}\left\|A_i - \left(P_i^k + B_i + \frac{\mu_1}{\rho_1}\right)\right\|_F^2 + \frac{\rho_2}{2}\left\|A_i - \left(Q_i^k + B_i + \frac{\mu_2}{\rho_2}\right)\right\|_F^2 \tag{13}$$

$$P_i^{k+1} = \mathop{\arg\min}_{P_i} \frac{\|P_i\|_1}{\|Q_i^k\|_2} + \frac{\rho_1}{2}\left\|A_i^{k+1} - \left(P_i + B_i + \frac{\mu_1}{\rho_1}\right)\right\|_F^2 \tag{14}$$

$$Q_i^{k+1} = \mathop{\arg\min}_{Q_i} \frac{\|P_i^{k+1}\|_1}{\|Q_i\|_2} + \frac{\rho_2}{2}\left\|A_i^{k+1} - \left(Q_i + B_i + \frac{\mu_2}{\rho_2}\right)\right\|_F^2 \tag{15}$$

$$\mu_1^{k+1} = \mu_1^k + \rho_1\left(P_i^{k+1} - A_i^{k+1} + B_i\right) \tag{16}$$

$$\mu_2^{k+1} = \mu_2^k + \rho_2\left(Q_i^{k+1} - A_i^{k+1} + B_i\right) \tag{17}$$

Next, we will discuss the optimization strategies for solving each sub-problem one by one.

2.1 $A_i$ Sub-Problem

The $A_i$ sub-problem is a least-squares problem. From the first-order optimality condition, we obtain

$$\frac{1}{\lambda}(A_i - S_i) + \rho_1\left(A_i - \left(P_i^k + B_i + \frac{\mu_1}{\rho_1}\right)\right) + \rho_2\left(A_i - \left(Q_i^k + B_i + \frac{\mu_2}{\rho_2}\right)\right) = 0$$

The solution for $A_i$ is

$$A_i^{k+1} = \frac{S_i + \lambda\rho_1\left(P_i^k + B_i + \frac{\mu_1}{\rho_1}\right) + \lambda\rho_2\left(Q_i^k + B_i + \frac{\mu_2}{\rho_2}\right)}{1 + \lambda\rho_1 + \lambda\rho_2} \tag{18}$$
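Because Eq. (18) acts element-wise, the update can be written directly; a minimal sketch (the function name is illustrative):

```python
def update_A(S, P, Q, B, mu1, mu2, lam, rho1, rho2):
    """Closed-form solution (18) of the A_i least-squares sub-problem.
    Works on scalars or NumPy arrays of matching shape."""
    return (S + lam * rho1 * (P + B + mu1 / rho1)
              + lam * rho2 * (Q + B + mu2 / rho2)) / (1.0 + lam * rho1 + lam * rho2)
```

Substituting the result back into the optimality condition above gives a quick correctness check for any input values.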

2.2 $P_i$ Sub-Problem

The $P_i$ sub-problem is equivalent to

$$P_i^{k+1} = \mathop{\arg\min}_{P_i} \|P_i\|_1 + \frac{\rho_1 \|Q_i^k\|_2}{2}\left\|P_i - \left(A_i^{k+1} - B_i - \frac{\mu_1}{\rho_1}\right)\right\|_F^2$$

It has a closed-form solution via soft shrinkage, i.e.,

$$P_i^{k+1} = \max\left\{\left|A_i^{k+1} - B_i - \frac{\mu_1}{\rho_1}\right| - \frac{1}{\|Q_i^k\|_2\,\rho_1},\ 0\right\} \odot \operatorname{sgn}\left(A_i^{k+1} - B_i - \frac{\mu_1}{\rho_1}\right) \tag{19}$$

where $\odot$ represents the point-wise product.
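Eq. (19) is the standard soft-shrinkage operator with threshold $1/(\|Q_i^k\|_2\,\rho_1)$; a small sketch:

```python
import numpy as np

def update_P(A, B, mu1, rho1, Q_norm):
    """Soft-shrinkage update (19); Q_norm is ||Q_i^k||_2."""
    V = A - B - mu1 / rho1
    thr = 1.0 / (Q_norm * rho1)
    return np.maximum(np.abs(V) - thr, 0.0) * np.sign(V)
```

For example, with $V = [1.5, -1.0]$ and threshold $0.5$, the update returns $[1.0, -0.5]$.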

2.3 $Q_i$ Sub-Problem

As for the $Q_i$ sub-problem, let $c^k = \|P_i^{k+1}\|_1$ and $d^k = A_i^{k+1} - B_i - \frac{\mu_2}{\rho_2}$; the minimization problem then reduces to

$$Q_i^{k+1} = \mathop{\arg\min}_{Q_i} \frac{c^k}{\|Q_i\|_2} + \frac{\rho_2}{2}\|Q_i - d^k\|_F^2 \tag{20}$$

If $d^k = 0$, then any vector $Q_i$ with $\|Q_i\|_2 = \sqrt[3]{c^k/\rho_2}$ is a solution to the minimization problem. If $c^k = 0$, then $Q_i = d^k$ is the solution. If $c^k \neq 0$ and $d^k \neq 0$, taking the derivative of the objective function with respect to $Q_i$ gives

$$\left(-\frac{c^k}{\|Q_i\|_2^3} + \rho_2\right) Q_i = \rho_2 d^k \tag{21}$$

As a result, there exists a positive number $\tau^k$ such that $Q_i = \tau^k d^k$. Thus, Eq. (21) reduces to

$$\left(-\frac{c^k}{(\tau^k)^3 \|d^k\|_2^3} + \rho_2\right) \tau^k d^k = \rho_2 d^k$$

Given $d^k$, we denote

$$\eta^k = \|d^k\|_2, \qquad M^k = \frac{c^k}{\rho_2 (\eta^k)^3}$$

Then $\tau^k$ is the root of

$$\tau^3 - \tau^2 - M^k = 0$$

Let $F(\tau) = \tau^3 - \tau^2 - M^k$. One can verify that $F(\tau) = 0$ has only one real root, which has the following closed form:

$$\tau = \frac{1}{3} + \frac{1}{3}\left(C^k + \frac{1}{C^k}\right)$$

where $C^k = \sqrt[3]{\dfrac{27M^k + 2 + \sqrt{(27M^k + 2)^2 - 4}}{2}}$.

In summary, we have

$$Q_i^{k+1} = \begin{cases} e^k, & \text{if } d^k = 0 \\ \tau^k d^k, & \text{if } d^k \neq 0 \end{cases} \tag{22}$$

where $e^k$ is a random vector with $\ell_2$ norm equal to $\sqrt[3]{c^k/\rho_2}$.
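The case analysis above can be implemented directly. The sketch below follows Eq. (22), treating the inputs as NumPy arrays; the function and argument names are illustrative:

```python
import numpy as np

def update_Q(A, B, mu2, rho2, P_l1):
    """Update Q_i via Eq. (22). P_l1 is c^k = ||P_i^{k+1}||_1; tau^k is the
    unique real root of tau^3 - tau^2 - M^k = 0, computed in closed form."""
    d = A - B - mu2 / rho2
    eta = np.linalg.norm(d)
    if eta == 0.0:
        # d^k = 0: any vector of l2 norm (c^k / rho2)^(1/3) is a minimizer
        e = np.random.default_rng().standard_normal(A.shape)
        return e / np.linalg.norm(e) * (P_l1 / rho2) ** (1.0 / 3.0)
    M = P_l1 / (rho2 * eta ** 3)
    t = 27.0 * M + 2.0
    C = ((t + np.sqrt(t * t - 4.0)) / 2.0) ** (1.0 / 3.0)
    tau = 1.0 / 3.0 + (C + 1.0 / C) / 3.0
    return tau * d
```

Note that $c^k = 0$ gives $M^k = 0$, $C^k = 1$ and $\tau = 1$, so the formula correctly returns $Q_i = d^k$ in that case.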

Finally, the estimated group $\hat{X}_i$ is computed as $\hat{X}_i = D_i \hat{A}_i$. We obtain the complete image $\hat{x}$ by putting each group back in its original position and averaging the overlapping pixels.

2.4 Settings of Parameters and Iterations

To obtain better results, the denoising process can be iterated several times. In the $t$-th iteration, an iterative regularization strategy is used to update the estimate of the noise variance, and therefore $y^t$. The standard deviation of the noise $\sigma$ in the $t$-th iteration is adjusted to

$$\sigma_n^t = \rho \sqrt{\sigma_n^2 - \|y - \hat{x}^t\|_2^2}$$

where $\rho > 0$ is a constant.

Since the noise variance changes after each regularization step, the parameter $\lambda$, which balances the fidelity and regularization terms, needs to change along with it. After each regularization, the parameter $\lambda$ of group $Y_i$ is set to

$$\lambda = \frac{2\sqrt{2}\,\bar{c}\,\sigma_n^2}{\delta_i + \kappa} \tag{23}$$

where $\delta_i$ denotes the estimated standard deviation of $A_i - B_i$, and $\bar{c}$ and $\kappa$ are constants. The complete description of the proposed GSRC-$\ell_1/\ell_2$ model for image denoising under the ADMM framework is presented in Algorithm 1.

Algorithm 1 The proposed GSRC-$\ell_1/\ell_2$ model for image denoising
1. Require: Noisy image $y$.
2. Initialization: $\hat{x}^0 = y$, $y^0 = y$, $P_i^0$, $Q_i^0$, $\mu_1$, $\mu_2$, $\delta_i$, $\rho_1$, $\rho_2$, $\sigma_n$, $\bar{c}$, $m$, $L$, $h$, $\rho$, $\gamma$ and $\varepsilon$.
3. For $t = 1$ to Max-Iter
4.  Iterative regularization: $y^t = \hat{x}^{t-1} + \gamma (y - y^{t-1})$
5.  For each patch $y_i$ in $y^t$ do
6.   Find non-local similar patches to form group $Y_i$.
7.   Obtain dictionary $D_i$ from $Y_i$ by PCA.
8.   Update $A_i$ by computing $A_i = D_i^{\mathrm{T}} Y_i$.
9.   Update $B_i$ by computing Eq. (5).
10.  Update $\lambda$ by computing Eq. (23).
11.  While $k <$ ADMM-Iter or $\|A_i^k - A_i^{k-1}\|_2 / \|A_i^k\|_2 > \varepsilon$
12.    Update $A_i^{k+1}$ by computing Eq. (18).
13.    Update $P_i^{k+1}$ by computing Eq. (19).
14.    Update $Q_i^{k+1}$ by computing Eq. (22).
15.    Update $\mu_1^{k+1}$ via Eq. (16).
16.    Update $\mu_2^{k+1}$ via Eq. (17).
17.    Get the estimate: $\hat{X}_i = D_i \hat{A}_i$.
18.   End while
19.  End for
20.  Aggregate all $\hat{X}_i$ to form the denoised image $\hat{x}^t$.
21. End for
22. Output: The final denoised image $\hat{x}$.
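The inner ADMM loop of Algorithm 1 (steps 11-18) can be sketched for a single patch group as follows. The initialization choices (e.g., $P_i^0 = Q_i^0 = A_i - B_i$) and the fixed iteration count are assumptions of this illustration rather than the authors' settings:

```python
import numpy as np

def admm_denoise_group(Y, B, lam, rho1=0.023, rho2=0.001, iters=20):
    """Sketch of Algorithm 1's inner ADMM loop for one patch group Y (b^2 x m),
    given an estimate B of the original group sparse coefficients."""
    Yc = Y - Y.mean(axis=1, keepdims=True)
    D, _, _ = np.linalg.svd(Yc, full_matrices=False)   # orthogonal PCA dictionary
    S = D.T @ Y                                        # S_i with Y_i = D_i S_i
    A = S.copy()
    P, Q = A - B, A - B
    mu1, mu2 = np.zeros_like(A), np.zeros_like(A)
    for _ in range(iters):
        # Eq. (18): closed-form least-squares update of A
        A = (S + lam * rho1 * (P + B + mu1 / rho1)
               + lam * rho2 * (Q + B + mu2 / rho2)) / (1 + lam * rho1 + lam * rho2)
        # Eq. (19): soft shrinkage update of P
        V = A - B - mu1 / rho1
        qn = max(np.linalg.norm(Q), 1e-12)
        P = np.maximum(np.abs(V) - 1.0 / (qn * rho1), 0.0) * np.sign(V)
        # Eq. (22): scaled-direction update of Q via the cubic root
        d = A - B - mu2 / rho2
        eta = np.linalg.norm(d)
        if eta > 0:
            M = np.abs(P).sum() / (rho2 * eta ** 3)
            t = 27.0 * M + 2.0
            C = ((t + np.sqrt(t * t - 4.0)) / 2.0) ** (1.0 / 3.0)
            Q = (1.0 / 3.0 + (C + 1.0 / C) / 3.0) * d
        # Eqs. (16)-(17): dual variable updates
        mu1 = mu1 + rho1 * (P - A + B)
        mu2 = mu2 + rho2 * (Q - A + B)
    return D @ A   # estimated clean group X_i = D_i A_i
```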

3 Experiments

In this section, we present extensive experimental results to evaluate the denoising performance of the proposed GSRC-$\ell_1/\ell_2$ model. To evaluate the quality of the recovered images, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics are used. The source codes of all comparison methods were obtained from the original authors, and we used the default parameter settings. For color images, the restoration operators are applied only to the luminance component.

3.1 Setting Adaptive Parameters of the GSRC-$\ell_1/\ell_2$ Model

In this subsection, we report the performance of the proposed GSRC-$\ell_1/\ell_2$ model for image denoising and compare it with several leading denoising methods, including BM3D[24], NCSR[13], PGPD[25], SAIST[26] and GSRC-NLP[15]. The parameter settings of the proposed GSRC-$\ell_1/\ell_2$ model are as follows. The search window for block matching is set to $25 \times 25$. All experiments are conducted in MATLAB 2018a on a computer with an Intel(R) Core(TM) i7-10750H 2.60 GHz CPU, an NVIDIA GeForce GTX 1650 Ti GPU, and Windows 10. The patch size $b \times b$ is set to $7 \times 7$ for $20 < \sigma \le 50$. The remaining parameters depend on the noise level: $(c, \gamma, \rho_1, \rho_2)$ is set to (0.6, 0.1, 0.023, 0.001), (0.7, 0.1, 0.0034, 0.001) and (0.7, 0.1, 0.0034, 0.001) for $\sigma_n = 30$, $\sigma_n = 40$ and $\sigma_n = 50$, respectively.

3.2 Analysis of Results

Note that BM3D, PGPD and SAIST are GSR-based or NSS-based image denoising methods, while NCSR and GSRC-NLP are group sparsity residual based methods. We test the denoising performance on the presented test images and record the highest PSNR for each method. The PSNR results for all methods are shown in Table 1.

The proposed GSRC-$\ell_1/\ell_2$ achieves an average PSNR gain of 0.26 dB, 0.32 dB, 0.27 dB, 0.07 dB and 0.03 dB over BM3D, NCSR, PGPD, SAIST and GSRC-NLP, respectively. In addition, it achieves an average SSIM gain of 0.0133, 0.0129, 0.0144, 0.0072 and 0.0033 over the same methods. We can therefore observe that the proposed GSRC-$\ell_1/\ell_2$ obtains better results than the other competing methods.

Figures 1 and 2 show visual comparisons on the images Barbara and Mural, respectively. To better compare the details of the reconstructed images, we zoom the same region of each image to twice its original size. It can be seen that BM3D, NCSR, SAIST and GSRC-NLP are prone to over-smoothing the images, and PGPD does not preserve image details well. In contrast, the proposed GSRC-$\ell_1/\ell_2$ preserves the local structure of the image more effectively than the competing approaches.

Fig. 1 Denoising results on image Barbara by different methods (noise level $\sigma_n = 30$)

Fig. 2 Denoising results on image Mural by different methods (noise level $\sigma_n = 40$)

Table 1 PSNR and SSIM results of the compared denoising methods

3.3 Empirical Convergence Analysis of the GSRC-$\ell_1/\ell_2$ Model

The existing literature on ADMM convergence requires the objective to contain a separable function whose gradient is Lipschitz continuous. The proposed GSRC-$\ell_1/\ell_2$ model does not satisfy this assumption, so its convergence is difficult to analyze theoretically. Instead, in this subsection we show the convergence empirically by plotting residual errors and objective functions, which provides strong support for the convergence claim. In Fig. 3, we empirically demonstrate the convergence of the proposed GSRC-$\ell_1/\ell_2$ algorithm. There are two auxiliary variables $P_i$ and $Q_i$, i.e., $P_i = A_i - B_i$ and $Q_i = A_i - B_i$. Figure 3(a) shows the values of $\sum_{i=1}^{n}\|P_i^k - A_i^k + B_i\|_2$ and $\sum_{i=1}^{n}\|Q_i^k - A_i^k + B_i\|_2$ versus the iteration counter $k$. Figure 3(b) shows the value of the objective function of (11) versus $k$. The constraint residuals and objective functions in Fig. 3 decrease rapidly with the iteration counter, which is heuristic evidence of the convergence of Algorithm 1.

Fig. 3 Plots of residual errors and objective function, empirically demonstrating convergence

4 Conclusion

To improve the performance of group-sparsity-based image restoration approaches, we introduce an effective sparsity-promoting strategy, namely $\ell_1/\ell_2$ minimization, to promote the sparsity of the group sparse residual. We use the non-local self-similarity prior of images, along with a self-supervised learning scheme, to obtain good estimates of the group sparsity coefficients of each original group. We then adopt $\ell_1/\ell_2$ minimization to reduce the group sparse residual. Compared with the $\ell_1$ minimization in the GSRC-NLP method, the proposed GSRC-$\ell_1/\ell_2$ leads to better performance in image denoising. We apply the ADMM algorithm to solve the model and explore the convergence of the proposed algorithm empirically. Experimental results show that the model outperforms many state-of-the-art image denoising methods in terms of both objective and perceptual quality.

References

  1. Blomgren P, Chan T F. Color TV: Total variation methods for restoration of vector-valued images[J]. IEEE Transactions on Image Processing, 1998, 7(3):304-309. [NASA ADS] [CrossRef] [PubMed] [Google Scholar]
  2. Bertalmio M, Caselles V, Rouge B, et al. A TV based restoration model with local constraints. [J]. Journal of Scientific Computing, 2003, 19(1):95-122. [Google Scholar]
  3. Li H, Liu F. Image denoising via sparse and redundant representations over learned dictionaries in wavelet domain[C]// 2009 Fifth International Conference on Image and Graphics. Washington D C: IEEE, 2009: 754-758. [Google Scholar]
  4. Mairal J, Elad M, Sapiro G. Sparse representation for color image restoration[J]. IEEE Transactions on Image Processing, 2007, 17(1):53-69. [Google Scholar]
  5. Aharon M, Elad M, Bruckstein A M. On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them[J]. Linear Algebra & Its Applications, 2006, 416(1):48-67. [CrossRef] [MathSciNet] [Google Scholar]
  6. Zha Z, Yuan X, Zhu C, et al. Image restoration via simultaneous nonlocal selfsimilarity priors[J]. IEEE Transactions on Image Processing, 2020, 29: 8561-8576. [NASA ADS] [CrossRef] [Google Scholar]
  7. Gu S H, Zhang L, Zuo W M, et al. Weighted nuclear norm minimization with application to image denoising[C]// 2014 IEEE Conference on Computer Vision and Pattern Recognition. Washington D C : IEEE, 2014: 2862-2869. [Google Scholar]
  8. Zhang K, Zuo W M, Chen Y J, et al. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising[J]. IEEE Transactions on Image Processing, 2016, 26(7):3142-3155. [Google Scholar]
  9. Dong W S, Wang P Y, Yin W T, et al. Denoising prior driven deep neural network for image restoration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(10): 2305-2318. [CrossRef] [PubMed] [Google Scholar]
  10. Zhang K, Zuo W M, Gu S H, et al. Learning deep CNN denoiser prior for image restoration[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D C : IEEE, 2017: 2808-2817. [Google Scholar]
  11. Aharon M, Elad M, Bruckstein A M. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation[J]. IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322. [CrossRef] [Google Scholar]
  12. Mairal J, Bach F, Ponce J, et al. Non-local sparse models for image restoration[C]// 2009 IEEE 12th International Conference on Computer Vision. Washington D C : IEEE, 2009: 2272-2279 [Google Scholar]
  13. Dong W S, Zhang L, Shi G M, et al. Nonlocally centralized sparse representation for image restoration[J]. IEEE Transcation on Image Processing, 2013, 22(4):1620-1630. [NASA ADS] [CrossRef] [PubMed] [Google Scholar]
  14. Zhang J, Zhao D B, Gao W. Group-based sparse representation for image restoration[J]. IEEE Transactions on Image Processing, 2014, 23(8): 3336-3351. [NASA ADS] [CrossRef] [MathSciNet] [PubMed] [Google Scholar]
  15. Zha Z Y, Yuan X, Wen B H, et al. Group sparsity residual constraint with non-local priors for image restoration[J]. IEEE Transactions on Image Processing, 2020, 29: 8960-8975. [NASA ADS] [CrossRef] [MathSciNet] [Google Scholar]
  16. Bai M, Zhang X, Shao Q. Adaptive correction procedure for TVL1 image deblurring under impulse noise[J]. Inverse Problems, 2016, 32(8): 289-338. [Google Scholar]
  17. Rahimi Y, Wang C, Dong H B, et al. A scale-invariant approach for sparse signal recovery[J]. SIAM Journal on Scientific Computing, 2019, 41(6):A3649-A3672. [NASA ADS] [CrossRef] [MathSciNet] [Google Scholar]
  18. Xu Z B, Chang X Y, Xu F M, et al. L-1/2 regularization: A thresholding representation theory and a fast solver[J]. IEEE Transactions on Neural Networks and Learning Systems, 2012, 23(7):1013-1027. [CrossRef] [PubMed] [Google Scholar]
  19. Zuo W M, Meng D Y, Zhang L, et al. A generalized iterated shrinkage algorithm for non-convex sparse coding[C]// 2013 IEEE International Conference on Computer Vision. Washington D C: IEEE, 2013: 217-224. [Google Scholar]
  20. Jia X, Feng X. Bayesian inference for adaptive low rank and sparse matrix estimation[J]. Neurocomputing, 2018(5):71-83. [CrossRef] [Google Scholar]
  21. Keller J M, Gray M R, Givens J A. A fuzzy k-nearest neighbor algorithm[J]. IEEE Transactions on Systems Man & Cybernetics,1985, SMC-15(4): 580-585. [CrossRef] [Google Scholar]
  22. Dong W S, Lei Z, Shi G M, et al. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization[J]. IEEE Transactions on Image Processing, 2011, 20(7):1838-1857. [CrossRef] [MathSciNet] [PubMed] [Google Scholar]
  23. Buades A, Coll B, Morel J M. A non-local algorithm for image denoising[C]// 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). Washington D C: IEEE, 2005, 2: 60-65. [Google Scholar]
  24. Dabov K, Foi A, Katkovnik V, et al. Image denoising by sparse 3-D transform-domain collaborative filtering[J]. IEEE Transactions on Image Processing, 2007, 16(8):2080-2095. [CrossRef] [Google Scholar]
  25. Xu J, Zhang L , Zuo W M, et al. Patch group based nonlocal self-similarity prior learning for image denoising[C]// 2015 IEEE International Conference on Computer Vision (ICCV). Washington D C: IEEE, 2016: 244-252. [Google Scholar]
  26. Dong W S, Shi G M, Li X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach[J]. IEEE Transactions on Image Processing, 2013, 22(2):700-711. [NASA ADS] [CrossRef] [MathSciNet] [PubMed] [Google Scholar]
