Open Access
Wuhan Univ. J. Nat. Sci.
Volume 28, Number 1, February 2023
Page(s) 61 - 67
DOI https://doi.org/10.1051/wujns/2023281061
Published online 17 March 2023

© Wuhan University 2023

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

As a fundamental and important image processing task, image denoising has a wide range of applications [1-3]. Traditional methods are based on local denoising, i.e., all pixels in the vicinity of the current pixel are used to estimate it, so the estimated pixel can be heavily influenced by the pixels in its neighborhood. In recent years, image denoising methods based on non-local block matching have achieved great success [4,5]. Buades et al [6] proposed the non-local means (NLM) filter, which estimates each pixel by a non-local averaging of all the pixels in the image. The most well-known hybrid method for image denoising is block-matching and 3D filtering (BM3D), reported in Ref.[7], which groups similar patches into 3D arrays and processes these arrays by sparse collaborative filtering, achieving better visual results in several image processing tasks. Jia et al [8] formulated a joint sparse and low-rank matrix approximation problem, solved via a minimization involving the nuclear norm and the $\ell_1$ norm.

Using non-local self-similarity, nuclear norm minimization (NNM) has a wide range of applications in matrix decomposition and low-rank representation [9]. Cai et al [10] proved that NNM can be easily solved by a soft-thresholding operation on the singular values of the observation matrix. Given a matrix $Y$, the aim of NNM is to find a low-rank matrix $X$ by solving the following problem:

$\min_{X}\ \frac{1}{2}\|Y - X\|_F^2 + \lambda\|X\|_*$   (1)

where $\|\cdot\|_F$ and $\|\cdot\|_*$ denote the Frobenius norm and the nuclear norm, respectively, and $\lambda$ is a trade-off parameter between the fidelity term and the low-rank regularization term induced by the nuclear norm. The solution of problem (1) can be obtained by $X = U\,\mathcal{S}_\lambda(\Sigma)\,V^T$,

where $Y = U\Sigma V^T$ is the singular value decomposition (SVD) of $Y$ and $\mathcal{S}_\lambda(\Sigma)$ is the soft-thresholding operator on the diagonal matrix $\Sigma$ with parameter $\lambda$, i.e., $\mathcal{S}_\lambda(\Sigma)_{ii} = \max(\Sigma_{ii} - \lambda,\, 0)$.
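This closed-form solution is easy to verify numerically. The following NumPy sketch (the helper name `svt` is ours, not from the paper) soft-thresholds the singular values of $Y$:

```python
import numpy as np

def svt(Y, lam):
    # Closed-form minimizer of (1/2)||Y - X||_F^2 + lam * ||X||_*:
    # soft-threshold every singular value of Y by lam.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
```

Larger `lam` zeroes out more singular values, which is exactly the over-penalization issue discussed next.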

In many applications, it is hard for a single $\lambda$ to shrink all singular values appropriately. A large $\lambda$ over-penalizes the large singular values, while a small $\lambda$ preserves the large singular values but prevents NNM from obtaining low-rank solutions. In recent years, Gu et al [11,12] proposed weighted nuclear norm minimization (WNNM) to balance rank and thresholding, and achieved competitive results.

Inspired by the above approach, group sparse coding (GSC) has shown great potential in a variety of image processing tasks in recent years. Instead of using a single patch as the basic unit of sparse coding, image processing is accomplished using groups of similar patches. GSC takes full advantage of local sparsity and non-local self-similarity, and has great potential for solving many image problems [13-16]. Specifically, an image is divided into overlapped patches of size $\sqrt{d}\times\sqrt{d}$. For each patch $x_i$, the $m$ most similar patches are selected from a $W\times W$ search window to form a set $S_i$. Note that the k-nearest neighbor (KNN) algorithm [17] is commonly used to search for similarly matched patches. Then all patches in $S_i$ are stacked into a matrix $X_i\in\mathbb{R}^{d\times m}$, i.e., $X_i = [x_{i,1}, x_{i,2}, \dots, x_{i,m}]$. The matrix $X_i$ consisting of patches with similar structure is called a patch group, where $x_{i,j}$ denotes the $j$-th patch in the $i$-th patch group. Given a dictionary $D_i$, each patch group can be sparsely represented by solving the following $\ell_0$-norm minimization problem,

$\min_{A_i}\ \|X_i - D_i A_i\|_F^2 + \lambda\|A_i\|_0$   (2)

where $A_i$ represents the group sparse coefficient of each patch group $X_i$. However, since $\ell_0$-norm minimization is an NP-hard optimization problem, it is often relaxed to the convex $\ell_1$-norm minimization [18]. Unfortunately, for some practical image processing tasks, such as image restoration, $\ell_1$-norm minimization is only an approximation to the $\ell_0$-norm minimization and cannot obtain the desired sparse solution. Various norm minimization methods, such as the weighted $\ell_1$-norm minimization [19], the $\ell_{1/2}$-norm minimization [20] and the weighted Schatten $p$-norm minimization [21], have been proposed to solve the $\ell_0$-norm minimization problem.
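The patch-grouping step described above can be sketched as follows (a minimal brute-force KNN search; all names and default parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def group_similar_patches(img, i, j, patch=7, m=10, window=20):
    # Stack the m patches most similar (Euclidean distance) to the
    # reference patch at (i, j), searched inside a local window,
    # into a d x m patch-group matrix.
    ref = img[i:i+patch, j:j+patch].ravel()
    H, W = img.shape
    i0, i1 = max(0, i - window), min(H - patch, i + window)
    j0, j1 = max(0, j - window), min(W - patch, j + window)
    cands, dists = [], []
    for a in range(i0, i1 + 1):
        for b in range(j0, j1 + 1):
            p = img[a:a+patch, b:b+patch].ravel()
            cands.append(p)
            dists.append(np.sum((p - ref) ** 2))
    order = np.argsort(dists)[:m]          # m nearest neighbours
    return np.stack([cands[k] for k in order], axis=1)
```

The first column of the returned matrix is the reference patch itself (distance zero); the remaining columns are its nearest neighbours.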

Inspired by the GSC prior and the low-rank prior mentioned above, in this paper we propose a denoising model in which the mixed $\ell_{2,1}$ norm and the weighted nuclear norm are employed to regularize the coefficients of the high- and low-frequency components of the image, respectively. The proposed denoising model can be efficiently solved by the ADMM algorithm [22], which provides a closed-form solution for each subproblem of the model.

1 Preliminaries

1.1 Nuclear Norm and Mixed Norm

Here, we briefly introduce nuclear norm and mixed norm [23].

Definition 1   The weighted nuclear norm of a matrix $X\in\mathbb{R}^{d\times m}$ is defined as

$\|X\|_{w,*} = \sum_{i} w_i\,\sigma_i(X)$   (3)

where $w = [w_1, \dots, w_r]$ with $w_i \ge 0$ is the weight vector and $\sigma_i(X)$ means the $i$-th singular value of $X$. The diagonal matrices $\Sigma$ and $\mathcal{W}$ consist of the elements $\sigma_i(X)$ and $w_i$, respectively.

Definition 2   The mixed (1,1) norm of a matrix $X\in\mathbb{R}^{d\times m}$ is defined as

$\|X\|_{1,1} = \sum_{j=1}^{m}\sum_{i=1}^{d} |X_{ij}|$   (4)

The mixed (2,1) norm of a matrix $X$ is defined as

$\|X\|_{2,1} = \sum_{j=1}^{m}\Big(\sum_{i=1}^{d} X_{ij}^2\Big)^{1/2} = \sum_{j=1}^{m}\|x_j\|_2$   (5)

where $x_j$ denotes the $j$-th column of $X$.
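Both definitions reduce to one-line NumPy expressions (helper names ours):

```python
import numpy as np

def mixed_11(X):
    # ||X||_{1,1}: sum of the absolute values of all entries
    return np.abs(X).sum()

def mixed_21(X):
    # ||X||_{2,1}: sum of the Euclidean norms of the columns
    return np.linalg.norm(X, axis=0).sum()
```

The (2,1) norm is small when many columns are exactly zero, which is why minimizing it promotes column-sparsity.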

1.2 Adaptive Dictionary Learning

In Ref.[24], an adaptive dictionary learning approach is designed. For each patch group $X_i$, its adaptive dictionary $D_i$ can be learned from its noisy observation $Y_i$. Specifically, SVD is applied to $Y_i$:

$Y_i = U_i \Sigma_i V_i^T = \sum_{k=1}^{m} \sigma_{i,k}\, u_{i,k}\, v_{i,k}^T$   (6)

where $\Sigma_i = \mathrm{diag}(\sigma_{i,1}, \dots, \sigma_{i,m})$ is a diagonal matrix, and $u_{i,k}$, $v_{i,k}$ are the columns of $U_i$ and $V_i$, respectively. Then each dictionary atom of the adaptive dictionary $D_i$ for every patch group is defined as:

$d_{i,k} = u_{i,k}\, v_{i,k}^T, \quad k = 1, \dots, m$   (7)

Thus, an adaptive dictionary $D_i$ has been learned, i.e.,

$D_i = [d_{i,1}, d_{i,2}, \dots, d_{i,m}]$   (8)

It can be seen that the designed dictionary learning approach only requires one SVD operation for each patch group.
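A sketch of this construction (one SVD per patch group; the function name is ours): each atom is the rank-1 outer product $u_k v_k^T$, and the group is recovered exactly as the weighted sum of the atoms.

```python
import numpy as np

def adaptive_dictionary(Y):
    # One SVD of the (noisy) patch group Y yields rank-1 atoms
    # d_k = u_k v_k^T, with Y = sum_k sigma_k * d_k.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    atoms = [np.outer(U[:, k], Vt[k, :]) for k in range(len(s))]
    return atoms, s
```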

2 Proposed Model

In existing group sparse models, sparse coding is conducted in the same way on different components of the image, such as structure and texture. In fact, an image can be decomposed into low-frequency (structure) and high-frequency (texture) components, and it is difficult for a single regularization term to enforce sparsity on both. So in this section, we aim to recover the sparse coefficients of the high- and low-frequency components simultaneously by solving the following problem:

$\min_{A_i, B_i}\ \frac{1}{2}\|Y_i - D_i(A_i + B_i)\|_F^2 + \lambda_1\|A_i\|_{w,*} + \lambda_2\|B_i\|_{2,1}$   (9)

where $D_i$ has the same meaning as in the group sparse model (2). $\lambda_1, \lambda_2 > 0$ are regularization parameters which control the balance between the fidelity term and the regularization terms. $B_i$ and $A_i$ represent the high-frequency coefficient and the low-frequency coefficient corresponding to the texture and structure components of the image, respectively.

Introducing an auxiliary variable $X_i = A_i + B_i$, the corresponding augmented Lagrangian function of Eq. (9) is defined by

$\mathcal{L}_\rho(X_i, A_i, B_i, Z_i) = \frac{1}{2}\|Y_i - D_i X_i\|_F^2 + \lambda_1\|A_i\|_{w,*} + \lambda_2\|B_i\|_{2,1} + \langle Z_i,\, X_i - A_i - B_i\rangle + \frac{\rho}{2}\|X_i - A_i - B_i\|_F^2$   (10)

where $\rho$ is a positive penalty parameter and $Z_i$ is the Lagrange multiplier. In the following we use the ADMM algorithm to solve the proposed problem; Eq. (10) is decomposed into the following subproblems.

1) $X_i$-subproblem

For fixed $A_i$, $B_i$ and $Z_i$, the minimization problem for $X_i$ becomes

$\min_{X_i}\ \frac{1}{2}\|Y_i - D_i X_i\|_F^2 + \frac{\rho}{2}\|X_i - A_i - B_i + Z_i/\rho\|_F^2$

This is a quadratic problem and has a closed-form solution:

$X_i = \big(D_i^T D_i + \rho I\big)^{-1}\big(D_i^T Y_i + \rho(A_i + B_i) - Z_i\big)$   (11)
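Under the splitting above, this update is an ordinary regularized least-squares solve. A minimal sketch (names ours), verified by checking that the gradient of the subproblem vanishes at the returned point:

```python
import numpy as np

def update_X(D, Y, A, B, Z, rho):
    # Solve (D^T D + rho I) X = D^T Y + rho (A + B) - Z, the first-order
    # optimality condition of
    #   (1/2)||Y - D X||_F^2 + <Z, X - A - B> + (rho/2)||X - A - B||_F^2.
    n = D.shape[1]
    return np.linalg.solve(D.T @ D + rho * np.eye(n),
                           D.T @ Y + rho * (A + B) - Z)
```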

2) $A_i$-subproblem

The subproblem for $A_i$ is a weighted nuclear norm minimization (WNNM) problem and is computed as follows:

$\min_{A_i}\ \lambda_1\|A_i\|_{w,*} + \frac{\rho}{2}\|X_i - A_i - B_i + Z_i/\rho\|_F^2$

So, the solution of the $A_i$-subproblem can be obtained by the following equation

$A_i = U\,\mathcal{S}_{w}(\Sigma)\,V^T$   (12)

where $U\Sigma V^T$ is a singular value decomposition of $X_i - B_i + Z_i/\rho$, and $\mathcal{S}_{w}(\cdot)$ is the soft-thresholding operator with weight vector $w$:

$\mathcal{S}_{w}(\Sigma)_{ii} = \max\big(\Sigma_{ii} - \lambda_1 w_i/\rho,\; 0\big)$   (13)
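The weighted thresholding step differs from plain SVT only in that each singular value gets its own threshold. A sketch (name ours; for non-descending weights this thresholding is the exact WNNM proximal step per Ref.[11]):

```python
import numpy as np

def weighted_svt(M, w):
    # Soft-threshold the i-th singular value of M by w[i].
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt
```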

3) $B_i$-subproblem

The subproblem for $B_i$,

$\min_{B_i}\ \lambda_2\|B_i\|_{2,1} + \frac{\rho}{2}\|X_i - A_i - B_i + Z_i/\rho\|_F^2,$

is given explicitly by the two-dimensional shrinkage

$b_j = \max\Big(\|r_j\|_2 - \frac{\lambda_2}{\rho},\; 0\Big)\circ \frac{r_j}{\|r_j\|_2}, \quad j = 1, \dots, m$   (14)

where $r_j$ and $b_j$ denote the $j$-th columns of $R = X_i - A_i + Z_i/\rho$ and $B_i$, respectively, and $\circ$ represents the point-wise product, with the convention $0\cdot(0/0) = 0$.
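The shrinkage in Eq. (14) acts independently on each column. A NumPy sketch (name ours), guarding against division by zero for columns that are shrunk to zero:

```python
import numpy as np

def shrink_columns(R, tau):
    # l_{2,1} proximal step: col <- max(||col||_2 - tau, 0) * col / ||col||_2
    norms = np.linalg.norm(R, axis=0)
    scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, 1e-12)
    return R * scale  # per-column factor broadcasts over rows
```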

4) $Z_i$-update

We update the multiplier $Z_i$ by

$Z_i \leftarrow Z_i + \rho\,(X_i - A_i - B_i)$   (15)

Based on the above discussion, we summarize the complete algorithm of the proposed model (9) for image decomposition and denoising in Algorithm 1.

For simplicity, in all experiments we terminate Algorithm 1 by the relative change of the restored image, i.e.,

$\frac{\|\hat{u}^{(k+1)} - \hat{u}^{(k)}\|_F}{\|\hat{u}^{(k)}\|_F} \le \varepsilon$   (16)

where $\varepsilon$ is a prescribed tolerance.

The recovered high- and low-frequency components of the image can be obtained by reconstructing the patch groups with the learned dictionary, i.e., $D_i B_i$ and $D_i A_i$, and then aggregating all the estimated patch groups.

Algorithm 1
Initialization: Input noisy image $f$; set parameters $\lambda_1$, $\lambda_2$, $\rho$; $\hat{u}^{(0)} = f$, $k = 0$.
Iteration:
1. While $k \le$ Max-Iter and Eq. (16) is not satisfied do
2.  Iterative regularization: update the working observation from $f$ and $\hat{u}^{(k)}$.
3.  For each patch $x_i$ do
4.   Find nonlocal similar patches to form the group $Y_i$.
5.   Compute $X_i$ by Eq. (11).
6.   Compute $A_i$ by Eq. (12).
7.   Compute $B_i$ by Eq. (14).
8.   Update $Z_i$ by Eq. (15).
9.   Get the estimation $\hat{Y}_i = D_i(A_i + B_i)$.
10. End for
11. $k = k + 1$
12. Aggregate all $\hat{Y}_i$ to form the denoised image $\hat{u}^{(k)}$.
13. End while
14. Output: The final denoised image $\hat{u}$.
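For concreteness, the inner iteration of Algorithm 1 for a single patch group can be sketched end to end as follows. This is a simplified stand-in, not the paper's implementation: plain (unweighted) singular value thresholding replaces the weighted version of Eqs. (12)-(13), and all names and defaults are illustrative.

```python
import numpy as np

def admm_patch_group(Y, D, lam1, lam2, rho, iters=100):
    # ADMM for one patch group: alternately solve the X-, A- and
    # B-subproblems and update the multiplier Z.
    d, m = D.shape[1], Y.shape[1]
    X = np.zeros((d, m)); A = np.zeros((d, m))
    B = np.zeros((d, m)); Z = np.zeros((d, m))
    I = np.eye(d)
    for _ in range(iters):
        # X-subproblem: quadratic, closed form (Eq. (11))
        X = np.linalg.solve(D.T @ D + rho * I,
                            D.T @ Y + rho * (A + B) - Z)
        # A-subproblem: singular value thresholding of X - B + Z/rho
        U, s, Vt = np.linalg.svd(X - B + Z / rho, full_matrices=False)
        A = U @ np.diag(np.maximum(s - lam1 / rho, 0.0)) @ Vt
        # B-subproblem: column-wise shrinkage of X - A + Z/rho (Eq. (14))
        R = X - A + Z / rho
        norms = np.linalg.norm(R, axis=0)
        B = R * (np.maximum(norms - lam2 / rho, 0.0)
                 / np.maximum(norms, 1e-12))
        # multiplier update (Eq. (15))
        Z = Z + rho * (X - A - B)
    return X, A, B, Z
```

Each pass costs one small linear solve and one SVD per group; with $\lambda_1 = \lambda_2 = 0$ the iteration reduces to a regularized least-squares fit, which is a convenient sanity check.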

3 Experiments

In this section, we present various numerical results to illustrate the performance of the proposed model for image decomposition and denoising. Figure 1 shows the test images used in this paper.

Fig. 1 Test images

3.1 Image Decomposition Results

For image decomposition, the goal of the proposed model is to separate the high-frequency information from the low-frequency information in the image. Generally speaking, the structures and contours in an image belong to the low-frequency components, while small details, textures and noise belong to the high-frequency components. We use the Starfish image shown in Fig. 2(a) to give a clear insight into the behavior of the proposed decomposition model. Figures 2(b) and 2(c) show the "cartoon + texture" decomposition result on the noiseless Starfish image. For the noisy Starfish image in Fig. 2(d), the proposed model also produces a reasonable high-low frequency decomposition: the image is decomposed into a simple cartoon component (low frequency) and a noisy component (high frequency).

Fig. 2 Decomposition results on the Starfish image by the proposed method

(a)-(c): decomposition result on the noiseless image; (d)-(f): decomposition result on the noisy image

3.2 Image Denoising Results and Comparison with Other Methods

For image denoising, we compare the proposed method with other advanced methods, including non-local means (NLM) [6], BM3D [7] and EPLL [25], in terms of objective and perceptual metrics. Both NLM and BM3D are based on the non-local self-similarity prior, and EPLL is a patch-prior based denoising method which ensures whole-image denoising performance through local patch denoising. In the proposed method, the patch size is adapted to the noise level, and the parameters are set to (7,10,40), (7,9,80) and (6,8,180) for the three tested noise levels, from low to high, respectively. In Fig. 3, we show a visual comparison of the denoising results obtained by the different methods. From the visual comparison, we can see that the images restored by NLM and BM3D appear over-smoothed and EPLL produces some artifacts. Images restored by our algorithm have better denoising performance and fewer staircase artifacts than the other three methods.

Fig. 3 Visual comparison of denoising results by different methods

The peak signal-to-noise ratio (PSNR) is employed to measure the quality of the recovered images, defined as:

$\mathrm{PSNR} = 10\log_{10}\frac{255^2\, mn}{\|u - f\|_2^2}$

where $u$ is the restored image, $f$ is the true image, and $m$ and $n$ denote the size of the image. However, PSNR only considers the global statistics of image pixel values and neglects the local visual characteristics of human eyes, which makes it impossible to assess local image quality. To this end, an objective image quality index conforming to the characteristics of the human visual system, the structural similarity (SSIM) [26], was proposed. The advantage of SSIM is that it assesses the reconstructed image from the perspective of luminance, contrast and structure. Thus, we select PSNR and SSIM as quality evaluation criteria to quantitatively assess the reconstruction performance.
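For reference, PSNR as defined above is a few lines of NumPy (name ours; `peak` is 255 for 8-bit images):

```python
import numpy as np

def psnr(u, f, peak=255.0):
    # PSNR in dB between the restored image u and the ground truth f.
    mse = np.mean((np.asarray(u, float) - np.asarray(f, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```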

The PSNR and SSIM results of all methods are shown in Table 1. The proposed method achieves an average PSNR gain of up to 1.5 dB, 0.31 dB and 0.46 dB over NLM, BM3D and EPLL, respectively. Overall, the proposed method achieves better numerical results.

Table 1

Results with different methods for PSNR and SSIM

4 Conclusion

This paper proposed a new group sparse denoising model, in which the weighted nuclear norm and the mixed $\ell_{2,1}$ norm are employed as regularization terms to simultaneously enforce priors on the low- and high-frequency components of the image. The proposed model can be solved efficiently within the ADMM framework. Experimental results have demonstrated that the proposed method achieves comparable performance to some state-of-the-art methods.

References

  1. Elad M, Aharon M. Image denoising via sparse and redundant representations over learned dictionaries[J]. IEEE Transactions on Image Processing, 2006, 15(12): 3736-3745.
  2. Guo Q, Zhang C M, Zhang Y F, et al. An efficient SVD-based method for image denoising[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2016, 26(5): 868-880.
  3. Foi A, Katkovnik V, Egiazarian K. Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images[J]. IEEE Transactions on Image Processing, 2007, 16(5): 1395-1411.
  4. Katkovnik V, Ponomarenko M, Egiazarian K. Complex domain nonlocal block-matching denoising based on high-order singular value decomposition (HOSVD)[C]// 2017 25th European Signal Processing Conference (EUSIPCO). Washington D C: IEEE, 2017: 718-722.
  5. Dong W S, Shi G M, Ma Y, et al. Image restoration via simultaneous sparse coding: Where structured sparsity meets Gaussian scale mixture[J]. International Journal of Computer Vision, 2015, 114(2-3): 217-232.
  6. Buades A, Coll B, Morel J. A non-local algorithm for image denoising[C]// 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). Washington D C: IEEE, 2005, 2: 60-65.
  7. Dabov K, Foi A, Katkovnik V, et al. Image denoising by sparse 3-D transform-domain collaborative filtering[J]. IEEE Transactions on Image Processing, 2007, 16(8): 2080-2095.
  8. Jia X X, Feng X, Wang W, et al. Bayesian inference for adaptive low rank and sparse matrix estimation[J]. Neurocomputing, 2018, 291(24): 71-83.
  9. Fazel S M. Matrix Rank Minimization with Applications[D]. Palo Alto: Stanford University, 2002.
  10. Cai J F, Candès E J, Shen Z. A singular value thresholding algorithm for matrix completion[J]. SIAM Journal on Optimization, 2010, 20(4): 1956-1982.
  11. Gu S H, Zhang L, Zuo W, et al. Weighted nuclear norm minimization and its applications to low level vision[J]. International Journal of Computer Vision, 2017, 121(2): 183-208.
  12. Gu S H, Zhang L, Zuo W, et al. Weighted nuclear norm minimization with application to image denoising[C]// 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D C: IEEE, 2014: 2862-2869.
  13. Zhang J, Zhao D, Gao W. Group-based sparse representation for image restoration[J]. IEEE Transactions on Image Processing, 2014, 23(8): 3336.
  14. Wang Q, Zhang X G, Wu Y, et al. Non-convex weighted lp minimization based group sparse representation framework for image denoising[J]. IEEE Signal Processing Letters, 2017, 24(11): 1686-1690.
  15. Mairal J, Bach F, Ponce J, et al. Non-local sparse models for image restoration[C]// 2009 IEEE 12th International Conference on Computer Vision. Washington D C: IEEE, 2009: 2272-2279.
  16. Li S T, Yin H T, Fang L Y. Group-sparse representation with dictionary learning for medical image denoising and fusion[J]. IEEE Transactions on Biomedical Engineering, 2012, 59(12): 3450-3459.
  17. Keller J M, Gray M R, Givens J A. A fuzzy k-nearest neighbor algorithm[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1985, 15(4): 580-585.
  18. Donoho D L. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution[J]. Communications on Pure and Applied Mathematics, 2006, 59(6): 797-829.
  19. Candès E J, Wakin M B, Boyd S P. Enhancing sparsity by reweighted l1 minimization[J]. Journal of Fourier Analysis and Applications, 2008, 14(5-6): 877-905.
  20. Xu Z B, Chang X Y, Xu F M, et al. L1/2 regularization: A thresholding representation theory and a fast solver[J]. IEEE Transactions on Neural Networks and Learning Systems, 2012, 23(7): 1013-1027.
  21. Xie Y, Gu S H, Liu Y, et al. Weighted Schatten p-norm minimization for image denoising and background subtraction[J]. IEEE Transactions on Image Processing, 2016, 25(10): 4842-4857.
  22. Chartrand R, Wohlberg B. A nonconvex ADMM algorithm for group sparsity with sparse groups[C]// 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Washington D C: IEEE, 2013: 6009-6013.
  23. Fu H Y, Ng M K, Nikolova M, et al. Efficient minimization methods of mixed l2-l1 and l1-l1 norms for image restoration[J]. SIAM Journal on Scientific Computing, 2006, 27(6): 1881-1902.
  24. Zha Z Y, Yuan X, Wen B H, et al. A benchmark for sparse coding: When group sparsity meets rank minimization[J]. IEEE Transactions on Image Processing, 2020, 29: 5094-5109.
  25. Zoran D, Weiss Y. From learning models of natural image patches to whole image restoration[C]// 2011 IEEE International Conference on Computer Vision. Washington D C: IEEE, 2011: 479-486.
  26. Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13: 600-612.
