Open Access
Wuhan Univ. J. Nat. Sci.
Volume 29, Number 3, June 2024
Page(s) 219 - 227
DOI https://doi.org/10.1051/wujns/2024293219
Published online 03 July 2024

© Wuhan University 2024

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

Image deblurring, a classical problem in low-level vision, aims to estimate the latent image from its degraded observation. A blurred image can be modeled as the convolution of the latent image with the point spread function (PSF) of the image capture device. The blurring process can be written as

g = H f + n (1)

where f is an original image without any form of degradation, H represents the PSF, g is the observation, and n is the additive noise. When the PSF is given, this is known as the "non-blind" image deblurring problem; otherwise, it is called the "blind" deblurring.
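As a concrete illustration, model (1) under periodic boundary conditions can be simulated in a few lines of NumPy; the function name and the toy PSF below are our own choices, not part of the paper:

```python
import numpy as np

def blur(f, psf, noise_std=0.0, seed=0):
    """Simulate g = H f + n: circular (periodic-boundary) convolution
    of the latent image f with the PSF, plus Gaussian noise n."""
    kernel = np.zeros_like(f, dtype=float)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    # Centre the PSF at the origin so the blur introduces no shift.
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    g = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(f)))
    rng = np.random.default_rng(seed)
    return g + noise_std * rng.standard_normal(f.shape)

f = np.eye(8)                      # toy "latent image"
psf = np.full((3, 3), 1.0 / 9.0)   # 3x3 average kernel
g = blur(f, psf)
```

Since the average kernel sums to one, the circular convolution preserves the total intensity of f.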

Image deblurring is an ill-posed inverse problem: a solution may not exist, may not be unique, and small perturbations in the data can cause large errors in the reconstruction. Regularization techniques are therefore essential for transforming the problem into a well-posed one. Over the past three decades, numerous methods have been developed for image deblurring. Because of the ill-posed nature of the problem, it is widely recognized that prior information about the image and the blurring process plays a key role in improving deblurring performance. A variety of image prior models have been developed, such as statistical-prior-based methods[1,2], transform-based methods[3-7], total variation (TV) based methods[8-13], dark-channel-based methods[14-16], class-specific-prior-based methods[17], and decoupled methods[18]. Image prior information is often enforced through a regularization strategy in the deblurring process, which can be described by the following variational regularization model

\min_f \left\{ \frac{1}{2}\|g - Hf\|_2^2 + w\,\varphi(f) \right\}   (2)

where \|\cdot\|_2 denotes the Euclidean norm, \varphi(f) is conventionally called a regularization functional, and w>0 is referred to as a regularization parameter, which controls the balance between the fidelity term and the regularization term in model (2).

Choosing a suitable regularization functional is an active area of research in imaging science. Total variation, a classical regularization term, has proven highly successful in image deblurring. Indeed, the TV norm has played a crucial role in various image processing tasks such as denoising, deblurring, and inpainting. Wang et al [13] proposed a state-of-the-art algorithm for image deblurring called fast total variation deconvolution (FTVd), which utilizes splitting techniques. In comparison with FTVd, the alternating direction method of multipliers (ADMM) TV method obtains a similar peak signal-to-noise ratio (PSNR) without requiring a choice of the regularization parameter.

The TV norm is a highly effective prior for piecewise-smooth images. However, TV is an isotropic function and is unsuitable for images with a dominant direction; TV regularization does not preserve details and textures well in regions with complex structures. In various applications, such as seismic and medical imaging, image textures exhibit clear directionality, which is crucial for analyzing detailed information. It is therefore highly desirable to incorporate the directional characteristics of these textures to ensure high-quality results. In this paper, we propose a novel approach that enhances the directional information of images in the gradient domain by extending the standard TV norm to the shear total variation (STV) norm.

Another regularization technique for deblurring involves incorporating sparsity-based priors to regularize images, leveraging the observation that images typically exhibit sparse representation in certain redundant transform domains, such as wavelet and framelet transforms. The framelet-based approach for super-resolution reconstruction was initially proposed in Refs. [19,20]. The framelet deblurring algorithm was initially examined in Ref.[21]. In Ref.[10], Jiao et al also proposed an efficient algorithm using the ADMM strategy to solve the frame-based deblurring problem.

The core contributions are summarized as follows:

1) The incorporation of enhanced directionality is achieved by proposing a novel shear gradient (SG) operator, which combines the gradient operator and shear operator. Through experiments, it is verified that the proposed SG operator can detect more abundant edge information than that detected by the gradient operator. We also explore the properties of the SG operator and demonstrate that its matrix form exhibits a Block-Circulant-with-Circulant-Blocks (BCCB) structure, rendering it diagonalizable through the 2D discrete Fourier transform.

2) Based on the SG operator, we extend the conventional total variation to the shear total variation to enhance the direction information of the image in the gradient domain.

3) In accordance with the STV norm, we propose a deblurring method that incorporates the STV norm as a regularization term. The proposed problem can be efficiently addressed by employing the ADMM, leveraging the BCCB structure of the shear matrices. We conduct various numerical experiments to demonstrate the potential of our method.

The remaining sections of this paper are structured as follows: Section 1 provides definitions for the shear gradient operator and shear TV norm, along with a brief introduction to the fundamentals of the shear operator, BCCB matrix, and ADMM algorithm. In Section 2, we propose the STV deblurring model and employ ADMM to solve it. The effectiveness of our proposed model is demonstrated through numerical results presented in Section 3. Finally, conclusions are drawn in Section 4.

1 Preliminaries

1.1 Shear Operator

Let S_{θ_d} be a shear operator with shear angle θ_d (see Ref. [22]), where θ_d ∈ [0, π] and d ∈ {t, b, l, r}:

S_{\theta_d}: \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}: X \mapsto X^{\mathrm{shr}}   (3)

where the (\hat{i},\hat{j})-th entry of X^{\mathrm{shr}} is given, for \hat{i}=1,\ldots,n and \hat{j}=1,\ldots,n, by X^{\mathrm{shr}}_{\hat{i},\hat{j}} := X_{\bar{i},\bar{j}}. The parameter d indicates the axis of shear (top t, bottom b, left l, right r), and θ_d determines the degree of shear.

\bar{i} := \begin{cases} \left[\left(\hat{i}-1+\left[\frac{4\theta}{\pi}\hat{j}\right]\right) \bmod n\right]+1, & \text{if } d=t \\ \left[\left(\hat{i}-1+\left[\frac{4\theta}{\pi}(n-\hat{j})\right]\right) \bmod n\right]+1, & \text{if } d=b \\ \hat{i}, & \text{otherwise} \end{cases}

\bar{j} := \begin{cases} \left[\left(\hat{j}-1+\left[\frac{4\theta}{\pi}\hat{i}\right]\right) \bmod n\right]+1, & \text{if } d=r \\ \left[\left(\hat{j}-1+\left[\frac{4\theta}{\pi}(n-\hat{i})\right]\right) \bmod n\right]+1, & \text{if } d=l \\ \hat{j}, & \text{otherwise} \end{cases}

where [\cdot] denotes rounding toward zero. For example, with d=t, S_{θ_t}X shears the image X along the top axis by the angle θ_t. In particular, θ_d=0 means S_{θ_d} is the identity transformation, i.e., no shear. Figure 1 shows the effect of the shear operator for different angles and directions.
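A small NumPy sketch of the shear operator defined above; the rounding toward zero is implemented with np.fix, k is the 1-based row/column index as in the text, and the function name is our own:

```python
import numpy as np

def shear(X, theta, d):
    """Shear operator S_{theta_d}: each column (d='t','b') or row
    (d='l','r') of X is shifted cyclically; the shift of the k-th
    column/row is [4*theta/pi * k] (or [4*theta/pi * (n-k)]), with
    [.] rounding toward zero, as in the definition of i_bar, j_bar."""
    n = X.shape[0]
    out = np.empty_like(X)
    for k in range(1, n + 1):
        m = k if d in ('t', 'r') else n - k
        s = int(np.fix(4.0 * theta / np.pi * m))
        if d in ('t', 'b'):   # shear along columns
            out[:, k - 1] = np.roll(X[:, k - 1], -s)
        else:                 # shear along rows
            out[k - 1, :] = np.roll(X[k - 1, :], -s)
    return out

A = np.arange(16, dtype=float).reshape(4, 4)
```

Setting theta = 0 returns X unchanged, and any shear merely permutes the pixels of X; shearing by -theta undoes the shear.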

Fig. 1 Effect of the shear operator S_{θ_d} with different angles and directions

1.2 Shear Gradient and Shear TV Norm

Traditional regularization methods include Tikhonov regularization [23] and TV regularization [8]. Because the Tikhonov regularization term over-smooths the image in the restoration process and cannot preserve sharp edges, TV regularization was introduced by Rudin et al[8] for image denoising and deconvolution. For a grayscale image u ∈ R^{n×n}, its discrete total variation norm is:

\mathrm{TV}(u) = \sum_{i=1}^{2} \|D_i u\|_1   (4)

where D1=Dx and D2=Dy are linear operators corresponding to the horizontal and vertical first-order differences. The TV norm only includes finite differences in the horizontal and vertical directions, which results in a loss of some detailed information in other directions.
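A minimal sketch of the discrete TV norm (4) in NumPy, assuming first-order forward differences with periodic boundary conditions (consistent with the periodic setting used in subsection 1.3):

```python
import numpy as np

def Dx(u):
    """Horizontal first-order forward difference (periodic boundary)."""
    return np.roll(u, -1, axis=1) - u

def Dy(u):
    """Vertical first-order forward difference (periodic boundary)."""
    return np.roll(u, -1, axis=0) - u

def tv(u):
    """Anisotropic discrete TV norm (4): ||Dx u||_1 + ||Dy u||_1."""
    return np.abs(Dx(u)).sum() + np.abs(Dy(u)).sum()

u = np.zeros((4, 4)); u[:, 2:] = 1.0   # vertical step edge
print(tv(u))   # 8.0: each row contributes |+1| + |-1| (edge and periodic wrap)
```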

By combining the first-order finite difference operator and the shear operator, we propose the shear gradient operator (SG) in the following way:

(D_3, D_4) = (S_{\theta_d}^{T} D_x S_{\theta_d},\; S_{\theta_d}^{T} D_y S_{\theta_d})   (5)

where the superscript "T" means transpose. Then, for the grayscale image u, its discrete shear total variation norm is defined as:

\mathrm{STV}(u) = \sum_{i=1}^{4} \|D_i u\|_1   (6)

The shear gradient has the following properties:

Remark 1   The shear operator S_{θ_d} and the horizontal gradient operator D_x commute when d=l or r; similarly, S_{θ_d} and the vertical gradient operator D_y commute when d=t or b, i.e.,

S_{\theta_d}^{T} D_x S_{\theta_d} = D_x, \quad \text{when } d=l \text{ or } r   (7)

S_{\theta_d}^{T} D_y S_{\theta_d} = D_y, \quad \text{when } d=t \text{ or } b   (8)

By Remark 1, the shear gradient operator is essentially a generalization of the gradient operator.
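The shear gradient (5), the STV norm (6), and the commutativity stated in Remark 1 can all be checked numerically. The sketch below uses our own periodic-boundary convention for the difference and shear operators; D3 uses d='t' with D_x and D4 uses d='l' with D_y, as in model (11) later:

```python
import numpy as np

def shear(X, theta, d):
    """Shear operator of subsection 1.1 (our periodic convention)."""
    n = X.shape[0]
    out = np.empty_like(X)
    for k in range(1, n + 1):
        m = k if d in ('t', 'r') else n - k
        s = int(np.fix(4.0 * theta / np.pi * m))
        if d in ('t', 'b'):
            out[:, k - 1] = np.roll(X[:, k - 1], -s)
        else:
            out[k - 1, :] = np.roll(X[k - 1, :], -s)
    return out

Dx = lambda u: np.roll(u, -1, axis=1) - u   # horizontal difference
Dy = lambda u: np.roll(u, -1, axis=0) - u   # vertical difference

def stv(u, theta):
    """Shear TV norm (6): l1 norms of Dx u, Dy u and the sheared
    gradients D3 u = S^T Dx S u (d='t'), D4 u = S^T Dy S u (d='l');
    S^T is applied as a shear by -theta, since S is a permutation."""
    D3u = shear(Dx(shear(u, theta, 't')), -theta, 't')
    D4u = shear(Dy(shear(u, theta, 'l')), -theta, 'l')
    return sum(np.abs(w).sum() for w in (Dx(u), Dy(u), D3u, D4u))

rng = np.random.default_rng(0)
u = rng.standard_normal((8, 8))

# Remark 1: S^T Dx S = Dx when d = 'l' or 'r'.
lhs = shear(Dx(shear(u, np.pi / 4, 'r')), -np.pi / 4, 'r')
print(np.allclose(lhs, Dx(u)))   # True
```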

Remark 2   (Matrix form of S_{θ_d}) The operator S_{θ_d} is linear, so it can be represented by a matrix, also denoted S_{θ_d}. Moreover, since S_{θ_d} is a permutation matrix, it is orthogonal.

The comparison between the gradient operator and the shear gradient operator applied to the Cameraman image is shown in Fig.2.

Fig. 2 Comparison between the gradient operator and the shear gradient operator applied to the Cameraman image

It can be observed that the shear gradient operator captures more directional information and retains more detailed edge information.

1.3 Block-Circulant-with-Circulant-Blocks Matrices

Under the periodic boundary conditions, both the blurring matrix and gradient matrix are BCCB and thus are diagonalizable by the 2D discrete Fourier transform. This section will show that the shear gradient matrix is also a BCCB matrix.

Let E_{ij} denote the matrix whose (i,j) entry is 1 and whose other entries are zero. Then {E_{ij} | i,j=1,2,3} forms an orthonormal basis of the linear space R^{3×3}. Let S_{θ_l} denote the matrix of the shear operator under the basis {E_{ij} | i,j=1,2,3}, i.e.

S_{\theta_l}(E_{11}, E_{21}, E_{31}, E_{12}, E_{22}, E_{32}, E_{13}, E_{23}, E_{33}) = (E_{11}, E_{21}, E_{31}, E_{12}, E_{22}, E_{32}, E_{13}, E_{23}, E_{33}) S_{\theta_l}   (9)

where θl=45°. By Eq. (9), we can obtain that Sθl is a matrix with the following form

S_{\theta_l} = \begin{bmatrix} 1&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&1&0 \\ 0&0&0&0&0&1&0&0&0 \\ 0&0&0&1&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&1 \\ 0&0&0&0&0&0&1&0&0 \\ 0&0&0&0&1&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0 \end{bmatrix}

Since gradient matrices are BCCB under periodic boundary conditions, it can be verified that the D3 and D4 in formula (5) are also BCCB matrices. The shear gradient matrix D4=SθlTDySθl with θl=45° has the following form:

D_4 = \begin{bmatrix} -1&0&0&0&0&0&0&1&0 \\ 0&-1&0&0&0&0&0&0&1 \\ 0&0&-1&0&0&0&1&0&0 \\ 0&1&0&-1&0&0&0&0&0 \\ 0&0&1&0&-1&0&0&0&0 \\ 1&0&0&0&0&-1&0&0&0 \\ 0&0&0&0&1&0&-1&0&0 \\ 0&0&0&0&0&1&0&-1&0 \\ 0&0&0&1&0&0&0&0&-1 \end{bmatrix}

Unfortunately, when the shear angle θ_d ∈ (0°, 45°), the matrices D_i (i=1,2,3,4) do not in general have a BCCB structure. For example, when the shear angle is θ_l = 40°, the shear gradient matrix D_4 = S_{θ_l}^T D_y S_{θ_l} has the following form:

D_4 = \begin{bmatrix} -1&1&0&0&0&0&0&0&0 \\ 0&-1&0&0&0&0&0&0&1 \\ 0&0&-1&1&0&0&0&0&0 \\ 0&0&0&-1&1&0&0&0&0 \\ 0&0&1&0&-1&0&0&0&0 \\ 0&0&0&0&0&-1&1&0&0 \\ 0&0&0&0&0&0&-1&1&0 \\ 0&0&0&0&0&1&0&-1&0 \\ 1&0&0&0&0&0&0&0&-1 \end{bmatrix}

It can be seen that the above matrix is not a BCCB matrix.
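These structural claims can be verified numerically: a BCCB matrix represents a 2D circular convolution, so it acts as pointwise multiplication in the Fourier domain. The sketch below (our own shear convention, consistent with periodic boundaries) confirms that the shear gradient operator is BCCB at 45° but not at 40°:

```python
import numpy as np

def shear(X, theta):
    """Shear with d = 'l': row k shifted by [4*theta/pi*(n-k)], k 1-based."""
    n = X.shape[0]
    out = np.empty_like(X)
    for k in range(1, n + 1):
        s = int(np.fix(4.0 * theta / np.pi * (n - k)))
        out[k - 1, :] = np.roll(X[k - 1, :], -s)
    return out

def D4(u, theta):
    """Shear gradient operator S^T Dy S (S^T = shear by -theta)."""
    Dy = lambda v: np.roll(v, -1, axis=0) - v
    return shear(Dy(shear(u, theta)), -theta)

n = 8
rng = np.random.default_rng(0)
u = rng.standard_normal((n, n))
delta = np.zeros((n, n)); delta[0, 0] = 1.0

def is_bccb(theta):
    # BCCB <=> pointwise multiplication in Fourier space, with the
    # eigenvalues given by the 2D FFT of the operator applied to a delta.
    eigs = np.fft.fft2(D4(delta, theta))
    rhs = np.real(np.fft.ifft2(eigs * np.fft.fft2(u)))
    return np.allclose(D4(u, theta), rhs)

print(is_bccb(np.pi / 4))        # True:  BCCB at 45 degrees
print(is_bccb(np.pi * 40 / 180)) # False: not BCCB at 40 degrees
```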

1.4 Alternating Direction Method of Multipliers

The ADMM was proposed in Ref.[24] and has been analyzed in various literatures[25,26]. It is widely used to solve structured optimization problems due to its decomposability and superior flexibility. Recently, ADMM has gained popularity in modern signal and image processing, statistics, machine learning, and various other fields. ADMM is an algorithm utilizing proximal splitting techniques for solving the following convex optimization problem:

\min\; \theta_1(x_1) + \theta_2(x_2) \quad \text{s.t.}\; A_1 x_1 + A_2 x_2 = c,\; x_i \in \chi_i,\; i=1,2   (10)

where \theta_i: \chi_i \to \mathbb{R} are closed convex functions, A_i are linear transforms, \chi_i are nonempty closed convex sets, and c is a given vector.

One of the advantages of ADMM is that the algorithm updates lend themselves to parallel implementations. The algorithm is given in Algorithm 1.

Algorithm 1: ADMM
Initialization: Starting point (x_1^0, x_2^0, p^0), \delta > 0.
Iteration:
1. x_1^{k+1} = \arg\min_{x_1} \theta_1(x_1) + \frac{\delta}{2}\|A_1 x_1 + A_2 x_2^k - c + p^k/\delta\|_2^2
2. x_2^{k+1} = \arg\min_{x_2} \theta_2(x_2) + \frac{\delta}{2}\|A_1 x_1^{k+1} + A_2 x_2 - c + p^k/\delta\|_2^2
3. p^{k+1} = p^k + \delta(A_1 x_1^{k+1} + A_2 x_2^{k+1} - c)
4. k = k+1
Until a stopping criterion is satisfied.
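To make Algorithm 1 concrete, the sketch below runs ADMM on a tiny instance of problem (10): θ1(x1) = ½||x1 − b||², θ2(x2) = λ||x2||₁, A1 = I, A2 = −I, c = 0, whose exact solution is soft-thresholding of b. The problem instance and the names are our own illustration, not from the paper:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: proximal map of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def admm(b, lam, delta=1.0, iters=500):
    """ADMM (Algorithm 1) for min 0.5||x1-b||^2 + lam||x2||_1
    s.t. x1 - x2 = 0 (A1 = I, A2 = -I, c = 0)."""
    x2 = np.zeros_like(b)
    p = np.zeros_like(b)
    for _ in range(iters):
        # Step 1: x1-update (quadratic, closed form).
        x1 = (b + delta * x2 - p) / (1.0 + delta)
        # Step 2: x2-update (proximal map of the l1 norm).
        x2 = soft(x1 + p / delta, lam / delta)
        # Step 3: multiplier update.
        p = p + delta * (x1 - x2)
    return x1

b = np.array([3.0, -0.5, 1.2])
print(admm(b, lam=1.0))   # ≈ soft(b, 1.0) = [2.0, 0.0, 0.2]
```

Because the two sub-problems have closed-form solutions, each ADMM iteration is cheap; the same pattern is used for the STV model in Section 2.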

2 STV Deblurring Model

The discussion in subsection 1.2 shows that the STV norm can capture sharp edges and more detailed information about the image. This leads us to consider the following problem:

\min_v\; \psi(v) = \sum_{i=1}^{4} \|v_i\|_1 \quad \text{s.t.}\; v_1 = D_x f,\; v_2 = D_y f,\; v_3 = S_{\theta_t}^T D_x S_{\theta_t} f,\; v_4 = S_{\theta_l}^T D_y S_{\theta_l} f,\; Hf = g   (11)

The corresponding augmented Lagrangian function of (11) can be written as:

\mathcal{L}_{\rho_1,\rho_2}(f, v; \lambda, \mu) = \psi(v) + \langle \lambda, Hf - g \rangle + \sum_{i=1}^{4} \langle \mu_i, D_i f - v_i \rangle + \frac{\rho_1}{2}\|Hf - g\|^2 + \frac{\rho_2}{2}\sum_{i=1}^{4}\|D_i f - v_i\|^2   (12)

where ρ1 and ρ2 are two positive parameters. We decompose the optimization problem (12) into individual sub-problems and minimize them separately. Our ADMM algorithm then reads as follows:

\begin{cases} f^{k+1} = \arg\min_f \mathcal{L}_{\rho_1,\rho_2}(f, v^k; \lambda^k, \mu^k) \\ v_i^{k+1} = \arg\min_{v_i} \mathcal{L}_{\rho_1,\rho_2}(f^{k+1}, v; \lambda^k, \mu^k) \\ \lambda^{k+1} = \lambda^k + \rho_1(Hf^{k+1} - g) \\ \mu_i^{k+1} = \mu_i^k + \rho_2(D_i f^{k+1} - v_i^{k+1}) \end{cases}   (13)

Let us now examine each of these sub-problems one by one.

1) f sub-problem

For fixed \lambda^k, v^k, and \mu^k, the minimization of \mathcal{L}_{\rho_1,\rho_2}(f, v; \lambda, \mu) with respect to f is a least squares problem, and the corresponding normal equations are:

\left(\rho_1 H^T H + \rho_2 \sum_{i=1}^{4} D_i^T D_i\right) f = \rho_1 H^T g + \sum_{i=1}^{4} D_i^T(\rho_2 v_i^k - \mu_i^k) - H^T \lambda^k   (14)

Considering what has been discussed in subsection 1.3, both H^T H and D_i^T D_i (i=1,2,3,4) are BCCB matrices under the periodic boundary condition with a shear angle of 45°, and computations with BCCB matrices can be carried out efficiently by fast Fourier transforms (FFTs) [28]. First, we compute the right-hand side of (14) and apply a forward 2D FFT. Second, we divide component-wise by the eigenvalues of \rho_1 H^T H + \rho_2 \sum_{i=1}^{4} D_i^T D_i, which are obtained from the 2D discrete Fourier transform of the corresponding convolution kernel, to get the transform of f^{k+1}. Third, we apply the inverse FFT to obtain the new iterate f^{k+1}.
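The FFT-based solve of the normal equations (14) can be sketched as follows. For brevity we use only D_x and D_y (standing in for the four D_i) and a toy 3-tap periodic blur in place of H; all names are our own:

```python
import numpy as np

def op_eigs(op, n):
    """Eigenvalues of a BCCB operator: 2D FFT of the operator
    applied to a delta image (i.e. of its convolution kernel)."""
    delta = np.zeros((n, n)); delta[0, 0] = 1.0
    return np.fft.fft2(op(delta))

n = 16
Dx = lambda u: np.roll(u, -1, axis=1) - u
Dy = lambda u: np.roll(u, -1, axis=0) - u
H  = lambda u: (u + np.roll(u, -1, 0) + np.roll(u, -1, 1)) / 3.0  # toy blur

rho1, rho2 = 1.0, 0.1
hH, hDx, hDy = (op_eigs(op, n) for op in (H, Dx, Dy))
# Eigenvalues of rho1*H^T H + rho2*sum_i Di^T Di: transposition
# becomes complex conjugation, so |.|^2 appears.
denom = rho1 * np.abs(hH) ** 2 + rho2 * (np.abs(hDx) ** 2 + np.abs(hDy) ** 2)

rng = np.random.default_rng(1)
rhs = rng.standard_normal((n, n))
# Solve the normal equations by component-wise division in Fourier space.
f = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))

# Verify: applying the normal-equation operator to f reproduces rhs.
def HT(u):  return (u + np.roll(u, 1, 0) + np.roll(u, 1, 1)) / 3.0
def DxT(u): return np.roll(u, 1, axis=1) - u
def DyT(u): return np.roll(u, 1, axis=0) - u
check = rho1 * HT(H(f)) + rho2 * (DxT(Dx(f)) + DyT(Dy(f)))
print(np.allclose(check, rhs))   # True
```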

From the analysis of the shear gradient matrix in subsection 1.3, we know that D_i^T D_i (i=1,2,3,4) does not have the BCCB property when the shear angle is not 45°. Thus, minimizing \mathcal{L}_{\rho_1,\rho_2}(f, v; \lambda, \mu) with respect to f does not have a simple closed-form solution. Inspired by Ref. [27], one way to overcome this difficulty is to add a proximal term given by a semi-norm induced by a suitable positive semi-definite matrix. The objective for f^{k+1} in (13) then becomes:

f^{k+1} = \arg\min_f \left\{ \mathcal{L}_{\rho_1,\rho_2}(f, v^k; \lambda^k, \mu^k) + \frac{1}{2}\|f - f^k\|_S^2 \right\}   (15)

where S is a positive semi-definite matrix and \|\cdot\|_S is the semi-norm induced by the semi-inner product \langle f, g \rangle_S = f^T S g. The solution to (15) can then be derived from the normal equation:

\left(\rho_1 H^T H + \rho_2 \sum_{i=1}^{4} D_i^T D_i + S\right) f = \rho_1 H^T g + \sum_{i=1}^{4} D_i^T(\rho_2 v_i^k - \mu_i^k) - H^T \lambda^k + S f^k   (16)

Since \rho_1 H^T H + \rho_2 \sum_{i=1}^{4} D_i^T D_i is positive semi-definite, we choose S = \alpha I - (\rho_1 H^T H + \rho_2 \sum_{i=1}^{4} D_i^T D_i) with \alpha \geq \sigma_{\max}(\rho_1 H^T H + \rho_2 \sum_{i=1}^{4} D_i^T D_i), so that S is positive semi-definite. Therefore, (16) can be rewritten as:

f^{k+1} = \frac{1}{\alpha}\left[\rho_1 H^T g + \sum_{i=1}^{4} D_i^T(\rho_2 v_i^k - \mu_i^k) - H^T \lambda^k + S f^k\right]   (17)

2) v sub-problem

The v sub-problem in (13) is decoupled so that it can be solved separately. We have:

v_i^{k+1} = \max\left\{\left|D_i f^{k+1} + \frac{\mu_i^k}{\rho_2}\right| - \frac{1}{\rho_2},\, 0\right\} \odot \mathrm{sgn}\left(D_i f^{k+1} + \frac{\mu_i^k}{\rho_2}\right)   (18)

where "" and "sgn" represent the point-wise product and signum function, respectively.

3) λ and μ update

We update λ and μ by

\lambda^{k+1} = \lambda^k + \rho_1(Hf^{k+1} - g)   (19)

\mu_i^{k+1} = \mu_i^k + \rho_2(D_i f^{k+1} - v_i^{k+1})   (20)

where i=1,2,3,4.

For simplicity, we terminate the algorithm with the relative change of f in all experiments, i.e.,

\frac{\|f^{k+1} - f^k\|}{\max\{\|f^k\|, 1\}} < \epsilon   (21)
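The stopping rule (21) is a one-liner; `eps` below is the tolerance ϵ (our own variable naming):

```python
import numpy as np

def stopped(f_new, f_old, eps=1e-4):
    """Relative-change stopping criterion (21), Frobenius norm."""
    return np.linalg.norm(f_new - f_old) / max(np.linalg.norm(f_old), 1.0) < eps

f_old = np.ones((4, 4))
print(stopped(f_old + 1e-6, f_old))   # True: relative change is 1e-6
```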

Based on the above discussion, we get the following algorithm to solve the problem (11): the ADMM STV algorithm.

Algorithm 2 is an instance of ADMM, and the convergence of the proposed algorithm is guaranteed when the sub-problems have closed-form solutions.

Algorithm 2: ADMM STV
Initialization: Starting point (f^0, v_i^0, \lambda^0, \mu_i^0), \rho_1 > 0, \rho_2 > 0.
Iteration:
1. Compute f^{k+1} according to (14) or (17).
2. Compute v_i^{k+1} according to (18).
3. Update \lambda^{k+1} according to (19).
4. Update \mu_i^{k+1} according to (20).
5. k = k+1
Until the stopping criterion (21) is satisfied.

3 Experiments

In this section, we present various numerical results to illustrate the performance of the proposed algorithm. All experiments were conducted on a laptop equipped with an AMD Ryzen 7840H CPU and 16 GB of RAM, using Windows 11 (64-bit) and MATLAB 2022b.

3.1 Parameter Selection

The proposed algorithm requires careful adjustment of several parameters. The parameter ρ_1 controls the data fidelity. We observe that the value of ρ_1 plays an important role in removing the additive noise in the image; however, relying excessively on ρ_1 to filter out noise can produce blocking and speckle artifacts in the restored image. To this end, we apply our algorithm to the Monarch deblurring problem on the grid ρ_1 = 50:50:1300, ρ_2 = 100:20:600. In Fig. 3, we can observe that ρ_1 and ρ_2 can be selected from a wide range while still maintaining stable reconstructions. For the shear angle θ_d, experiments show that better PSNR and SSIM values are often obtained when θ_d is 45° or a multiple of it. Therefore, we set θ_d = kπ/4 (k=1,2,3,4) in the experiments and select the parameters that yield the best results.

Fig. 3 PSNR results of image Monarch by ADMM STV on the grid ρ_1 = 50:50:1300, ρ_2 = 100:20:600

3.2 Comparisons with Other State-of-the-Art Methods

Figure 4 shows the test images and their corresponding blurred versions. The test images are blurred by various kernels, including motion, Gaussian, and average. Fig. 4(h)-(n) show images blurred by kernels generated with the MATLAB commands fspecial('motion',35,50), fspecial('average',10), and fspecial('gaussian',[20 20],10), respectively.

Fig. 4 Some of the test images (the first row) and the corresponding blurred images (the second row)

In Fig. 5 and Fig. 6, we compare the deblurring performance of the proposed method with FTVd, ADMM TV, and ADMM frame. The images produced by the ADMM STV algorithm have fewer staircase artifacts and are visually much better than those obtained by FTVd, ADMM TV, and ADMM frame. From the enlarged local details in Fig. 5, one can observe that the image obtained by ADMM TV appears blockier than that of the proposed method, and the image restored by FTVd is too cartoon-like. The edges reconstructed by the ADMM frame method are hampered by ringing artifacts, also known as the Gibbs phenomenon. This indicates that the proposed model deblurs images better while preserving edges and small details. The curves of PSNR value versus the number of iterations are illustrated in Fig. 7. As the number of iterations increases, the PSNR curve rises and finally becomes flat and stable; in other words, our proposed algorithm shows good stability in terms of convergence.

Fig. 5 Visual comparison of Monarch by different methods with Gaussian blur (the noise level is 0.256)

Fig. 6 Visual comparison of Pirate by different methods with average blur (the noise level is 0.512)

Fig. 7 PSNR results versus iteration number by the ADMM STV algorithm

The results in terms of PSNR, SSIM, and CPU time are listed in Table 1. We report the PSNR and SSIM results of all competing methods on a collection of 9 test images. The proposed ADMM STV outperforms the other methods in terms of PSNR in most cases. ADMM TV contains finite differences in only two directions, while ADMM STV contains finite differences in four directions; the cost of ADMM STV is therefore slightly higher than that of ADMM TV, but ADMM STV preserves more detail, such as directional texture.

Table 1

The results of PSNR, SSIM, and CPU time with different methods

4 Conclusion

The shear total variation norm is proposed in this paper to effectively preserve sharp image edges and enhance the directional information of the image in the gradient domain. It has been demonstrated to be a highly efficient solution for non-blind image deblurring problems. Thanks to the BCCB structure of the shear gradient and blurring matrices, the proposed model can be efficiently solved using the ADMM algorithm. Comparative evaluations against three state-of-the-art non-blind image deblurring methods demonstrate that our proposed method outperforms them both subjectively and objectively.

References

  1. Richardson W H. Bayesian-based iterative method of image restoration[J]. Journal of the Optical Society of America, 1972, 62(1): 55-59. [NASA ADS] [CrossRef] [Google Scholar]
  2. Lucy L B. An iterative technique for the rectification of observed distributions[J]. The Astronomical Journal, 1974, 79(6): 745. [NASA ADS] [CrossRef] [Google Scholar]
  3. Wiener N. Extrapolation, Interpolation, and Smoothing of Stationary Time Series, with Engineering Applications[M]. Piscataqay: IEEE Xplore, 1949. [Google Scholar]
  4. Neelamani B R, Choi H, Baraniuk R. Wavelet-based deconvolution for ill-conditioned systems[C]//1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. New York: IEEE, 2002, 6: 3241-3244. [Google Scholar]
  5. Li J Z, Luisier F, Blu T. PURE-LET image deconvolution[J]. IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society, 2018, 27(1): 92-105. [NASA ADS] [CrossRef] [PubMed] [Google Scholar]
  6. Cho S, Lee S. Fast motion deblurring[C]//ACM SIGGRAPH Asia 2009 Papers. New York: ACM, 2009, 145: 1-8. [Google Scholar]
  7. Cai J F, Osher S, Shen Z W. Linearized bregman iterations for frame-based image deblurring[J]. SIAM Journal on Imaging Sciences, 2009, 2(1): 226-252. [CrossRef] [MathSciNet] [Google Scholar]
  8. Rudin L I, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms[J]. Physica D Nonlinear Phenomena, 1992, 60(1/2/3/4): 259-268. [NASA ADS] [CrossRef] [MathSciNet] [Google Scholar]
  9. Tao M, Yang J F. Alternating direction algorithms for total variation deconvolution in image reconstruction[EB/OL]. [2009-05-09]. https://legacy.sites.fas.harvard.edu/~cs278/papers/adm.pdf. [Google Scholar]
  10. Jiao Y L, Jin Q N, Lu X L, et al. Alternating direction method of multipliers for linear inverse problems[J]. SIAM Journal on Numerical Analysis, 2016, 54(4): 2114-2137. [Google Scholar]
  11. Wu C L, Tai X C. Augmented Lagrangian method, dual methods, and split bregman iteration for ROF, vectorial TV, and high order models[J]. SIAM Journal on Imaging Sciences, 2010, 3(3): 300-339. [CrossRef] [MathSciNet] [Google Scholar]
  12. Wu C L, Zhang J Y, Tai X C. Augmented Lagrangian method for total variation restoration with non-quadratic fidelity[J]. Inverse Problems & Imaging, 2011, 5(1): 237-261. [CrossRef] [MathSciNet] [Google Scholar]
  13. Wang Y L, Yang J F, Yin W T, et al. A new alternating minimization algorithm for total variation image reconstruction [J]. SIAM Journal on Imaging Sciences, 2008, 1(3): 248-272. [CrossRef] [MathSciNet] [Google Scholar]
  14. Ullah E, Nawaz R, Iqbal J. Single image haze removal using improved dark channel prior[C]//2013 5th International Conference on Modelling, Identification and Control (ICMIC). New York: IEEE, 2013: 245-248. [Google Scholar]
  15. Shi L, Yang L, Chu S B, et al. Image haze removal using dark channel prior and minimizing energy function[C]//2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). New York: IEEE, 2017: 256-259. [CrossRef] [Google Scholar]
  16. Zhou K, Zhuang P X, Xiong J Y, et al. Blind image deblurring with joint extreme channels and L0-regularized intensity and gradient priors[C]//2020 IEEE International Conference on Image Processing (ICIP). New York: IEEE, 2020: 873-877. [CrossRef] [Google Scholar]
  17. Anwar S, Huynh C P, Porikli F. Image deblurring with a class-specific prior[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(9): 2112-2130. [CrossRef] [PubMed] [Google Scholar]
  18. Hosseini M S, Plataniotis K N. Convolutional deblurring for natural imaging[J]. IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society, 2020, 29: 250-264. [NASA ADS] [CrossRef] [MathSciNet] [PubMed] [Google Scholar]
  19. Chan R H, Riemenschneider S D, Shen L X, et al. Tight frame: An efficient way for high-resolution image reconstruction[J]. Applied and Computational Harmonic Analysis, 2004, 17(1): 91-115. [Google Scholar]
  20. Chan R H, Riemenschneider S D, Shen L X, et al. High-resolution image reconstruction with displacement errors: A framelet approach [J]. International Journal of Imaging Systems and Technology, 2004, 14(3): 91-104. [CrossRef] [Google Scholar]
  21. Chai A W, Shen Z W. Deconvolution: A wavelet frame approach[J]. Numerische Mathematik, 2007, 106(4): 529-587. [Google Scholar]
  22. Ono S, Miyata T, Yamada I. Cartoon-texture image decomposition using blockwise low-rank texture characterization [J] IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society, 2014, 23(3): 1128-1142. [NASA ADS] [CrossRef] [PubMed] [Google Scholar]
  23. Tikhonov A. Regularization of incorrectly posed problems [J]. Soviet Math Dokl, 1963: 1624-1627. [Google Scholar]
  24. Gabay D. Chapter IX applications of the method of multipliers to variational inequalities[C]//Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems. Amsterdam: Elsevier, 1983, 15: 299-331. [CrossRef] [Google Scholar]
  25. Eckstein J, Bertsekas D P. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators [J]. Mathematical Programming, 1992, 55(1): 293-318. [CrossRef] [MathSciNet] [Google Scholar]
  26. Lv X G, Huang T Z, Xu Z B, et al. Kronecker product approximations for image restoration with whole-sample symmetric boundary conditions[J]. Information Sciences, 2012, 186(1): 150-163. [CrossRef] [MathSciNet] [Google Scholar]
  27. Fazel M, Pong T K, Sun D F, et al. Hankel matrix rank minimization with applications to system identification and realization[J]. SIAM Journal on Matrix Analysis and Applications, 2013, 34(3): 946-977. [CrossRef] [MathSciNet] [Google Scholar]
