Open Access
Wuhan Univ. J. Nat. Sci.
Volume 26, Number 6, December 2021
Page(s) 495 - 506
DOI https://doi.org/10.1051/wujns/2021266495
Published online 17 December 2021

© Wuhan University 2021

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

Image deblurring is a classical problem in low-level vision. A blurred image can be modeled as the convolution of the latent image with the point spread function (PSF) of the image capture device, plus noise. The blurring process can be written as
$$ f = Ku + n \tag{1} $$
where $u \in \mathbb{R}^{n\times n}$ is the original image without any form of degradation, $K$ is the degradation (blurring) operator, $f$ is the observed image, and $n$ is additive noise.

The goal of image deblurring is to estimate the original image $u$ from an observed image $f$ degraded by a blur kernel and noise[1]. In this paper, we mainly perform image restoration under blur kernels and Gaussian noise. The Gaussian noise density can be expressed as
$$ n(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \tag{2} $$
where $\sigma$ and $\mu$ denote the standard deviation and the mean of the noise distribution $n$, respectively.
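As an illustration of the degradation model of Eqs. (1)-(2), the sketch below simulates $f = Ku + n$ as a periodic convolution plus Gaussian noise; the function name and interface are ours, not the paper's, and periodic boundary conditions are assumed throughout.

```python
import numpy as np

def degrade(u, k, sigma, rng):
    """Simulate f = K u + n of Eq. (1): periodic (circular) convolution of the
    image u with kernel k, plus zero-mean Gaussian noise of std-dev sigma."""
    K_hat = np.fft.fft2(k, s=u.shape)                 # kernel transfer function
    f = np.real(np.fft.ifft2(K_hat * np.fft.fft2(u))) # circular convolution
    return f + rng.normal(0.0, sigma, u.shape)        # additive Gaussian noise
```

With a delta kernel and zero noise, `degrade` returns the input image unchanged, which is a quick sanity check of the convolution step.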

Image deblurring is an ill-posed inverse problem: a solution may fail to exist or to be unique, and small perturbations of the data may cause large reconstruction errors. Hence, to overcome this ill-posedness, regularization techniques must be considered to produce reasonable approximate solutions from noisy data. The Rudin-Osher-Fatemi (ROF) model[2] is one of the most successful regularization methods due to its ability to preserve sharp edges[3]. The corresponding minimization task is
$$ \min_{u}\ \Big\{ \frac{\lambda}{2}\|f - Ku\|_2^2 + \|u\|_{\mathrm{TV}} \Big\} \tag{3} $$
where $\|\cdot\|_2$ denotes the Euclidean norm and $\|\cdot\|_{\mathrm{TV}}$ is the discrete TV regularization term. The first term of (3) is called the fidelity term; the second is the regularization term (or penalty), which describes prior knowledge of the image. $\lambda > 0$ is the regularization parameter controlling the balance between the fidelity and regularization terms. During the last few decades, a series of methods based on total variation and its variants have been developed.

The well-known first-order total variation methods such as FTVd (fast total variation deconvolution)[4] and ADMM (alternating direction method of multipliers) TV[5,6] have achieved satisfactory results in image restoration. FTVd is a total variation model based on splitting techniques. In Ref. [7], Chan et al. proposed a frame-based deblurring method solved with the ADMM algorithm. In Ref. [5], Jiao et al. used an ADMM iteration to solve the TV deblurring problem (ADMM TV). Compared with FTVd, the ADMM TV method achieves comparable experimental performance without the need for parameter selection.

However, it is well known that TV-based restoration methods suffer from the staircase artifact in flat regions. There are many image restoration methods that have been developed to overcome these shortcomings. A well-known method is to replace the first-order total variation in the regularization term with a second-order (or higher) total variation.

In 2000, Chan et al.[8] proposed the high-order TV (HOTV) model to improve restoration quality. Contrary to TV regularization, which penalizes gradient magnitude, regularizations using second-order derivatives do not penalize ramp regions whose intensity varies linearly, and therefore do not force the intensity of a region to remain constant. In other words, these regularizations can potentially recover a broader class of images than just piecewise-constant ones[9]. In Ref. [10], a second-order Laplacian model for image deblurring was proposed by You and Kaveh. In 2003, a second-order derivative regularization[11] (Hessian regularization) model was proposed, called the Lysaker, Lundervold and Tai (LLT) model, which can effectively suppress staircase artifacts and maintain the smoothness of flat areas of the image. In 2012, Hu et al.[12] improved the model for image recovery and designed a majorization-minimization (MM) algorithm to solve it. Their numerical experiments show that second-order TV has an advantage over classical TV regularization in avoiding staircase artifacts. In Ref. [13], the authors pointed out that high-order TV's ability to preserve edges is weaker than that of first-order TV regularization, and that it tends to smooth out edges and other small details. Thus, hybrid models that combine higher-order TV with other regularizations have been proposed in the literature[13-15].

Among the hybrid models, by replacing the first-order TV with a second-order TV, Zhang et al[16] obtained better results than the model with strongly convex terms presented in Ref. [17]. Furthermore, in Refs. [18] and [19], a combination of first-order and second-order TV priors is used to recover blurred images and the minimization model is solved by the ADMM algorithm. To further improve the restoration performance, Liu et al[20] investigated a spatially adaptive regularization parameter updating scheme. As an adaptive balancing scheme between first-order and second-order TV, Wang et al[21] developed a Poisson denoising framework based on an iterative weighted total generalized variational (TGV) model. In Ref. [22], Liu et al pointed out that TGV can maintain sharp edges and perform well in areas of gradual intensity slopes.

The TV and HOTV norms contain finite differences only in the horizontal and vertical directions, which loses detail information in other directions. To incorporate more directional information in the gradient domain, in this paper we propose a novel shear high-order TV (SHOTV) norm that contains finite differences along different directions. Compared with higher-order TV regularization, the proposed SHOTV regularization provides a more meaningful description of the gradient information, as it captures image gradients along multiple directions.

1 Preliminary

1.1 Shear Operator

Let $S_{\theta_d}$ be a shear operator with shear angle $\theta_d \in [0, \frac{\pi}{4}]$ (see Ref. [23]) and direction $d \in \{t, b, l, r\}$:
$$ S_{\theta_d}:\ \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}:\ X \mapsto X^{\mathrm{shr}} \tag{4} $$
where the $(\hat{i},\hat{j})$-th entry of $X^{\mathrm{shr}}$ is given, for $\hat{i},\hat{j} = 1,\dots,n$, by $X^{\mathrm{shr}}_{\hat{i},\hat{j}} := X_{\bar{i},\bar{j}}$ with
$$ \bar{i} := \begin{cases} \big[\big(\hat{i}-1+\lfloor \tfrac{4}{\pi}\theta_d\, \hat{j} \rfloor\big) \bmod n\big] + 1, & \text{if } d = t \\ \big[\big(\hat{i}-1+\lfloor \tfrac{4}{\pi}\theta_d (n-\hat{j}) \rfloor\big) \bmod n\big] + 1, & \text{if } d = b \\ \hat{i}, & \text{otherwise} \end{cases} \tag{5} $$
$$ \bar{j} := \begin{cases} \big[\big(\hat{j}-1+\lfloor \tfrac{4}{\pi}\theta_d\, \hat{i} \rfloor\big) \bmod n\big] + 1, & \text{if } d = r \\ \big[\big(\hat{j}-1+\lfloor \tfrac{4}{\pi}\theta_d (n-\hat{i}) \rfloor\big) \bmod n\big] + 1, & \text{if } d = l \\ \hat{j}, & \text{otherwise} \end{cases} \tag{6} $$
where $\lfloor \cdot \rfloor$ rounds a real number to the nearest integer toward zero. The parameter $d$ indicates the axis of shear (top $t$, bottom $b$, left $l$, right $r$), and $\theta_d$ determines the degree of shear. For example, $S_{\theta_t}X$ means that the image $X$ is sheared about the top axis by angle $\theta_t$. In particular, $\theta_d = 0$ makes $S_{\theta_d}$ the identity transformation, i.e., no shear. Figure 1 shows the results of the shear operator in different directions at an angle of $\frac{\pi}{4}$.

Fig. 1 Effect of shear operator $S_{\theta_d}$ with different angles and directions

(a) Original image $X$; (b) $S_{\theta_d}X$ with $\theta_d = \frac{\pi}{4}$, $d = l$; (c) $S_{\theta_d}X$ with $\theta_d = \frac{\pi}{4}$, $d = r$
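The index formulas of Eqs. (4)-(6) translate directly into code. The sketch below is an illustrative, unoptimized rendering (the function name and looping are ours), with $\lfloor \cdot \rfloor$ realized as truncation toward zero:

```python
import numpy as np

def shear(X, theta, d):
    """Shear operator S_theta of Eqs. (4)-(6); d in {'t','b','l','r'}."""
    n = X.shape[0]
    out = np.empty_like(X)
    s = theta / (np.pi / 4)            # equals (4/pi)*theta; exact at theta=pi/4
    for ih in range(1, n + 1):         # 1-based indices, as in the text
        for jh in range(1, n + 1):
            ib, jb = ih, jh
            if d == 't':
                ib = (ih - 1 + int(s * jh)) % n + 1
            elif d == 'b':
                ib = (ih - 1 + int(s * (n - jh))) % n + 1
            elif d == 'r':
                jb = (jh - 1 + int(s * ih)) % n + 1
            elif d == 'l':
                jb = (jh - 1 + int(s * (n - ih))) % n + 1
            out[ih - 1, jh - 1] = X[ib - 1, jb - 1]
    return out
```

Since each row (or column) is only shifted cyclically, the operator merely permutes pixels, consistent with Remark 1 below that its matrix form is a permutation matrix.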

1.2 Shear Gradient and Shear HOTV Norm

TV regularization was introduced by Rudin et al.[2] for image denoising and was further extended to other image restoration tasks. For a gray-scale image $u \in \mathbb{R}^{n\times n}$, its discrete total variation norm is
$$ \mathrm{TV}(u) := \sum_{i,j=1}^{n} |(Du)_{i,j}| = \sum_{i,j=1}^{n} \sqrt{|(D_x u)_{i,j}|^2 + |(D_y u)_{i,j}|^2} \tag{7} $$
where $(Du)_{i,j} = ((D_x u)_{i,j}, (D_y u)_{i,j})$ is defined, with periodic boundary conditions, as
$$ (D_x u)_{i,j} = \begin{cases} u_{i+1,j} - u_{i,j}, & \text{if } i < n \\ u_{1,j} - u_{n,j}, & \text{if } i = n \end{cases} \qquad (D_y u)_{i,j} = \begin{cases} u_{i,j+1} - u_{i,j}, & \text{if } j < n \\ u_{i,1} - u_{i,n}, & \text{if } j = n \end{cases} $$
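With periodic boundaries, the forward differences of Eq. (7) are cyclic shifts, so the TV norm reduces to a few array operations; a minimal sketch (the function name is ours):

```python
import numpy as np

def tv_norm(u):
    """Discrete isotropic TV norm of Eq. (7) with periodic boundaries."""
    dx = np.roll(u, -1, axis=0) - u   # (D_x u)_{i,j} = u_{i+1,j} - u_{i,j}, cyclic
    dy = np.roll(u, -1, axis=1) - u   # (D_y u)_{i,j} = u_{i,j+1} - u_{i,j}, cyclic
    return np.sqrt(dx**2 + dy**2).sum()
```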

The TV norm contains finite differences only in the horizontal and vertical directions, which loses some detailed information in other directions.

TV regularization preserves fine features and sharp edges, but it usually produces staircase artifacts that are not present in real images[24,25]. In many applications[26,27], higher-order TV has been proposed to reduce the staircase artifact. The HOTV of $u$ is defined as
$$ \mathrm{HTV}(u) = \sum_{i,j=1}^{n} |(D^2 u)_{i,j}| = \sum_{i,j=1}^{n} \sqrt{(D_{xx}u)_{i,j}^2 + (D_{xy}u)_{i,j}^2 + (D_{yx}u)_{i,j}^2 + (D_{yy}u)_{i,j}^2} \tag{8} $$
where $D^2 u = (D_{xx}u, D_{xy}u, D_{yx}u, D_{yy}u)$ collects the second-order differences of $u$. By combining the higher-order finite difference operator with the shear operator, we propose the shear high-order gradient (SHOG) operator:
$$ D^S \equiv (D^S_1, D^S_2, D^S_3, D^S_4) = \big(S_{\theta_{d_1}}^{T} D_{xx} S_{\theta_{d_1}},\ S_{\theta_{d_1}}^{T} D_{xy} S_{\theta_{d_1}},\ S_{\theta_{d_2}}^{T} D_{yx} S_{\theta_{d_2}},\ S_{\theta_{d_2}}^{T} D_{yy} S_{\theta_{d_2}}\big) \tag{9} $$
where the superscript $T$ denotes the conjugate transpose, and $S_{\theta_{d_1}}$ and $S_{\theta_{d_2}}$ denote shears along the top (or bottom) and left (or right) directions, respectively, with possibly different angles. For different types of texture images, we choose the directions empirically. Then for a gray-scale image $u$, its discrete shear high-order total variation (SHOTV) norm is
$$ \mathrm{SHOTV}(u) = \sum_{i=1}^{4} \|D^S_i u\|_1 \tag{10} $$

The shear gradient has following properties:

Remark 1   (Matrix form of $S_{\theta_d}$) The operator $S_{\theta_d}$ is linear, so it can be represented by a matrix, which we also denote by $S_{\theta_d}$. Moreover, since $S_{\theta_d}$ is a permutation matrix, it is orthogonal.

Theorem 1   The shear operator $S_{\theta_d}$ and the horizontal second-order difference operator $D_{xx}$ commute when $d = l$ or $r$; similarly, $S_{\theta_d}$ and the vertical operator $D_{yy}$ commute when $d = t$ or $b$, i.e.,
$$ S_{\theta_d}^{T} D_{xx} S_{\theta_d} = D_{xx} \quad \text{when } d = l \text{ or } r \tag{11} $$
$$ S_{\theta_d}^{T} D_{yy} S_{\theta_d} = D_{yy} \quad \text{when } d = t \text{ or } b \tag{12} $$

Proof   Firstly, we show that $S_{\theta_l} D_x = D_x S_{\theta_l}$, i.e., $S_{\theta_l}^{T} D_x S_{\theta_l} = D_x$. In fact, for $u \in \mathbb{R}^{n\times n}$, by the definitions of the gradient operator and the shear operator, we have
$$ (S_{\theta_l} D_x u)_{\hat{i},\hat{j}} = (D_x u)_{\hat{i},\bar{j}} = \begin{cases} u_{\hat{i}+1,\bar{j}} - u_{\hat{i},\bar{j}}, & \text{if } \hat{i} < n \\ 0, & \text{if } \hat{i} = n \end{cases} \tag{13} $$
where $\bar{j}$ is defined as in Eq. (6), and
$$ (D_x S_{\theta_l} u)_{\hat{i},\hat{j}} = \begin{cases} (S_{\theta_l} u)_{\hat{i}+1,\hat{j}} - (S_{\theta_l} u)_{\hat{i},\hat{j}}, & \text{if } \hat{i} < n \\ 0, & \text{if } \hat{i} = n \end{cases} = \begin{cases} u_{\hat{i}+1,\bar{j}} - u_{\hat{i},\bar{j}}, & \text{if } \hat{i} < n \\ 0, & \text{if } \hat{i} = n \end{cases} \tag{14} $$

By Eq. (13) and Eq. (14), we obtain $S_{\theta_l} D_x = D_x S_{\theta_l}$, i.e., $S_{\theta_l}^{T} D_x S_{\theta_l} = D_x$.

Lastly, by the orthogonality of the matrix $S_{\theta_l}$, we obtain
$$ S_{\theta_l}^{T} D_{xx} S_{\theta_l} = S_{\theta_l}^{T} D_x S_{\theta_l}\, S_{\theta_l}^{T} D_x S_{\theta_l} = D_x D_x = D_{xx} $$

By Theorem 1, the shear gradient operator is essentially a generalization of gradient operator.

The comparison between the HOG operator and the SHOG operator applied to the Barbara image is shown in Fig. 2. It can be observed that the SHOG operator captures more directional information and retains more detailed edge information. From the local comparison, it can be seen that the SHOG operator preserves more information in the textured parts of the image.

Fig. 2 Comparison between 2nd-gradient operator and shear 2nd-gradient operator applied on Barbara image $u$

1.3 BCCB Matrices

Under periodic boundary conditions, both the blurring matrix and the gradient matrices are Block-Circulant-with-Circulant-Blocks (BCCB), and thus are diagonalizable by the 2D discrete Fourier transform. In this subsection, we show that the SHOG matrices are BCCB when the shear angle is $\frac{\pi}{4}$.

Let $E_{ij}$ denote the matrix whose $(i,j)$-entry is 1 and all other entries are 0. Then $\{E_{ij} \mid i,j = 1,\dots,n\}$ is an orthonormal basis of the linear space $\mathbb{R}^{n\times n}$; we illustrate with $n = 3$. Let $S_{\theta_d}$ also denote the matrix of the shear operator with respect to this basis, i.e.,
$$ S_{\theta_d}(E_{11},E_{21},E_{31},E_{12},E_{22},E_{32},E_{13},E_{23},E_{33}) = (E_{11},E_{21},E_{31},E_{12},E_{22},E_{32},E_{13},E_{23},E_{33})\, S_{\theta_d} \tag{15} $$
For shearing at angle $\frac{\pi}{4}$ along the $l$ direction, $S_{\theta_l}$ is a permutation matrix of the following form:
$$ S_{\theta_l} = \begin{bmatrix} 1&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&1&0\\ 0&0&0&0&0&1&0&0&0\\ 0&0&0&1&0&0&0&0&0\\ 0&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&1&0&0\\ 0&0&0&0&1&0&0&0&0\\ 0&0&1&0&0&0&0&0&0 \end{bmatrix} $$

Since the gradient matrices are BCCB under periodic boundary conditions, it can be verified that the $D^S_i\ (i = 1,2,3,4)$ in Eq. (9) are also BCCB when the shear angle is $\frac{\pi}{4}$. The shear gradient matrices $D^S_1 = S_{\theta_t}^{T} D_{xx} S_{\theta_t}$ and $D^S_2 = S_{\theta_t}^{T} D_{xy} S_{\theta_t}$ with $\theta_t = \frac{\pi}{4}$ have the following form:
$$ D^S_1 = \begin{bmatrix} -2&0&0&0&1&0&0&0&1\\ 0&-2&0&0&0&1&1&0&0\\ 0&0&-2&1&0&0&0&1&0\\ 0&0&1&-2&0&0&0&1&0\\ 1&0&0&0&-2&0&0&0&1\\ 0&1&0&0&0&-2&1&0&0\\ 0&1&0&0&0&1&-2&0&0\\ 0&0&1&1&0&0&0&-2&0\\ 1&0&0&0&1&0&0&0&-2 \end{bmatrix} \quad D^S_2 = \begin{bmatrix} 1&0&0&-1&-1&0&0&1&0\\ 0&1&0&0&-1&-1&0&0&1\\ 0&0&1&-1&0&-1&1&0&0\\ 0&1&0&1&0&0&-1&-1&0\\ 0&0&1&0&1&0&0&-1&-1\\ 1&0&0&0&0&1&-1&0&-1\\ -1&-1&0&0&1&0&1&0&0\\ 0&-1&-1&0&0&1&0&1&0\\ -1&0&-1&1&0&0&0&0&1 \end{bmatrix} $$

The shear gradient matrices $D^S_3 = S_{\theta_l}^{T} D_{yx} S_{\theta_l}$ and $D^S_4 = S_{\theta_l}^{T} D_{yy} S_{\theta_l}$ with shear angle $\theta_l = \frac{\pi}{4}$ have the following form:
$$ D^S_3 = \begin{bmatrix} 1&1&0&-1&0&0&0&-1&0\\ 0&1&1&0&-1&0&0&0&-1\\ 1&0&1&0&0&-1&-1&0&0\\ 0&-1&0&1&1&0&-1&0&0\\ 0&0&-1&0&1&1&0&-1&0\\ -1&0&0&1&0&1&0&0&-1\\ -1&0&0&0&-1&0&1&1&0\\ 0&-1&0&0&0&-1&0&1&1\\ 0&0&-1&-1&0&0&1&0&1 \end{bmatrix} \quad D^S_4 = \begin{bmatrix} -2&0&0&0&0&1&0&1&0\\ 0&-2&0&1&0&0&0&0&1\\ 0&0&-2&0&1&0&1&0&0\\ 0&1&0&-2&0&0&0&0&1\\ 0&0&1&0&-2&0&1&0&0\\ 1&0&0&0&0&-2&0&1&0\\ 0&0&1&0&1&0&-2&0&0\\ 1&0&0&0&0&1&0&-2&0\\ 0&1&0&1&0&0&0&0&-2 \end{bmatrix} $$

Unfortunately, when the shear angle $\theta_d \in (0, \frac{\pi}{4})$, the $D^S_i\ (i = 1,2,3,4)$ do not in general have BCCB structure. For example, when the shear angle is $\theta_l = 40°$, the shear gradient matrix $D^S_4$ has the following form:
$$ D^S_4 = \begin{bmatrix} -2&1&0&0&0&0&0&0&1\\ 1&-2&0&0&0&0&0&0&1\\ 0&0&-2&1&1&0&0&0&0\\ 0&0&1&-2&1&0&0&0&0\\ 0&0&1&1&-2&0&0&0&0\\ 0&0&0&0&0&-2&1&1&0\\ 0&0&0&0&0&1&-2&1&0\\ 0&0&0&0&0&1&1&-2&0\\ 1&1&0&0&0&0&0&0&-2 \end{bmatrix} $$

It can be noticed that the above matrix is not a BCCB matrix.
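The BCCB property that motivates the $\frac{\pi}{4}$ choice can be checked numerically. The sketch below (helper names are ours) builds the dense matrix of a periodic 2D convolution, which is BCCB by construction, and verifies the diagonalization fact used in this subsection: 2D DFT basis images are eigenvectors, with eigenvalues given by the kernel's 2D DFT.

```python
import numpy as np

def conv_matrix(k, n):
    """Dense matrix of 2D periodic convolution with kernel k on n-by-n images,
    acting on column-major vectorized images; BCCB by construction."""
    K_hat = np.fft.fft2(k, s=(n, n))
    A = np.zeros((n * n, n * n))
    for c in range(n * n):
        e = np.zeros((n, n))
        e[c % n, c // n] = 1.0                         # basis image E_{ij}
        Ae = np.real(np.fft.ifft2(K_hat * np.fft.fft2(e)))
        A[:, c] = Ae.flatten(order='F')
    return A
```

Applying such a matrix to the complex exponential image $V_{xy} = e^{2\pi \mathrm{i}(px+qy)/n}$ returns $\hat{K}[p,q]\,V$, which is exactly the statement that the 2D DFT diagonalizes BCCB matrices.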

2 Proposed Model

Considering the advantages of high-order TV and the shear operator, we propose the following optimization model:
$$ \min_{u}\ \frac{\lambda}{2}\|Ku - f\|_2^2 + \omega \|D^S u\|_1 + I_\Omega(u) \tag{16} $$
where $\|\cdot\|_1$ denotes the convex $\ell_1$ norm, and $\lambda, \omega > 0$ are regularization parameters that control the data fidelity term and the shear high-order regularization. $I_\Omega(u)$ is an indicator function imposing a hard constraint on the objective, defined as
$$ I_\Omega(u) = \begin{cases} 0, & \text{if } u \in \Omega \\ \infty, & \text{if } u \notin \Omega \end{cases} $$
where $\Omega = \{u : 0 \le u_{i,j} \le 255\}$. To minimize problem (16) robustly, we adopt the framework of the alternating direction method of multipliers (ADMM)[28]. Introducing auxiliary variables, (16) can be converted into the constrained optimization problem
$$ \min_{u,w,z}\ \frac{\lambda}{2}\|Ku - f\|_2^2 + \omega \|w\|_1 + I_\Omega(z) \qquad \text{s.t.}\ \ w = D^S u,\ \ z = u \tag{17} $$

The corresponding augmented Lagrangian function of (17) can be written as
$$ L_{\rho_1,\rho_2}(u,w,z;\mu_1,\mu_2) = \frac{\lambda}{2}\|Ku-f\|_2^2 + \omega\|w\|_1 + I_\Omega(z) - \mu_1^{T}(w - D^S u) + \frac{\rho_1}{2}\|w - D^S u\|_2^2 - \mu_2^{T}(z - u) + \frac{\rho_2}{2}\|z - u\|_2^2 \tag{18} $$
where $\rho_1, \rho_2 > 0$ are penalty parameters, and $\mu_1$ and $\mu_2$ are the Lagrange multipliers associated with the constraints $w = D^S u$ and $z = u$, respectively. ADMM alternately minimizes the Lagrangian over $u$ and over $(w, z)$; since $w$ and $z$ are decoupled from each other, they can be solved separately.

ADMM is an algorithm using proximal splitting techniques for solving convex optimization problems of the form
$$ \min\ \theta_1(x_1) + \theta_2(x_2) \qquad \text{s.t.}\ \ A_1 x_1 + A_2 x_2 = c,\ \ x_i \in \chi_i,\ i = 1,2 \tag{19} $$
where $\theta_i : \chi_i \to \mathbb{R}$ are closed convex functions, $A_i$ are linear transforms, $\chi_i$ are nonempty closed convex sets, and $c$ is a given vector.

Part of the appeal of ADMM is that the algorithm lends itself to parallel implementation. The iteration is given in Algorithm 1.

Algorithm 1   ADMM
1 Initialization: starting point $(x_1^0, x_2^0, p^0)$, penalty $\delta > 0$, $k = 0$.
2 Iteration:
  1) $x_1^{k+1} = \arg\min_{x_1}\ \theta_1(x_1) + \frac{\delta}{2}\big\|A_1 x_1 + A_2 x_2^k - c + \frac{p^k}{\delta}\big\|_2^2$
  2) $x_2^{k+1} = \arg\min_{x_2}\ \theta_2(x_2) + \frac{\delta}{2}\big\|A_1 x_1^{k+1} + A_2 x_2 - c + \frac{p^k}{\delta}\big\|_2^2$
  3) $p^{k+1} = p^k + \delta(A_1 x_1^{k+1} + A_2 x_2^{k+1} - c)$
  4) $k = k + 1$
Until a stopping criterion is satisfied.
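Algorithm 1 can be exercised on a toy instance whose solution is known in closed form. The sketch below (an illustrative toy of our choosing, not the paper's deblurring problem) takes $\theta_1(x_1) = \frac{1}{2}\|x_1 - b\|^2$, $\theta_2(x_2) = \gamma\|x_2\|_1$, $A_1 = I$, $A_2 = -I$, $c = 0$; the minimizer is the soft-thresholding of $b$.

```python
import numpy as np

def soft(x, kappa):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.maximum(np.abs(x) - kappa, 0.0) * np.sign(x)

def admm_prox_l1(b, gamma, delta=1.0, iters=200):
    """Algorithm 1 on: min 0.5||x1-b||^2 + gamma*||x2||_1  s.t. x1 - x2 = 0."""
    x2 = np.zeros_like(b)
    p = np.zeros_like(b)
    for _ in range(iters):
        # Step 1): quadratic subproblem, solvable in closed form
        x1 = (b + delta * x2 - p) / (1.0 + delta)
        # Step 2): l1 subproblem, solved by shrinkage
        x2 = soft(x1 + p / delta, gamma / delta)
        # Step 3): multiplier (dual) update on the residual x1 - x2
        p = p + delta * (x1 - x2)
    return x2
```

The iterates converge to $\mathrm{soft}(b, \gamma)$, which makes the example easy to verify numerically.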

In this section, we discuss the optimization strategies for solving each sub-problem of (18) one by one.

1) u Sub-Problem

The $u$ sub-problem is a least-squares problem of the form
$$ u^{k+1} = \arg\min_{u}\ \frac{\lambda}{2}\|Ku - f\|_2^2 - \mu_1^{T}(w^k - D^S u) + \frac{\rho_1}{2}\|w^k - D^S u\|_2^2 - \mu_2^{T}(z^k - u) + \frac{\rho_2}{2}\|z^k - u\|_2^2 \tag{20} $$

From the optimality conditions, we have
$$ \big(\lambda K^{T}K + \rho_1 (D^S)^{T} D^S + \rho_2 I\big)\, u^{k+1} = \lambda K^{T} f + \rho_1 (D^S)^{T}\Big(w^k - \frac{\mu_1^k}{\rho_1}\Big) - \mu_2^k + \rho_2 z^k \tag{21} $$

Considering the discussion in subsection 1.3, $(D^S)^{T} D^S$ is BCCB only when the shear angle is $\frac{\pi}{4}$; for other angles, computing the inverse of the left-hand side of (21) in each iteration is costly. In this paper, we therefore set the shear angle $\theta_d = \frac{\pi}{4}$. $K^{T}K$ is also a BCCB matrix under the periodic boundary condition. As a consequence, Eq. (21) can be solved efficiently with one FFT and one inverse FFT:
$$ u^{k+1} = \mathcal{F}^{-1}\left( \frac{\mathcal{F}\big[\lambda K^{T} f + \rho_1 (D^S)^{T}(w^k - \frac{\mu_1^k}{\rho_1}) - \mu_2^k + \rho_2 z^k\big]}{\lambda \mathcal{F}(K^{T}K) + \rho_1 \mathcal{F}\big((D^S)^{T}D^S\big) + \rho_2} \right) \tag{22} $$
where $\mathcal{F}$ denotes the two-dimensional discrete Fourier transform and $\mathcal{F}^{-1}$ its inverse.
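The solve of Eq. (22) amounts to an elementwise division in the frequency domain. A sketch under the stated periodic-boundary assumption (argument names are ours; `k_hat` and `d_hats` stand for the 2D transfer functions of the blur and difference kernels):

```python
import numpy as np

def solve_u_subproblem(rhs, k_hat, d_hats, lam, rho1, rho2):
    """Solve (lam*K^T K + rho1*sum_i D_i^T D_i + rho2*I) u = rhs with one
    FFT/IFFT pair, as in Eq. (22); all operators are periodic, hence BCCB,
    and K^T K, D^T D have Fourier symbols |k_hat|^2, |d_hat|^2."""
    denom = lam * np.abs(k_hat) ** 2 \
        + rho1 * sum(np.abs(dh) ** 2 for dh in d_hats) + rho2
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

Since $\rho_2 > 0$, the denominator never vanishes, so the division is always well defined.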

2) w Sub-Problem

The $w$ sub-problem is a convex problem:
$$ w^{k+1} = \arg\min_{w}\ \frac{\rho_1}{2}\|w - D^S u^{k+1}\|_2^2 - (\mu_1^k)^{T}(w - D^S u^{k+1}) + \omega\|w\|_1 = \arg\min_{w}\ \frac{\rho_1}{2}\Big\|w - \Big(D^S u^{k+1} + \frac{\mu_1^k}{\rho_1}\Big)\Big\|_2^2 + \omega\|w\|_1 \tag{23} $$

Writing $t = D^S u^{k+1} + \frac{\mu_1^k}{\rho_1}$ for simplicity, problem (23) is solved by the shrinkage operator, i.e.,
$$ w^{k+1} = \mathrm{shrink}\Big(t, \frac{\omega}{\rho_1}\Big) = \max\Big\{|t| - \frac{\omega}{\rho_1},\, 0\Big\} \cdot \mathrm{sign}(t) \tag{24} $$
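The shrinkage step of Eq. (24) is a one-line elementwise operation; a minimal sketch:

```python
import numpy as np

def shrink(t, kappa):
    """Soft-thresholding (shrinkage) operator of Eq. (24), applied elementwise."""
    return np.maximum(np.abs(t) - kappa, 0.0) * np.sign(t)
```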

3) z Sub-Problem
$$ z^{k+1} = \arg\min_{z}\ \frac{\rho_2}{2}\|z - u^{k+1}\|_2^2 - (\mu_2^k)^{T}(z - u^{k+1}) + I_\Omega(z) = \arg\min_{z}\ \frac{\rho_2}{2}\Big\|z - \Big(u^{k+1} + \frac{\mu_2^k}{\rho_2}\Big)\Big\|_2^2 + I_\Omega(z) \tag{25} $$

The $z$ sub-problem is a projection problem. For an 8-bit image, pixel values lie in $[0, 255]$, and the closed-form solution is given by the projection operator
$$ z^{k+1} = \mathrm{Proj}_\Omega\Big(u^{k+1} + \frac{\mu_2^k}{\rho_2}\Big) = \min\Big(255,\ \max\Big(u^{k+1} + \frac{\mu_2^k}{\rho_2},\ 0\Big)\Big) \tag{26} $$

The Lagrange multipliers are updated as follows:
$$ \mu_1^{k+1} = \mu_1^k + \rho_1\big(D^S u^{k+1} - w^{k+1}\big) \tag{27} $$
$$ \mu_2^{k+1} = \mu_2^k + \rho_2\big(u^{k+1} - z^{k+1}\big) \tag{28} $$

The proposed algorithm is named SHOTV and shown in Algorithm 2.

Algorithm 2   SHOTV
1 Initialization: starting point $(u^0, w^0, z^0, \mu_1^0, \mu_2^0)$, $\rho_1 > 0$, $\rho_2 > 0$, $k = 0$.
2 Iteration:
  1) Compute $u^{k+1}$ according to (22).
  2) Compute $w^{k+1}$ according to (24).
  3) Compute $z^{k+1}$ according to (26).
  4) Update $\mu_1^{k+1}$ according to (27).
  5) Update $\mu_2^{k+1}$ according to (28).
  6) Check the stopping criterion $\frac{\|u^{k} - u^{k+1}\|_2}{\|u^{k}\|_2} \le \delta$.
  7) $k = k + 1$.
3 End

3 Numerical Experiments

In this section, we present various numerical results to illustrate the performance of the proposed algorithm for non-blind image deblurring. All experiments are carried out on Windows 10 64-bit and MATLAB R2019b, running on a desktop equipped with an Intel Core i5-10300H CPU at 2.5 GHz and 16 GB of RAM. The source code for all competing methods was obtained from the original authors, and we use the default parameter settings.

3.1 Experiment Setting

The test images are shown in Fig. 3, and the pixel values of the images are normalized to [0, 1] for simplicity. Both the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity index (SSIM)[29] are used to evaluate the quality of the reconstructed images. They are defined as follows:
$$ \mathrm{PSNR(dB)} = 10 \log_{10} \frac{(\mathrm{Max}_u)^2}{\frac{1}{n^2}\|f - u\|_F^2} \tag{29} $$
$$ \mathrm{SSIM} = \frac{(2\hat{u}\hat{f} + c_1)(2\sigma_{uf} + c_2)}{(\hat{u}^2 + \hat{f}^2 + c_1)(\sigma_u^2 + \sigma_f^2 + c_2)} \tag{30} $$
where $u$ is the clean image, $f$ is the recovered image, and $\mathrm{Max}_u$ is the largest possible pixel value of $u$. $\hat{u}$ and $\hat{f}$ are the mean values of $u$ and $f$, $\sigma_u$ and $\sigma_f$ are their standard deviations, $\sigma_{uf}$ is the covariance of $u$ and $f$, and $c_1, c_2 > 0$ are constants. In general, higher PSNR and SSIM values imply better image quality.
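A minimal PSNR helper consistent with Eq. (29), using the mean squared error for images normalized to $[0, \mathrm{Max}_u]$ (the function name and default are ours):

```python
import numpy as np

def psnr(u, f, max_u=1.0):
    """PSNR in dB between clean image u and recovered image f, per Eq. (29)."""
    mse = np.mean((u - f) ** 2)       # mean squared error
    return 10.0 * np.log10(max_u ** 2 / mse)
```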

Fig. 3 The test images

3.2 Parameter Selection Discussion

In this subsection, we focus on the choice of the parameters $\rho_1$, $\rho_2$ and $\lambda$. For simplicity, we set $\rho_1 = \rho_2$, and then investigate the sensitivity of the parameters $\rho_1$ and $\lambda$. We test the selection of $\rho_1$ on both the House and Parrot images, corrupted by motion blur (fspecial('motion', 35, 50)) and Gaussian blur (fspecial('gaussian', [9,9], 9)). Figure 4 shows the change of PSNR values versus the parameter $\rho_1$ under three different noise levels. We empirically set $\rho_1 \in [5, 30]$ in the subsequent experiments.

Fig. 4 The PSNR results versus the parameter $\rho_1$ for fixed $\lambda$

Then, we discuss the selection of the parameter $\lambda$. Our experiments are performed on the Pirate and Barbara images with three different noise levels and two different blur kernels. For simplicity, we set $\rho_1 = \rho_2 = 10$. Figure 5 shows the curves of the PSNR value with respect to the parameter $\lambda$ under three different noise levels. For the parameter $\lambda$ in the subsequent experiments, the empirical value is taken in the range [800, 2 000]. As a matter of fact, the above empirical parameter settings are also generally valid for the other test images.

Fig. 5 The PSNR results versus the parameter $\lambda$ for fixed $\rho_1 = \rho_2 = 10$

3.3 Image Deblurring

In this subsection, we report the performance of the proposed SHOTV method for image deblurring and compare it with some leading deblurring methods, including FTVd[4], ADMM frame[7], ADMM TV[5] and HOTV[18].

First, we test the 9×9 Gaussian blur with a standard deviation of 9. The blurred image is generated using the MATLAB function "imfilter" with periodic boundary conditions. Then, the blurred image is corrupted with zero-mean additive white Gaussian noise of variance $\sigma^2 = 0.001$. The PSNR and SSIM values for the deblurring experiments are reported in Table 1 and Table 2. The numbers in bold indicate the best performance. One can observe that the proposed method outperforms the other competing methods. We choose the Fingerprint, Cameraman, Hill and Shirt images for Gaussian deblurring and denoising comparison, and the recovered images are shown in Fig. 6. From the comparison, we can see that the images restored by our method retain more texture and have fewer artifacts. As can be seen from the Shirt image in Fig. 6, our proposed algorithm retains more texture information through the shear operator, resulting in a significant improvement in image deblurring.

Fig. 6 Restoration of four degraded images with Gaussian blur and comparison with other algorithms

From left to right: the noisy images, FTVd, ADMM-Frame, ADMM-TV, HOTV, and SHOTV; each value in parentheses represents the corresponding SSIM value of the restored image

Similarly, we test motion blur with a kernel length of 35 and a rotation angle of 50°. The blurred image is generated using the MATLAB function "imfilter" with periodic boundary conditions, and is then corrupted by zero-mean additive white Gaussian noise of variance $\sigma^2 = 0.001$. The PSNR and SSIM results for all competing deblurring methods are reported in Table 2. One can observe that the proposed SHOTV method obtains better results than the other competing methods. The blurred and deblurred images are shown in Fig. 7. Compared with the other methods, our algorithm preserves more texture structure, and the deblurred images contain fewer artifacts than those of first-order TV.

Fig. 7 Restoration of four degraded images with motion blur and comparison with other algorithms

From left to right: the noisy images, FTVd, ADMM-Frame, ADMM-TV, HOTV and SHOTV; each value in parentheses represents the corresponding SSIM value of the restored image

Table 1

PSNR and SSIM comparison of different methods for Gaussian deblurring

Table 2

PSNR (dB) and SSIM comparison of different methods for motion deblurring

3.4 Convergence Analysis

In order to verify the convergence of the algorithm numerically, we apply Algorithm 2 to four different images corrupted by different blur kernels and additive white Gaussian noise of variance $\sigma^2 = 0.001$, where Gaussian blur is applied to the Fingerprint and Shirt images, and motion blur is applied to the Barbara and Zebra images. We compute the relative error $\frac{\|u^{k+1} - u^k\|_2}{\|u^k\|_2}$ of the recovered image at each iteration of Algorithm 2. As shown in Fig. 8, the relative error decreases as the number of iterations increases, which numerically illustrates the convergence of the algorithm.

Fig. 8 Relative error values versus iteration number

4 Conclusion

In this paper, a new SHOG operator is proposed by combining the high-order gradient operator with the shear operator. By both theoretical analysis and experiments, we show that the proposed SHOG operator incorporates more directionality and can detect more abundant edge information. We extend the HOTV norm to the SHOTV norm based on the SHOG operator, and then employ the SHOTV norm as the regularization term to establish an image deblurring model. We also study some properties of the SHOG operator, and show that the SHOG matrices are Block-Circulant-with-Circulant-Blocks (BCCB) when the shear angle is $\frac{\pi}{4}$. An ADMM scheme is exploited to solve the proposed model efficiently. Extensive numerical experiments demonstrate the superiority of the proposed method in terms of visual quality and quantitative metrics.

References

  1. Abbass M Y, Kim H W, Abdelwahab S A, et al. Image deconvolution using homomorphic technique [J]. Signal, Image and Video Processing, 2019, 13(4): 703-709.
  2. Rudin L I, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms [J]. Physica D: Nonlinear Phenomena, 1992, 60(1-4): 259-268.
  3. Du H, Liu Y. Minmax-concave total variation denoising [J]. Signal, Image and Video Processing, 2018, 12(6): 1027-1034.
  4. Wang Y, Yang J, Yin W, et al. A new alternating minimization algorithm for total variation image reconstruction [J]. SIAM Journal on Imaging Sciences, 2008, 1(3): 248-272.
  5. Jiao Y, Jin Q, Lu X, et al. Alternating direction method of multipliers for linear inverse problems [J]. SIAM Journal on Numerical Analysis, 2016, 54(4): 2114-2137.
  6. Chang H, Lou Y, Duan Y, et al. Total variation-based phase retrieval for Poisson noise removal [J]. SIAM Journal on Imaging Sciences, 2018, 11(1): 24-55.
  7. Chan R H, Riemenschneider S D, Shen L, et al. Tight frame: An efficient way for high-resolution image reconstruction [J]. Applied and Computational Harmonic Analysis, 2004, 17(1): 91-115.
  8. Chan T, Marquina A, Mulet P. High-order total variation-based image restoration [J]. SIAM Journal on Scientific Computing, 2000, 22(2): 503-516.
  9. Lefkimmiatis S, Ward J P, Unser M. Hessian Schatten-norm regularization for linear inverse problems [J]. IEEE Transactions on Image Processing, 2013, 22(5): 1873-1888.
  10. You Y L, Kaveh M. Fourth-order partial differential equations for noise removal [J]. IEEE Transactions on Image Processing, 2000, 9(10): 1723-1730.
  11. Lysaker M, Lundervold A, Tai X C. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time [J]. IEEE Transactions on Image Processing, 2003, 12(12): 1579-1590.
  12. Hu Y, Jacob M. Higher degree total variation (HDTV) regularization for image recovery [J]. IEEE Transactions on Image Processing, 2012, 21(5): 2559-2571.
  13. Shi Q, Sun N, Sun T, et al. Structure-adaptive CBCT reconstruction using weighted total variation and Hessian penalties [J]. Biomedical Optics Express, 2016, 7(9): 3299-3322.
  14. Papafitsoros K, Schönlieb C B. A combined first and second order variational approach for image reconstruction [J]. Journal of Mathematical Imaging and Vision, 2014, 48(2): 308-338.
  15. Bredies K, Kunisch K, Pock T. Total generalized variation [J]. SIAM Journal on Imaging Sciences, 2010, 3(3): 492-526.
  16. Zhang J, Ma M, Wu Z, et al. High-order total bounded variation model and its fast algorithm for Poissonian image restoration [J]. Mathematical Problems in Engineering, 2019, 2019: 1-11.
  17. Liu X, Huang L. Total bounded variation-based Poissonian images recovery by split Bregman iteration [J]. Mathematical Methods in the Applied Sciences, 2012, 35(5): 520-529.
  18. Adam T, Paramesran R, Mingming Y, et al. Combined higher order non-convex total variation with overlapping group sparsity for impulse noise removal [J]. Multimedia Tools and Applications, 2021, 80(12): 18503-18530.
  19. Jiang L, Huang J, Lv X G, et al. Alternating direction method for the high-order total variation-based Poisson noise removal problem [J]. Numerical Algorithms, 2015, 69(3): 495-516.
  20. Liu J, Huang T Z, Lv X G, et al. High-order total variation-based Poissonian image deconvolution with spatially adapted regularization parameter [J]. Applied Mathematical Modelling, 2017, 45: 516-529.
  21. Wang X, Feng X, Wang W, et al. Iterative reweighted total generalized variation based Poisson noise removal model [J]. Applied Mathematics and Computation, 2013, 223: 264-277.
  22. Liu H, Tan S. Image regularizations based on the sparsity of corner points [J]. IEEE Transactions on Image Processing, 2018, 28(1): 72-87.
  23. Ono S, Miyata T, Yamada I. Cartoon-texture image decomposition using blockwise low-rank texture characterization [J]. IEEE Transactions on Image Processing, 2014, 23(3): 1128-1142.
  24. Liu P, Xiao L. Efficient multiplicative noise removal method using isotropic second order total variation [J]. Computers & Mathematics with Applications, 2015, 70(8): 2029-2048.
  25. Wang S, Huang T Z, Zhao X L, et al. Speckle noise removal in ultrasound images by first- and second-order total variation [J]. Numerical Algorithms, 2018, 78(2): 513-533.
  26. Mei J J, Huang T Z. Primal-dual splitting method for high-order model with application to image restoration [J]. Applied Mathematical Modelling, 2016, 40(3): 2322-2332.
  27. Li F, Shen C, Fan J, et al. Image restoration combining a total variational filter and a fourth-order filter [J]. Journal of Visual Communication and Image Representation, 2007, 18(4): 322-330.
  28. Gabay D. Chapter IX applications of the method of multipliers to variational inequalities [J]. Studies in Mathematics & Its Applications, 1983, 15: 299-331.
  29. Zhou W, Bovik A C, Sheikh H R, et al. Image quality assessment: From error visibility to structural similarity [J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.


