Wuhan Univ. J. Nat. Sci.
Volume 26, Number 6, December 2021
Page(s): 495-506
DOI: https://doi.org/10.1051/wujns/2021266495
Published online: 17 December 2021
Computer Science
CLC number: TP751
Non-Blind Image Deblurring Method Using Shear High-Order Total Variation Norm
1 School of Mathematics and Physics, Anhui University of Technology, Maanshan 243002, Anhui, China
2 Key Laboratory of Multidisciplinary Management and Control of Complex Systems of Anhui Higher Education Institutes, Anhui University of Technology, Maanshan 243002, Anhui, China
† To whom correspondence should be addressed. Email: zt9877@163.com
Received: 5 September 2021
In this paper, we propose a shear high-order gradient (SHOG) operator by combining the shear operator and the high-order gradient (HOG) operator. Compared with the HOG operator, the proposed SHOG operator can incorporate more directionality and detect more abundant edge information. Based on the SHOG operator, we extend the total variation (TV) norm to shear high-order total variation (SHOTV), and then propose a SHOTV deblurring model. We also study some properties of the SHOG operator, and show that the SHOG matrices are Block Circulant with Circulant Blocks (BCCB) when the shear angle is $\frac{\pi}{4}$. The proposed model is solved efficiently by the alternating direction method of multipliers (ADMM). Experimental results demonstrate that the proposed method outperforms some state-of-the-art non-blind deblurring methods in both objective and perceptual quality.
Key words: image deblurring / high-order TV norm / Block Circulant with Circulant Blocks (BCCB) matrix / shear operator / alternating direction method of multipliers (ADMM)
Biography: LU Lixuan, male, Master candidate, research direction: image processing. Email: lulixaunmath@163.com
Foundation item: Supported by the National Natural Science Foundation of China (61701004), the Outstanding Young Talents Support Program of Anhui Province (gxyq2021178), the Open Fund of the Key Laboratory of Anhui Higher Education Institutes (CS202107), and the Program of the University Mathematics Teaching Research and Development Center (CMC20200301)
© Wuhan University 2021
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
0 Introduction
Image deblurring is a classical problem in low-level vision. A blurred image can be considered as a convolution between the latent image and the point spread function (PSF) of the image capture device. The blurring process can be modeled as $f=Ku+n$ (1), where $u\in {R}^{n\times n}$ is an original image without any form of degradation, $K$ is a degradation (blurring) matrix, $f$ is the observed image, and $n$ is the additive noise.
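For concreteness, the degradation model (1) under periodic boundary conditions (the convention adopted later in the paper) can be sketched in a few lines of NumPy; the function name and the PSF handling below are our own illustration, not the authors' code.

```python
import numpy as np

def blur_periodic(u, k, sigma=0.0, rng=None):
    """Apply the blur model f = K u + n of Eq. (1) with periodic
    boundary conditions: K acts as circular convolution with the PSF k."""
    K = np.zeros_like(u, dtype=float)
    kh, kw = k.shape
    K[:kh, :kw] = k
    # centre the PSF at the origin so circular convolution equals
    # pointwise multiplication in the Fourier domain
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    f = np.real(np.fft.ifft2(np.fft.fft2(K) * np.fft.fft2(u)))
    if sigma > 0:
        rng = np.random.default_rng(0) if rng is None else rng
        f = f + rng.normal(0.0, sigma, size=f.shape)  # additive Gaussian noise n
    return f
```

A 1x1 identity PSF leaves the image unchanged, and any normalized PSF preserves the total intensity under the periodic model.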
The goal of image deblurring is to estimate the original image u from an observed image f degraded by a blur kernel and noise^{[1]}. In this paper, we mainly consider the restoration of images degraded by a blur kernel and Gaussian noise. The probability density function of Gaussian noise can be expressed as $n(x)=\frac{1}{\sqrt{2\pi}\sigma}{\mathrm{e}}^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}$ (2), where $\sigma$ and $\mu$ represent the standard deviation and mean value of the noise distribution $n$, respectively.
Image deblurring is an ill-posed inverse problem. In other words, its solution may neither exist nor be unique, and small disturbances in the data may cause large errors in the reconstruction. Hence, in order to overcome this ill-posedness, regularization techniques need to be considered to produce reasonable approximate solutions from noisy data. The Rudin-Osher-Fatemi (ROF) model^{[2]} is one of the most successful regularization methods due to its ability to preserve sharp edges^{[3]}. The corresponding minimization task is as follows: $\underset{u}{\mathrm{min}}\left\{\frac{\lambda}{2}\Vert f-Ku\Vert_{2}^{2}+\Vert u\Vert_{\mathrm{TV}}\right\}$ (3), where $\Vert\cdot\Vert_{2}$ denotes the Euclidean norm and $\Vert\cdot\Vert_{\mathrm{TV}}$ is the discrete TV regularization term. The first term of (3) is called the fidelity term, and the second term is called the regularization term (or penalty), which describes prior knowledge of the image. $\lambda>0$ is the regularization parameter which controls the balance between the fidelity term and the regularization term. During the last few decades, a series of methods based on total variation and its variants have been developed.
The well-known first-order total variation methods such as FTVd (fast total variation deconvolution)^{[4]} and ADMM (alternating direction method of multipliers) TV^{[5,6]} have achieved satisfactory results in image restoration. FTVd is a total variation model based on splitting techniques. In Ref. [7], Chan et al proposed solving a frame-based deblurring problem using the ADMM algorithm. In Ref. [5], Jiao et al used the ADMM iterative method to solve the TV deblurring problem (ADMM TV). Compared with FTVd, the ADMM TV method can achieve comparable experimental performance without the need for parameter selection.
However, it is well known that TV-based restoration methods suffer from staircase artifacts in flat regions. Many image restoration methods have been developed to overcome this shortcoming. A well-known approach is to replace the first-order total variation in the regularization term with a second-order (or higher-order) total variation.
In 2000, Chan et al^{[8]} proposed the high-order TV (HOTV) model to improve the restoration quality. Contrary to TV regularization, which penalizes the gradient magnitude, regularizations using second-order derivatives do not penalize ramp regions whose intensity varies linearly, and therefore do not force the intensity of a region to remain constant. In other words, these regularizations can potentially recover a broader class of images that consist of more than just piecewise-constant regions^{[9]}. In Ref. [10], a second-order Laplacian model for image deblurring was proposed by You and Kaveh. In 2003, a second-order derivative regularization^{[11]} (Hessian regularization) model was proposed, called the Lysaker, Lundervold and Tai (LLT) model, which can effectively suppress staircase artifacts and maintain the smoothness of flat areas of the image. In 2012, Hu et al^{[12]} improved the model for image recovery and designed a maximum-minimum (MM) algorithm to solve it. Their numerical experiments show that second-order TV has an advantage over classical TV regularization in avoiding staircase artifacts. In Ref. [13], the authors pointed out that high-order TV's ability to preserve edges is weaker than that of first-order TV regularization, and that it tends to smooth edges and other small details. Thus, hybrid models that combine higher-order TV with other regularizations have been proposed in the literature^{[13-15]}.
Among the hybrid models, by replacing the first-order TV with a second-order TV, Zhang et al^{[16]} obtained better results than the model with strongly convex terms presented in Ref. [17]. Furthermore, in Refs. [18] and [19], a combination of first-order and second-order TV priors is used to recover blurred images, and the minimization model is solved by the ADMM algorithm. To further improve the restoration performance, Liu et al^{[20]} investigated a spatially adaptive regularization parameter updating scheme. As an adaptive balancing scheme between first-order and second-order TV, Wang et al^{[21]} developed a Poisson denoising framework based on an iteratively weighted total generalized variation (TGV) model. In Ref. [22], Liu et al pointed out that TGV can maintain sharp edges and performs well in areas of gradual intensity slopes.
The TV and HOTV norms only contain finite differences in the horizontal and vertical directions, which loses some detail information in other directions. In order to incorporate more directional information in the gradient domain, in this paper we propose a novel shear high-order TV (SHOTV) norm which contains finite differences in different directions. Compared with high-order TV regularization, the proposed SHOTV regularization provides a more meaningful description of the gradient information, as it contains information about the different directions of the image gradients.
1 Preliminary
1.1 Shear Operator
Let ${S}_{\theta_d}$ be a shear operator with shear angle $\theta_d\in[0,\frac{\pi}{4}]$ (see Ref. [23]) and $d\in\{\text{'t'},\text{'b'},\text{'l'},\text{'r'}\}$:
$S_{\theta_d}:R^{n\times n}\mapsto R^{n\times n}:X\mapsto X^{\mathrm{shr}}$ (4)
where the $(\hat i,\hat j)$-th entry of $X^{\mathrm{shr}}$ is given, for $\hat i=1,\cdots,n$ and $\hat j=1,\cdots,n$, by $X^{\mathrm{shr}}_{\hat i,\hat j}:=X_{\bar i,\bar j}$ with
$\bar i:=\begin{cases}\left[\left(\hat i-1+\lfloor\frac{4}{\pi}\theta\hat j\rfloor\right)\ \mathrm{mod}\ n\right]+1, & \text{if } d=\text{'t'}\\ \left[\left(\hat i-1+\lfloor\frac{4}{\pi}\theta(n-\hat j)\rfloor\right)\ \mathrm{mod}\ n\right]+1, & \text{if } d=\text{'b'}\\ \hat i, & \text{otherwise}\end{cases}$ (5)
$\bar j:=\begin{cases}\left[\left(\hat j-1+\lfloor\frac{4}{\pi}\theta\hat i\rfloor\right)\ \mathrm{mod}\ n\right]+1, & \text{if } d=\text{'r'}\\ \left[\left(\hat j-1+\lfloor\frac{4}{\pi}\theta(n-\hat i)\rfloor\right)\ \mathrm{mod}\ n\right]+1, & \text{if } d=\text{'l'}\\ \hat j, & \text{otherwise}\end{cases}$ (6)
where $\lfloor\cdot\rfloor$ represents rounding the real number to the nearest integer toward zero. The parameter $d$ indicates the axis of shear (top 't', bottom 'b', left 'l', right 'r'), and $\theta_d$ determines the degree of shear. For example, $S_{\theta_d}X$ with $d=\text{'t'}$ means that the image $X$ is sheared along the top axis by $\theta_d$. In particular, $\theta_d=0$ means $S_{\theta_d}$ is the identity transformation, i.e., no shear. Figure 1 shows the results of the shear operator in different directions at an angle of $\frac{\pi}{4}$.
Fig. 1 Effect of the shear operator ${S}_{\theta_d}$ with different angles and directions: (a) Original image X; (b) $S_{\theta_d}X$ with $\theta_d=\frac{\pi}{4}$, $d=\text{'l'}$; (c) $S_{\theta_d}X$ with $\theta_d=\frac{\pi}{4}$, $d=\text{'r'}$
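The index arithmetic of Eqs. (5)-(6) is easy to misread, so a direct (unoptimized) NumPy transcription may help; `shear` is a hypothetical helper name, and the loops follow the paper's 1-based indices.

```python
import numpy as np

def shear(X, theta, d):
    """Shear operator S_{theta_d} of Eqs. (4)-(6): a periodic shift of rows
    (d in {'t','b'}) or columns (d in {'l','r'}) whose offset grows linearly
    with the column (resp. row) index; int() rounds toward zero."""
    n = X.shape[0]
    s = 4.0 / np.pi * theta
    out = np.empty_like(X)
    for i in range(1, n + 1):        # 1-based indices as in the paper
        for j in range(1, n + 1):
            ib, jb = i, j
            if d == 't':
                ib = (i - 1 + int(s * j)) % n + 1
            elif d == 'b':
                ib = (i - 1 + int(s * (n - j))) % n + 1
            elif d == 'r':
                jb = (j - 1 + int(s * i)) % n + 1
            elif d == 'l':
                jb = (j - 1 + int(s * (n - i))) % n + 1
            out[i - 1, j - 1] = X[ib - 1, jb - 1]
    return out
```

At `theta = 0` this is the identity; for any angle it only rearranges pixels, so the multiset of pixel values is preserved.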
1.2 Shear Gradient and Shear HOTV Norm
TV regularization was introduced by Rudin et al^{[2]} for image denoising and was further extended to other image restoration tasks. For a grayscale image $u\in R^{n\times n}$, its discrete total variation norm is
$\mathrm{TV}(u):=\sum_{i,j=1}^{n}\left|(Du)_{i,j}\right|=\sum_{i,j=1}^{n}\sqrt{\left|(D_{x}u)_{i,j}\right|^{2}+\left|(D_{y}u)_{i,j}\right|^{2}}$ (7)
where $(Du)_{i,j}=((D_{x}u)_{i,j},(D_{y}u)_{i,j})$ is defined as follows:
$(D_{x}u)_{i,j}=\begin{cases}u_{i+1,j}-u_{i,j}, & \text{if } i<n\\ u_{1,j}-u_{n,j}, & \text{if } i=n\end{cases}\qquad (D_{y}u)_{i,j}=\begin{cases}u_{i,j+1}-u_{i,j}, & \text{if } j<n\\ u_{i,1}-u_{i,n}, & \text{if } j=n\end{cases}$
The TV norm only contains finite differences in the horizontal and vertical directions, which loses some detailed information in other directions.
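As a quick illustration of Eq. (7), the periodic forward differences and the isotropic TV norm can be written with `np.roll`; this is a sketch under the paper's periodic boundary convention, with a helper name of our own choosing.

```python
import numpy as np

def tv_norm(u):
    """Isotropic discrete TV of Eq. (7) with periodic boundary conditions:
    D_x, D_y are forward differences that wrap around at the last row/column."""
    dx = np.roll(u, -1, axis=0) - u   # (D_x u)_{i,j} = u_{i+1,j} - u_{i,j}
    dy = np.roll(u, -1, axis=1) - u   # (D_y u)_{i,j} = u_{i,j+1} - u_{i,j}
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))
```

A constant image has zero TV, which is exactly why TV regularization favors piecewise-constant (and hence staircased) reconstructions.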
TV regularization preserves fine features and sharp edges, but it usually produces staircase artifacts that are not present in real images^{[24,25]}. In many applications^{[26,27]}, higher-order TV has been proposed to reduce staircase artifacts. The HOTV of $u$ is defined as
$\mathrm{HTV}(u)=\sum_{i,j=1}^{n}\left|(D^{2}u)_{i,j}\right|=\sum_{i,j=1}^{n}\sqrt{(D_{xx}u)_{i,j}^{2}+(D_{xy}u)_{i,j}^{2}+(D_{yx}u)_{i,j}^{2}+(D_{yy}u)_{i,j}^{2}}$ (8)
where $D^{2}u=(D_{xx}u,D_{xy}u,D_{yx}u,D_{yy}u)$ denotes the second-order difference of the $((j-1)n+i)$-th entry of the vector $u$. By combining the higher-order finite difference operator and the shear operator, we propose the shear high-order gradient (SHOG) operator in the following way:
$D_{S}\triangleq(D_{s}^{1},D_{s}^{2},D_{s}^{3},D_{s}^{4})=(S_{\theta_{d_1}}^{\mathrm T}D_{xx}S_{\theta_{d_1}},\ S_{\theta_{d_1}}^{\mathrm T}D_{xy}S_{\theta_{d_1}},\ S_{\theta_{d_2}}^{\mathrm T}D_{yx}S_{\theta_{d_2}},\ S_{\theta_{d_2}}^{\mathrm T}D_{yy}S_{\theta_{d_2}})$ (9)
where the superscript T denotes the conjugate transpose, and $S_{\theta_{d_1}}$ and $S_{\theta_{d_2}}$ denote shears along the 'top' (or 'bottom') and 'left' (or 'right') directions, respectively, with possibly different angles. For different types of texture, the shear directions are chosen empirically. Then, for a grayscale image $u$, the discrete shear high-order total variation (SHOTV) norm is
$\mathrm{SHOTV}(u)=\sum_{i=1}^{4}\Vert D_{S}^{i}u\Vert_{1}$ (10)
The shear gradient operator has the following properties:
Remark 1 (Matrix form of $S_{\theta_d}$) The operator $S_{\theta_d}$ is linear, so it can be represented as a matrix, which we also denote by $S_{\theta_d}$. Moreover, since $S_{\theta_d}$ is a permutation matrix, it is orthogonal.
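Remark 1 can be checked numerically: building the matrix of $S_{\theta_d}$ at $\theta_d=\frac{\pi}{4}$ one basis image $E_{ij}$ at a time (here collapsed into an index formula) yields a permutation matrix, hence $S^{\mathrm T}S=I$. The column-major vectorisation below is our own convention and may differ from the printed matrix by a reordering; `shear_matrix` is a hypothetical helper name.

```python
import numpy as np

def shear_matrix(n, d='l'):
    """Matrix of S_{theta_d} at theta_d = pi/4 in the basis {E_ij}: each basis
    image has a single nonzero pixel, so each row/column of S has a single 1,
    i.e. S is a permutation matrix (Remark 1)."""
    S = np.zeros((n * n, n * n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            # at theta = pi/4 the offset floor((4/pi)*theta*(n-i)) is just n - i
            jb = (j - 1 + (n - i)) % n + 1
            # column-major vectorisation: pixel (i,j) -> index (j-1)*n + i
            S[(j - 1) * n + (i - 1), (jb - 1) * n + (i - 1)] = 1.0
    return S

S = shear_matrix(3)
print(np.allclose(S.T @ S, np.eye(9)))   # orthogonality of a permutation matrix
```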
Theorem 1 The shear operator $S_{\theta_d}$ and the horizontal gradient operator $D_{xx}$ are commutative when $d=\text{'l'}$ or $\text{'r'}$; similarly, $S_{\theta_d}$ and the vertical operator $D_{yy}$ are commutative when $d=\text{'t'}$ or $\text{'b'}$, i.e.,
$S_{\theta_d}^{\mathrm T}D_{xx}S_{\theta_d}=D_{xx}\quad\text{when } d=\text{'l'} \text{ or } \text{'r'}$ (11)
$S_{\theta_d}^{\mathrm T}D_{yy}S_{\theta_d}=D_{yy}\quad\text{when } d=\text{'t'} \text{ or } \text{'b'}$ (12)
Proof First, we show that $S_{\theta_{\mathrm l}}D_{x}=D_{x}S_{\theta_{\mathrm l}}$, i.e., $S_{\theta_{\mathrm l}}^{\mathrm T}D_{x}S_{\theta_{\mathrm l}}=D_{x}$. In fact, for $u\in R^{n\times n}$, by the definitions of the gradient operator and the shear operator, we have
$(S_{\theta_{\mathrm l}}D_{x}u)_{i,j}=(D_{x}u)_{i,\bar j}=\begin{cases}u_{i+1,\bar j}-u_{i,\bar j}, & \text{if } i<n\\ 0, & \text{if } i=n\end{cases}$ (13)
where $\bar j$ is defined as in Eq. (6), and
$(D_{x}S_{\theta_{\mathrm l}}u)_{i,j}=\begin{cases}(S_{\theta_{\mathrm l}}u)_{i+1,j}-(S_{\theta_{\mathrm l}}u)_{i,j}, & \text{if } i<n\\ 0, & \text{if } i=n\end{cases}=\begin{cases}u_{i+1,\bar j}-u_{i,\bar j}, & \text{if } i<n\\ 0, & \text{if } i=n\end{cases}$ (14)
By Eq. (13) and Eq. (14), we obtain $S_{\theta_{\mathrm l}}D_{x}=D_{x}S_{\theta_{\mathrm l}}$, i.e., $S_{\theta_{\mathrm l}}^{\mathrm T}D_{x}S_{\theta_{\mathrm l}}=D_{x}$.
Finally, by the orthogonality of the matrix $S_{\theta_{\mathrm l}}$, we obtain $S_{\theta_{\mathrm l}}^{\mathrm T}D_{xx}S_{\theta_{\mathrm l}}=S_{\theta_{\mathrm l}}^{\mathrm T}D_{x}S_{\theta_{\mathrm l}}S_{\theta_{\mathrm l}}^{\mathrm T}D_{x}S_{\theta_{\mathrm l}}=D_{x}D_{x}=D_{xx}$
By Theorem 1, the shear gradient operator is essentially a generalization of the gradient operator.
The comparison between the HOG operator and the SHOG operator applied to the Barbara image is shown in Fig. 2. It can be observed that more directional information is captured and more detailed edge information is retained by the SHOG operator. From the local comparison, it can be seen that the SHOG operator preserves more information in the textured parts of the image.
Fig. 2 Comparison between the 2nd-order gradient operator and the shear 2nd-order gradient operator applied to the Barbara image u
1.3 BCCB Matrices
Under periodic boundary conditions, both the blurring matrix and the gradient matrices are Block Circulant with Circulant Blocks (BCCB), and thus are diagonalizable by the 2-D discrete Fourier transform. In this subsection, we show that the SHOG matrices are also BCCB when the shear angle is $\frac{\pi}{4}$.
Let $E_{ij}$ denote the matrix whose $(i,j)$ entry is 1 and all other entries are zero. Then $\{E_{ij}\mid i,j=1,2,3\}$ is an orthonormal basis of the linear space $R^{3\times 3}$. Let $S_{\theta_d}$ denote the matrix of the shear operator under the basis $\{E_{ij}\mid i,j=1,2,\cdots,n\}$, i.e.
$S_{\theta_d}(E_{11},E_{21},E_{31},E_{12},E_{22},E_{32},E_{13},E_{23},E_{33})=(E_{11},E_{21},E_{31},E_{12},E_{22},E_{32},E_{13},E_{23},E_{33})S_{\theta_d}$ (15)
where $S_{\theta_{\mathrm l}}$ indicates shearing at an angle of $\frac{\pi}{4}$ along the 'l' direction. $S_{\theta_{\mathrm l}}$ is a permutation matrix of the following form:
$${S}_{{\theta}_{\text{l}}}=\left[\begin{array}{ccccccccc}1&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&1&0\\0&0&0&0&0&1&0&0&0\\0&0&0&1&0&0&0&0&0\\0&1&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&1\\0&0&0&0&0&0&1&0&0\\0&0&0&0&1&0&0&0&0\\0&0&1&0&0&0&0&0&0\end{array}\right]$$
Since the gradient matrices are BCCB matrices under the periodic boundary condition, it can be verified that the ${D}_{S}^{i}\ (i=1,2,3,4)$ in formula (10) are also BCCB matrices when the shear angle is $\frac{\pi}{4}$. The shear gradient matrices ${D}_{S}^{1}={S}_{{\theta}_{\text{t}}}^{\text{T}}{D}_{xx}{S}_{{\theta}_{\text{t}}}$ and ${D}_{S}^{2}={S}_{{\theta}_{\text{t}}}^{\text{T}}{D}_{xy}{S}_{{\theta}_{\text{t}}}$ with ${\theta}_{\text{t}}=\frac{\pi}{4}$ have the following form:
$${D}_{S}^{1}=\left[\begin{array}{ccccccccc}-2&0&0&0&1&0&0&0&1\\0&-2&0&0&0&1&1&0&0\\0&0&-2&1&0&0&0&1&0\\0&0&1&-2&0&0&0&1&0\\1&0&0&0&-2&0&0&0&1\\0&1&0&0&0&-2&1&0&0\\0&1&0&0&0&1&-2&0&0\\0&0&1&1&0&0&0&-2&0\\1&0&0&0&1&0&0&0&-2\end{array}\right]$$
$${D}_{S}^{2}=\left[\begin{array}{ccccccccc}1&1&0&0&1&1&0&0&0\\0&1&1&1&0&1&0&0&0\\1&0&1&1&1&0&0&0&0\\0&0&0&1&1&0&0&1&1\\0&0&0&0&1&0&1&0&1\\0&0&0&1&0&1&1&1&0\\0&1&1&0&0&0&1&1&0\\1&0&1&0&0&0&0&1&1\\1&1&0&0&0&0&1&0&1\end{array}\right]$$
The shear gradient matrices ${D}_{S}^{3}={S}_{{\theta}_{\text{l}}}^{\text{T}}{D}_{yx}{S}_{{\theta}_{\text{l}}}$ and ${D}_{S}^{4}={S}_{{\theta}_{\text{l}}}^{\text{T}}{D}_{yy}{S}_{{\theta}_{\text{l}}}$ with shear angle ${\theta}_{\text{l}}=\frac{\pi}{4}$ have the following form:
$${D}_{S}^{3}=\left[\begin{array}{ccccccccc}1&1&0&1&0&0&0&1&0\\0&1&1&0&1&0&0&0&1\\1&0&1&0&0&1&1&0&0\\0&1&0&1&1&0&1&0&0\\0&0&1&0&1&1&0&1&0\\1&0&0&1&0&1&0&0&1\\1&0&0&0&1&0&1&1&0\\0&1&0&0&0&1&0&1&1\\0&0&1&1&0&0&1&0&1\end{array}\right]$$
$${D}_{S}^{4}=\left[\begin{array}{ccccccccc}-2&0&0&0&0&1&0&1&0\\0&-2&0&1&0&0&0&0&1\\0&0&-2&0&1&0&1&0&0\\0&1&0&-2&0&0&0&0&1\\0&0&1&0&-2&0&1&0&0\\1&0&0&0&0&-2&0&1&0\\0&0&1&0&1&0&-2&0&0\\1&0&0&0&0&1&0&-2&0\\0&1&0&1&0&0&0&0&-2\end{array}\right]$$
Unfortunately, when the shear angle $\theta_d\in(0,\frac{\pi}{4})$, the ${D}_{S}^{i}\ (i=1,2,3,4)$ do not always have BCCB structure. For example, when the shear angle is ${\theta}_{\text{l}}=40°$, the shear gradient matrix ${D}_{S}^{4}$ has the following form:
$${D}_{S}^{4}=\left[\begin{array}{ccccccccc}-2&1&0&0&0&0&0&0&1\\1&-2&0&0&0&0&0&0&1\\0&0&-2&1&1&0&0&0&0\\0&0&1&-2&1&0&0&0&0\\0&0&1&1&-2&0&0&0&0\\0&0&0&0&0&-2&1&1&0\\0&0&0&0&0&1&-2&1&0\\0&0&0&0&0&1&1&-2&0\\1&1&0&0&0&0&0&0&-2\end{array}\right]$$
It can be noticed that the above matrix is not a BCCB matrix.
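The practical payoff of the BCCB property is that such matrices are diagonalised by the 2-D DFT, so applying them (and, later, inverting them in the u-subproblem) reduces to pointwise operations in the Fourier domain. A minimal check with the periodic second difference $D_{xx}$ (stencil $[1,-2,1]$); the variable names are ours.

```python
import numpy as np

# A BCCB matrix acts as 2-D circular convolution, so it is diagonalised by the
# 2-D DFT: its eigenvalues are the DFT of its generating kernel.
n = 8
rng = np.random.default_rng(1)
u = rng.standard_normal((n, n))

# direct application of D_xx with periodic boundary conditions
Dxx_u = np.roll(u, -1, axis=0) - 2 * u + np.roll(u, 1, axis=0)

# same operator in the Fourier domain: multiply by the DFT of the stencil
kernel = np.zeros((n, n))
kernel[0, 0], kernel[1, 0], kernel[-1, 0] = -2.0, 1.0, 1.0
eig = np.fft.fft2(kernel)                      # eigenvalues of the BCCB matrix
Dxx_u_fft = np.real(np.fft.ifft2(eig * np.fft.fft2(u)))
print(np.allclose(Dxx_u, Dxx_u_fft))
```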
2 Proposed Model
Considering the advantages of high-order TV and the shear operator, we propose the following optimization model:
$\underset{u}{\mathrm{min}}\ \frac{\lambda}{2}\Vert Ku-f\Vert_{2}^{2}+\omega\Vert D_{S}u\Vert_{1}+I_{\Omega}(u)$ (16)
where $\Vert\cdot\Vert_{1}$ denotes the convex $\ell_{1}$ norm, and $\lambda,\omega>0$ are regularization parameters that balance the data fidelity term and the shear high-order regularization. $I_{\Omega}(u)$ is an indicator function that imposes a hard constraint on the objective function, defined as
$I_{\Omega}(x)=\begin{cases}0, & \text{if } x\in\Omega\\ \infty, & \text{if } x\notin\Omega\end{cases}$
where $\Omega=[0,255]$. To minimize problem (16) robustly, we adopt the framework of the alternating direction method of multipliers (ADMM)^{[28]}. Problem (16) can be converted into a constrained optimization problem:
$\underset{u}{\mathrm{min}}\ \frac{\lambda}{2}\Vert Ku-f\Vert_{2}^{2}+\omega\Vert w\Vert_{1}+I_{\Omega}(z)\quad \text{s.t. } w=D_{S}u,\ z=u$ (17)
The corresponding augmented Lagrangian function of (17) can be written as
$L_{\rho_1,\rho_2}(u,w,z;\mu_1,\mu_2)=\frac{\lambda}{2}\Vert Ku-f\Vert_{2}^{2}+\omega\Vert w\Vert_{1}+I_{\Omega}(z)-\mu_{1}^{\mathrm T}(w-D_{S}u)+\frac{\rho_1}{2}\Vert w-D_{S}u\Vert_{2}^{2}-\mu_{2}^{\mathrm T}(z-u)+\frac{\rho_2}{2}\Vert z-u\Vert_{2}^{2}$ (18)
where $\rho_1$, $\rho_2$ are penalty parameters, and $\mu_1$ and $\mu_2$ are the Lagrange multipliers associated with the constraints $w=D_{S}u$ and $z=u$, respectively. ADMM minimizes the Lagrangian function alternately over the variables $u$, $w$ and $z$. Since $w$ and $z$ are decoupled from each other, they can be solved for separately.
ADMM is an algorithm using proximal splitting techniques for solving the following convex optimization problem:
$\mathrm{min}\ \theta_1(x_1)+\theta_2(x_2)\quad \text{s.t. } A_1x_1+A_2x_2=c,\ x_i\in\chi_i,\ i=1,2$ (19)
where $\theta_i:\chi_i\to R$ are closed convex functions, $A_i$ are linear transforms, $\chi_i$ are nonempty closed convex sets, and $c$ is a given vector.
Part of the appeal of ADMM is that the algorithm lends itself to parallel implementation. The generic scheme is given in Algorithm 1.
Algorithm 1 ADMM
1 Initialization: starting point $(x_1^0,x_2^0,p^0)$.
2 Iteration:
1) $x_1^{k+1}=\mathrm{arg}\underset{x_1}{\mathrm{min}}\ \theta_1(x_1)+\frac{\delta}{2}\Vert A_1x_1+A_2x_2^{k}-c+\frac{p^{k}}{\delta}\Vert_2^2$
2) $x_2^{k+1}=\mathrm{arg}\underset{x_2}{\mathrm{min}}\ \theta_2(x_2)+\frac{\delta}{2}\Vert A_1x_1^{k+1}+A_2x_2-c+\frac{p^{k}}{\delta}\Vert_2^2$
3) $p^{k+1}=p^{k}+\delta(A_1x_1^{k+1}+A_2x_2^{k+1}-c)$
4) $k=k+1$
Until a stopping criterion is satisfied.
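Algorithm 1 can be exercised on a toy instance whose minimiser is known in closed form: $\min\ \frac{1}{2}\Vert x_1-a\Vert_2^2+\omega\Vert x_2\Vert_1$ s.t. $x_1-x_2=0$ (so $A_1=I$, $A_2=-I$, $c=0$), whose solution is soft-thresholding of $a$. The function names and parameter values below are illustrative, not the paper's.

```python
import numpy as np

def shrink(t, tau):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.maximum(np.abs(t) - tau, 0.0) * np.sign(t)

def admm_l1_toy(a, omega=1.0, delta=1.0, iters=200):
    """Algorithm 1 on min 0.5||x1 - a||^2 + omega*||x2||_1 s.t. x1 = x2;
    the known minimiser is shrink(a, omega), so convergence is easy to check."""
    x2 = np.zeros_like(a)
    p = np.zeros_like(a)
    for _ in range(iters):
        x1 = (a + delta * x2 - p) / (1.0 + delta)      # step 1): quadratic solve
        x2 = shrink(x1 + p / delta, omega / delta)     # step 2): prox of l1
        p = p + delta * (x1 - x2)                      # step 3): multiplier update
    return x1

a = np.array([3.0, -0.4, 1.5, -2.0])
print(np.allclose(admm_l1_toy(a), shrink(a, 1.0), atol=1e-6))
```

For this strongly convex toy problem the iterates contract geometrically, so a few hundred iterations reach the closed-form solution to high accuracy.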
In the following, we discuss the optimization strategies for solving each subproblem of (18) one by one.
1) u Sub-Problem
The subproblem of u is a least squares problem with the following form:
$u^{k+1}=\underset{u}{\mathrm{arg}\,\mathrm{min}}\ \frac{\lambda}{2}\Vert Ku-f\Vert_2^2-\mu_1^{\mathrm T}(w^{k}-D_{S}u)+\frac{\rho_1}{2}\Vert w^{k}-D_{S}u\Vert_2^2-\mu_2^{\mathrm T}(z^{k}-u)+\frac{\rho_2}{2}\Vert z^{k}-u\Vert_2^2$ (20)
From the optimality conditions, we have
$(\lambda K^{\mathrm T}K+\rho_1(D_{S})^{\mathrm T}D_{S}+\rho_2 I)u^{k+1}=\lambda K^{\mathrm T}f+\rho_1(D_{S})^{\mathrm T}\left(w^{k}-\frac{\mu_1}{\rho_1}\right)-\mu_2+\rho_2 z^{k}$ (21)
Considering the discussion in subsection 1.3, $(D_S)^{\mathrm T}D_S$ is a BCCB matrix only when the shear angle is $\frac{\pi}{4}$; when the shear angle is not $\frac{\pi}{4}$, computing the inverse of the left-hand side of (21) in each iteration is costly. In this paper, we therefore set the shear angle $\theta_d=\frac{\pi}{4}$. $K^{\mathrm T}K$ is also a BCCB matrix under the periodic boundary condition. As a consequence, Eq. (21) can be solved efficiently by one FFT operation and one inverse FFT operation as
$u^{k+1}=F^{-1}\left(\frac{F\left[\lambda K^{\mathrm T}f+\rho_1(D_{S})^{\mathrm T}\left(w^{k}-\frac{\mu_1}{\rho_1}\right)-\mu_2+\rho_2 z^{k}\right]}{\lambda F(K^{\mathrm T}K)+\rho_1 F((D_{S})^{\mathrm T}D_{S})+\rho_2}\right)$ (22)
where $F$ represents the two-dimensional discrete Fourier transform and $F^{-1}$ is the inverse transform.
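The structure of this solve can be sketched for generic BCCB operators: form the DFTs of the generating kernels once, then each iteration is a pointwise division in the Fourier domain. Here $K$ is a 3x3 average and $D$ collects first-order differences; all names and parameter values are illustrative, not the paper's exact operators.

```python
import numpy as np

def conv_p(u, k_hat):
    """Circular convolution via the 2-D DFT (k_hat = fft2 of the kernel)."""
    return np.real(np.fft.ifft2(k_hat * np.fft.fft2(u)))

n, lam, rho1, rho2 = 16, 1000.0, 10.0, 10.0
rng = np.random.default_rng(2)
rhs = rng.standard_normal((n, n))          # stands in for the bracket in (22)

# kernels of K (3x3 average, centred at the origin) and of D_x, D_y
K = np.zeros((n, n)); K[:3, :3] = 1.0 / 9.0; K = np.roll(K, (-1, -1), (0, 1))
Dx = np.zeros((n, n)); Dx[0, 0], Dx[1, 0] = -1.0, 1.0
Dy = np.zeros((n, n)); Dy[0, 0], Dy[0, 1] = -1.0, 1.0
K_hat, Dx_hat, Dy_hat = np.fft.fft2(K), np.fft.fft2(Dx), np.fft.fft2(Dy)

# eigenvalues of lam*K^T K + rho1*D^T D + rho2*I, all diagonalised by the DFT
denom = (lam * np.abs(K_hat) ** 2
         + rho1 * (np.abs(Dx_hat) ** 2 + np.abs(Dy_hat) ** 2) + rho2)
u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))

# check: u solves (lam*K^T K + rho1*D^T D + rho2*I) u = rhs
lhs = (lam * conv_p(u, np.conj(K_hat) * K_hat)
       + rho1 * conv_p(u, np.conj(Dx_hat) * Dx_hat + np.conj(Dy_hat) * Dy_hat)
       + rho2 * u)
print(np.allclose(lhs, rhs))
```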
2) w Sub-Problem
The w subproblem is a convex problem with the following form:
$w^{k+1}=\underset{w}{\mathrm{arg}\,\mathrm{min}}\ \frac{\rho_1}{2}\Vert w-D_{S}u^{k+1}\Vert_2^2-\mu_1^{\mathrm T}(w-D_{S}u^{k+1})+\omega\Vert w\Vert_1=\underset{w}{\mathrm{arg}\,\mathrm{min}}\ \frac{\rho_1}{2}\Vert w-\left(D_{S}u^{k+1}+\frac{\mu_1^{k}}{\rho_1}\right)\Vert_2^2+\omega\Vert w\Vert_1$ (23)
We write $t=D_{S}u^{k+1}+\frac{\mu_1^{k}}{\rho_1}$ for simplicity, and solve problem (23) using the two-dimensional shrinkage operator, i.e.,
$w^{k+1}=\mathrm{shrink}\left(t,\frac{\omega}{\rho_1}\right)=\mathrm{max}\left\{|t|-\frac{\omega}{\rho_1},0\right\}\circ\mathrm{sign}(t)$ (24)
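The shrinkage step is the proximal map of the $\ell_1$ norm; a scalar brute-force minimisation confirms the closed form (the grid and the constants below are arbitrary test values of our own choosing).

```python
import numpy as np

def shrink(t, tau):
    """Two-dimensional shrinkage of Eq. (24): the closed-form minimiser of
    (rho1/2)*||w - t||^2 + omega*||w||_1 with tau = omega/rho1."""
    return np.maximum(np.abs(t) - tau, 0.0) * np.sign(t)

# spot-check against brute-force grid minimisation of the scalar objective
t, tau = 1.3, 0.5
grid = np.linspace(-3, 3, 60001)
obj = 0.5 * (grid - t) ** 2 + tau * np.abs(grid)
print(abs(grid[np.argmin(obj)] - shrink(np.array([t]), tau)[0]) < 1e-3)
```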
3) z Sub-Problem
$z^{k+1}=\underset{z}{\mathrm{arg}\,\mathrm{min}}\ \frac{\rho_2}{2}\Vert z-u^{k+1}\Vert_2^2-\mu_2^{\mathrm T}(z-u^{k+1})+I_{\Omega}(z)=\underset{z}{\mathrm{arg}\,\mathrm{min}}\ \frac{\rho_2}{2}\Vert z-\left(u^{k+1}+\frac{\mu_2^{k}}{\rho_2}\right)\Vert_2^2+I_{\Omega}(z)$ (25)
The z subproblem is a projection problem. For an 8-bit image, the range of pixel values can be kept in [0, 255], and the closed-form solution can be obtained by using the projection operator
$z^{k+1}={\mathrm{Proj}}_{\Omega}\left(u^{k+1}+\frac{\mu_2^{k}}{\rho_2}\right)=\mathrm{min}\left(255,\mathrm{max}\left(u^{k+1}+\frac{\mu_2^{k}}{\rho_2},0\right)\right)$ (26)
The Lagrangian multipliers are updated as follows:
$\mu_1^{k+1}=\mu_1^{k}+(D_{S}u^{k+1}-w^{k+1})$ (27)
$\mu_2^{k+1}=\mu_2^{k}+(u^{k+1}-z^{k+1})$ (28)
The proposed algorithm is named SHOTV and is summarized in Algorithm 2.
Algorithm 2 SHOTV
1 Initialization: starting point $(u^0,w^0,z^0,\mu_1^0,\mu_2^0)$, $\rho_1>0$, $\rho_2>0$.
2 Iteration:
1) Compute $u^{k+1}$ according to (22).
2) Compute $w^{k+1}$ according to (24).
3) Compute $z^{k+1}$ according to (26).
4) Update $\mu_1^{k+1}$ according to (27).
5) Update $\mu_2^{k+1}$ according to (28).
6) Check the stopping criterion $\frac{\Vert u^{k}-u^{k+1}\Vert_2}{\Vert u^{k}\Vert_2}\le\delta$
7) $k=k+1$
8) end
3 Numerical Experiments
In this section, we present various numerical results to illustrate the performance of the proposed algorithm for non-blind image deblurring. All experiments are carried out in MATLAB R2019b on Windows 10 (64-bit), on a desktop equipped with an Intel Core i5-10300H CPU at 2.5 GHz and 16 GB of RAM. The source code for all competing methods was obtained from the original authors, and we use the default parameter settings.
3.1 Experiment Setting
The test images are shown in Fig. 3, and the pixel values of the images are normalized to [0, 1] for simplicity. Both the Peak Signal to Noise Ratio (PSNR) and the Structural Similarity metric (SSIM)^{[29]} are used to evaluate the quality of the reconstructed images. They are defined as follows:
$\mathrm{PSNR(dB)}=10\,{\mathrm{log}}_{10}\frac{({\mathrm{Max}}_{u})^2}{\Vert f-u\Vert_F^2}$ (29)
$\mathrm{SSIM}=\frac{(2\hat f\hat u+c_1)(2\sigma_{uf}+c_2)}{({\hat f}^2+{\hat u}^2+c_1)(\sigma_f^2+\sigma_u^2+c_2)}$ (30)
where $u$ is the clean image, $f$ is the recovered image, and ${\mathrm{Max}}_{u}$ is the largest possible value of the image $u$. $\hat u$ and $\hat f$ are the mean values of the images $u$ and $f$, $\sigma_u$ and $\sigma_f$ are the standard deviations of $u$ and $f$, respectively, and $\sigma_{uf}$ is the covariance of $u$ and $f$; $c_1,c_2>0$ are constants. In general, higher PSNR and SSIM values imply better image quality.
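A PSNR helper in the spirit of Eq. (29) is sketched below; note that we normalise the Frobenius error by the pixel count (the usual mean-squared-error reading for images in [0, 1]), which is our assumption about the intended convention, and the function name is ours.

```python
import numpy as np

def psnr(u, f, max_u=1.0):
    """PSNR of Eq. (29) in dB, with the squared Frobenius error divided by
    the number of pixels so the denominator is the mean squared error."""
    mse = np.mean((u - f) ** 2)
    return 10.0 * np.log10(max_u ** 2 / mse)

u = np.zeros((8, 8))
f = u + 0.1          # constant error of 0.1 -> MSE = 0.01 -> PSNR = 20 dB
print(round(psnr(u, f), 6))
```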
Fig. 3 The test images 
3.2 Parameter Selection Discussion
In this subsection, we focus on the choice of the parameters ρ_{1}, ρ_{2} and λ. For simplicity, we set ρ_{1}=ρ_{2}, and then investigate the sensitivity of the parameters ρ_{1} and λ. We test the selection of ρ_{1} on the House and Parrot images, which are corrupted by motion blur (fspecial('motion', 35, 50)) and Gaussian blur (fspecial('gaussian', [9, 9], 9)). Figure 4 shows the change of the PSNR values versus the parameter ρ_{1} under three different noise levels. We empirically set the parameter ρ_{1}∈[5, 30] in the subsequent experiments.
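For readers without MATLAB, the Gaussian kernel produced by fspecial('gaussian', [9, 9], 9) can be approximated in Python as below. This is an illustrative analogue under the usual isotropic-Gaussian definition, not the exact fspecial implementation; the motion kernel is more involved and is omitted:

```python
import numpy as np

def gaussian_kernel(size=9, sigma=9.0):
    """Normalized isotropic Gaussian kernel, a rough analogue of
    MATLAB's fspecial('gaussian', [size, size], sigma)."""
    ax = np.arange(size) - (size - 1) / 2.0   # coordinates centered at 0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                        # normalize to unit sum
```

The unit-sum normalization ensures blurring preserves the average image intensity.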
Fig. 4 The PSNR results versus the parameter ρ_{1} for fixed λ 
Then, we discuss the selection of the parameter λ. Our experiments are conducted on the Pirate and Barbara images with three different noise levels and two different blur kernels. For simplicity, we set the parameter ρ_{1}=ρ_{2}=10. Figure 5 shows the curves of the PSNR value with respect to the parameter λ under three different noise levels. For the parameter λ in the subsequent experiments, the empirical value is taken in the range [800, 2 000]. In fact, the above empirical parameter settings are also generally valid for the other test images.
Fig. 5 The PSNR results versus the parameter λ for fixed ρ_{1}=ρ_{2}=10 
3.3 Image Deblurring
In this subsection, we report the performance of the proposed SHOTV method for image deblurring and compare it with some leading deblurring methods, including FTVd^{[4]}, ADMM frame^{[7]}, ADMM TV^{[5]} and HOTV^{[18]}.
First, we test the 9×9 Gaussian blur with a standard deviation of 9. The blurred image is generated using the MATLAB function "imfilter" with periodic boundary conditions, and is then corrupted with zero-mean additive white Gaussian noise with variance σ^{2}=0.001. The PSNR and SSIM values for the deblurring experiments are reported in Table 1 and Table 2, where the numbers in bold indicate the best performance. One can observe that the proposed method outperforms the other competing methods. We choose the Fingerprint, Cameraman, Hill and Shirt images for the Gaussian deblurring and denoising comparison, and the recovered images are shown in Fig. 6. From the comparison, we can see that the images restored by our method retain more texture and have fewer artifacts. As can be seen from the Shirt image in Fig. 6, the proposed algorithm retains more texture information through the shear operator, resulting in a significant improvement in image deblurring.
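Under periodic boundary conditions the blur matrix is BCCB, so blurring reduces to a pointwise product in the 2-D Fourier domain. A minimal Python sketch of this degradation step (an analogue of imfilter with circular boundary handling, not the MATLAB code itself; the noise variance and seed are illustrative):

```python
import numpy as np

def blur_periodic(u, kernel):
    """Circular convolution of image u with a small kernel, done in the
    Fourier domain: valid because periodic boundary conditions make the
    blur matrix BCCB, hence diagonalized by the 2-D DFT."""
    H, W = u.shape
    pad = np.zeros((H, W))
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    # shift so the kernel center sits at the origin (symmetric PSF)
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(pad)))

def degrade(u, kernel, noise_var=0.001, seed=0):
    """Blur plus zero-mean additive white Gaussian noise."""
    rng = np.random.default_rng(seed)
    return blur_periodic(u, kernel) + rng.normal(0.0, np.sqrt(noise_var),
                                                 u.shape)
```

A delta kernel leaves the image unchanged, and any unit-sum kernel preserves the image mean exactly, which makes the routine easy to sanity-check.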
Fig. 6 Restoration of four degraded images with Gaussian blur and comparison with other algorithms From left to right: the noisy images, FTVd, ADMMFrame, ADMMTV, HOTV, and SHOTV; each value in parentheses represents the corresponding SSIM value of the restored image 
Similarly, we test motion blur with a length of 35 and a rotation angle of 50° in the blur kernel. The blurred image is generated using the MATLAB function "imfilter" with periodic boundary conditions, and is corrupted by zero-mean additive white Gaussian noise with variance σ^{2}=0.001. The PSNR and SSIM results for all competing deblurring methods are reported in Table 2. One can observe that the proposed SHOTV method obtains better results than the other competing methods. The blurred and deblurred images are shown in Fig. 7. Compared with the other methods, our algorithm preserves more texture structure, and the deblurred images contain fewer artifacts than those produced by first-order TV.
Fig. 7 Restoration of four degraded images with Motion blur and comparison with other algorithms From left to right: the noisy images, FTVd, ADMMFrame, ADMMTV, HOTV and SHOTV; each value in parentheses represents the corresponding SSIM value of the restored image 
PSNR (dB) and SSIM comparison of different methods for Gaussian deblurring
PSNR (dB) and SSIM comparison of different methods for motion deblurring
3.4 Convergence Analysis
In order to verify the convergence of the algorithm numerically, we apply Algorithm 2 to four different images corrupted by different blur kernels and additive white Gaussian noise with variance σ^{2}=0.001, where Gaussian blur is applied to the Fingerprint and Shirt images, and motion blur is applied to the Barbara and Zebra images. We use the quantity $\frac{\Vert {u}^{k+1}-{u}^{k}{\Vert}_{2}}{\Vert {u}^{k}{\Vert}_{2}}$ to compute the relative error at each iteration of Algorithm 2. As shown in Fig. 8, the relative error decreases as the number of iterations increases, which numerically illustrates the convergence of the algorithm.
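The relative error above, which also serves as the stopping test of Algorithm 2, is straightforward to compute. A small Python helper, assuming the iterates are stored as arrays:

```python
import numpy as np

def relative_error(u_new, u_old):
    """Relative change ||u^{k+1} - u^k||_2 / ||u^k||_2 between iterates."""
    return np.linalg.norm(u_new - u_old) / np.linalg.norm(u_old)

def track_errors(iterates):
    """Relative-error sequence from a list of iterates u^0, u^1, ...,
    e.g. for plotting a convergence curve like Fig. 8."""
    return [relative_error(iterates[k + 1], iterates[k])
            for k in range(len(iterates) - 1)]
```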
Fig. 8 Relative error values versus iteration number 
4 Conclusion
In this paper, a new SHOG operator is proposed by combining the high-order gradient operator and the shear operator. Through both theoretical analysis and experiments, we show that the proposed SHOG operator incorporates more directionality and can detect more abundant edge information. We extend the HOTV norm to the SHOTV norm based on the SHOG operator, and then employ the SHOTV norm as the regularization term to establish an image deblurring model. We also study some properties of the SHOG operator, and show that the SHOG matrices are Block Circulant with Circulant Blocks (BCCB) when the shear angle is $\frac{\text{\pi}}{4}$. The ADMM scheme is exploited to solve the proposed model efficiently. Extensive numerical experiments demonstrate the superiority of the proposed method in terms of both visual quality and quantitative metrics.
References
 Abbass M Y, Kim H W, Abdelwahab S A, et al. Image deconvolution using homomorphic technique [J]. Signal, Image and Video Processing, 2019, 13(4): 703-709.
 Rudin L I, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms [J]. Physica D: Nonlinear Phenomena, 1992, 60(1-4): 259-268.
 Du H, Liu Y. Minmax-concave total variation denoising [J]. Signal, Image and Video Processing, 2018, 12(6): 1027-1034.
 Wang Y, Yang J, Yin W, et al. A new alternating minimization algorithm for total variation image reconstruction [J]. SIAM Journal on Imaging Sciences, 2008, 1(3): 248-272.
 Jiao Y, Jin Q, Lu X, et al. Alternating direction method of multipliers for linear inverse problems [J]. SIAM Journal on Numerical Analysis, 2016, 54(4): 2114-2137.
 Chang H, Lou Y, Duan Y, et al. Total variation-based phase retrieval for Poisson noise removal [J]. SIAM Journal on Imaging Sciences, 2018, 11(1): 24-55.
 Chan R H, Riemenschneider S D, Shen L, et al. Tight frame: An efficient way for high-resolution image reconstruction [J]. Applied and Computational Harmonic Analysis, 2004, 17(1): 91-115.
 Chan T, Marquina A, Mulet P. High-order total variation-based image restoration [J]. SIAM Journal on Scientific Computing, 2000, 22(2): 503-516.
 Lefkimmiatis S, Ward J P, Unser M. Hessian Schatten-norm regularization for linear inverse problems [J]. IEEE Transactions on Image Processing, 2013, 22(5): 1873-1888.
 You Y L, Kaveh M. Fourth-order partial differential equations for noise removal [J]. IEEE Transactions on Image Processing, 2000, 9(10): 1723-1730.
 Lysaker M, Lundervold A, Tai X C. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time [J]. IEEE Transactions on Image Processing, 2003, 12(12): 1579-1590.
 Hu Y, Jacob M. Higher degree total variation (HDTV) regularization for image recovery [J]. IEEE Transactions on Image Processing, 2012, 21(5): 2559-2571.
 Shi Q, Sun N, Sun T, et al. Structure-adaptive CBCT reconstruction using weighted total variation and Hessian penalties [J]. Biomedical Optics Express, 2016, 7(9): 3299-3322.
 Papafitsoros K, Schönlieb C B. A combined first and second order variational approach for image reconstruction [J]. Journal of Mathematical Imaging and Vision, 2014, 48(2): 308-338.
 Bredies K, Kunisch K, Pock T. Total generalized variation [J]. SIAM Journal on Imaging Sciences, 2010, 3(3): 492-526.
 Zhang J, Ma M, Wu Z, et al. High-order total bounded variation model and its fast algorithm for Poissonian image restoration [J]. Mathematical Problems in Engineering, 2019, 2019: 1-11.
 Liu X, Huang L. Total bounded variation-based Poissonian images recovery by split Bregman iteration [J]. Mathematical Methods in the Applied Sciences, 2012, 35(5): 520-529.
 Adam T, Paramesran R, Mingming Y, et al. Combined higher order nonconvex total variation with overlapping group sparsity for impulse noise removal [J]. Multimedia Tools and Applications, 2021, 80(12): 18503-18530.
 Jiang L, Huang J, Lv X G, et al. Alternating direction method for the high-order total variation-based Poisson noise removal problem [J]. Numerical Algorithms, 2015, 69(3): 495-516.
 Liu J, Huang T Z, Lv X G, et al. High-order total variation-based Poissonian image deconvolution with spatially adapted regularization parameter [J]. Applied Mathematical Modelling, 2017, 45: 516-529.
 Wang X, Feng X, Wang W, et al. Iterative reweighted total generalized variation based Poisson noise removal model [J]. Applied Mathematics and Computation, 2013, 223: 264-277.
 Liu H, Tan S. Image regularizations based on the sparsity of corner points [J]. IEEE Transactions on Image Processing, 2018, 28(1): 72-87.
 Ono S, Miyata T, Yamada I. Cartoon-texture image decomposition using blockwise low-rank texture characterization [J]. IEEE Transactions on Image Processing, 2014, 23(3): 1128-1142.
 Liu P, Xiao L. Efficient multiplicative noise removal method using isotropic second order total variation [J]. Computers & Mathematics with Applications, 2015, 70(8): 2029-2048.
 Wang S, Huang T Z, Zhao X L, et al. Speckle noise removal in ultrasound images by first- and second-order total variation [J]. Numerical Algorithms, 2018, 78(2): 513-533.
 Mei J J, Huang T Z. Primal-dual splitting method for high-order model with application to image restoration [J]. Applied Mathematical Modelling, 2016, 40(3): 2322-2332.
 Li F, Shen C, Fan J, et al. Image restoration combining a total variational filter and a fourth-order filter [J]. Journal of Visual Communication and Image Representation, 2007, 18(4): 322-330.
 Gabay D. Chapter IX applications of the method of multipliers to variational inequalities [J]. Studies in Mathematics & Its Applications, 1983, 15: 299-331.
 Zhou W, Bovik A C, Sheikh H R, et al. Image quality assessment: From error visibility to structural similarity [J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.