Open Access
Wuhan Univ. J. Nat. Sci.
Volume 30, Number 5, October 2025
Page(s) 427 - 440
DOI https://doi.org/10.1051/wujns/2025305427
Published online 04 November 2025

© Wuhan University 2025

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

Image deblurring, a key challenge in image processing, aims to restore the underlying sharp image from a degraded observation. Blur usually arises from the convolution of the point spread function (PSF) of the image capture device with the true scene. This process can be expressed mathematically as:

$$f = Ku + n, \qquad (1)$$

where u represents the clear image, K the blur kernel, f the known blurred image, and n the noise. Non-blind image deblurring refers to restoring the clear image when both the blurred image and the blur kernel are known.
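As a quick illustration (not the paper's experimental setup), the degradation model (1) can be simulated by a periodic convolution; the 5×5 average PSF and the noise level 0.01 below are assumptions made only for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
u = rng.random((n, n))            # stand-in for the clear image u

# Toy 5x5 average PSF (an assumption for illustration), re-centered at the
# origin so that the circular convolution does not shift the image.
psf = np.zeros((n, n))
psf[:5, :5] = 1.0 / 25
psf = np.roll(psf, (-2, -2), axis=(0, 1))

# f = K u + n: circular convolution K u computed via FFT, plus Gaussian noise
f = np.real(np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(u)))
f += 0.01 * rng.standard_normal((n, n))
```

Averaging leaves the mean intensity essentially unchanged but flattens local variation, which is exactly the information the deblurring model must recover.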

Image deblurring is a challenging inverse problem; its difficulty lies in the fact that a solution may neither exist nor be unique, and small perturbations in the data can cause relatively large reconstruction errors. Hence, regularization techniques are essential for transforming this ill-posed inverse problem into a well-posed one. Many image deblurring methods have been introduced for various applications[1-4]. Prior information about the image is typically incorporated into the deblurring process through regularization, which involves minimizing the objective function:

$$\min_u \left\{ \frac{1}{2}\|Ku - f\|_2^2 + \lambda R(u) \right\}. \qquad (2)$$

In model (2), the first term is the data fidelity term, which ensures that the model accurately fits the data. The second term, R(u), acts as a regularization functional to prevent overfitting and improve the performance of the model. The regularization parameter λ > 0 adjusts the trade-off between the fidelity and regularization terms, ensuring that the model balances fitting the data against excessive complexity.

The key to enhancing image deblurring is to design an effective regularization term R(u) that enforces prior knowledge of the image. Classical regularization models, including quadratic Tikhonov regularization[5], the Mumford-Shah (MS) model[6] and total variation (TV) regularization[7], are widely used because they exploit local structural patterns to preserve image edges and restore smooth regions. However, these models often fall short in preserving complex image details and tend to over-smooth, largely because of their reliance on piecewise-constant assumptions, which oversimplify the image. To solve this problem, more advanced regularization strategies should be explored to model the statistical features of natural images more accurately, enhancing detail retention while minimizing over-smoothing. In our previous work[8], we extended the TV norm to the shear total variation (STV) norm by adding two shear gradient (SG) terms, and proposed a shear total variation deblurring model.

In fact, apart from the extensively employed L1 norm, various regularization terms exhibit distinct advantages in diverse scenarios. For example, the nuclear norm minimization (NNM) algorithm proposed by Ji et al[9] established a model by approximating the rank function with the nuclear norm and effectively restored the low-rank matrix to achieve image denoising. However, the fixed threshold processing method overlooks the disparity among singular values, potentially introducing bias in the data. To address this limitation, Gu et al[10] proposed the weighted nuclear norm minimization (WNNM) algorithm, which significantly improves the denoising effect by dynamically adjusting the weight of singular values. At the same time, Bai et al[11] innovatively introduced the adaptive correction term, effectively alleviating the excessive constraint of L1 norm and further optimizing the performance of the model.

In the pursuit of more efficient regularization methods, researchers have also proposed non-convex regularization terms, including L1/L2 regularization[12], L1/2 regularization[13], Lp regularization[14], and Logarithmic regularization term[15]. These methods, while maintaining sparsity, avoid some inherent defects of L1 norm and show superiority in some specific problems. These regularization strategies not only enrich the toolbox of image processing, but also promote the continuous optimization and innovation of algorithms, providing more possibilities for solving complex problems.

To exploit the sparsity of image gradients more efficiently, many researchers have studied regularization terms that significantly enhance the sparse representation of image gradients, thereby improving image processing results. As Huo et al[16] showed, L1-βLq minimization provides an effective way to address the q-ratio sparsity minimization model; the approach applies a new minimization method to signal recovery and image reconstruction, maintaining image detail while reducing noise and distortion. Li et al[8] proposed a new SG operator, experimentally verified to detect richer edge information than the gradient operator, and used the alternating direction method of multipliers (ADMM) to solve the resulting model efficiently. The SG operator has significant advantages in image deblurring: it captures gradient information in multiple directions and performs fine analysis of local areas, thereby precisely locating and addressing blurred regions. Compared with the gradient operator, it places greater emphasis on suppressing noise, reducing its interference with gradient information and providing more accurate gradients, which significantly enhances deblurring performance. The method of Joshi et al[17] focused on edge detection, extracting sharp edges by identifying areas with significant gradient changes in blurred images; optimized adjustments are then made along the detected edges to obtain a more accurate reconstruction.

In this paper, we propose a novel approach that combines the STV norm and L1/L2 regularization to enhance gradient information in multiple directions of the image and mitigate the excessive penalty that L1 regularization places on large-amplitude image gradients, thereby improving deblurring performance. The contributions of this paper are summarized as follows:

First, a brief introduction is given to the definitions of the shear operators and SG operators. In comparison to Ref. [8], we conduct a more comprehensive study of some important properties of the shear operator and the SG operator, and apply these properties to the deblurring model.

Second, a novel image deblurring model is proposed by integrating the STV norm with L1/L2 regularization to improve the efficacy of image deblurring. The proposed model is successfully solved by the ADMM algorithm.

Finally, we compare our algorithm with some existing algorithms in image deblurring experiments, and find that the deblurring effect of the proposed algorithm is significantly improved.

The structure of this paper is summarized as follows: Section 1 provides comprehensive definitions of the SG operator and the STV norm, and briefly describes the shear operator and Block-Circulant-with-Circulant-Blocks (BCCB) matrix, in order to lay a solid theoretical foundation for the subsequent content. In Section 2, the concrete matrix forms and some properties of the shear operators and the SG operators are described. In Section 3, we propose the L1/L2-STV deblurring model to deal with the image deblurring problem, and employ the ADMM to solve the non-convex optimization problem. In Section 4, we verify the validity of the model through a series of numerical experiments. These experimental results demonstrate the practicability of the model and further highlight the advantages of the model. Finally, Section 5 is the conclusion.

1 Preliminaries

1.1 Shear Operator

Let $S_{\theta_d}$ be a shear operator with shear angle $\theta_d \in [0, \pi]$ (see Ref. [18]) and $d \in \{t, b, l, r\}$:

$$S_{\theta_d}: \mathbb{R}^{n\times n} \rightarrow \mathbb{R}^{n\times n}: X \mapsto X^{\mathrm{shr}}, \qquad (3)$$

where the $(\hat i,\hat j)$-th entry of $X^{\mathrm{shr}}$ is given, for $\hat i = 1,\ldots,n$ and $\hat j = 1,\ldots,n$, by $X^{\mathrm{shr}}_{\hat i,\hat j} := X_{\bar i,\bar j}$ with

$$\bar i := \begin{cases} \big[\big(\hat i - 1 + \big[\tfrac{4}{\pi}\theta\,\hat j\big]\big) \bmod n\big] + 1, & \text{if } d = t,\\ \big[\big(\hat i - 1 + \big[\tfrac{4}{\pi}\theta\,(n-\hat j)\big]\big) \bmod n\big] + 1, & \text{if } d = b,\\ \hat i, & \text{otherwise}, \end{cases}$$

$$\bar j := \begin{cases} \big[\big(\hat j - 1 + \big[\tfrac{4}{\pi}\theta\,\hat i\big]\big) \bmod n\big] + 1, & \text{if } d = r,\\ \big[\big(\hat j - 1 + \big[\tfrac{4}{\pi}\theta\,(n-\hat i)\big]\big) \bmod n\big] + 1, & \text{if } d = l,\\ \hat j, & \text{otherwise}, \end{cases}$$

where $[\cdot]$ denotes rounding a real number toward zero. The parameter d identifies the direction of the shear axis, which can take four directions: top (t), bottom (b), left (l) and right (r). $\theta_d$ represents the strength of the shear, that is, the degree to which the image is deformed along that direction. For instance, $S_{\theta_t}X$ shears the image X through the top axis by the angle $\theta_t$. Note that when $\theta_d = 0$, the operator $S_{\theta_d}$ degenerates into the identity transformation, meaning that the image undergoes no shear deformation.
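The index mapping above can be implemented directly. The following sketch (an illustration, not the authors' code) applies $S_{\theta_d}$ to a square array; `int()` plays the role of rounding toward zero, with a tiny epsilon guarding against floating-point error at exact multiples of π/4:

```python
import numpy as np

def shear(X, theta, d):
    """Apply the shear operator S_{theta_d} of Eq. (3) to a square array X.

    theta is the shear angle in radians; d in {'t', 'b', 'l', 'r'} selects
    the shear axis (top, bottom, left, right)."""
    n = X.shape[0]
    r = lambda m: int(4 / np.pi * theta * m + 1e-9)   # [.] rounds toward zero
    Y = np.empty_like(X)
    for i in range(1, n + 1):          # 1-based indices, as in the definition
        for j in range(1, n + 1):
            ib, jb = i, j
            if d == 't':
                ib = (i - 1 + r(j)) % n + 1
            elif d == 'b':
                ib = (i - 1 + r(n - j)) % n + 1
            elif d == 'r':
                jb = (j - 1 + r(i)) % n + 1
            elif d == 'l':
                jb = (j - 1 + r(n - i)) % n + 1
            Y[i - 1, j - 1] = X[ib - 1, jb - 1]
    return Y
```

Since $S_{\theta_d}$ only permutes pixels, the output is a rearrangement of the input entries, and θ = 0 reduces to the identity.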

1.2 Shear Gradient Operator and Shear TV Norm

Total variation was first proposed by Rudin, Osher and Fatemi[19] to optimize image denoising and restoration problems. Given a grayscale image $u \in \mathbb{R}^{n\times n}$, its discrete TV norm is

$$\mathrm{TV}(u) = \sum_{i=1}^{2} \|D_i u\|_1, \qquad (4)$$

where $D_1 = D_x$ and $D_2 = D_y$ denote the linear first-order difference operators in the horizontal and vertical directions, respectively. Using only finite differences in these two directions simplifies the computation in (4), but it has limitations: details along other orientations are overlooked, so information in those directions is lost in the recovered or reconstructed image.

To address this issue, more sophisticated directional difference operators are employed to comprehensively capture the intricate details in the image, such as the SG operator[8], which takes the following form:

$$(D_3, D_4) = \big(S_{\theta_d}^T D_x S_{\theta_d},\; S_{\theta_d}^T D_y S_{\theta_d}\big), \qquad (5)$$

where the superscript T means transpose. Then, for the grayscale image u, its discrete STV norm is defined as

$$\mathrm{STV}(u) = \sum_{i=1}^{4} \|D_i u\|_1. \qquad (6)$$

Therefore, the SG operator is a generalization of the gradient operator and is used to capture directional features in images.
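To make the construction concrete, the STV norm of Eq. (6) can be sketched as follows. This is an illustration under assumed conventions (periodic forward differences, the top-axis shear used for both SG terms), not the authors' implementation:

```python
import numpy as np

def Dx(u):   # horizontal periodic forward difference
    return np.roll(u, -1, axis=1) - u

def Dy(u):   # vertical periodic forward difference (within columns)
    return np.roll(u, -1, axis=0) - u

def shear_t(u, theta, inverse=False):
    # S_{theta_t}: circularly shift column j (1-based) by [4/pi*theta*j] rows
    n = u.shape[0]
    out = np.empty_like(u)
    for j in range(n):
        s = int(4 / np.pi * theta * (j + 1) + 1e-9)
        out[:, j] = np.roll(u[:, j], s if inverse else -s)
    return out

def sg(diff, u, theta=np.pi / 4):
    # Shear gradient operator S^T D S of Eq. (5), applied to the image u
    return shear_t(diff(shear_t(u, theta)), theta, inverse=True)

def stv(u, theta=np.pi / 4):
    # Discrete STV norm of Eq. (6): two gradient terms plus two SG terms
    terms = (Dx(u), Dy(u), sg(Dx, u, theta), sg(Dy, u, theta))
    return sum(np.abs(t).sum() for t in terms)
```

Because circular column shifts commute with the within-column difference, `sg(Dy, u)` reproduces $D_y u$ exactly; this is the content of Property 5 in Section 2.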

1.3 The Advantages of SG Operator in Image Processing

The gradient operator of the image can help us better understand and analyze the edge and detail information of the image. Gradient operator is used in many ways in image processing tasks, including Tikhonov regularization[20] and TV regularization[7].

Figure 1 visually compares the gradient operator and the SG operator on the Barbara image. Comparing Fig. 1(a) and (d), the gradient operator is insufficient in capturing details and fails to preserve the subtle features of the image, which leads to unsatisfactory processing results. In contrast, in the images processed with the SG operator, shown in Fig. 1(b)-(c) and (e)-(f), the SG operator accurately captures and enhances the texture of the image, which noticeably improves its layering and detail expression. The operator not only improves the visual impact of the image, but also improves restoration by mining more image information, playing a key role in image processing tasks such as image restoration.

Fig. 1 The gradient operator and the shear gradient operator applied from different angles to the image u ('Barbara'): (a) $D_x u$; (b) $S_{\theta_b}^T D_x S_{\theta_b} u$ with $\theta_b = 45°$; (c) $S_{\theta_b}^T D_x S_{\theta_b} u$ with $\theta_b = 90°$; (d) $D_y u$; (e) $S_{\theta_l}^T D_y S_{\theta_l} u$ with $\theta_l = 45°$; (f) $S_{\theta_l}^T D_y S_{\theta_l} u$ with $\theta_l = 90°$

1.4 Definition and Diagonalizations of BCCB Matrix

Under periodic boundary conditions, both the blur matrix and the gradient matrices have the BCCB structure. This special structure means that the matrix can be diagonalized by the two-dimensional discrete Fourier transform (2D DFT).

Definition 1 (BCCB matrix) Let C be a block circulant matrix of the form

$$C = \begin{bmatrix} C_0 & C_{n-1} & C_{n-2} & \cdots & C_1\\ C_1 & C_0 & C_{n-1} & \cdots & C_2\\ \vdots & & & & \vdots\\ C_{n-2} & C_{n-3} & C_{n-4} & \cdots & C_{n-1}\\ C_{n-1} & C_{n-2} & C_{n-3} & \cdots & C_0 \end{bmatrix}, \qquad (7)$$

where each $C_i \in \mathbb{R}^{n\times n}$ is a circulant matrix. In this case, we call C a BCCB matrix.

It is straightforward to show that every BCCB matrix is a normal matrix, that is, $C^*C = CC^*$. Consequently, C possesses a unitary spectral decomposition:

$$C = F^* \Lambda F, \qquad F^*F = I, \qquad (8)$$

where F is the unitary two-dimensional DFT matrix, Λ is a diagonal matrix consisting of all the eigenvalues of C, and I is the identity matrix.
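The diagonalization can be checked numerically with the periodic forward-difference operator as the BCCB matrix (a sketch; the kernel convention below is an assumption):

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
u = rng.standard_normal((n, n))

# Periodic forward difference along axis 0, applied directly ...
Dx_u = np.roll(u, -1, axis=0) - u

# ... and applied as a BCCB operator: its eigenvalues are the 2-D FFT of
# the operator's periodic point-spread kernel.
kernel = np.zeros((n, n))
kernel[0, 0] = -1.0
kernel[-1, 0] = 1.0            # index -1 encodes the periodic wrap-around
eigenvalues = np.fft.fft2(kernel)
Dx_u_dft = np.real(np.fft.ifft2(eigenvalues * np.fft.fft2(u)))
```

The zero eigenvalue at the DC frequency reflects that a difference operator annihilates constant images.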

2 The Matrix Form and Properties of the Shear Operator and the SG Operator

In this section, we first discuss the matrix representation of the shear operator, and find that these matrices can be expressed in the form of special matrices. Next, we analyze the SG operator and also obtain its matrix form, which can also be represented by special matrices. Finally, we discuss the matrix forms and properties of shear operator and SG operator in different cases by changing the angle and direction.

2.1 The Matrix Form and Properties of the Shear Operator

Let $E_{ij}$ denote the matrix whose $(i,j)$ entry is 1 and all other entries are 0. The set $\{E_{ij} \mid i, j = 1, 2, \ldots, n\}$ forms an orthogonal basis of the linear space $\mathbb{R}^{n\times n}$. Further, let $S_{\theta_d}$ also denote the matrix of the shear operator under this basis, i.e.,

$$S_{\theta_d}(E_{11}, E_{21}, \ldots, E_{nn}) = (E_{11}, E_{21}, \ldots, E_{nn})\, S_{\theta_d}. \qquad (9)$$

When $\theta_d = 45°$, Eq. (9) yields the following matrix forms of $S_{\theta_d}$:

$$S_{\theta_l} = \begin{bmatrix} E_{11} & E_{nn} & E_{(n-1)(n-1)} & \cdots & E_{22}\\ E_{22} & E_{11} & E_{nn} & \cdots & E_{33}\\ \vdots & & & & \vdots\\ E_{nn} & E_{(n-1)(n-1)} & E_{(n-2)(n-2)} & \cdots & E_{11} \end{bmatrix}, \quad S_{\theta_r} = \begin{bmatrix} E_{11} & E_{22} & E_{33} & \cdots & E_{nn}\\ E_{nn} & E_{11} & E_{22} & \cdots & E_{(n-1)(n-1)}\\ \vdots & & & & \vdots\\ E_{22} & E_{33} & E_{44} & \cdots & E_{11} \end{bmatrix},$$

$$S_{\theta_t} = \begin{bmatrix} E & & & \\ & E_{n-1}^c & & \\ & & \ddots & \\ & & & E_1^c \end{bmatrix}, \quad S_{\theta_b} = \begin{bmatrix} E & & & \\ & E_1^c & & \\ & & \ddots & \\ & & & E_{n-1}^c \end{bmatrix}, \qquad (10)$$

where E is the identity matrix of order n and $E_i^c$ is the circulant matrix obtained by circularly shifting the elements of the array E along columns by i units. These matrices satisfy the following properties:

1) $E_{ii}E_{jj} = \begin{cases} 0, & \text{if } i \neq j,\\ E_{ii}, & \text{if } i = j, \end{cases}$ and $\sum_{i=1}^{n} E_{ii} = E$;

2) $E_i^c E_j^c = E_{(i+j) \bmod n}^c$ and $E_0^c = E$, for $i, j = 0, 1, 2, \ldots, n-1$;

3) $(E_i^c)^T = E_{n-i}^c$, for $i = 1, 2, \ldots, n-1$.
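Properties 2) and 3) are easy to verify numerically; a sketch with $E_i^c$ realized as a cyclically shifted identity (the shift direction is a convention chosen for this illustration):

```python
import numpy as np

def Ec(n, i):
    # E_i^c: the identity of order n with its rows circularly shifted by i units
    return np.roll(np.eye(n), -i, axis=0)
```

Because these matrices are powers of a single cyclic permutation, composition adds the shifts modulo n and transposition reverses the shift.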

Similarly, when $\theta_d = 90°$, the shear matrix $S_{\theta_d}$ has the following forms:

$$S_{\theta_l} = \begin{bmatrix} E_{11} & E_{22} & E_{33} & \cdots & E_{nn}\\ E_{nn} & E_{11} & E_{22} & \cdots & E_{(n-1)(n-1)}\\ \vdots & & & & \vdots\\ E_{22} & E_{33} & E_{44} & \cdots & E_{11} \end{bmatrix}, \quad S_{\theta_r} = \begin{bmatrix} E_{11} & E_{nn} & E_{(n-1)(n-1)} & \cdots & E_{22}\\ E_{22} & E_{11} & E_{nn} & \cdots & E_{33}\\ \vdots & & & & \vdots\\ E_{nn} & E_{(n-1)(n-1)} & E_{(n-2)(n-2)} & \cdots & E_{11} \end{bmatrix},$$

$$S_{\theta_t} = \begin{bmatrix} E & & & \\ & E_1^c & & \\ & & \ddots & \\ & & & E_{n-1}^c \end{bmatrix}, \quad S_{\theta_b} = \begin{bmatrix} E & & & \\ & E_{n-1}^c & & \\ & & \ddots & \\ & & & E_1^c \end{bmatrix}. \qquad (11)$$

From the shear matrix form obtained above, we can obtain some properties of the shear operator:

Property 1 (Matrix form of $S_{\theta_d}$) The operator $S_{\theta_d}$ is linear, so it can be represented by a matrix, also denoted $S_{\theta_d}$. Moreover, since $S_{\theta_d}$ is a permutation matrix, it is orthogonal.

Property 2 For any value of $\theta_d$, $S_{\theta_b} = S_{\theta_t}^T$ and $S_{\theta_l} = S_{\theta_r}^T$ hold.

Property 3 Regardless of the value of d, the shear operator with θ = 45° equals the transpose of the shear operator with θ = 90°, i.e., $S_{\theta_d}|_{\theta=45°} = S_{\theta_d}^T|_{\theta=90°}$.

Property 4 When both θ and d change, the shear operators satisfy

$$S_{\theta_t}|_{\theta=45°} = S_{\theta_b}|_{\theta=90°}, \quad S_{\theta_t}|_{\theta=90°} = S_{\theta_b}|_{\theta=45°}, \quad S_{\theta_r}|_{\theta=45°} = S_{\theta_l}|_{\theta=90°}, \quad S_{\theta_r}|_{\theta=90°} = S_{\theta_l}|_{\theta=45°}. \qquad (12)$$

2.2 The Matrix Form and Properties of the SG Operator

Given that the gradient matrices exhibit the BCCB structure under periodic boundary conditions, the matrices of the operators $D_3$ and $D_4$ in Eq. (5) also possess the BCCB structure. The matrix representation of the SG operator $D_3 = S_{\theta_d}^T D_x S_{\theta_d}$ with θ = 45° has the following forms:

$$D_3 = S_{\theta_l}^T D_x S_{\theta_l} = S_{\theta_r}^T D_x S_{\theta_r} = \begin{bmatrix} -E & E & 0 & \cdots & 0\\ 0 & -E & E & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & -E & E\\ E & 0 & \cdots & 0 & -E \end{bmatrix} = D_x,$$

$$D_3 = S_{\theta_t}^T D_x S_{\theta_t} = \begin{bmatrix} -E & E_{n-1}^c & 0 & \cdots & 0\\ 0 & -E & E_{n-1}^c & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & -E & E_{n-1}^c\\ E_{n-1}^c & 0 & \cdots & 0 & -E \end{bmatrix}, \quad D_3 = S_{\theta_b}^T D_x S_{\theta_b} = \begin{bmatrix} -E & E_1^c & 0 & \cdots & 0\\ 0 & -E & E_1^c & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & -E & E_1^c\\ E_1^c & 0 & \cdots & 0 & -E \end{bmatrix}. \qquad (13)$$

Similarly, the matrix representation of $D_4 = S_{\theta_d}^T D_y S_{\theta_d}$ with θ = 45° has the following forms:

$$D_4 = S_{\theta_l}^T D_y S_{\theta_l} = \begin{bmatrix} -E & 0 & \cdots & 0 & E_{n-1}^c\\ E_{n-1}^c & -E & \cdots & 0 & 0\\ \vdots & \ddots & \ddots & & \vdots\\ 0 & 0 & \cdots & E_{n-1}^c & -E \end{bmatrix}, \quad D_4 = S_{\theta_r}^T D_y S_{\theta_r} = \begin{bmatrix} -E & E_{n-1}^c & 0 & \cdots & 0\\ 0 & -E & E_{n-1}^c & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ E_{n-1}^c & 0 & \cdots & 0 & -E \end{bmatrix},$$

$$D_4 = S_{\theta_t}^T D_y S_{\theta_t} = S_{\theta_b}^T D_y S_{\theta_b} = \begin{bmatrix} -E{+}E_{n-1}^c & 0 & \cdots & 0\\ 0 & -E{+}E_{n-1}^c & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & -E{+}E_{n-1}^c \end{bmatrix} = D_y. \qquad (14)$$

When θ = 90°, $D_3$ and $D_4$ take the following forms:

$$D_3 = S_{\theta_l}^T D_x S_{\theta_l} = S_{\theta_r}^T D_x S_{\theta_r} = D_x,$$

$$D_3 = S_{\theta_t}^T D_x S_{\theta_t} = \begin{bmatrix} -E & E_1^c & 0 & \cdots & 0\\ 0 & -E & E_1^c & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ E_1^c & 0 & \cdots & 0 & -E \end{bmatrix}, \quad D_3 = S_{\theta_b}^T D_x S_{\theta_b} = \begin{bmatrix} -E & E_{n-1}^c & 0 & \cdots & 0\\ 0 & -E & E_{n-1}^c & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ E_{n-1}^c & 0 & \cdots & 0 & -E \end{bmatrix},$$

$$D_4 = S_{\theta_t}^T D_y S_{\theta_t} = S_{\theta_b}^T D_y S_{\theta_b} = D_y,$$

$$D_4 = S_{\theta_l}^T D_y S_{\theta_l} = \begin{bmatrix} -E & E_{n-1}^c & 0 & \cdots & 0\\ 0 & -E & E_{n-1}^c & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ E_{n-1}^c & 0 & \cdots & 0 & -E \end{bmatrix}, \quad D_4 = S_{\theta_r}^T D_y S_{\theta_r} = \begin{bmatrix} -E & 0 & \cdots & 0 & E_{n-1}^c\\ E_{n-1}^c & -E & \cdots & 0 & 0\\ \vdots & \ddots & \ddots & & \vdots\\ 0 & 0 & \cdots & E_{n-1}^c & -E \end{bmatrix}. \qquad (15)$$

In the same way, we can also deduce the properties of the SG operator from its matrix form mentioned above.

Property 5 The shear operator $S_{\theta_d}$ commutes with the horizontal gradient operator $D_x$ when d = "l" or "r"; similarly, $S_{\theta_d}$ commutes with the vertical gradient operator $D_y$ when d = "t" or "b", i.e.,

$$S_{\theta_d}^T D_x S_{\theta_d} = D_x \quad \text{when } d = \text{"l" or "r"}, \qquad (16)$$

$$S_{\theta_d}^T D_y S_{\theta_d} = D_y \quad \text{when } d = \text{"t" or "b"}. \qquad (17)$$

Proof When θ = 45° and d = "l" or "t", the desired results are obtained by directly multiplying out the three matrices in $S_{\theta_l}^T D_x S_{\theta_l}$ and $S_{\theta_t}^T D_y S_{\theta_t}$:

$$S_{\theta_l}^T D_x S_{\theta_l} = \begin{bmatrix} E_{11} & E_{22} & \cdots & E_{nn}\\ E_{nn} & E_{11} & \cdots & E_{(n-1)(n-1)}\\ \vdots & & & \vdots\\ E_{22} & E_{33} & \cdots & E_{11} \end{bmatrix} \begin{bmatrix} -E & E & & \\ & \ddots & \ddots & \\ & & -E & E\\ E & & & -E \end{bmatrix} \begin{bmatrix} E_{11} & E_{nn} & \cdots & E_{22}\\ E_{22} & E_{11} & \cdots & E_{33}\\ \vdots & & & \vdots\\ E_{nn} & E_{(n-1)(n-1)} & \cdots & E_{11} \end{bmatrix} = D_x,$$

$$S_{\theta_t}^T D_y S_{\theta_t} = \mathrm{diag}\big(E, E_1^c, \ldots, E_{n-1}^c\big)\; \mathrm{diag}\big({-E}{+}E_{n-1}^c, \ldots, {-E}{+}E_{n-1}^c\big)\; \mathrm{diag}\big(E, E_{n-1}^c, \ldots, E_1^c\big) = D_y. \qquad (18)$$

Similarly, the same result can be obtained when the angle and d are taken in other cases.

Property 6 For equal θ, the SG operators $D_3$ and $D_4$ satisfy the equality relationships

$$S_{\theta_t}^T D_x S_{\theta_t} = S_{\theta_r}^T D_y S_{\theta_r} \quad \text{when } \theta = 45°, \qquad (19)$$

$$S_{\theta_b}^T D_x S_{\theta_b} = S_{\theta_l}^T D_y S_{\theta_l} \quad \text{when } \theta = 90°. \qquad (20)$$

Property 7 When two shear operators share the same direction d and their angles differ by 3π/4 (with $\theta_d = k\pi/4$, $k = 0, 1, 2, \ldots$), the two shear operators are equal; the same is true of the SG operators. That is, the shear operator and the SG operators are periodic:

$$1)\; S_{\theta_d}|_{\theta=0°} = S_{\theta_d}|_{\theta=135°} = E, \quad S_{\theta_d}|_{\theta=45°} = S_{\theta_d}|_{\theta=180°}, \quad S_{\theta_d}|_{\theta=90°} = S_{\theta_d}|_{\theta=225°};$$

$$2)\; S_{\theta_d}^T D_x S_{\theta_d}|_{\theta=0°} = S_{\theta_d}^T D_x S_{\theta_d}|_{\theta=135°} = D_x, \quad S_{\theta_d}^T D_x S_{\theta_d}|_{\theta=45°} = S_{\theta_d}^T D_x S_{\theta_d}|_{\theta=180°}, \quad S_{\theta_d}^T D_x S_{\theta_d}|_{\theta=90°} = S_{\theta_d}^T D_x S_{\theta_d}|_{\theta=225°};$$

$$3)\; S_{\theta_d}^T D_y S_{\theta_d}|_{\theta=0°} = S_{\theta_d}^T D_y S_{\theta_d}|_{\theta=135°} = D_y, \quad S_{\theta_d}^T D_y S_{\theta_d}|_{\theta=45°} = S_{\theta_d}^T D_y S_{\theta_d}|_{\theta=180°}, \quad S_{\theta_d}^T D_y S_{\theta_d}|_{\theta=90°} = S_{\theta_d}^T D_y S_{\theta_d}|_{\theta=225°}. \qquad (21)$$

3 L1/L2-STV Model and Algorithm

In this section, we will propose a deblurring model that combines STV norm with L1/L2 minimization. Considering the advantages of STV norm and the SG operator, we propose the following optimization model:

$$\min_u \sum_{i=1}^{4} \frac{\|D_i u\|_1}{\|D_i u\|_2} \quad \mathrm{s.t.} \quad Ku = f, \qquad (22)$$

where $\|\cdot\|_1/\|\cdot\|_2$ denotes the ratio of the L1 norm to the L2 norm.

Two significant advantages of the L1/L2 minimization in problem (22) are its inherent scale invariance and its parameter-free formulation. However, the non-convexity and nonlinearity of the L1/L2 ratio complicate both the solution of the model and the theoretical analysis of convergence. To address this and ensure a robust optimization process, we adopt the ADMM framework. ADMM significantly reduces problem complexity by decomposing the problem into several simple sub-problems that are solved alternately, making it particularly suitable for tasks such as image deblurring and denoising; in addition, it handles non-convex optimization problems effectively. Specifically, with auxiliary variables the original problem is transformed into the constrained optimization problem:

$$\min_{u, P_i, Q_i} \sum_{i=1}^{4} \frac{\|P_i\|_1}{\|Q_i\|_2} \quad \mathrm{s.t.} \quad Ku = f, \; P_i = D_i u, \; Q_i = D_i u. \qquad (23)$$

The corresponding augmented Lagrangian function of (23) can be written as

$$\mathcal{L}_{\rho_1,\rho_2,\rho_3}(u, P_i, Q_i, \mu_i, \nu_i, \lambda) = \sum_{i=1}^{4}\frac{\|P_i\|_1}{\|Q_i\|_2} + \langle \lambda, Ku - f\rangle + \frac{\rho_1}{2}\|Ku - f\|_F^2 + \sum_{i=1}^{4}\Big\{\langle \mu_i, D_i u - P_i\rangle + \langle \nu_i, D_i u - Q_i\rangle + \frac{\rho_2}{2}\|D_i u - P_i\|_F^2 + \frac{\rho_3}{2}\|D_i u - Q_i\|_F^2\Big\},$$

which, after completing the squares and dropping constant terms, becomes

$$\sum_{i=1}^{4}\frac{\|P_i\|_1}{\|Q_i\|_2} + \frac{\rho_1}{2}\Big\|Ku - f + \frac{\lambda}{\rho_1}\Big\|_F^2 + \sum_{i=1}^{4}\Big\{\frac{\rho_2}{2}\Big\|D_i u - P_i + \frac{\mu_i}{\rho_2}\Big\|_F^2 + \frac{\rho_3}{2}\Big\|D_i u - Q_i + \frac{\nu_i}{\rho_3}\Big\|_F^2\Big\}, \qquad (24)$$

where $\rho_1$, $\rho_2$ and $\rho_3$ are penalty parameters, and λ, $\mu_i$ and $\nu_i$ are the Lagrange multipliers. For the optimization problem (24), our strategy is to decompose it into a series of independent sub-problems, each of which can be minimized separately. Based on this, the ADMM iterations are as follows:

$$u^{k+1} = \arg\min_{u}\; \frac{\rho_1}{2}\Big\|Ku - f + \frac{\lambda^k}{\rho_1}\Big\|_F^2 + \sum_{i=1}^{4}\Big\{\frac{\rho_2}{2}\Big\|D_i u - P_i^k + \frac{\mu_i^k}{\rho_2}\Big\|_F^2 + \frac{\rho_3}{2}\Big\|D_i u - Q_i^k + \frac{\nu_i^k}{\rho_3}\Big\|_F^2\Big\}, \qquad (25)$$

$$P_i^{k+1} = \arg\min_{P_i}\; \sum_{i=1}^{4}\frac{\|P_i\|_1}{\|Q_i^k\|_2} + \sum_{i=1}^{4}\frac{\rho_2}{2}\Big\|D_i u^{k+1} - P_i + \frac{\mu_i^k}{\rho_2}\Big\|_F^2, \qquad (26)$$

$$Q_i^{k+1} = \arg\min_{Q_i}\; \sum_{i=1}^{4}\frac{\|P_i^{k+1}\|_1}{\|Q_i\|_2} + \sum_{i=1}^{4}\frac{\rho_3}{2}\Big\|D_i u^{k+1} - Q_i + \frac{\nu_i^k}{\rho_3}\Big\|_F^2, \qquad (27)$$

$$\lambda^{k+1} = \lambda^k + \rho_1\big(Ku^{k+1} - f\big), \qquad (28)$$

$$\mu_i^{k+1} = \mu_i^k + \rho_2\big(D_i u^{k+1} - P_i^{k+1}\big), \qquad (29)$$

$$\nu_i^{k+1} = \nu_i^k + \rho_3\big(D_i u^{k+1} - Q_i^{k+1}\big). \qquad (30)$$

Next, we will discuss the optimization strategies for solving each sub-problem one by one.

3.1 u Sub-Problem

Given $P_i^k, Q_i^k, \mu_i^k, \nu_i^k, \lambda^k$, the minimization of $\mathcal{L}_{\rho_1,\rho_2,\rho_3}(u, P_i, Q_i, \mu_i, \nu_i, \lambda)$ with respect to u is a least squares problem, and the corresponding normal equation is

$$\Big[\rho_1 K^T K + (\rho_2+\rho_3)\sum_{i=1}^{4} D_i^T D_i\Big] u = K^T(\rho_1 f - \lambda) + \sum_{i=1}^{4} D_i^T\big(\rho_2 P_i + \rho_3 Q_i - \mu_i - \nu_i\big). \qquad (31)$$

Under periodic boundary conditions, the fast Fourier transform (FFT) provides an effective way of handling BCCB matrices. The procedure can be distilled as follows. First, compute the right-hand side of Eq. (31) and apply the forward FFT to it. Second, divide component-wise by the eigenvalues of $\rho_1 K^T K + (\rho_2+\rho_3)\sum_{i=1}^4 D_i^T D_i$ to obtain $\mathcal{F}u^{k+1}$. Finally, apply the inverse FFT to $\mathcal{F}u^{k+1}$, converting the result back from the frequency domain to obtain the new iterate $u^{k+1}$.

As a consequence, Eq. (31) can be solved efficiently with one FFT operation and one inverse FFT operation as

$$u^{k+1} = \mathcal{F}^{-1}\left( \frac{\mathcal{F}\big[K^T(\rho_1 f - \lambda^k) + \sum_{i=1}^{4} D_i^T(\rho_2 P_i^k + \rho_3 Q_i^k - \mu_i^k - \nu_i^k)\big]}{\mathcal{F}\big[\rho_1 K^T K + (\rho_2+\rho_3)\sum_{i=1}^{4} D_i^T D_i\big]} \right), \qquad (32)$$

where $\mathcal{F}$ denotes the 2D DFT and $\mathcal{F}^{-1}$ its inverse.
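The FFT-based solve can be sketched in a deliberately simplified setting: the blur K is taken as the identity PSF and the two plain gradient operators stand in for the four $D_i$ (both assumptions made only for this illustration):

```python
import numpy as np

n = 16
rho1, rho2, rho3 = 1.0, 0.5, 0.5
rng = np.random.default_rng(1)

def eigvals(entries):
    # Eigenvalues of a BCCB operator = 2-D FFT of its periodic kernel, given
    # as {(row, col): value}; negative indices encode the wrap-around.
    k = np.zeros((n, n))
    for (i, j), v in entries.items():
        k[i % n, j % n] = v
    return np.fft.fft2(k)

eK = eigvals({(0, 0): 1.0})                   # toy blur: identity PSF
eDx = eigvals({(0, 0): -1.0, (0, -1): 1.0})   # horizontal periodic difference
eDy = eigvals({(0, 0): -1.0, (-1, 0): 1.0})   # vertical periodic difference

# Spectrum of the left-hand operator of Eq. (31); K^T K and D^T D have
# eigenvalues |eK|^2 and |eD|^2 in the same DFT basis.
eA = rho1 * np.abs(eK) ** 2 + (rho2 + rho3) * (np.abs(eDx) ** 2 + np.abs(eDy) ** 2)

u_true = rng.standard_normal((n, n))
rhs = np.real(np.fft.ifft2(eA * np.fft.fft2(u_true)))   # right-hand side A u
u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / eA))        # one FFT + one inverse FFT
```

Recovering u_true exactly confirms that the whole linear system reduces to a single element-wise division in the frequency domain.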

3.2 P_i Sub-Problem

The sub-problem Pi is equivalent to

$$P_i^{k+1} = \arg\min_{P_i}\; \sum_{i=1}^{4}\|P_i\|_1 + \sum_{i=1}^{4}\frac{\rho_2\|Q_i^k\|_2}{2}\Big\|P_i - \Big(D_i u^{k+1} + \frac{\mu_i^k}{\rho_2}\Big)\Big\|_F^2, \qquad (33)$$

and it admits a closed-form solution via the soft-thresholding operation, succinctly expressed as

$$P_i^{k+1} = \max\Big\{\Big|D_i u^{k+1} + \frac{\mu_i^k}{\rho_2}\Big| - \frac{1}{\rho_2\|Q_i^k\|_2},\; 0\Big\} \circ \mathrm{sgn}\Big(D_i u^{k+1} + \frac{\mu_i^k}{\rho_2}\Big), \qquad (34)$$

where the symbol "∘" denotes the Hadamard product, i.e., the element-wise multiplication of two matrices or vectors of the same dimension, and "sgn" denotes the sign function, which extracts the sign of a real number.
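The thresholding step in isolation; a minimal sketch of the closed form in Eq. (34):

```python
import numpy as np

def soft_threshold(x, t):
    # Element-wise max{|x| - t, 0} * sgn(x): the proximal map of t*||.||_1
    return np.maximum(np.abs(x) - t, 0.0) * np.sign(x)
```

In the $P_i$ update, x is $D_i u^{k+1} + \mu_i^k/\rho_2$ and the threshold is $t = 1/(\rho_2\|Q_i^k\|_2)$.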

3.3 Q_i Sub-Problem

For the sub-problem $Q_i$, let $c_k = \|P_i^{k+1}\|_1$ and $d_k = D_i u^{k+1} + \nu_i^k/\rho_3$; the minimization problem simplifies to

$$Q_i^{k+1} = \arg\min_{Q_i}\; \sum_{i=1}^{4}\frac{c_k}{\|Q_i\|_2} + \sum_{i=1}^{4}\frac{\rho_3}{2}\|Q_i - d_k\|_F^2. \qquad (35)$$

When $d_k = 0$, any vector $Q_i$ with $\|Q_i\|_2 = \sqrt[3]{c_k/\rho_3}$ minimizes the objective. When $c_k = 0$, $Q_i = d_k$ is the solution. When $c_k \neq 0$ and $d_k \neq 0$, setting the derivative with respect to $Q_i$ to zero gives

$$\Big(-\frac{c_k}{\|Q_i\|_2^3} + \rho_3\Big) Q_i = \rho_3 d_k. \qquad (36)$$

Therefore, there exists a positive number $\tau_k$ such that $Q_i = \tau_k d_k$, and Eq. (36) becomes

$$\Big(-\frac{c_k}{(\tau_k)^3\|d_k\|_2^3} + \rho_3\Big) \tau_k d_k = \rho_3 d_k. \qquad (37)$$

Given $d_k$, set $\eta_k = \|d_k\|_2$ and $M_k = \dfrac{c_k}{\rho_3(\eta_k)^3}$. Then $\tau_k$ is the root of

$$\tau^3 - \tau^2 - M_k = 0. \qquad (38)$$

Let $F(\tau) = \tau^3 - \tau^2 - M_k$; the equation $F(\tau) = 0$ has exactly one real solution, with the closed form

$$\tau = \frac{1}{3} + \frac{1}{3}\Big(C_k + \frac{1}{C_k}\Big), \qquad (39)$$

where $C_k = \sqrt[3]{\dfrac{27M_k + 2 + \sqrt{(27M_k + 2)^2 - 4}}{2}}$. In summary, we have

$$Q_i^{k+1} = \begin{cases} e_k, & \text{if } d_k = 0,\\ \tau_k d_k, & \text{if } d_k \neq 0, \end{cases} \qquad (40)$$

where $e_k$ is a random vector with L2 norm $\sqrt[3]{c_k/\rho_3}$.
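The $Q_i$ update can be sketched as follows: `tau_root` implements the closed form (39), and the stationarity condition (36) provides a check on the result (a sketch, not the authors' code):

```python
import numpy as np

def tau_root(M):
    # Unique real root of tau^3 - tau^2 - M = 0 for M >= 0 (Eqs. (38)-(39))
    C = ((27 * M + 2 + np.sqrt((27 * M + 2) ** 2 - 4)) / 2) ** (1 / 3)
    return 1 / 3 + (C + 1 / C) / 3

def q_update(c, d, rho3):
    # Minimizer of c/||Q||_2 + (rho3/2)*||Q - d||_F^2, following Eq. (40)
    norm_d = np.linalg.norm(d)
    if norm_d == 0:      # any point on the sphere of radius (c/rho3)^(1/3)
        e = np.random.default_rng().standard_normal(d.shape)
        return (c / rho3) ** (1 / 3) * e / np.linalg.norm(e)
    if c == 0:
        return d.copy()
    M = c / (rho3 * norm_d ** 3)
    return tau_root(M) * d
```

Note that the case $c_k = 0$ is also covered by the closed form, since $M_k = 0$ gives $\tau = 1$ and hence $Q_i = d_k$.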

3.4 λ, μ_i and ν_i Update

We update λ, $\mu_i$ and $\nu_i$ by Eqs. (28)-(30), where $i = 1, 2, 3, 4$.

During the iterative process, we monitor the change of u. Once

$$\frac{\|u^{k+1} - u^k\|}{\max\{\|u^k\|, 1\}} < \epsilon, \qquad (41)$$

the algorithm terminates, where ϵ is a predefined small positive number that determines when the iteration should stop. Reaching (41) indicates that the algorithm has arrived at a steady state: further iterations would no longer significantly improve the result but would add unnecessary computational cost. This strategy ensures the convergence of the algorithm while avoiding over-computation and improving the efficiency of resource utilization. The whole procedure is summarized in Algorithm 1.
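The stopping rule (41) as a one-line helper (a sketch; the default ϵ = 1e-4 is an assumption):

```python
import numpy as np

def converged(u_new, u_old, eps=1e-4):
    # Relative-change stopping criterion of Eq. (41)
    return np.linalg.norm(u_new - u_old) / max(np.linalg.norm(u_old), 1.0) < eps
```

The max(·, 1) in the denominator keeps the criterion well defined even when the current iterate is near zero.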

4 Experiments

In this section, we perform a series of numerical experiments to show the performance of the proposed model. All experiments were run on a laptop with an AMD Ryzen 7840H CPU and 16 GB of RAM, running Windows 11 (64-bit) and MATLAB R2019b.

Algorithm 1: STV-L1/L2
Initialization: Starting point $(u^0, P_i^0, Q_i^0, \lambda^0, \mu_i^0, \nu_i^0)$, $\rho_1 > 0$, $\rho_2 > 0$, $\rho_3 > 0$.
Iteration:
1. Compute $u^{k+1}$ by (31) and (32).
2. Compute $P_i^{k+1}$ by (34).
3. Compute $Q_i^{k+1}$ by (40).
4. Update $\lambda^{k+1}$ by (28).
5. Update $\mu_i^{k+1}$ by (29).
6. Update $\nu_i^{k+1}$ by (30).
7. k = k + 1.
Until the stopping criterion (41) is satisfied.

4.1 Parameter Selection

In this subsection, we focus on the selection of several key parameters. To ensure that the shear matrix is a BCCB matrix and allows fast computation, we set $\theta_d = k\pi/4$ ($k = 1, 2, 3, 4$) in the experiments and select the $\theta_d$ that achieves the best performance.

Then, we used the method of controlling variables to conduct a sensitivity analysis of the parameters $\rho_1$, $\rho_2$, and $\rho_3$ and assess their roles in image restoration. The experiment involved two images: Boat, degraded by motion blur ("motion", 35, 50), and Monarch, degraded by Gaussian blur ("Gaussian", [20 20], 10). Through these experiments, we discuss the influence of $\rho_1$, $\rho_2$ and $\rho_3$ on image restoration under different noise levels and types of blur kernels. Figures 2, 3, and 4 illustrate how the peak signal-to-noise ratio (PSNR) varies with $\rho_1$, $\rho_2$, and $\rho_3$, respectively, at different noise levels. We can see that $\rho_1$ is by far the most sensitive parameter and plays a crucial role in restoring a clear, sharp image from the blurred one.

Fig. 2 The PSNR results versus the parameter ρ1 for fixed ρ2 and ρ3

Fig. 3 The PSNR results versus the parameter ρ2 for fixed ρ1 and ρ3

Fig. 4 The PSNR results versus the parameter ρ3 for fixed ρ1 and ρ2

4.2 Comparisons with Other TV-Based Methods

In this experiment, the test images are drawn mainly from the Set12 dataset and the USC-SIPI image dataset, both widely used in image processing and computer vision for their diverse, high-quality samples. Figure 5 shows the test images used in the experiments and their degraded versions under different blur kernels. Specifically, Fig. 5(g)-(l) show the blur effects generated with the fspecial function in MATLAB, including ("motion", 35, 50), ("average", 10) and ("Gaussian", [20 20], 10).

Fig. 5 The test images and the corresponding blurred images

Figure 6 presents the original "Leaves" image and its Gaussian-blurred version with a noise level of 0.256, along with the deblurring results obtained using FTVd[21], ADMM TV[22], ADMM STV[8], and our proposed method. Figure 7 displays the original "Barbara" image and its "average"-blurred version with a noise level of 0.512, alongside the results of the comparative methods and our proposed method. It can be observed that our algorithm yields more pronounced improvements in visual quality than the other algorithms. In terms of visual contrast, the deblurred images obtained by ADMM TV exhibit blocky artifacts, while the FTVd reconstruction appears overly cartoon-like. Although the ADMM STV method yields slightly better results than the other algorithms, some artifacts still persist. Our proposed STV-L1/L2 method generates images with better suppression of staircase artifacts and enhanced preservation of fine details.

Fig. 6 Visual effects comparison of Leaves by different methods with Gaussian blur (the noise level is 0.256)

Fig. 7 Visual comparison of Barbara by different methods with average blur (the noise level is 0.512)

Table 1 presents the PSNR, SSIM, and CPU time obtained by the FTVd, ADMM TV, ADMM STV, and proposed algorithms on blurred images generated from different images and blur kernels. It is evident from Table 1 that the proposed STV-L1/L2 algorithm consistently achieves higher PSNR and SSIM values than the other algorithms. The improvement in quantitative indicators and visual effects is mainly due to the SG operator, which captures more directional information, and the L1/L2 regularizer, which avoids over-penalizing high-amplitude gradients. In terms of computation speed, our algorithm is slightly slower than the ADMM STV algorithm, but it achieves better deblurring.

Table 1

The results of PSNR, SSIM, and CPU time (t) with different methods

5 Conclusion

In this paper, the shear operator, the SG operator and their matrix representations are discussed, and their properties are summarized. Further, the application of the SG operator in image deblurring is described. We combined the STV norm with L1/L2 minimization and proposed the STV-L1/L2 deblurring model, which not only exploits the advantages of L1/L2 minimization but also adopts the STV norm to maintain sharp gradient edges and enhance the directional features of the image. This method proves to be an effective non-blind image deblurring solution. The ratio of L1 to L2 is non-convex and nonlinear, which makes the model challenging to solve, but the ADMM algorithm handles this difficulty effectively. Experimental results show that the proposed algorithm outperforms existing image deblurring techniques on many objective and perceptual quality metrics.

The data that support the findings of this study are openly available on the website: https://github.com/zt9877/STV-L1-L2.git.

References

  1. Wang Y T, Wang Z Y, Tao D P, et al. AllFocus: Patch-based video out-of-focus blur reconstruction[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2017, 27(9): 1895-1908.
  2. Liu Y Q, Du X, Shen H L, et al. Estimating generalized Gaussian blur kernels for out-of-focus image deblurring[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(3): 829-843.
  3. Tang Z J, Yan S Y, Xu C Q. Adaptive super-resolution image reconstruction based on fractal theory[J]. Displays, 2023, 80: 102544.
  4. Gao Y C, Ji H B, Liu W J. Persymmetric adaptive subspace detectors for range-spread targets[J]. Digital Signal Processing, 2019, 89: 116-123.
  5. Tikhonov A N, Glasko V B. Use of the regularization method in non-linear problems[J]. USSR Computational Mathematics and Mathematical Physics, 1965, 5(3): 93-107.
  6. Mumford D, Shah J. Optimal approximations by piecewise smooth functions and associated variational problems[J]. Communications on Pure and Applied Mathematics, 1989, 42(5): 577-685.
  7. Chan T, Esedoglu S, Park F, et al. Total variation image restoration: Overview and recent developments[M]//Handbook of Mathematical Models in Computer Vision. New York: Springer-Verlag, 2005: 17-31.
  8. Li W Y, Zhang T, Gao Q L. Non-blind image deblurring via shear total variation norm[J]. Wuhan University Journal of Natural Sciences, 2024, 29(3): 219-227.
  9. Ji H, Liu C Q, Shen Z W, et al. Robust video denoising using low rank matrix completion[C]//2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2010: 1791-1798.
  10. Gu S H, Xie Q, Meng D Y, et al. Weighted nuclear norm minimization and its applications to low level vision[J]. International Journal of Computer Vision, 2017, 121(2): 183-208.
  11. Bai M R, Zhang X J, Shao Q Q. Adaptive correction procedure for TVL1 image deblurring under impulse noise[J]. Inverse Problems, 2016, 32(8): 085004.
  12. Rahimi Y, Wang C, Dong H B, et al. A scale-invariant approach for sparse signal recovery[J]. SIAM Journal on Scientific Computing, 2019, 41(6): A3649-A3672.
  13. Xu Z B, Chang X Y, Xu F M, et al. L1/2 regularization: A thresholding representation theory and a fast solver[J]. IEEE Transactions on Neural Networks and Learning Systems, 2012, 23(7): 1013-1027.
  14. Zuo W M, Meng D Y, Zhang L, et al. A generalized iterated shrinkage algorithm for non-convex sparse coding[C]//2013 IEEE International Conference on Computer Vision. New York: IEEE, 2013: 217-224.
  15. Jia X X, Feng X C, Wang W W, et al. Bayesian inference for adaptive low rank and sparse matrix estimation[J]. Neurocomputing, 2018, 291: 71-83.
  16. Huo L M, Chen W G, Ge H M, et al. L1-βLq minimization for signal and image recovery[J]. SIAM Journal on Imaging Sciences, 2023, 16(4): 1886-1928.
  17. Joshi N, Szeliski R, Kriegman D J. PSF estimation using sharp edge prediction[C]//2008 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2008: 1-8.
  18. Ono S, Miyata T, Yamada I. Cartoon-texture image decomposition using blockwise low-rank texture characterization[J]. IEEE Transactions on Image Processing, 2014, 23(3): 1128-1142.
  19. Rudin L I, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms[J]. Physica D: Nonlinear Phenomena, 1992, 60(1-4): 259-268.
  20. Tikhonov A. On the regularization of ill-posed problems[J]. Soviet Mathematics Doklady, 1963, 153(6): 1111-1146.
  21. Wang Y L, Yang J F, Yin W T, et al. A new alternating minimization algorithm for total variation image reconstruction[J]. SIAM Journal on Imaging Sciences, 2008, 1(3): 248-272.
  22. Jiao Y L, Jin Q N, Lu X L, et al. Alternating direction method of multipliers for linear inverse problems[J]. SIAM Journal on Numerical Analysis, 2016, 54(4): 2114-2137.


All Figures

Fig. 1 The gradient operator and the shear gradient operator applied from different angles to the image u ("Barbara"): (a) Dxu; (b) SθbTDxSθbu with θb=45°; (c) SθbTDxSθbu with θb=90°; (d) Dyu; (e) SθlTDySθlu with θl=45°; (f) SθlTDySθlu with θl=90°
Fig. 2 The PSNR results versus the parameter ρ1 for fixed ρ2 and ρ3
Fig. 3 The PSNR results versus the parameter ρ2 for fixed ρ1 and ρ3
Fig. 4 The PSNR results versus the parameter ρ3 for fixed ρ1 and ρ2
Fig. 5 The test images and the corresponding blurred images
Fig. 6 Visual effects comparison of "Leaves" by different methods with Gaussian blur (the noise level is 0.256)
Fig. 7 Visual comparison of "Barbara" by different methods with average blur (the noise level is 0.512)
