Open Access
Wuhan Univ. J. Nat. Sci.
Volume 26, Number 6, December 2021
Page(s) 495 - 506
DOI https://doi.org/10.1051/wujns/2021266495
Published online 17 December 2021

© Wuhan University 2021

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

Image deblurring is a classical problem in low-level vision. A blurred image can be considered as the convolution of the latent image with the point spread function (PSF) of the image capture device. The blurring process can be modeled as

(1)  f = Ku + n

where u is the original image without any form of degradation, K is the degradation (blurring) matrix, f is the observed image, and n is additive noise.
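As an illustration, the degradation model (1) under periodic boundary conditions can be simulated as a circular convolution in the Fourier domain. The following Python/NumPy sketch is for illustration only (the paper's experiments use MATLAB); the helper name and the simple box PSF are our own choices, not the paper's.

```python
import numpy as np

def blur_and_add_noise(u, psf, sigma, rng=None):
    """Simulate f = K u + n: periodic (circular) convolution with the PSF
    plus zero-mean Gaussian noise. Illustrative sketch, not the paper's code."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Embed the PSF in an image-sized kernel centred at the origin so that
    # pointwise multiplication in the Fourier domain realises circular convolution.
    kernel = np.zeros_like(u)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(kernel)))
    return blurred + sigma * rng.standard_normal(u.shape)

u = np.ones((8, 8))
psf = np.full((3, 3), 1.0 / 9.0)        # simple normalised 3x3 box blur
f = blur_and_add_noise(u, psf, sigma=0.0)
# Blurring a constant image with a normalised PSF leaves it unchanged.
assert np.allclose(f, 1.0)
```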

The goal of image deblurring is to estimate the original image u from an observed image f degraded by the blur kernel and noise[1]. In this paper, we mainly consider image restoration under a known blur kernel and Gaussian noise. The probability density function of Gaussian noise can be expressed as

(2)  p(n) = (1/(√(2π)σ)) exp(−(n−μ)²/(2σ²))

where σ and μ denote the standard deviation and mean of the noise distribution n, respectively.

Image deblurring is an ill-posed inverse problem: its solution may fail to exist, may not be unique, and small disturbances in the data may cause large errors in the reconstruction. Hence, to overcome this ill-posedness, regularization techniques are needed to produce reasonable approximate solutions from noisy data. The Rudin-Osher-Fatemi (ROF) model[2] is one of the most successful regularization methods due to its ability to preserve sharp edges[3]. The corresponding minimization task is

(3)  min_u (λ/2)‖Ku − f‖²₂ + ‖u‖_TV

where ‖·‖₂ denotes the Euclidean norm and ‖·‖_TV is the discrete TV regularization term. The first term of (3) is called the fidelity term; the second is called the regularization term (or penalty), which encodes prior knowledge of the image. λ>0 is the regularization parameter, which controls the balance between the fidelity term and the regularization term. Over the last few decades, a series of methods based on total variation and its variants has been developed.

The well-known first-order total variation methods such as FTVd (fast total variation deconvolution)[4] and ADMM (alternating direction method of multipliers) TV[5,6] have achieved satisfactory results in image restoration. FTVd is a total variation model based on splitting techniques. In Ref. [7], Chan et al proposed a frame-based deblurring method solved with the ADMM algorithm. In Ref. [5], Jiao et al used an ADMM iteration to solve the TV deblurring problem (ADMM TV). Compared with FTVd, the ADMM TV method achieves comparable experimental performance without the need for parameter selection.

However, it is well known that TV-based restoration methods suffer from staircase artifacts in flat regions. Many image restoration methods have been developed to overcome this shortcoming. A well-known approach is to replace the first-order total variation in the regularization term with a second-order (or higher) total variation.

In 2000, Chan et al[8] proposed the high-order TV (HOTV) model to improve restoration quality. Contrary to TV regularization, which penalizes gradient magnitude, regularizations using second-order derivatives do not penalize ramp regions whose intensity varies linearly, and therefore do not force the intensity of a region to remain constant. In other words, these regularizations can potentially recover a broader class of images consisting of more than just piecewise-constant regions[9]. In Ref. [10], a second-order Laplacian model for image deblurring was proposed by You and Kaveh. In 2003, a second-order derivative (Hessian) regularization model[11] was proposed, called the Lysaker, Lundervold and Tai (LLT) model, which can effectively suppress staircase artifacts while maintaining the smoothness of flat areas of the image. In 2012, Hu and Jacob[12] improved the model for image recovery and designed a majorize-minimize (MM) algorithm to solve it. Their numerical experiments show that second-order TV has an advantage over classical TV regularization in avoiding staircase artifacts. In Ref. [13], the authors pointed out that high-order TV preserves edges less well than first-order TV regularization and tends to smooth edges and other small details. Thus, hybrid models that combine higher-order TV with other regularizations have been proposed in the literature[13-15].

Among the hybrid models, by replacing the first-order TV with a second-order TV, Zhang et al[16] obtained better results than the model with strongly convex terms presented in Ref. [17]. Furthermore, in Refs. [18] and [19], a combination of first-order and second-order TV priors is used to recover blurred images and the minimization model is solved by the ADMM algorithm. To further improve the restoration performance, Liu et al[20] investigated a spatially adaptive regularization parameter updating scheme. As an adaptive balancing scheme between first-order and second-order TV, Wang et al[21] developed a Poisson denoising framework based on an iterative weighted total generalized variational (TGV) model. In Ref. [22], Liu et al pointed out that TGV can maintain sharp edges and perform well in areas of gradual intensity slopes.

The TV and HOTV norms only contain finite differences in the horizontal and vertical directions, which loses some detail information in other directions. To incorporate more directional information in the gradient domain, in this paper we propose a novel shear high-order TV (SHOTV) norm that contains finite differences in different directions. Compared with higher-order TV regularization, the proposed SHOTV regularization provides a more meaningful description of gradient information, as it captures the different directions of the image gradients.

1 Preliminary

1.1 Shear Operator

Let the shear operator with shear angle θd be defined as in Ref. [23] by (4), where the entries of Xshr are given by (5) and (6), and ⌊·⌋ denotes rounding the real number to the nearest integer toward zero. The parameter d indicates the axis of shear (top, bottom, left, or right), and θd determines the degree of shear. For example, one choice shears the image X along the top axis by θd. In particular, θd=0 means the shear is the identity transformation, i.e., no shear. Figure 1 shows the results of the shear operator in different directions at a fixed angle.

Fig. 1 Effect of shear operator with different angles and directions

(a) Original image X; (b) with ; (c) with

1.2 Shear Gradient and Shear HOTV Norm

TV regularization was introduced by Rudin et al[2] for image denoising and was later extended to other image restoration tasks. For a gray-scale image u, its discrete total variation norm is given by (7), where the discrete gradient is defined by horizontal and vertical finite differences.

The TV norm only contains finite differences in the horizontal and vertical directions, which loses some detail information in other directions.
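A minimal sketch of the discrete (isotropic) TV norm of Eq. (7), assuming forward differences with periodic boundary conditions; this is a common discretization, offered here for illustration rather than as the paper's exact definition:

```python
import numpy as np

def tv_norm(u):
    """Isotropic discrete TV norm: sum over pixels of sqrt((D_x u)^2 + (D_y u)^2),
    with D_x, D_y periodic forward differences. Illustrative sketch only."""
    dx = np.roll(u, -1, axis=1) - u   # horizontal forward difference
    dy = np.roll(u, -1, axis=0) - u   # vertical forward difference
    return np.sum(np.sqrt(dx**2 + dy**2))

u = np.zeros((4, 4))
assert tv_norm(u) == 0.0              # a constant image has zero total variation
```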

TV regularization preserves fine features and sharp edges, but it usually produces staircase artifacts that are not present in real images[24,25]. In many applications[26,27], a higher-order TV has been proposed to reduce staircase artifacts. The HOTV of u is defined as (8), where the summand denotes the second-order difference of the ((j-1)n+i)-th entry of the vector u. By combining the higher-order finite difference operator with the shear operator, we propose the shear high-order gradient (SHOG) operator as follows: (9), where the superscript T denotes conjugate transpose and the two shear factors denote shears along the 'top' (or 'bottom') and 'left' (or 'right') directions with different angles, respectively. For different types of texture, the shear directions are chosen empirically. Then, for a gray-scale image u, its discrete shear higher-order total variation (SHOTV) norm is (10).
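The idea of Eqs. (9)-(10) can be sketched numerically: shear the image, apply a second-order finite difference, and sum the magnitudes over the chosen directions. The exact discretization of the shear in Eqs. (4)-(6) is not reproduced here; the circular row-shift below is a simplified stand-in, and all function names are our own:

```python
import numpy as np

def shear_rows(u, slope):
    """Toy shear: circularly shift row i horizontally by round-toward-zero of
    slope*i, slope = tan(theta). A hypothetical discretisation for illustration."""
    out = np.empty_like(u)
    for i in range(u.shape[0]):
        out[i] = np.roll(u[i], int(np.fix(slope * i)))
    return out

def second_diff_x(u):
    # periodic second-order horizontal difference D_xx
    return np.roll(u, -1, axis=1) - 2 * u + np.roll(u, 1, axis=1)

def shotv_norm(u, slope=1.0):
    """Toy SHOTV: |D_xx u| plus |D_xx S u| summed over all pixels, i.e. the
    axial direction and one sheared direction."""
    return (np.abs(second_diff_x(u)).sum()
            + np.abs(second_diff_x(shear_rows(u, slope))).sum())

u = np.outer(np.arange(4.0), np.ones(4))   # each row is constant
# Rows stay constant under a row shear, so both second differences vanish.
assert shotv_norm(u) == 0.0
```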

The shear gradient has the following properties:

Remark 1   (Matrix form) The shear operator is linear, so it can be represented as a matrix. Moreover, since this matrix is a permutation matrix, it is orthogonal.

Theorem 1   The shear operator and the horizontal operator Dxx commute for the corresponding shear axis; similarly, the shear operator and the vertical operator Dyy commute for its corresponding axis. That is, (11) (12)

Proof   First, we verify the identity elementwise. By the definitions of the gradient operator and the shear operator, we have (13), where the gradient is defined as in Eq. (7), and (14).

By Eq. (13) and Eq. (14), we obtain the first identity.

Finally, by the orthogonality of the shear matrix, we obtain the second identity.

By Theorem 1, the shear gradient operator is essentially a generalization of the gradient operator.
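The commutation property of Theorem 1 can be checked numerically. The sketch below assumes a 45° shear implemented as an integer circular row shift (an assumed discretization, not the paper's code): because the shear translates each row as a whole and the periodic horizontal second difference is shift-invariant along rows, the two operators commute.

```python
import numpy as np

def shear45(u):
    """45-degree shear: circularly shift row i right by i pixels (periodic)."""
    return np.stack([np.roll(row, i) for i, row in enumerate(u)])

def dxx(u):
    # periodic horizontal second-order difference
    return np.roll(u, -1, axis=1) - 2 * u + np.roll(u, 1, axis=1)

rng = np.random.default_rng(0)
u = rng.random((8, 8))
# S translates each row as a whole; D_xx is circularly shift-invariant along
# rows, so S(D_xx u) = D_xx(S u).
assert np.allclose(shear45(dxx(u)), dxx(shear45(u)))
```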

A comparison between the HOG operator and the SHOG operator applied to the Barbara image is shown in Fig. 2. It can be observed that more directional information is captured and more detailed edge information is retained by the SHOG operator. From the local comparison, it can be seen that the SHOG operator preserves more information in the textured parts of the image.

Fig. 2 Comparison between 2nd-gradient operator and shear 2nd-gradient operator applied on Barbara image u

1.3 BCCB Matrices

Under periodic boundary conditions, both the blurring matrix and the gradient matrix are Block-Circulant-with-Circulant-Blocks (BCCB), and thus are diagonalizable by the 2D discrete Fourier transform. In this subsection, we show that the SHOG matrices are BCCB for a special choice of the shear angle.

Let Eij denote the matrix whose (i,j)-entry is 1 and all other entries are 0. Then {Eij | i,j=1,2,…,n} is an orthonormal basis of the linear space of n×n matrices. Let the shear matrix denote the matrix of the shear operator under this basis, i.e. (15), where the matrix represents shearing along the '1' direction; it is a permutation matrix of the following form

Since gradient matrices are BCCB matrices under periodic boundary conditions, it can be verified that the matrices in formula (10) are also BCCB for this special shear angle. The corresponding shear gradient matrices have the following form:

The shear gradient matrices with the other shear angle have the following form:

Unfortunately, for a general shear angle θd, the shear gradient matrices do not always have BCCB structure. For example, when the shear angle is θ1=40°, the shear matrix has the following form:

It can be noticed that the above matrix is not a BCCB matrix.
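The practical payoff of the BCCB property is that the operator acts as a 2D circular convolution and is therefore diagonalized by the 2D DFT. The sketch below verifies this for the periodic vertical second-difference operator (a known BCCB matrix), applying it both directly and via pointwise multiplication in the Fourier domain; the kernel layout is our own illustration:

```python
import numpy as np

def dyy(u):
    # periodic vertical second difference: a BCCB operator under periodic BCs
    return np.roll(u, -1, axis=0) - 2 * u + np.roll(u, 1, axis=0)

rng = np.random.default_rng(1)
u = rng.random((8, 8))

# A BCCB matrix acts as a 2D circular convolution, so it is diagonalised by
# the 2D DFT: apply it by pointwise multiplication with the kernel's FFT.
kernel = np.zeros((8, 8))
kernel[0, 0], kernel[1, 0], kernel[-1, 0] = -2.0, 1.0, 1.0
via_fft = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(u)))
assert np.allclose(via_fft, dyy(u))
```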

2 Proposed Model

Considering the advantages of high-order TV and the shear operator, we propose the following optimization model (16), where ‖·‖₁ represents the convex ℓ1-norm regularization. The variables λ, ω>0 are regularization parameters that balance the data fidelity term and the shear high-order regularization. I(u) is an indicator function imposing hard constraints on the objective, defined as the indicator of the dynamic-range constraint set [0, 255]. To minimize problem (16) robustly, we adopt the framework of the alternating direction method of multipliers (ADMM)[28]. Equation (16) can be converted into a constrained optimization problem: (17)

The corresponding augmented Lagrangian function of (17) can be written as (18), where ρ1, ρ2 are penalty parameters, and μ1 and μ2 are the Lagrange multipliers associated with the constraints w=DSu and z=u, respectively. ADMM alternately minimizes the Lagrangian function over the variables u, w, and z. Since w and z are decoupled from each other, they can be solved separately.

ADMM is an algorithm based on proximal splitting techniques for solving convex optimization problems of the form (19), where the objective terms are closed convex functions, the Ai are linear transforms, the χi are nonempty closed convex sets, and c is a given vector.

Part of the appeal of ADMM is that the algorithm lends itself to parallel implementation. The method is given in Algorithm 1.

Algorithm 1 ADMM
1 Initialization: Starting point .
2 Iteration:
  1)
  2)
  3)
  4) k=k+1.
Until a stopping criterion is satisfied.

In this section, we discuss the optimization strategies for solving each sub-problem of (18) one by one.

1) u Sub-Problem

The u sub-problem is a least squares problem of the following form: (20)

From the optimality conditions, we have (21)

Following the discussion in subsection 1.3, (DS)TDS is a BCCB matrix only for the special shear angle. For other shear angles, computing the inverse of the left-hand side of (21) in each iteration is costly. In this paper, we therefore fix the shear angle to this special value. KTK is also a BCCB matrix under the periodic boundary condition. As a consequence, Eq. (21) can be solved efficiently with one FFT and one inverse FFT as (22), where F denotes the two-dimensional discrete Fourier transform and F-1 its inverse.
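A simplified sketch of the FFT solve of Eq. (22): when every operator in the normal equations is BCCB, the system matrix is diagonal in the Fourier domain, so the solve reduces to a pointwise division. For brevity this sketch keeps only the K^T K + ρI part of the paper's system (the ρ1 (DS)^T DS term would add another |F(DS)|² factor to the denominator); all names are our own:

```python
import numpy as np

def solve_u(rhs, kernel, rho):
    """Closed-form FFT solve of (K^T K + rho I) u = rhs for a circulant blur K
    under periodic boundary conditions. Simplified illustration of Eq. (22)."""
    Kf = np.fft.fft2(kernel)
    denom = np.abs(Kf) ** 2 + rho          # Fourier eigenvalues of K^T K + rho I
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))

n = 8
kernel = np.zeros((n, n)); kernel[0, 0] = 1.0   # K = identity, as a sanity check
rhs = np.random.default_rng(2).random((n, n))
u = solve_u(rhs, kernel, rho=1.0)
# With K = I the system is (1 + rho) u = rhs, so u = rhs / 2.
assert np.allclose(u, rhs / 2.0)
```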

2) w Sub-Problem

The w sub-problem is a convex second-order deblurring problem: (23)

For simplicity of notation, we solve problem (23) using a two-dimensional shrinkage operator, i.e., (24)
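A minimal sketch of the two-dimensional (isotropic) soft-shrinkage used for the w sub-problem, assuming the standard form: shrink the magnitude of the vector field by the threshold while keeping its direction. The small epsilon guard is our own addition to avoid division by zero:

```python
import numpy as np

def shrink2d(x, y, tau):
    """Two-dimensional soft-shrinkage: reduce the magnitude of the vector
    field (x, y) by tau, clamping at zero. Illustrative sketch of Eq. (24)."""
    mag = np.sqrt(x**2 + y**2)
    scale = np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)
    return scale * x, scale * y

x = np.array([3.0]); y = np.array([4.0])        # magnitude 5
sx, sy = shrink2d(x, y, tau=1.0)
assert np.allclose(np.hypot(sx, sy), 4.0)        # magnitude shrunk by tau
```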

3) z Sub-Problem (25)

The z sub-problem is a projection problem. For an 8-bit image, pixel values are constrained to [0, 255], and the closed-form solution is obtained with the projection operator (26)

The Lagrangian multipliers are updated as follows: (27) (28)

The proposed algorithm is named SHOTV and shown in Algorithm 2.

Algorithm 2 SHOTV
1 Initialization: Starting point.
2 Iteration:
  1) Compute according to (21).
  2) Compute according to (24).
  3) Compute according to (26).
  4) Update according to (27).
  5) Update according to (28).
  6) Check the stopping criterion
  7) k=k+1.
  8) end

3 Numerical Experiments

In this section, we present various numerical results to illustrate the performance of the proposed algorithm for non-blind image deblurring. All experiments are carried out in MATLAB 2019b on a Windows 10 64-bit desktop equipped with an Intel Core i5-10300H CPU at 2.5 GHz and 16 GB of RAM. The source code for all competing methods was obtained from the original authors, and we use the default parameter settings.

3.1 Experiment Setting

The test images are shown in Fig. 3, and the pixel values of the images are normalized to [0, 1] for simplicity. Both the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity index (SSIM)[29] are used to evaluate the quality of the reconstructed images. They are defined as follows:

(29)  PSNR = 10 log₁₀( Maxu² / ((1/mn)‖u − f‖²₂) )

(30)  SSIM = (2μuμf + c1)(2σuf + c2) / ((μu² + μf² + c1)(σu² + σf² + c2))

where u is the clean image, f is the recovered image, Maxu is the largest possible pixel value of u, μu and μf are the mean values of u and f, σu and σf are their standard deviations, σuf is the covariance of u and f, and c1, c2>0 are constants. In general, higher PSNR and SSIM values imply better image quality.
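The PSNR metric of Eq. (29) is straightforward to compute; a small sketch for images normalized to [0, 1] (function name is our own):

```python
import numpy as np

def psnr(u, f, max_u=1.0):
    """PSNR in dB: 10*log10(max_u^2 / MSE), per Eq. (29)."""
    mse = np.mean((u - f) ** 2)
    return 10.0 * np.log10(max_u**2 / mse)

u = np.zeros((4, 4)); f = np.full((4, 4), 0.1)  # uniform error of 0.1
assert np.isclose(psnr(u, f), 20.0)              # MSE = 0.01 -> 20 dB
```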

Fig. 3 The test images

3.2 Parameter Selection Discussion

In this subsection, we focus on the choice of the parameters ρ1, ρ2 and λ. For simplicity, we set ρ1=ρ2. We then investigate the sensitivity of the parameters ρ1 and λ. We test the selection of ρ1 on the House and Parrot images, corrupted by motion blur (fspecial('motion', 35, 50)) and Gaussian blur (fspecial('gaussian', [9,9], 9)). Figure 4 shows the PSNR values versus the parameter ρ1 under three different noise levels. We empirically set ρ1∈[5,30] in the subsequent experiments.

Fig. 4 The PSNR results versus the parameter ρ1 for fixed λ

Next, we discuss the selection of the parameter λ. Our experiments use the Pirate and Barbara images with three different noise levels and two different blur kernels. For simplicity, we set ρ1=ρ2=10. Figure 5 shows the curves of PSNR versus λ under three different noise levels. For the subsequent experiments, λ is taken empirically in the range [800, 2000]. These empirical parameter settings are also generally valid for the other test images.

Fig. 5 The PSNR results versus the parameter λ for fixed ρ1=ρ2=10

3.3 Image Deblurring

In this subsection, we report the performance of the proposed SHOTV method for image deblurring and compare it with some leading deblurring methods, including FTVd[4], ADMM frame[7], ADMM TV[5] and HOTV[18].

First, we test a 9×9 Gaussian blur with a standard deviation of 9. The blurred image is generated using the MATLAB function "imfilter" with periodic boundary conditions and is then corrupted with zero-mean additive white Gaussian noise of variance σ²=0.001. The PSNR and SSIM values for the deblurring experiments are reported in Table 1 and Table 2, with the best performance in bold. One can observe that the proposed method outperforms the other competing methods. We choose the Fingerprint, Cameraman, Hill and Shirt images for the Gaussian deblurring and denoising comparison; the recovered images are shown in Fig. 6. From the comparison, we can see that the images restored by our method retain more texture and have fewer artifacts. As can be seen from the Shirt image in Fig. 6, the proposed algorithm retains more texture information through the shear operator, resulting in a significant improvement in image deblurring.

Fig. 6 Restoration of four degraded images with Gaussian blur and comparison with other algorithms

From left to right: the noisy images, FTVd, ADMM-Frame, ADMM-TV, HOTV, and SHOTV; each value in parentheses represents the corresponding SSIM value of the restored image

Similarly, we test a motion blur with kernel length 35 and rotation angle 50°. The blurred image is generated using the MATLAB function "imfilter" with periodic boundary conditions and is corrupted by zero-mean additive white Gaussian noise with variance σ²=0.001. The PSNR and SSIM results for all competing deblurring methods are reported in Table 2. One can observe that the proposed SHOTV method obtains better results than the other competing methods. The blurred and deblurred images are shown in Fig. 7. Compared with the other methods, our algorithm preserves more texture structure, and the deblurred images contain fewer artifacts than those of first-order TV.

Fig. 7 Restoration of four degraded images with motion blur and comparison with other algorithms

From left to right: the noisy images, FTVd, ADMM-Frame, ADMM-TV, HOTV and SHOTV; each value in parentheses represents the corresponding SSIM value of the restored image

Table 1

PSNR and SSIM comparison of different methods for Gaussian deblurring

Table 2

PSNR (dB) and SSIM comparison of different methods for motion deblurring

3.4 Convergence Analysis

To verify the convergence of the algorithm numerically, we apply Algorithm 2 to four different images corrupted by different blur kernels and additive white Gaussian noise with variance σ²=0.001, where Gaussian blur is applied to the Fingerprint and Shirt images and motion blur to the Barbara and Zebra images. We compute the relative error of the recovered image at each iteration of Algorithm 2. As shown in Fig. 8, the relative error decreases as the number of iterations increases, which numerically illustrates the convergence of the algorithm.
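The per-iteration relative error plotted in Fig. 8 can be computed as the relative change between successive iterates; the exact formula is elided in the text, so the standard form below is an assumption:

```python
import numpy as np

def relative_error(u_new, u_old):
    """Relative change ||u^{k+1} - u^k|| / ||u^k|| between successive iterates
    (assumed form of the stopping quantity in Fig. 8)."""
    return np.linalg.norm(u_new - u_old) / max(np.linalg.norm(u_old), 1e-12)

u_old = np.ones((4, 4))
u_new = 1.01 * u_old                   # a 1% uniform change
assert np.isclose(relative_error(u_new, u_old), 0.01)
```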

Fig. 8 Relative error values versus iteration number

4 Conclusion

In this paper, a new SHOG operator is proposed by combining the high-order gradient operator with the shear operator. Through both theoretical analysis and experiments, we show that the proposed SHOG operator incorporates more directionality and can detect richer edge information. We extend the HOTV norm to the SHOTV norm based on the SHOG operator, and then employ the SHOTV norm as the regularization term in an image deblurring model. We also study some properties of the SHOG operator and show that the SHOG matrices are Block-Circulant-with-Circulant-Blocks (BCCB) for a special shear angle. An ADMM scheme is exploited to solve the proposed model efficiently. Extensive numerical experiments demonstrate the superiority of the proposed method in terms of visual quality and quantitative metrics.

References

  1. Abbass M Y, Kim H W, Abdelwahab S A, et al. Image deconvolution using homomorphic technique [J]. Signal, Image and Video Processing, 2019, 13(4): 703-709.
  2. Rudin L I, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms [J]. Physica D: Nonlinear Phenomena, 1992, 60(1-4): 259-268.
  3. Du H, Liu Y. Minmax-concave total variation denoising [J]. Signal, Image and Video Processing, 2018, 12(6): 1027-1034.
  4. Wang Y, Yang J, Yin W, et al. A new alternating minimization algorithm for total variation image reconstruction [J]. SIAM Journal on Imaging Sciences, 2008, 1(3): 248-272.
  5. Jiao Y, Jin Q, Lu X, et al. Alternating direction method of multipliers for linear inverse problems [J]. SIAM Journal on Numerical Analysis, 2016, 54(4): 2114-2137.
  6. Chang H, Lou Y, Duan Y, et al. Total variation-based phase retrieval for Poisson noise removal [J]. SIAM Journal on Imaging Sciences, 2018, 11(1): 24-55.
  7. Chan R H, Riemenschneider S D, Shen L, et al. Tight frame: An efficient way for high-resolution image reconstruction [J]. Applied and Computational Harmonic Analysis, 2004, 17(1): 91-115.
  8. Chan T, Marquina A, Mulet P. High-order total variation-based image restoration [J]. SIAM Journal on Scientific Computing, 2000, 22(2): 503-516.
  9. Lefkimmiatis S, Ward J P, Unser M. Hessian Schatten-norm regularization for linear inverse problems [J]. IEEE Transactions on Image Processing, 2013, 22(5): 1873-1888.
  10. You Y L, Kaveh M. Fourth-order partial differential equations for noise removal [J]. IEEE Transactions on Image Processing, 2000, 9(10): 1723-1730.
  11. Lysaker M, Lundervold A, Tai X C. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time [J]. IEEE Transactions on Image Processing, 2003, 12(12): 1579-1590.
  12. Hu Y, Jacob M. Higher degree total variation (HDTV) regularization for image recovery [J]. IEEE Transactions on Image Processing, 2012, 21(5): 2559-2571.
  13. Shi Q, Sun N, Sun T, et al. Structure-adaptive CBCT reconstruction using weighted total variation and Hessian penalties [J]. Biomedical Optics Express, 2016, 7(9): 3299-3322.
  14. Papafitsoros K, Schönlieb C B. A combined first and second order variational approach for image reconstruction [J]. Journal of Mathematical Imaging and Vision, 2014, 48(2): 308-338.
  15. Bredies K, Kunisch K, Pock T. Total generalized variation [J]. SIAM Journal on Imaging Sciences, 2010, 3(3): 492-526.
  16. Zhang J, Ma M, Wu Z, et al. High-order total bounded variation model and its fast algorithm for Poissonian image restoration [J]. Mathematical Problems in Engineering, 2019, 2019: 1-11.
  17. Liu X, Huang L. Total bounded variation-based Poissonian images recovery by split Bregman iteration [J]. Mathematical Methods in the Applied Sciences, 2012, 35(5): 520-529.
  18. Adam T, Paramesran R, Mingming Y, et al. Combined higher order non-convex total variation with overlapping group sparsity for impulse noise removal [J]. Multimedia Tools and Applications, 2021, 80(12): 18503-18530.
  19. Jiang L, Huang J, Lv X G, et al. Alternating direction method for the high-order total variation-based Poisson noise removal problem [J]. Numerical Algorithms, 2015, 69(3): 495-516.
  20. Liu J, Huang T Z, Lv X G, et al. High-order total variation-based Poissonian image deconvolution with spatially adapted regularization parameter [J]. Applied Mathematical Modelling, 2017, 45: 516-529.
  21. Wang X, Feng X, Wang W, et al. Iterative reweighted total generalized variation based Poisson noise removal model [J]. Applied Mathematics and Computation, 2013, 223: 264-277.
  22. Liu H, Tan S. Image regularizations based on the sparsity of corner points [J]. IEEE Transactions on Image Processing, 2018, 28(1): 72-87.
  23. Ono S, Miyata T, Yamada I. Cartoon-texture image decomposition using blockwise low-rank texture characterization [J]. IEEE Transactions on Image Processing, 2014, 23(3): 1128-1142.
  24. Liu P, Xiao L. Efficient multiplicative noise removal method using isotropic second order total variation [J]. Computers & Mathematics with Applications, 2015, 70(8): 2029-2048.
  25. Wang S, Huang T Z, Zhao X L, et al. Speckle noise removal in ultrasound images by first- and second-order total variation [J]. Numerical Algorithms, 2018, 78(2): 513-533.
  26. Mei J J, Huang T Z. Primal-dual splitting method for high-order model with application to image restoration [J]. Applied Mathematical Modelling, 2016, 40(3): 2322-2332.
  27. Li F, Shen C, Fan J, et al. Image restoration combining a total variational filter and a fourth-order filter [J]. Journal of Visual Communication and Image Representation, 2007, 18(4): 322-330.
  28. Gabay D. Chapter IX: Applications of the method of multipliers to variational inequalities [J]. Studies in Mathematics and Its Applications, 1983, 15: 299-331.
  29. Zhou W, Bovik A C, Sheikh H R, et al. Image quality assessment: From error visibility to structural similarity [J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.


