Open Access
Wuhan Univ. J. Nat. Sci.
Volume 27, Number 6, December 2022
Page(s) 550 - 556
DOI https://doi.org/10.1051/wujns/2022276550
Published online 10 January 2023

© Wuhan University 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


0 Introduction

Pilling of textiles is an unpleasant surface defect that results from daily washing and wear; it is affected by fiber properties, yarn properties, and fabric structure. Pilling evaluation has long been regarded as an important issue in the quality control of fabric products. Traditional evaluation methods suffer from high cost, subjectivity, poor reliability, and low efficiency, so it is necessary to develop objective, digital techniques to replace them.

For this purpose, researchers have applied objective evaluation methods based on 2D image analysis techniques, such as the Fourier Transform, Wavelet Transform, and Artificial Neural Networks, to fabric pilling evaluation. Yun et al[1] used a Fourier Transform algorithm that divided the image information into low and high frequencies, where the low frequencies captured the deterministic fabric structure and the high frequencies represented the noise and pills. Their experimental results showed that the method was suitable for pilling evaluation of woven fabrics. Deng et al[2] used the multi-scale 2D Dual-Tree Complex Wavelet Transform (CWT) to extract six characteristics at different scales from fabric images, and showed that the resulting evaluation system performed well for knitted, woven, and nonwoven fabrics. Xiao et al[3] transformed the pilling image to the frequency domain using the Fourier Transform and combined it with an energy algorithm, a multi-dimensional Discrete Wavelet Transform, and an iterative thresholding algorithm to obtain pilling segmentation images. This objective evaluation method obtained complete and accurate pilling information, and their deep learning algorithms achieved 94.2% classification accuracy. Wu et al[4] proposed a Convolutional Neural Network-based pilling evaluation system that extracts pill features and texture features; the rating accuracy of their model reached 97.70%.

However, pilling analysis based on 2D images has limitations due to lighting and pattern variations. For higher accuracy, more and more methods based on 3D information are being used in fabric pilling evaluation. Kang et al[5] developed a noncontact 3D measurement method for reconstructing a 3D model of the fabric, in which a CCD camera captured images of a laser line projected onto the fabric surface. Using a height-threshold algorithm, the 3D model was converted into a binary image, and the parameters extracted from that image were used to calculate the pilling grade. The results of their method correlated well with manual evaluation. Xu et al[6] investigated a 3D fabric surface reconstruction system that used two side-by-side images of the pilled fabric taken by a pair of ordinary cameras without special illumination. To make the system robust to fabric structure, color, fiber content, and other factors, robust calibration and stereo-matching algorithms were implemented. Liu et al[7] proposed a method based on structure from motion (SFM) and the patch-based multi-view stereo (PMVS) algorithm for pilling evaluation, in which pilling segmentation was achieved by adaptive threshold segmentation and morphological analysis.

Multi-view stereo can only reconstruct the macroscopic contours of the fabric surface, without significant texture details. Laser triangulation can recover texture details, but it is usually costly, time-consuming, and difficult to operate. In most other methods whose 3D reconstruction relies on surface features, such as stereo matching, no 3D model can be generated when the surface features are blurry.

This paper designs a simple, low-cost system for pilling evaluation that can recover not only the macroscopic contour of the fabric but also its tissue points. The system first reconstructs a 3D model of the fabric surface using semi-calibrated near-light Photometric Stereo (PS). The 3D model is then mapped to a 2D depth image, and a low-pass filter is applied to eliminate the fabric texture. The binary image of pills is segmented by a global iterative threshold to obtain the pilling number and area. Finally, classification is completed by the K-Nearest Neighbor (KNN) algorithm. Figure 1 shows the flowchart of fabric pilling evaluation.

Fig. 1 Pilling grade evaluation system

1 System Setup

We have designed a computer image acquisition system consisting of a data acquisition facility and an image data analysis facility, as shown in Fig. 2. The imaging hardware in the data acquisition facility includes a box, a high-resolution digital camera (Nikon D7200), a macro lens (NIKKOR AF-S), and eight LED light sources (1 W). Because PS is sensitive to the lighting information in the images, the inside walls of the box are painted with matt black varnish to minimize the reflection of scattered light and the effect of stray light. To concentrate the illumination, the light sources are condensed to a luminous angle of 15°. The acquired images are analyzed and reconstructed using MATLAB R2020a.

Fig. 2 Self-developed PS system

2 Semi-Calibrated Near-Light PS

PS was first proposed by Woodham[8] in 1980; it captures multiple images under different illumination produced by a specific LED configuration and then infers the 3D model. The traditional PS method assumes that lighting comes from infinitely distant point sources and that the source intensities are calibrated. In this paper, a near-light model is developed in which the source intensities are assumed to be unknown. Modeling near-light sources means that low-cost lighting devices such as LEDs can be used, and assuming unknown source intensities (semi-calibrated) simplifies the calibration procedure. This paper uses semi-calibrated near-light PS technology[9] to complete the 3D reconstruction of fabric surfaces.

For 3D reconstruction of fabric surfaces by semi-calibrated near-light PS, we designed the multi-light image acquisition system (Fig. 2) and calibrated it to obtain the camera intrinsic matrix and the light position parameters, both of which are crucial for 3D reconstruction. The calibration process is described in Section 2.1. Image acquisition is conducted after system calibration: the eight light sources are switched on in turn to irradiate the sample, and one image is captured per source. A variational formulation is then established that relates the grayscale values of the pixels to the corresponding surface points of the 3D model, and the resulting nonconvex variational model is solved numerically to complete the 3D reconstruction.

2.1 System Calibration

Following the method proposed by Zhang[10], a set of calibration-board images is captured, and the camera is calibrated with the MATLAB Camera Calibration Toolbox to obtain the camera intrinsic matrix. Figure 3 shows one of the corner point detection results.
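For illustration, the following minimal Python/OpenCV sketch performs the same Zhang-style calibration outside MATLAB. It is not part of the original system; the checkerboard dimensions, square size, and file pattern are assumptions.

```python
# Hypothetical sketch: Zhang-style camera calibration with OpenCV,
# assuming a 9x6 checkerboard with 5 mm squares and images named board_*.png.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners per row and column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 5.0  # 5 mm grid

obj_pts, img_pts = [], []
for path in glob.glob("board_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsic matrix used later in Eq. (1); dist holds lens distortion.
_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
print(K)
```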

Fig. 3 Results of corner point detection

Triangulation is used to calibrate the location parameters of the light sources[11]. A pair of metal spheres is placed in the scene to generate a visible highlight for each light source, and the highlight points are extracted by thresholding. The Canny operator[12] is then used to detect the sphere contours, as shown in Fig. 4. The light source position parameters are calculated from the actual radius of the sphere, the focal length of the camera, and the physical pixel size of the camera sensor.
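As an illustration of the per-image processing involved, the sketch below locates the specular highlight and the sphere contour in a single calibration image. It is only a schematic of this step, assuming OpenCV, one sphere per image, and an illustrative brightness threshold; triangulating the source position from the two spheres follows Ref. [11].

```python
# Hypothetical sketch: highlight and contour extraction for one metal sphere.
import cv2
import numpy as np

img = cv2.imread("sphere_led3.png", cv2.IMREAD_GRAYSCALE)   # assumed file name

# Specular highlight: centroid of the brightest pixels (threshold is assumed).
_, bright = cv2.threshold(img, 240, 255, cv2.THRESH_BINARY)
ys, xs = np.nonzero(bright)
highlight = (xs.mean(), ys.mean())

# Sphere contour via the Canny operator, then the smallest enclosing circle.
edges = cv2.Canny(img, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
(cx, cy), r_px = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))

# With the true sphere radius, camera focal length, and pixel size, r_px fixes
# the sphere depth, and the highlight constrains the light direction.
print(f"highlight {highlight}, center ({cx:.1f}, {cy:.1f}), radius {r_px:.1f} px")
```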

Fig. 4 Detected center and radius of the metal sphere

2.2 Photometric Model

The relationship between a surface point s of the 3D model and the 2D pixel i = (x, y) can be expressed as follows:

$$ s(i) = z(i)\, K^{-1} [x,\ y,\ 1]^{\mathrm{T}} \qquad (1) $$

where z(i) is the depth value of the 3D model at pixel i and K is the intrinsic matrix of the camera.
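A one-line sketch of Eq. (1) in Python (not from the paper; the intrinsic values are illustrative):

```python
# Hypothetical sketch: back-projecting a pixel to a 3D surface point, Eq. (1).
import numpy as np

K = np.array([[2400.0, 0.0, 256.0],   # fx,  0, u0  (illustrative intrinsics)
              [0.0, 2400.0, 256.0],   #  0, fy, v0
              [0.0,    0.0,   1.0]])

def back_project(x, y, z):
    """s(i) = z(i) * K^{-1} [x, y, 1]^T."""
    return z * np.linalg.inv(K) @ np.array([x, y, 1.0])

s = back_project(100, 200, z=35.0)    # depth in scene units
print(s)
```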

Considering the non-parallel illumination, the attenuation $F_l^k$ caused by distance can be expressed as follows:

$$ F_l^k = \frac{1}{\| s - x_l^k \|^2} \qquad (2) $$

where k indexes the k-th light source, l denotes the irradiated light vector at surface point s, and $x_l^k$ is the light source position. Thus, the incident light vector $l^k(s)$ at the surface point s can be expressed as:

$$ l^k(s) = \frac{1}{\| s - x_l^k \|^2} \, \frac{x_l^k - s}{\| x_l^k - s \|} \qquad (3) $$
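The sketch below evaluates Eq. (3) numerically (not from the paper; the coordinates are illustrative):

```python
# Hypothetical sketch: attenuated incident light vector of Eqs. (2)-(3).
import numpy as np

def incident_light(s, x_l):
    """Unit direction from s toward the source x_l, scaled by the
    inverse-square attenuation F of Eq. (2)."""
    d = x_l - s
    r = np.linalg.norm(d)
    return (d / r) / r**2             # (x_l - s) / ||x_l - s||  *  1 / ||s - x_l||^2

lk = incident_light(np.array([0.0, 0.0, 0.0]), np.array([50.0, 0.0, 120.0]))
print(lk)
```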

The normal vector n(s) is the unit-length vector proportional to $\partial_x s(x,y) \times \partial_y s(x,y)$:

$$ n(s) = \frac{J(i)^{\mathrm{T}}\, [\nabla \tilde{z}(i)^{\mathrm{T}},\ -1]^{\mathrm{T}}}{d(i; \nabla \tilde{z}(i))} \qquad (4) $$

$$ \tilde{z}(i) = \log z(i) \qquad (5) $$

$$ J(i) = \begin{bmatrix} \dfrac{f}{dX} & -\dfrac{f \cot\beta}{dX} & -(x - u_0) \\[2mm] 0 & \dfrac{f}{dY \sin\beta} & -(y - v_0) \\[2mm] 0 & 0 & 1 \end{bmatrix} \qquad (6) $$

$$ d(i; \nabla \tilde{z}(i)) = \left\| J(i)^{\mathrm{T}}\, [\nabla \tilde{z}(i)^{\mathrm{T}},\ -1]^{\mathrm{T}} \right\| \qquad (7) $$

where f is the focal length, and dX and dY are the physical pixel sizes along the X and Y directions of the camera sensor; $u_0$ and $v_0$ are the coordinates of the sensor center in the pixel coordinate system; and β is the angle between the horizontal and vertical edges of the photographic plate.

The albedo ρ(s) is reparameterized following the method of Quéau et al[13]:

$$ \rho(s) = \tilde{\rho}(i)\, d(i; \nabla \tilde{z}(i)) \qquad (8) $$

The grayscale value $p^k(i)$ of the pixel corresponding to surface point s can then be expressed as:

$$ p^k(i) = \rho(s)\, e^k \max\{ l^k(s) \cdot n(s),\ 0 \}, \quad k \in [1, M] \qquad (9) $$

where $e^k$ is the intensity of the k-th light source.

Combining the above equations yields a system of nonlinear partial differential equations:

$$ p^k(i) = \tilde{\rho}(i)\, e^k \max\{ [J(i)\, l^k(i; \tilde{z}(i))]^{\mathrm{T}} [\nabla \tilde{z}(i)^{\mathrm{T}},\ -1]^{\mathrm{T}},\ 0 \}, \quad k \in [1, M] \qquad (10) $$

Let j = 1, …, N index the pixels. The discrete counterpart of Eq. (10) is then written as:

$$ p_j^k = \tilde{\rho}_j\, e^k \max\{ [J_j\, l^k(\tilde{z}_j)]^{\mathrm{T}} [\nabla \tilde{z}_j^{\mathrm{T}},\ -1]^{\mathrm{T}},\ 0 \}, \quad k \in [1, M],\ j \in [1, N] \qquad (11) $$

The unknown products $\tilde{\rho}_j e^k$ form an N × M matrix θ of rank 1; writing Q for the set of all rank-1 N × M matrices, the discrete system is solved through the following robust optimization:

$$ \min_{\tilde{z},\ \theta \in Q} F(\theta, \tilde{z}) = \sum_{j=1}^{N} \sum_{k=1}^{M} \frac{\lambda^2}{2} \log\left( 1 + \frac{\left( \tilde{\rho}_j e^k \max\{ [J_j\, l^k(\tilde{z}_j)]^{\mathrm{T}} [\nabla \tilde{z}_j^{\mathrm{T}},\ -1]^{\mathrm{T}},\ 0 \} - p_j^k \right)^2}{\lambda^2} \right) \qquad (12) $$

where λ is the user-defined parameter of the Cauchy estimator; λ = 8 in our experiment. The nonconvex model of Eq. (12) is minimized alternately over the variables θ and $\tilde{z}$. In each subproblem, a local quadratic model of Eq. (12) is solved using a positive definite approximation of the Hessian, which completes the 3D reconstruction.
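To make the role of λ concrete, the following sketch (not from the paper) evaluates the Cauchy estimator used in Eq. (12):

```python
# Hypothetical sketch: the Cauchy M-estimator of Eq. (12), lam = 8 as in the text.
import numpy as np

def cauchy(residual, lam=8.0):
    """phi(r) = (lam^2 / 2) * log(1 + r^2 / lam^2). Grows only logarithmically
    for large |r|, so shadowed or specular pixels barely influence F."""
    return 0.5 * lam**2 * np.log1p((residual / lam) ** 2)

r = np.array([-50.0, -8.0, 0.0, 8.0, 50.0])
print(cauchy(r))        # compare with the quadratic loss 0.5 * r**2
```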

3 Experiment and Result Analysis

3.1 Preparation of Samples

In this study, 98 fabric samples were graded for pilling. The samples were cut into 30 mm × 30 mm squares, each imaged at 512 × 512 pixels. Pilling severity is divided into five grades: from Grade 1 to Grade 5, the degree of pilling decreases gradually, with almost no pilling visible at Grade 5[14]. Five samples selected from the datasets and graded according to American Society for Testing and Materials (ASTM) standards were used as standard samples, as shown in Fig. 5. Five experts compared each fabric with the standard samples, and the grade with the highest consistency among the experts was taken as the subjective evaluation result.

Fig. 5 Standard pilling images of Grade 1 to Grade 5

3.2 2D Depth Image Generation

In the feature extraction phase, the cost of extracting 2D image features is negligible compared with 3D feature extraction, which requires 3D point cloud processing. The 3D model can be projected onto a 2D plane through a determined mapping relationship[15]: the coordinates of each point s = (x, y, z(i)) are converted to the pixel coordinates of a grayscale image, and Eq. (13) converts the depth value z(i) into the grayscale value of the corresponding pixel.

$$ G(i) = 255\, \frac{z(i) - z_m}{H} \qquad (13) $$

where G(i) is the grayscale value of the 2D depth image, $z_m$ is the minimum depth value of the 3D model, and H is the range of the depth values.
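A minimal sketch of Eq. (13) in Python (not from the paper; the depth array is a stand-in for the reconstructed model):

```python
# Hypothetical sketch: converting a depth map into the 2D depth image, Eq. (13).
import numpy as np

def depth_to_gray(z):
    z_m = z.min()                     # minimum depth z_m
    H = z.max() - z_m                 # depth range H
    return (255.0 * (z - z_m) / H).astype(np.uint8)

z = np.random.rand(512, 512) * 3.0    # stand-in for the reconstructed depth map
G = depth_to_gray(z)
```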

3.3 Pilling Segmentation

To simplify pilling segmentation in the 2D depth image, the fast Fourier Transform (FFT) algorithm[16] is used to filter out the texture, which usually consists of highly periodic patterns. The FFT transforms the image into the frequency domain, separating the texture information and the pilling information into the high-frequency and low-frequency components, respectively. The low-frequency component is retained by a Gaussian low-pass filter, which eliminates the textured background. For each non-textured image, a global threshold is determined automatically by an adaptive iterative method that takes the average gray value of the whole image as the initial threshold and refines it iteratively until the optimal threshold is reached. Figure 6 shows the processing results from sample image to binary image.
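The sketch below illustrates these two steps, assuming numpy; the cutoff frequency D0 and the stand-in input image are assumptions, not values from the paper:

```python
# Hypothetical sketch: Gaussian low-pass filtering in the FFT domain followed
# by iterative global thresholding of the non-textured depth image.
import numpy as np

def gaussian_lowpass(G, D0=30.0):
    rows, cols = G.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2      # squared distance to the center
    H = np.exp(-D2 / (2 * D0 ** 2))             # Gaussian transfer function
    F = np.fft.fftshift(np.fft.fft2(G))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

def iterative_threshold(img, eps=0.5):
    t = img.mean()                              # initial threshold: mean gray value
    while True:
        t_new = 0.5 * (img[img <= t].mean() + img[img > t].mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

G = np.random.rand(512, 512) * 255.0            # stand-in for the 2D depth image
smooth = gaussian_lowpass(G)
binary = smooth > iterative_threshold(smooth)   # pills appear brighter than background
```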

Fig. 6 Fabric pilling samples of Grade 1 to Grade 5

3.4 Pilling Evaluation

In the subjective evaluation method, the size, total area, and coverage of pills are the main factors that influence expert judgment[2]. In this study, the pilling number and area are selected as the features for evaluating fabric pilling. In the binary image, pill pixels have the value 1, and the pilling number is the number of connected pill regions found by searching for 8-connected objects: the first connected region encountered is labeled 1, and the search then proceeds successively. The pill area, denoted $S_{\mathrm{total}}$, is the total number of pill pixels counted over all detected regions.
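These two features can be computed directly with standard connected-component labeling; a minimal sketch (not from the paper, using scipy and a toy binary image):

```python
# Hypothetical sketch: pilling number and total area via 8-connected labeling.
import numpy as np
from scipy import ndimage

binary = np.zeros((8, 8), dtype=bool)           # toy stand-in for the binary image
binary[1:3, 1:3] = True                         # pill 1 (4 pixels)
binary[5:7, 4:7] = True                         # pill 2 (6 pixels)

eight = np.ones((3, 3), dtype=int)              # 8-connectivity structuring element
labels, n_pills = ndimage.label(binary, structure=eight)

S_total = int(np.count_nonzero(labels))         # total pill area in pixels
print(n_pills, S_total)                         # -> 2 10
```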

In this paper, the KNN classifier[17] is used to classify the pilling samples with 2-fold cross-validation. The dataset is randomly divided into two subsets with equal numbers of samples; one subset is used as the training set and the other as the test set. KNN classifies a test sample by computing the distance (or similarity) between it and every training sample and finding the K nearest training samples; K = 3 is set in this experiment. The most frequent category among these 3 neighbors is taken as the predicted category of the test sample, which constitutes its objective evaluation result. The accuracy $P_{\mathrm{ACC}}$ of the objective evaluation is determined by comparison with the subjective evaluation results:

$$ P_{\mathrm{ACC}} = \frac{N_T - N_M}{N_T} \qquad (14) $$

where $N_M$ is the number of samples on which the objective and subjective evaluations disagree, and $N_T$ is the total number of samples.
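For reference, a minimal scikit-learn sketch of this protocol (not from the paper; the feature and label arrays are random stand-ins for the 98 samples):

```python
# Hypothetical sketch: KNN grading with 2-fold cross-validation on
# (pilling number, pilling area) features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((98, 2))                  # stand-in [number, area] feature pairs
y = rng.integers(1, 6, size=98)          # stand-in subjective grades 1-5

knn = KNeighborsClassifier(n_neighbors=3)        # K = 3 as in the text
scores = cross_val_score(knn, X, y, cv=2)        # 2-fold cross-validation
print(f"P_ACC per fold: {scores}, mean: {scores.mean():.3f}")
```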

As shown in Table 1, a total of seven samples are misclassified, giving the system a classification accuracy of 92.8%. Figure 7 shows a scatter plot of the feature parameters against grade: the ordinate represents the pilling area, the abscissa represents the pilling number, points of different colors represent different pilling grades, and the misclassified samples are marked in red. Figure 7 illustrates that KNN is effective in classifying samples with small intra-class spacing and large inter-class spacing.

Fig. 7 Results of different evaluation methods

Table 1 Result of objective evaluation

4 Conclusion

This paper proposed an effective way to objectively evaluate pilling images based on PS. Self-developed image acquisition equipment captures multiple images under different illumination, and the semi-calibrated near-light PS algorithm reconstructs the fabric surface. The 3D model is then converted into a 2D depth image for texture filtering. The filtered non-textured image is segmented into a binary image by the iterative threshold segmentation method, and the defined feature parameters of fabric pilling, namely the pilling number and area, are extracted. Finally, the KNN classifier identifies the grade of each fabric sample. The experimental results show that the system is effective and reliable for pilling evaluation. The method performs well for plain fabrics but is insufficient for patterned fabrics; future work should focus on building non-Lambertian models.

References

1. Yun S Y, Kim S, Park C K. Development of an objective fabric pilling evaluation method[J]. Fibers and Polymers, 2013, 14(5): 832-837.
2. Deng Z, Wang L, Wang X. An integrated method of feature extraction and objective evaluation of fabric pilling[J]. The Journal of the Textile Institute, 2011, 102(1): 1-13.
3. Xiao Q, Wang R, Sun H Y, et al. Objective evaluation of fabric pilling based on image analysis and deep learning algorithm[J]. International Journal of Clothing Science and Technology, 2021, 33(4): 495-512.
4. Wu J, Wang D, Xiao Z T, et al. Knitted fabric and nonwoven fabric pilling objective evaluation based on SONet[J]. The Journal of the Textile Institute, 2022, 113(7): 1418-1427.
5. Kang T, Cho D H, Kim S. Objective evaluation of fabric pilling using stereovision[J]. Textile Research Journal, 2004, 74: 1013-1017.
6. Xu B G, Yu W R, Wang R W. Stereovision for three-dimensional measurements of fabric pilling[J]. Textile Research Journal, 2011, 81(20): 2168-2179.
7. Liu L L, Deng N, Xin B J, et al. Objective evaluation of fabric pilling based on multi-view stereo vision[J]. The Journal of the Textile Institute, 2021, 112(12): 1986-1997.
8. Woodham R J. Photometric method for determining surface orientation from multiple images[J]. Optical Engineering, 1980, 19(1): 139-144.
9. Quéau Y, Durix B, Wu T, et al. LED-based photometric stereo: Modeling, calibration and numerical solution[J]. Journal of Mathematical Imaging and Vision, 2018, 60(3): 313-340.
10. Zhang Z Y. Flexible camera calibration by viewing a plane from unknown orientations[C]// Proceedings of the Seventh IEEE International Conference on Computer Vision. New York: IEEE, 1999: 666-673.
11. Ahmad J, Sun J A, Smith L, et al. An improved photometric stereo through distance estimation and light vector optimization from diffused maxima region[J]. Pattern Recognition Letters, 2014, 50: 15-22.
12. Canny J. A computational approach to edge detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, 8(6): 679-698.
13. Quéau Y, Wu T, Lauze F, et al. A non-convex variational approach to photometric stereo under inaccurate lighting[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2017: 350-359.
14. Kim Y, Kim S. Study on the integration of fabric pilling generation and evaluation system[J]. Textile Science and Engineering, 2016, 53(5): 360-365.
15. Wang Y L, Deng N, Xin B J. Investigation of 3D surface profile reconstruction technology for automatic evaluation of fabric smoothness appearance[J]. Measurement, 2020, 166: 108264.
16. Xu B. Instrumental evaluation of fabric pilling[J]. The Journal of the Textile Institute, 1997, 88(4): 488-500.
17. Cover T, Hart P. Nearest neighbor pattern classification[J]. IEEE Transactions on Information Theory, 1967, 13(1): 21-27.
