Wuhan Univ. J. Nat. Sci.
Volume 27, Number 6, December 2022
Page(s): 550-556
DOI: https://doi.org/10.1051/wujns/2022276550
Published online: 10 January 2023
CLC number: TP 399
Photometric Stereo-Based 3D Reconstruction Method for the Objective Evaluation of Fabric Pilling
School of Fashion Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
^{†} To whom correspondence should be addressed. Email: xinbj@sues.edu.cn
Received: 22 September 2022
Fabric pilling evaluation has long been considered an essential element of textile quality inspection. The traditional manual method relies on the human eye and brain, making it subjective and inefficient. This paper proposes an objective evaluation method based on semi-calibrated near-light Photometric Stereo (PS). Fabric images are digitized by a self-developed image acquisition system. The 3D depth information of each point is obtained by the PS algorithm and then mapped to a 2D grayscale image. A non-textured image is then obtained by applying a Gaussian low-pass filter. Pilling segmentation is conducted with a global iterative threshold segmentation method, and K-Nearest Neighbor (KNN) is finally selected as the tool for grade classification of fabric pilling. Our experimental results show that the proposed evaluation system achieves excellent judging performance for objective pilling evaluation.
Key words: photometric stereo / pilling evaluation / 3D reconstruction / image analysis / fast Fourier Transform
Biography: LUO Jian, male, Master candidate, research direction: photometric stereo-based fabric 3D reconstruction algorithm and application. Email: 512987427@qq.com
Supported by the National Natural Science Foundation of China (61876106)
© Wuhan University 2022
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
0 Introduction
Pilling of textiles is an unpleasant feature that results from daily washing and wearing, and it is affected by fiber properties, yarn properties, and fabric structure. For the quality control of fabric products, pilling evaluation has always been considered an important issue. Traditional evaluation methods suffer from high cost, subjectivity, poor reliability, and low efficiency. It is therefore necessary to develop objective, digital technology to replace them.
For this purpose, researchers have applied objective evaluation methods based on 2D image analysis techniques, such as the Fourier Transform, Wavelet Transform, and Artificial Neural Networks, to fabric pilling evaluation. Yun et al^{[1]} used a Fourier Transform algorithm that divided the image information into low and high frequencies, where the low frequencies captured the deterministic structure and the high frequencies represented the noise and pills. Their experimental results showed that the method was suitable for pilling evaluation of woven fabrics. Deng et al^{[2]} used the multiscale 2D Dual-Tree Complex Wavelet Transform (CWT) to extract six characteristics at different scales from textile images, indicating that their evaluation system performed excellently for knitted, woven, and nonwoven fabrics. Xiao et al^{[3]} transformed the pilling image to the frequency domain using the Fourier Transform and combined it with an energy algorithm, a multidimensional Discrete Wavelet Transform, and an iterative thresholding algorithm to obtain pilling segmentation images. This objective evaluation method was capable of obtaining full and accurate pilling information, and deep learning algorithms achieved 94.2% classification accuracy. Wu et al^{[4]} proposed a Convolutional Neural Network-based pilling evaluation system that extracts pill features and texture features. The rating accuracy of their model reached 97.70%.
However, the analysis of pilling based on 2D images has limitations due to lighting and pattern variations. For higher accuracy, more and more methods based on 3D information are being used in fabric pilling evaluation. Kang et al^{[5]} developed a non-contact 3D measurement method for reconstructing the 3D model of the fabric. A CCD camera was used to capture the image of a laser line projected on its surface. Using a height-threshold algorithm, the 3D model was converted into a binary image, and the parameters extracted from that image were used to calculate the pilling grade. The results of their method correlated well with the manual evaluation method. Xu et al^{[6]} investigated a 3D fabric surface reconstruction system that used two side-by-side images of a fabric sample taken by a pair of ordinary cameras without special illumination. To make the system robust to fabric structures, colors, fiber contents, and other factors, robust calibration and stereo-matching algorithms were implemented. Liu et al^{[7]} proposed a method based on structure from motion (SFM) and the patch-based multi-view stereo (PMVS) algorithm for pilling evaluation. The pilling segmentation was achieved by adaptive threshold segmentation and morphological analysis.
Multi-view stereo can only reconstruct the macroscopic contours of the fabric surface without significant texture details. Laser triangulation can capture texture details, but it is usually high-cost, time-consuming, and difficult to operate. In most other methods whose 3D reconstruction relies on surface features, such as stereo matching, the 3D model cannot be generated when the surface features are blurry.
This paper designs a simple and low-cost system for pilling evaluation that can recover not only the macroscopic contour of the fabric but also its tissue points. The system first reconstructs a 3D model of the fabric surface using semi-calibrated near-light Photometric Stereo (PS). When mapping the 3D model to a 2D depth image, a low-pass filter is used to eliminate the fabric texture. The binary image of pills is then segmented by a global iterative threshold to obtain the pill number and area. Finally, the classification is completed by K-Nearest Neighbor (KNN). Figure 1 shows the flowchart of fabric pilling evaluation.
Fig. 1 Pilling grade evaluation system 
1 System Setup
We have designed a computer image acquisition system consisting of a data acquisition facility and an image data analysis facility, as shown in Fig. 2. The imaging hardware in the data acquisition facility includes a box, a high-resolution digital camera (NIKKOR D7200), a macro lens (NIKKOR AF-S), and eight LED light sources (1 W). Because PS is sensitive to the light information in the image, the inside walls of the box are painted with black matt varnish to minimize the reflection of scattered light and the effect of stray light. To approximate parallel light, the light sources were concentrated within a luminous angle of 15°. The acquired images are analyzed and reconstructed using MATLAB R2020a.
Fig. 2 Self-developed PS system 
2 SemiCalibrated NearLight PS
PS was first proposed by Woodham^{[8]} in 1980; it takes multiple images under different illumination using a specific LED configuration and then infers the 3D model. The traditional PS method assumes that lighting comes from infinitely distant point sources and that the intensity of each light source is calibrated. In this paper, a near-light model is developed, and the source intensities are assumed to be unknown. Modeling near-light sources means that low-cost lighting devices such as LEDs can be used, and the calibration procedure is simplified by leaving the source intensities unknown (semi-calibrated). This paper uses semi-calibrated near-light PS technology^{[9]} to complete the 3D reconstruction of fabric surfaces.
For 3D reconstruction of fabric surfaces with semi-calibrated near-light PS, we designed the multi-light image acquisition system (Fig. 2) and performed a calibration to obtain the camera intrinsic matrix and the light position parameters, both crucial for 3D reconstruction. The calibration process is described in Section 2.1. Image acquisition is conducted after system calibration: the eight light sources are switched on in turn to irradiate the sample, and one image is captured under each. A variational formulation is then established that relates the grayscale values of the pixels to the corresponding surface points of the 3D model, and the non-convex variational model is solved numerically to complete the 3D reconstruction.
2.1 System Calibration
Following the method proposed by Zhang^{[10]}, a set of calibration board images is taken, and the camera is calibrated with the Matlab Camera Calibration Toolbox to obtain the camera intrinsic matrix. Figure 3 shows one of the corner point detection results.
Fig. 3 The results of corner point detection 
Triangulation is used to calibrate the location parameters of the light sources^{[11]}. A pair of metal spheres is placed in the scene to generate a visible highlight for each light source, and the highlight points are extracted by thresholding. The Canny operator^{[12]} is then used to detect the sphere contours, as shown in Fig. 4. The light source position parameters are calculated from the actual radius of the sphere, the focal length of the camera, and the physical pixel size of the camera sensor.
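As an illustrative sketch of the highlight extraction step described above (the paper's pipeline is implemented in MATLAB; the Python version, the function name, and the 95% threshold fraction here are our own assumptions), the centroid of the brightest blob can be taken as the highlight location:

```python
import numpy as np

def highlight_centroid(gray, thresh=0.95):
    """Locate a specular highlight as the centroid of the pixels whose
    intensity exceeds `thresh` times the image maximum."""
    mask = gray >= thresh * gray.max()
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())  # (x, y) in pixel coordinates

# Toy test image: dark background with a small bright blob centered at (40, 25)
img = np.zeros((64, 64))
img[24:27, 39:42] = 1.0
print(highlight_centroid(img))  # → (40.0, 25.0)
```

In practice the thresholding would be applied inside each detected sphere contour, one image per light source.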
Fig. 4 Test the center and radius of the metal ball 
2.2 Photometric Model
The relationship between the surface point $s$ of the 3D model and the 2D pixel point $i=(x,y)$ can be expressed as follows:

$s(i)=z(i)\boldsymbol{K}^{-1}[x,y,1]^{\mathrm{T}}$ (1)

where $z(i)$ is the depth value of the 3D model and $\boldsymbol{K}$ is the intrinsic matrix of the camera.

Considering the non-parallel illumination, the attenuation caused by distance, ${F}_{l}^{k}$, can be expressed as follows:

${F}_{l}^{k}=\dfrac{1}{\|s-{x}_{l}^{k}\|^{2}}$ (2)

where $k$ denotes the $k$th light source, $l$ denotes the irradiated light vector at the surface point $s$, and ${x}_{l}^{k}$ is the light source position. Thus, the incident light vector $\boldsymbol{l}^{k}(s)$ at the surface point $s$ can be expressed as:

$\boldsymbol{l}^{k}(s)=\dfrac{1}{\|s-{x}_{l}^{k}\|^{2}}\,\dfrac{{x}_{l}^{k}-s}{\|s-{x}_{l}^{k}\|}$ (3)

The normal vector $\boldsymbol{n}(s)$ is the unit-length vector proportional to ${\partial}_{x}s(x,y)\times{\partial}_{y}s(x,y)$. Introducing the log-depth change of variable $\tilde{z}(i)$, it can be written as:

$\boldsymbol{n}(s)=\dfrac{\boldsymbol{J}(i)^{\mathrm{T}}\left[\begin{array}{c}\nabla\tilde{z}(i)\\ -1\end{array}\right]}{d(i;\nabla\tilde{z}(i))}$ (4)

$\tilde{z}(i)=\log z(i)$ (5)

$\boldsymbol{J}(i)=\left[\begin{array}{ccc}\dfrac{f}{\mathrm{d}X}&-\dfrac{f\cot\beta}{\mathrm{d}X}&-(x-{u}_{0})\\ 0&\dfrac{f}{\mathrm{d}Y\sin\beta}&-(y-{v}_{0})\\ 0&0&1\end{array}\right]$ (6)

$d(i;\nabla\tilde{z}(i))=\left\|\boldsymbol{J}(i)^{\mathrm{T}}\left[\begin{array}{c}\nabla\tilde{z}(i)\\ -1\end{array}\right]\right\|$ (7)

Here $f$ is the focal length, and $\mathrm{d}X$ and $\mathrm{d}Y$ are the physical lengths of a pixel on the camera sensor in the X and Y directions. In the pixel coordinate system, ${u}_{0}$ and ${v}_{0}$ denote the sensor center coordinates, and $\beta$ is the angle between the horizontal and vertical edges of the photographic plate.

The albedo $\rho(s)$ is rewritten in terms of a pseudo-albedo $\tilde{\rho}(i)$ following the method of Quéau et al^{[13]}:

$\rho(s)=\tilde{\rho}(i)\,d(i;\nabla\tilde{z}(i))$ (8)

The relationship between the grayscale value ${p}^{k}(i)$ of a pixel and its corresponding surface point $s$ can then be expressed as:

${p}^{k}(i)=\rho(s)\,{e}^{k}\max\{\boldsymbol{l}^{k}(s)\cdot\boldsymbol{n}(s),0\},\quad k\in[1,M]$ (9)

where ${e}^{k}$ is the light source intensity of the $k$th light source.

Combining the above equations yields a system of nonlinear partial differential equations:

${p}^{k}(i)=\tilde{\rho}(i)\,{e}^{k}\max\left\{\left[\boldsymbol{J}(i)\,\boldsymbol{l}^{k}(i;\tilde{z}(i))\right]^{\mathrm{T}}\left[\begin{array}{c}\nabla\tilde{z}(i)\\ -1\end{array}\right],0\right\},\quad k\in[1,M]$ (10)

Let $j=1,\dots,N$ index the pixels. The discrete counterpart of Eq. (10) is written as:

${p}_{j}^{k}={\tilde{\rho}}_{j}\,{e}^{k}\max\left\{\left[{\boldsymbol{J}}_{j}\,\boldsymbol{l}^{k}({\tilde{z}}_{j})\right]^{\mathrm{T}}\left[\begin{array}{c}\nabla{\tilde{z}}_{j}\\ -1\end{array}\right],0\right\},\quad\forall k\in[1,M],\ \forall j\in[1,N]$ (11)

The discrete model is optimized over $\tilde{z}$ and $\theta$, where $\theta$ collects the products ${\tilde{\rho}}_{j}{e}^{k}$ and $\boldsymbol{Q}$ is the set of all rank-1 $N\times M$ matrices:

$\min\limits_{\tilde{z},\,\theta\in\boldsymbol{Q}}F(\theta,\tilde{z})=\sum\limits_{j=1}^{N}\sum\limits_{k=1}^{M}{\lambda}^{2}\log\left(1+\dfrac{\left({\tilde{\rho}}_{j}{e}^{k}\max\left\{\left[{\boldsymbol{J}}_{j}\boldsymbol{l}^{k}({\tilde{z}}_{j})\right]^{\mathrm{T}}\left[\begin{array}{c}\nabla{\tilde{z}}_{j}\\ -1\end{array}\right],0\right\}-{p}_{j}^{k}\right)^{2}}{{\lambda}^{2}}\right)$ (12)

where $\lambda$ is the user-defined parameter of the Cauchy estimator; $\lambda=8$ in our experiment. The non-convex model of Eq. (12) is minimized alternately over the variables $\theta$ and $\tilde{z}$. In each subproblem, a local quadratic model of Eq. (12) is solved using a positive definite approximation of the Hessian to achieve the 3D reconstruction.
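To make the image formation model concrete, here is a minimal numerical sketch of the discrete rendering equation, Eq. (11), and the Cauchy estimator inside Eq. (12). It is a toy Python illustration under simplifying assumptions (a unit-focal camera at the principal point, a single pixel, attenuation folded into the light vector, our own function names), not the authors' MATLAB solver:

```python
import numpy as np

def render_pixel(rho_e, J, l, grad_logz):
    """Lambertian near-light rendering of one pixel, Eq. (11):
    p = rho~ * e * max{ [J l]^T [grad(log z); -1], 0 }.
    `rho_e` packs the unknown pseudo-albedo/intensity product rho~_j e^k."""
    v = np.append(grad_logz, -1.0)        # [dz_x, dz_y, -1]
    return rho_e * max(float((J @ l) @ v), 0.0)

def cauchy_loss(residual, lam=8.0):
    """Robust Cauchy estimator used in Eq. (12), with lambda = 8."""
    return lam**2 * np.log(1.0 + residual**2 / lam**2)

# Toy check: frontal light, flat surface, identity J at the principal point.
J = np.eye(3)
l = np.array([0.0, 0.0, -1.0])            # light shining toward the camera axis
p = render_pixel(1.0, J, l, np.zeros(2))  # shading = max{(-1)*(-1), 0} = 1
print(p, cauchy_loss(p - 0.8))
```

The full objective of Eq. (12) sums this loss over all pixels and lights; the rank-1 constraint on $\theta$ is what makes the formulation semi-calibrated.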
3 Experiment and Result Analysis
3.1 Preparation of Samples
In this study, 98 fabric samples were classified into pilling grades. Fabric samples were cut into squares of 30 mm × 30 mm, resulting in images of 512 × 512 pixels. The pilling severity is divided into five grades: from Grade 1 to Grade 5, the degree of pilling decreases gradually until almost no pilling is seen at Grade 5^{[14]}. Five samples selected from the datasets and graded according to American Society for Testing and Materials (ASTM) standards were used as standard samples, as shown in Fig. 5. Subjective evaluation results were obtained from five experts who compared each fabric with the standard samples; the grade with the highest consistency among the experts was taken as the subjective evaluation result.
Fig. 5 Standard pilling images of Grade 1 to Grade 5 
3.2 2D Depth Image Generation
In the feature extraction phase, the complexity of extracting 2D image features is negligible compared with 3D feature extraction, which requires 3D point cloud processing. The 3D model can be projected onto a 2D plane with a determined mapping relationship^{[15]}. The coordinates of the point $s=(x,y,z(i))$ in the depth image are converted to the pixel coordinates of a grayscale image. Eq. (13) converts the depth value $z(i)$ into the grayscale value of the corresponding pixel.
$G(i)=255\dfrac{z(i)-{z}_{\mathrm{m}}}{H}$ (13)
where $G(i)$ is the grayscale value of the 2D depth image, ${z}_{\mathrm{m}}$ is the minimum depth value of the 3D model, and $H$ is the range of the depth values.
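A minimal sketch of this depth-to-grayscale mapping (in Python rather than the MATLAB used in the paper; the zero-range guard is our own addition):

```python
import numpy as np

def depth_to_gray(z):
    """Map a depth map z to an 8-bit grayscale image, Eq. (13):
    G = 255 * (z - z_m) / H, with z_m the minimum depth and H the range."""
    z_min = z.min()
    H = z.max() - z_min
    if H == 0:                      # perfectly flat surface: avoid divide-by-zero
        return np.zeros_like(z, dtype=np.uint8)
    return np.round(255.0 * (z - z_min) / H).astype(np.uint8)

z = np.array([[0.0, 0.5], [1.0, 2.0]])
print(depth_to_gray(z))
```

Higher surface points (pills) thus appear brighter in the 2D depth image, which is what the subsequent thresholding exploits.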
3.3 Pilling Segmentation
To simplify pilling segmentation in the 2D depth image, the fast Fourier Transform (FFT) algorithm^{[16]} is used to filter out the texture, which usually consists of highly periodic patterns. The FFT converts the image to the frequency domain for analysis, separating the texture information and the pilling information into the high-frequency and low-frequency components, respectively. The low-frequency component is retained by a Gaussian low-pass filter, which eliminates the textured background. For each non-textured image, a global threshold is determined automatically using an adaptive iterative method that takes the average gray value of the whole image as the initial threshold and refines it iteratively to the optimal threshold. Figure 6 shows the processing results from sample image to binary image.
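The two steps above, Gaussian low-pass filtering in the FFT domain and the global iterative threshold, can be sketched as follows. This is an illustrative Python version with assumed parameter values (the cutoff σ and the convergence tolerance), not the paper's exact implementation:

```python
import numpy as np

def gaussian_lowpass(img, sigma=10.0):
    """Suppress periodic texture by keeping only low frequencies:
    multiply the FFT spectrum by a Gaussian centered at DC and invert."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None] * rows   # integer frequency indices
    v = np.fft.fftfreq(cols)[None, :] * cols
    H = np.exp(-(u**2 + v**2) / (2.0 * sigma**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def iterative_threshold(img, eps=0.5):
    """Global iterative threshold: start from the mean gray value and
    update to the midpoint of the two class means until stable."""
    t = img.mean()
    while True:
        lo, hi = img[img < t], img[img >= t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Toy depth image: uniform background (50) with one bright pill region (200)
depth_gray = np.full((32, 32), 50.0)
depth_gray[10:20, 10:20] = 200.0
smooth = gaussian_lowpass(depth_gray)
t = iterative_threshold(depth_gray)
print(round(t, 1))
```

Thresholding the filtered image at `t` then yields the binary pill mask used in the next step.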
Fig. 6 Fabric pilling sample of Grade 1 to Grade 5 
3.4 Pilling Evaluation
In the subjective evaluation method, the size, total area, and coverage of pills are the main factors influencing the expert evaluation^{[2]}. In this study, the pill number and pill area are selected as the features for evaluating fabric pilling. In the binary image, the pilling number is the number of pill regions. Pill pixels all have value 1, and connected regions are found using 8-connected objects: the first connected region encountered is labeled 1, and the search then proceeds successively. The pill area is the total number of pixels counted in a detected region, and the total pill area is denoted ${S}_{\mathrm{total}}$.
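The 8-connected labeling described above can be sketched as a plain-Python flood fill (the paper's implementation details are not given, so the function and variable names here are our own):

```python
from collections import deque

def label_pills(binary):
    """Count 8-connected pill regions in a binary image (list of lists of
    0/1) and return (number of pills, list of per-pill areas in pixels)."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    areas, current = [], 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] == 1 and labels[r][c] == 0:
                current += 1                      # new pill found: flood-fill it
                area, queue = 0, deque([(r, c)])
                labels[r][c] = current
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy in (-1, 0, 1):         # 8-connectivity neighborhood
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and binary[ny][nx] == 1
                                    and labels[ny][nx] == 0):
                                labels[ny][nx] = current
                                queue.append((ny, nx))
                areas.append(area)
    return current, areas

pills = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1],
         [1, 0, 0, 0]]
print(label_pills(pills))  # diagonally touching pixels merge under 8-connectivity
```

Summing the returned areas gives the total pill area ${S}_{\mathrm{total}}$.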
In this paper, the KNN classifier^{[17]} was used to classify the pilling samples with 2-fold cross-validation. The datasets are randomly divided into two subsets with equal numbers of samples; one subset is used as the training set, and the other as the test set. KNN classifies a test sample by computing the distance or similarity between the training samples and the test sample and finding the K training samples closest to it; K = 3 is set in this experiment. The category with the highest frequency among these 3 neighbors is taken as the predicted category of the test sample, which is its objective evaluation result. The accuracy ${P}_{\mathrm{ACC}}$ of the objective evaluation is determined by comparison with the subjective evaluation results.
${P}_{\mathrm{ACC}}=\dfrac{{N}_{\mathrm{T}}-{N}_{\mathrm{M}}}{{N}_{\mathrm{T}}}$ (14)
where ${N}_{\mathrm{M}}$ is the number of samples inconsistent between objective and subjective evaluation, and ${N}_{\mathrm{T}}$ is the total number of samples.
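The KNN grading step and the accuracy of Eq. (14) can be sketched as follows; the toy feature values and grade labels are invented for illustration only:

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples,
    using Euclidean distance on the (pill number, pill area) features."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

def accuracy(pred, truth):
    """Eq. (14): P_ACC = (N_T - N_M) / N_T."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    n_total = len(truth)
    n_miss = int(np.sum(pred != truth))
    return (n_total - n_miss) / n_total

# Toy features: [pill number, total pill area]; grades as labels
X = np.array([[30, 400], [28, 380], [5, 60], [4, 50], [29, 390], [6, 55]])
y = np.array([1, 1, 4, 4, 1, 4])
preds = [knn_predict(X, y, x) for x in X]
print(accuracy(preds, y))
```

With well-separated classes like these, every 3-neighbor vote is unanimous; the interesting cases in practice are samples near grade boundaries.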
As shown in Table 1, a total of seven samples are misclassified, giving the system a classification accuracy of 92.8%. Figure 7 shows a scatter plot of how the feature parameters relate to grade: the ordinate represents the pilling area, the abscissa represents the pilling number, points of different colors represent different pilling grades, and the misclassified samples are marked in red. Figure 7 illustrates that KNN is effective in classifying samples with small intra-class spacing and large inter-class spacing.
Fig. 7 Results of different evaluation methods 
Table 1 Result of objective evaluation
4 Conclusion
This paper proposed an effective way to objectively evaluate pilling images based on PS. Self-developed image acquisition equipment is used to capture multiple images under different illumination, and the semi-calibrated near-light PS algorithm is then used to reconstruct the fabric surface. The 3D model is converted into a 2D depth image for texture filtering. The transformed non-textured image is segmented into a binary image by the iterative threshold segmentation method, and the defined feature parameters of fabric pilling, including pilling number and area, are extracted. Finally, the KNN classifier is used to grade the fabric samples. The experimental results show that the system is effective and reliable for pilling evaluation. The method performs well for plain fabrics but is insufficient for patterned fabrics; future work should focus on building non-Lambertian models.
References
[1] Yun S Y, Kim S, Park C K. Development of an objective fabric pilling evaluation method[J]. Fibers and Polymers, 2013, 14(5): 832-837.
[2] Deng Z, Wang L, Wang X. An integrated method of feature extraction and objective evaluation of fabric pilling[J]. The Journal of the Textile Institute, 2011, 102(1): 1-13.
[3] Xiao Q, Wang R, Sun H Y, et al. Objective evaluation of fabric pilling based on image analysis and deep learning algorithm[J]. International Journal of Clothing Science and Technology, 2021, 33(4): 495-512.
[4] Wu J, Wang D, Xiao Z T, et al. Knitted fabric and nonwoven fabric pilling objective evaluation based on SONet[J]. The Journal of the Textile Institute, 2022, 113(7): 1418-1427.
[5] Kang T, Cho D H, Kim S. Objective evaluation of fabric pilling using stereovision[J]. Textile Research Journal, 2004, 74: 1013-1017.
[6] Xu B G, Yu W R, Wang R W. Stereovision for three-dimensional measurements of fabric pilling[J]. Textile Research Journal, 2011, 81(20): 2168-2179.
[7] Liu L L, Deng N, Xin B J, et al. Objective evaluation of fabric pilling based on multi-view stereo vision[J]. The Journal of the Textile Institute, 2021, 112(12): 1986-1997.
[8] Woodham R J. Photometric method for determining surface orientation from multiple images[J]. Optical Engineering, 1980, 19(1): 139-144.
[9] Quéau Y, Durix B, Wu T, et al. LED-based photometric stereo: Modeling, calibration and numerical solution[J]. Journal of Mathematical Imaging and Vision, 2018, 60(3): 313-340.
[10] Zhang Z Y. Flexible camera calibration by viewing a plane from unknown orientations[C]// Proceedings of the Seventh IEEE International Conference on Computer Vision. New York: IEEE, 1999: 666-673.
[11] Ahmad J, Sun J A, Smith L, et al. An improved photometric stereo through distance estimation and light vector optimization from diffused maxima region[J]. Pattern Recognition Letters, 2014, 50: 15-22.
[12] Canny J. A computational approach to edge detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, 8(6): 679-698.
[13] Quéau Y, Wu T, Lauze F, et al. A non-convex variational approach to photometric stereo under inaccurate lighting[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2017: 350-359.
[14] Kim Y, Kim S. Study on the integration of fabric pilling generation and evaluation system[J]. Textile Science and Engineering, 2016, 53(5): 360-365.
[15] Wang Y L, Deng N, Xin B J. Investigation of 3D surface profile reconstruction technology for automatic evaluation of fabric smoothness appearance[J]. Measurement, 2020, 166: 108264.
[16] Xu B. Instrumental evaluation of fabric pilling[J]. The Journal of the Textile Institute, 1997, 88(4): 488-500.
[17] Cover T, Hart P. Nearest neighbor pattern classification[J]. IEEE Transactions on Information Theory, 1967, 13(1): 21-27.