Open Access
Review
Wuhan Univ. J. Nat. Sci., Volume 30, Number 1, February 2025
Page(s): 1-20
DOI: https://doi.org/10.1051/wujns/2025301001
Published online: 12 March 2025
- He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2016: 770-778.
- Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//2017 Conference on Neural Information Processing Systems (NeurIPS). Long Beach: Neural Information Processing Systems Foundation, 2017: 5998-6008.
- Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[EB/OL]. [2014-02-19]. https://arxiv.org/abs/1312.6199.
- Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples[EB/OL]. [2015-03-20]. https://arxiv.org/abs/1412.6572.
- Carlini N, Wagner D. Towards evaluating the robustness of neural networks[C]//2017 IEEE Symposium on Security and Privacy (SP). New York: IEEE, 2017: 39-57.
- Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL]. [2019-09-04]. https://arxiv.org/abs/1706.06083.
- Kurakin A, Goodfellow I J, Bengio S. Adversarial examples in the physical world[C]//5th International Conference on Learning Representations. Chapman and Hall/CRC, 2017: 99-112.
- Zhang H Y, Yu Y D, Jiao J T, et al. Theoretically principled trade-off between robustness and accuracy[C]//2019 Proceedings of the 36th International Conference on Machine Learning (ICML). Long Beach: PMLR, 2019: 7472-7482.
- Zhang J F, Xu X L, Han B, et al. Attacks which do not kill training make adversarial learning stronger[C]//2020 Proceedings of the 37th International Conference on Machine Learning (ICML). Virtual Event: PMLR, 2020: 11278-11287.
- Nie W, Guo B, Huang Y, et al. Diffusion models for adversarial purification[C]//2022 Proceedings of the International Conference on Machine Learning (ICML). Baltimore: PMLR, 2022: 16805-16827.
- Jia X J, Zhang Y, Wu B Y, et al. LAS-AT: Adversarial training with learnable attack strategy[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans: IEEE, 2022: 13388-13398.
- Samangouei P, Kabkab M, Chellappa R. Defense-GAN: Protecting classifiers against adversarial attacks using generative models[EB/OL]. [2018-05-18]. https://arxiv.org/abs/1805.06605.
- Song Y, Kim T, Nowozin S, et al. PixelDefend: Leveraging generative models to understand and defend against adversarial examples[EB/OL]. [2018-05-21]. https://arxiv.org/abs/1710.10766.
- Serban A, Poll E, Visser J. Adversarial examples on object recognition[J]. ACM Computing Surveys, 2020, 53(3): 1-38.
- Machado G R, Silva E, Goldschmidt R R. Adversarial machine learning in image classification: A survey toward the defender's perspective[J]. ACM Computing Surveys, 2021, 55(1): 1-38.
- Long T, Gao Q, Xu L L, et al. A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions[J]. Computers & Security, 2022, 121: 102847.
- Wang J, Wang C Y, Lin Q Z, et al. Adversarial attacks and defenses in deep learning for image recognition: A survey[J]. Neurocomputing, 2022, 514: 162-181.
- Li Y J, Xie B, Guo S T, et al. A survey of robustness and safety of 2D and 3D deep learning models against adversarial attacks[J]. ACM Computing Surveys, 2024, 56(6): 1-37.
- Costa J C, Roxo T, Proença H, et al. How deep learning sees the world: A survey on adversarial attacks & defenses[J]. IEEE Access, 2024, 12: 61113-61136.
- Akhtar N, Mian A. Threat of adversarial attacks on deep learning in computer vision: A survey[J]. IEEE Access, 2018, 6: 14410-14430.
- Metzen J H, Kumar M C, Brox T, et al. Universal adversarial perturbations against semantic image segmentation[C]//2017 IEEE International Conference on Computer Vision (ICCV). New York: IEEE, 2017: 2774-2783.
- Xie C H, Wang J Y, Zhang Z S, et al. Adversarial examples for semantic segmentation and object detection[C]//2017 IEEE International Conference on Computer Vision (ICCV). New York: IEEE, 2017: 1378-1387.
- Pony R, Naeh I, Mannor S. Over-the-air adversarial flickering attacks against video recognition networks[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2021: 515-524.
- Eykholt K, Evtimov I, Fernandes E, et al. Robust physical-world attacks on deep learning visual classification[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2018: 1625-1634.
- Finlayson S G, Bowers J D, Ito J, et al. Adversarial attacks on medical machine learning[J]. Science, 2019, 363(6433): 1287-1289.
- Andriushchenko M, Croce F, Flammarion N, et al. Square attack: A query-efficient black-box adversarial attack via random search[C]//Lecture Notes in Computer Science. Cham: Springer-Verlag, 2020: 484-501.
- Zhao Z Y, Liu Z R, Larson M. Towards large yet imperceptible adversarial image perturbations with perceptual color distance[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2020: 1036-1045.
- Xiao C W, Zhu J Y, Li B, et al. Spatially transformed adversarial examples[EB/OL]. [2018-01-09]. https://arxiv.org/abs/1801.02612.
- Brown T B, Mané D, Roy A, et al. Adversarial patch[EB/OL]. [2018-05-17]. https://arxiv.org/abs/1712.09665.
- Li J, Schmidt F R, Kolter J Z. Adversarial camera stickers: A physical camera-based attack on deep learning systems[C]//2019 Proceedings of the 36th International Conference on Machine Learning (ICML). Long Beach: PMLR, 2019: 3896-3904.
- Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//2014 Proceedings of the Advances in Neural Information Processing Systems (NeurIPS). Montreal: NeurIPS, 2014: 2672-2680.
- He Z W, Wang W, Dong J, et al. Transferable sparse adversarial attack[EB/OL]. [2021-05-31]. https://arxiv.org/abs/2105.14727.
- Papernot N, McDaniel P, Jha S, et al. The limitations of deep learning in adversarial settings[C]//2016 IEEE European Symposium on Security and Privacy (EuroS&P). New York: IEEE, 2016: 372-387.
- Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
- Zhang R, Isola P, Efros A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2018: 586-595.
- Song Y, Shu R, Kushman N, et al. Constructing unrestricted adversarial examples with generative models[C]//2018 Proceedings of the Advances in Neural Information Processing Systems (NeurIPS). Montreal: NeurIPS, 2018: 8322-8333.
- Moosavi-Dezfooli S M, Fawzi A, Frossard P. DeepFool: A simple and accurate method to fool deep neural networks[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2016: 2574-2582.
- Moosavi-Dezfooli S M, Fawzi A, Fawzi O, et al. Universal adversarial perturbations[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2017: 86-94.
- Brendel W, Rauber J, Bethge M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models[EB/OL]. [2018-02-16]. https://arxiv.org/abs/1712.04248.
- Modas A, Moosavi-Dezfooli S M, Frossard P. SparseFool: A few pixels make a big difference[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2019: 9079-9088.
- Xie C H, Zhang Z S, Zhou Y Y, et al. Improving transferability of adversarial examples with input diversity[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2019: 2725-2734.
- Dong Y P, Pang T Y, Su H, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2019: 4307-4316.
- Lin J D, Song C B, He K, et al. Nesterov accelerated gradient and scale invariance for adversarial attacks[EB/OL]. [2020-02-03]. https://arxiv.org/abs/1908.06281v5.
- Chen X S, Yan X Y, Zheng F, et al. One-shot adversarial attacks on visual tracking with dual attention[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2020: 10173-10182.
- Croce F, Hein M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks[C]//2020 Proceedings of the 37th International Conference on Machine Learning (ICML). Virtual Event: PMLR, 2020: 2206-2216.
- Qiu H N, Xiao C W, Yang L, et al. SemanticAdv: Generating adversarial examples via attribute-conditioned image editing[C]//European Conference on Computer Vision. Cham: Springer-Verlag, 2020: 19-37.
- Chen J H, Gu Q Q. RayS: A ray searching method for hard-label adversarial attack[C]//2020 Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York: ACM, 2020: 1739-1747.
- Mahmood K, Nguyen P H, Nguyen L M, et al. BUZz: Buffer zones for defending adversarial examples in image classification[EB/OL]. [2020-06-16]. https://arxiv.org/abs/1910.02785.
- Chen S Z, He Z B, Sun C J, et al. Universal adversarial attack on attention and the resulting dataset DAmageNet[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(4): 2188-2197.
- Wu W B, Su Y X, Chen X X, et al. Boosting the transferability of adversarial samples via attention[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2020: 1158-1167.
- Mahmood K, Mahmood R, van Dijk M. On the robustness of vision transformers to adversarial examples[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). New York: IEEE, 2021: 7818-7827.
- Shahin Shamsabadi A, Oh C, Cavallaro A. Semantically adversarial learnable filters[J]. IEEE Transactions on Image Processing, 2021, 30: 8075-8087.
- Zhao Z Y, Liu Z R, Larson M A. On success and simplicity: A second look at transferable targeted attacks[C]//2021 Advances in Neural Information Processing Systems (NeurIPS). Virtual: NeurIPS, 2021: 6115-6128.
- Hu S S, Liu X G, Zhang Y C, et al. Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2022: 14994-15003.
- Zhang J P, Wu W B, Huang J T, et al. Improving adversarial transferability via neuron attribution-based attacks[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2022: 14973-14982.
- Bai Y, Wang Y S, Zeng Y Y, et al. Query efficient black-box adversarial attack on deep neural networks[J]. Pattern Recognition, 2023, 133: 109037.
- Chen Z Y, Li B, Wu S, et al. Content-based unrestricted adversarial attack[EB/OL]. [2023-11-29]. https://arxiv.org/abs/2305.10665.
- Duan M X, Qin Y C, Deng J Y, et al. Dual attention adversarial attacks with limited perturbations[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 99: 1-15.
- Wang X S, He K. Enhancing the transferability of adversarial attacks through variance tuning[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2021: 1924-1933.
- Jain S, Dutta T. Towards understanding and improving adversarial robustness of vision transformers[C]//2024 Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2024: 24736-24745.
- Wang Y S, Zou D F, Yi J F, et al. Improving adversarial robustness requires revisiting misclassified examples[C]//2020 8th International Conference on Learning Representations (ICLR). Addis Ababa: OpenReview.net, 2020.
- Wong E, Rice L, Kolter J Z. Fast is better than free: Revisiting adversarial training[EB/OL]. [2020-01-12]. https://arxiv.org/abs/2001.03994v1.
- Xie C H, Wu Y X, van der Maaten L, et al. Feature denoising for improving adversarial robustness[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2019: 501-509.
- Madaan D, Shin J, Hwang S J. Adversarial neural pruning with latent vulnerability suppression[C]//2020 Proceedings of the 37th International Conference on Machine Learning (ICML). Virtual Event: PMLR, 2020: 6575-6585.
- Kim W J, Cho Y, Jung J, et al. Feature separation and recalibration for adversarial robustness[C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2023: 8183-8192.
- Huang X W, Kwiatkowska M, Wang S, et al. Safety verification of deep neural networks[C]//2017 Proceedings of the 29th International Conference on Computer Aided Verification (CAV). Heidelberg: Springer-Verlag, 2017: 3-29.
- Gowal S, Dvijotham K, Stanforth R, et al. On the effectiveness of interval bound propagation for training verifiably robust models[EB/OL]. [2019-08-29]. https://arxiv.org/abs/1810.12715.
- Tramèr F, Kurakin A, Papernot N, et al. Ensemble adversarial training: Attacks and defenses[EB/OL]. [2020-04-26]. https://arxiv.org/abs/1705.07204v5.
- Guo C, Rana M, Cissé M, et al. Countering adversarial images using input transformations[EB/OL]. [2018-01-25]. https://arxiv.org/abs/1711.00117v2.
- Hill M, Mitchell J C, Zhu S C. Stochastic security: Adversarial defense using long-run dynamics of energy-based models[EB/OL]. [2021-03-18]. https://arxiv.org/abs/2005.13525.
- Ehlers R. Formal verification of piece-wise linear feed-forward neural networks[C]//2017 Proceedings of the 15th International Symposium on Automated Technology for Verification and Analysis (ATVA). Heidelberg: Springer-Verlag, 2017: 269-286.
- Tjeng V, Xiao K Y, Tedrake R. Evaluating robustness of neural networks with mixed integer programming[EB/OL]. [2019-02-18]. https://arxiv.org/abs/1711.07356v2.
- Qin Y, Zhang C Y, Chen T, et al. Understanding and improving robustness of vision transformers through patch-based negative augmentation[J]. Advances in Neural Information Processing Systems, 2022, 35: 16276-16289.
- Dong J H, Moosavi-Dezfooli S M, Lai J H, et al. The enemy of my enemy is my friend: Exploring inverse adversaries for improving adversarial training[C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2023: 24678-24687.
- Jia X J, Li J S, Gu J D, et al. Fast propagation is better: Accelerating single-step adversarial training via sampling subnetworks[J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 4547-4559.
- Ali K, Bhatti M S, Saeed A, et al. Adversarial robustness of vision transformers versus convolutional neural networks[J]. IEEE Access, 2024, 12: 105281-105293.
- Meng D Y, Chen H. MagNet: A two-pronged defense against adversarial examples[C]//Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2017: 135-147.
- Jia X J, Wei X X, Cao X C, et al. ComDefend: An efficient image compression model to defend adversarial examples[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2019: 6077-6085.
- Zhou J L, Liang C, Chen J. Manifold projection for adversarial defense on face recognition[C]//Computer Vision--ECCV 2020: 16th European Conference. Berlin: Springer-Verlag, 2020: 288-305.
- Ho C H, Vasconcelos N. DISCO: Adversarial defense with local implicit functions[J]. Advances in Neural Information Processing Systems, 2022, 35: 23818-23837.
- Wong E, Kolter J Z. Provable defenses against adversarial examples via the convex outer adversarial polytope[C]//2018 Proceedings of the 35th International Conference on Machine Learning (ICML). Stockholm: PMLR, 2018, 80: 5283-5292.
- Chiang P Y, Ni R K, Abdelkader A, et al. Certified defenses for adversarial patches[EB/OL]. [2020-09-25]. https://arxiv.org/abs/2003.06693.
- Chen Z Y, Li B, Xu J H, et al. Towards practical certifiable patch defense with vision transformer[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2022: 15127-15137.
- Pang T Y, Xu K, Du C, et al. Improving adversarial robustness via promoting ensemble diversity[C]//2019 Proceedings of the 36th International Conference on Machine Learning (ICML). Long Beach: PMLR, 2019, 97: 4970-4979.
- Bui A T, Le T, Zhao H, et al. Improving ensemble robustness by collaboratively promoting and demoting adversarial robustness[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(8): 6831-6839.
- Wang H J, Wang Y S. Self-ensemble adversarial training for improved robustness[EB/OL]. [2022-05-03]. https://arxiv.org/abs/2203.09678.
- Deng Y A, Mu T T. Understanding and improving ensemble adversarial defense[EB/OL]. [2023-11-02]. https://arxiv.org/abs/2310.18477.
- Gilmer J, Metz L, Faghri F, et al. Adversarial spheres[EB/OL]. [2018-09-10]. https://arxiv.org/abs/1801.02774.
- Tsipras D, Santurkar S, Engstrom L, et al. Robustness may be at odds with accuracy[EB/OL]. [2019-09-09]. https://arxiv.org/abs/1805.12152.
- Salman H, Khaddaj A, Leclerc G, et al. Raising the cost of malicious AI-powered image editing[EB/OL]. [2023-02-13]. https://arxiv.org/abs/2302.06588.
- Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models[EB/OL]. [2020-12-16]. https://arxiv.org/abs/2006.11239.
- Cao Q, Shen L, Xie W D, et al. VGGFace2: A dataset for recognising faces across pose and age[C]//2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). New York: IEEE, 2018: 67-74.
- Gao J, Lanchantin J, Soffa M L, et al. Black-box generation of adversarial text sequences to evade deep learning classifiers[C]//2018 IEEE Security and Privacy Workshops (SPW). New York: IEEE, 2018: 50-56.
- Ren S H, Deng Y H, He K, et al. Generating natural language adversarial examples through probability weighted word saliency[C]//2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: Association for Computational Linguistics, 2019: 1085-1097.
- Jin D, Jin Z J, Zhou J T, et al. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(5): 8018-8025.
- Lin Y C, Hong Z W, Liao Y H, et al. Tactics of adversarial attack on deep reinforcement learning agents[C]//2017 Proceedings of the 26th International Joint Conference on Artificial Intelligence. Berkeley: International Joint Conferences on Artificial Intelligence Organization, 2017: 3756-3762.
- Liu F, Shroff N B. Data poisoning attacks on stochastic bandits[C]//2019 Proceedings of the 36th International Conference on Machine Learning (ICML). Long Beach: PMLR, 2019: 4042-4050.
- Gleave A, Dennis M, Wild C, et al. Adversarial policies: Attacking deep reinforcement learning[C]//2020 Proceedings of the 8th International Conference on Learning Representations (ICLR). Addis Ababa: OpenReview.net, 2020.
- Papernot N, McDaniel P, Goodfellow I. Transferability in machine learning: From phenomena to black-box attacks using adversarial samples[EB/OL]. [2016-05-24]. https://arxiv.org/abs/1605.07277.
- Chen Y B, Liu W W. A theory of transfer-based black-box attacks: Explanation and implications[C]//NIPS'23: 37th International Conference on Neural Information Processing Systems. New York: Curran Associates, Inc., 2024: 13887-13907.