Issue: Wuhan Univ. J. Nat. Sci., Volume 30, Number 1, February 2025
Page(s): 1-20
DOI: https://doi.org/10.1051/wujns/2025301001
Published online: 12 March 2025
Computer Science
CLC number: TP183
A Survey of Adversarial Examples in Computer Vision: Attack, Defense, and Beyond
1 School of Computer Science, Wuhan University, Wuhan 430072, Hubei, China
2 National Engineering Research Center for Multimedia Software (NERCMS), Wuhan University, Wuhan 430072, Hubei, China
3 Key Laboratory of Multimedia and Network Communication Engineering, Hubei Province, Wuhan University, Wuhan 430072, Hubei, China
4 School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, Hubei, China
† Corresponding author. E-mail: cliang@whu.edu.cn
Received: 12 July 2024
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms that generate adversarial examples, (2) adversarial defense techniques that secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including theoretical explanations, trade-off issues, and benign attacks. Additionally, we draw a brief comparison between recently published surveys on adversarial examples, and identify future directions for research on adversarial examples, such as the generalization of methods and the understanding of transferability, that might offer solutions to the open problems in this field.
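To make the vulnerability described above concrete, the sketch below illustrates one classic attack algorithm of the kind the survey covers, the Fast Gradient Sign Method (FGSM), on a toy logistic-regression "network". This is not from the survey itself; the model, weights, and epsilon value are all hypothetical, chosen only to show how a small, bounded perturbation in the direction of the loss gradient pushes the model away from its original prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # fixed weights of a toy linear classifier
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability of class 1 under the toy model."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps=0.3):
    """One-step FGSM: perturb x by eps in the direction increasing the loss.

    For binary cross-entropy and this linear model, the input gradient is
    dL/dx = (p - y) * w, so the attack adds eps * sign((p - y) * w).
    Every coordinate moves by at most eps, so the change stays small.
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=16)
y = 1.0 if predict(x) > 0.5 else 0.0   # take the model's own output as the label
x_adv = fgsm(x, y)

print("clean score:", predict(x))
print("adversarial score:", predict(x_adv))
```

With this linear model, each FGSM step shifts the logit by exactly `eps * sum(|w_i|)` away from the labeled class, so the adversarial score is provably worse than the clean one even though no coordinate of the input moved by more than `eps`.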
Key words: computer vision / adversarial examples / adversarial attack / adversarial defense
Cite this article: XU Keyizhi, LU Yajuan, WANG Zhongyuan, et al. A Survey of Adversarial Examples in Computer Vision: Attack, Defense, and Beyond[J]. Wuhan Univ J of Nat Sci, 2025, 30(1): 1-20.
Biography: XU Keyizhi, male, Master candidate, research direction: AI security, adversarial examples. E-mail: xukeyizhi@whu.edu.cn
Foundation item: Supported by the National Natural Science Foundation of China (U1903214, 62372339, 62371350, 61876135), the Ministry of Education Industry-University Cooperative Education Project (202102246004, 220800006041043, 202002142012), and the Fundamental Research Funds for the Central Universities (2042023kf1033)
© Wuhan University 2025
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.