Open Access
Wuhan Univ. J. Nat. Sci.
Volume 28, Number 1, February 2023
Page(s) 35 - 44
DOI https://doi.org/10.1051/wujns/2023281035
Published online 17 March 2023