Open Access
Wuhan Univ. J. Nat. Sci.
Volume 29, Number 4, August 2024
Pages 338-348
DOI: https://doi.org/10.1051/wujns/2024294338
Published online: 04 September 2024
1. Tong Y Q, Liu J F, Liu S Z. China is implementing "Garbage Classification" action[J]. Environmental Pollution, 2020, 259: 113707.
2. Cheng Y W, Zhu J N, Jiang M X, et al. FloW: A dataset and benchmark for floating waste detection in inland waters[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). New York: IEEE, 2021: 10933-10942.
3. Zhou T H, Yang M M, Jiang K, et al. MMW radar-based technologies in autonomous driving: A review[J]. Sensors, 2020, 20(24): 7283.
4. Bansal M, Kumar M, Kumar M. 2D object recognition: A comparative analysis of SIFT, SURF and ORB feature descriptors[J]. Multimedia Tools and Applications, 2021, 80(12): 18839-18857.
5. Wei Y, Tian Q, Guo J, et al. Multi-vehicle detection algorithm through combining Harr and HOG features[J]. Mathematics and Computers in Simulation, 2018, 155: 130-145.
6. Campbell C, Ying Y M. Learning with Support Vector Machines[M]. Cham: Springer International Publishing, 2011.
7. Charbuty B, Abdulazeez A. Classification based on decision tree algorithm for machine learning[J]. Journal of Applied Science and Technology Trends, 2021, 2(1): 20-28.
8. Bharati P, Pramanik A. Deep learning techniques—R-CNN to Mask R-CNN: A survey[C]//Computational Intelligence in Pattern Recognition. Singapore: Springer-Verlag, 2020: 657-668.
9. Ren S Q, He K M, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
10. Liu Q P, Bi J J, Zhang J W, et al. B-FPN SSD: An SSD algorithm based on a bidirectional feature fusion pyramid[J]. The Visual Computer, 2023, 39(12): 6265-6277.
11. Huang L C, Wang Z W, Fu X B. Pedestrian detection using RetinaNet with multi-branch structure and double pooling attention mechanism[J]. Multimedia Tools and Applications, 2024, 83(2): 6051-6075.
12. Diwan T, Anirudh G, Tembhurne J V. Object detection using YOLO: Challenges, architectural successors, datasets and applications[J]. Multimedia Tools and Applications, 2023, 82(6): 9243-9275.
13. Wang C Y, Bochkovskiy A, Liao H Y M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2023: 7464-7475.
14. Ding X H, Zhang X Y, Ma N N, et al. RepVGG: Making VGG-style ConvNets great again[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2021: 13728-13737.
15. Lee Y, Hwang J W, Lee S, et al. An energy and GPU-computation efficient backbone network for real-time object detection[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). New York: IEEE, 2019: 752-760.
16. Zand M, Etemad A, Greenspan M. Oriented bounding boxes for small and freely rotated objects[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 4701715.
17. Zhang Y Q, Bai Y C, Ding M L, et al. Multi-task generative adversarial network for detecting small objects in the wild[J]. International Journal of Computer Vision, 2020, 128(6): 1810-1828.
18. Gao C, Tang W, Jin L Z, et al. Exploring effective methods to improve the performance of tiny object detection[C]//European Conference on Computer Vision. Cham: Springer-Verlag, 2020: 331-336.
19. Leng J X, Ren Y H, Jiang W, et al. Realize your surroundings: Exploiting context information for small object detection[J]. Neurocomputing, 2021, 433: 287-299.
20. Zhu X K, Lyu S C, Wang X, et al. TPH-YOLOv5: Improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios[C]//2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). New York: IEEE, 2021: 2778-2788.
21. Benjumea A, Teeti I, Cuzzolin F, et al. YOLO-Z: Improving small object detection in YOLOv5 for autonomous vehicles[EB/OL]. [2021-10-01]. http://arxiv.org/abs/2112.11798.
22. Qi L G, Gao J L. Small object detection based on improved YOLOv7[J]. Computer Engineering, 2023, 49: 41-48(Ch).
23. Wang X R, Xu Y, Zhou J P, et al. Safflower picking recognition in complex environments based on an improved YOLOv7[J]. Transactions of the Chinese Society of Agricultural Engineering, 2023, 39(6): 169-176.
24. Kang J, Wang Q, Liu W, et al. Detection model of aerial photo insulator multi-defect by integrating CAT-BiFPN and attention mechanism[J]. High Voltage Engineering, 2023, 49: 3361-3376(Ch).
25. Tan M X, Pang R M, Le Q V. EfficientDet: Scalable and efficient object detection[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2020: 10778-10787.
26. Jiang Y Q, Tan Z Y, Wang J Y, et al. GiraffeDet: A heavy-neck paradigm for object detection[EB/OL]. [2022-10-01]. http://arxiv.org/abs/2202.04256.
27. Li Y T, Fan Q S, Huang H S, et al. A modified YOLOv8 detection network for UAV aerial image recognition[J]. Drones, 2023, 7(5): 304.
28. Tang Y H, Han K, Guo J Y, et al. GhostNetV2: Enhance cheap operation with long-range attention[J]. Advances in Neural Information Processing Systems, 2022, 35: 9969-9982.
29. Rezatofighi H, Tsoi N, Gwak J Y, et al. Generalized intersection over union: A metric and a loss for bounding box regression[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2019: 658-666.
30. Zheng Z H, Wang P, Liu W, et al. Distance-IoU loss: Faster and better learning for bounding box regression[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 12993-13000.
31. Tong Z J, Chen Y H, Xu Z W, et al. Wise-IoU: Bounding box regression loss with dynamic focusing mechanism[EB/OL]. [2023-10-01]. http://arxiv.org/abs/2301.10051.
32. Ge Z, Liu S T, Wang F, et al. YOLOX: Exceeding YOLO series in 2021[EB/OL]. [2021-10-01]. http://arxiv.org/abs/2107.08430.
33. Wang Y, Wang H Y, Xin Z H. Efficient detection model of steel strip surface defects based on YOLO-V7[J]. IEEE Access, 2022, 10: 133936-133944.
34. Zhang Y Y, Hong D, McClement D, et al. Grad-CAM helps interpret the deep learning models trained to classify multiple sclerosis types using clinical brain magnetic resonance imaging[J]. Journal of Neuroscience Methods, 2021, 353: 109098.
