Open Access

| | |
|---|---|
| Issue | Wuhan Univ. J. Nat. Sci., Volume 28, Number 6, December 2023 |
| Page(s) | 493 - 507 |
| DOI | https://doi.org/10.1051/wujns/2023286493 |
| Published online | 15 January 2024 |