Open Access
Wuhan Univ. J. Nat. Sci., Volume 30, Number 3, June 2025
Page(s): 222-230
DOI: https://doi.org/10.1051/wujns/2025303222
Published online: 16 July 2025