Open Access
Wuhan Univ. J. Nat. Sci., Volume 28, Number 3, June 2023, Pages 237-245
DOI: https://doi.org/10.1051/wujns/2023283237
Published online: 13 July 2023
