Open Access
Wuhan Univ. J. Nat. Sci.
Volume 27, Number 6, December 2022
Page(s) 465 - 475
Published online 10 January 2023