Open Access
Wuhan Univ. J. Nat. Sci.
Volume 28, Number 4, August 2023
Pages: 299-308
DOI: https://doi.org/10.1051/wujns/2023284299
Published online: 06 September 2023