Open Access
Wuhan Univ. J. Nat. Sci.
Volume 29, Number 2, April 2024
Page(s) 125 - 133
DOI https://doi.org/10.1051/wujns/2024292125
Published online 14 May 2024
  1. Lafferty J, McCallum A, Pereira F C N. Conditional random fields: Probabilistic models for segmenting and labeling sequence data[C]//Proceedings of the 18th International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 2001: 282-289.
  2. Fritzler A, Logacheva V, Kretov M. Few-shot classification in named entity recognition task[C]//Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing. New York: ACM, 2019: 993-1000.
  3. Yang Y, Katiyar A. Simple and effective few-shot named entity recognition with structured nearest neighbor learning[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 6365-6375.
  4. Das S S S, Katiyar A, Passonneau R J, et al. CONTaiNER: Few-shot named entity recognition via contrastive learning[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 6338-6353.
  5. Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2017: 4077-4087.
  6. Wang P Y, Xu R X, Liu T Y, et al. An enhanced span-based decomposition method for few-shot sequence labeling[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 5012-5024.
  7. Ma T T, Jiang H Q, Wu Q H, et al. Decomposed meta-learning for few-shot named entity recognition[C]//Proceedings of the 2022 Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 1584-1596.
  8. Wang J N, Wang C Y, Tan C Q. SpanProto: A two-stage span-based prototypical network for few-shot named entity recognition[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2022: 3466-3476.
  9. Li Y Q, Yu Y, Qian T Y. Type-aware decomposed framework for few-shot named entity recognition[EB/OL]. [2023-10-16]. https://arxiv.org/pdf/2302.06397.pdf.
  10. Zhu E W, Li J P. Boundary smoothing for named entity recognition[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 7096-7108.
  11. He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2016: 770-778.
  12. Qiao S Y, Liu C X, Shen W, et al. Few-shot image recognition by predicting parameters from activations[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2018: 7229-7238.
  13. Finn C, Abbeel P, Levine S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the 34th International Conference on Machine Learning. New York: ICML, 2017: 1126-1135.
  14. Li Z G, Zhou F W, Chen F, et al. Meta-SGD: Learning to learn quickly for few-shot learning[EB/OL]. [2017-09-28]. https://arxiv.org/pdf/1707.09835.pdf.
  15. Jiang X, Havaei M, Chartrand G, et al. On the importance of attention in meta-learning for few-shot text classification[EB/OL]. [2018-06-03]. https://arxiv.org/pdf/1806.00852.pdf.
  16. Gu J T, Wang Y, Chen Y, et al. Meta-learning for low-resource neural machine translation[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 3622-3631.
  17. Zhan R Z, Liu X B, Wong D F, et al. Meta-curriculum learning for domain adaptation in neural machine translation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Washington: AAAI, 2021: 14310-14318.
  18. Sun S L, Sun Q F, Zhou K, et al. Hierarchical attention prototypical networks for few-shot text classification[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 476-485.
  19. Geng R Y, Li B H, Li Y B, et al. Dynamic memory induction networks for few-shot text classification[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 1087-1094.
  20. Han C C, Fan Z Q, Zhang D X, et al. Meta-learning adversarial domain adaptation network for few-shot text classification[C]//Proceedings of the 2021 Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2021: 1664-1673.
  21. Hou Y Y, Che W X, Lai Y K, et al. Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 1381-1393.
  22. Ji B, Li S S, Gan S D, et al. Few-shot named entity recognition with entity-level prototypical network enhanced by dispersedly distributed prototypes[C]//Proceedings of the 29th International Conference on Computational Linguistics. Gyeongju: International Committee on Computational Linguistics, 2022: 1842-1854.
  23. Chen Y F, Huang Z, Hu M H, et al. Decoupled two-phase framework for class-incremental few-shot named entity recognition[J]. Tsinghua Science and Technology, 2023, 28(5): 976-987.
  24. Wang H M, Cheng L Y, Zhang W X, et al. Enhancing few-shot NER with prompt ordering based data augmentation[EB/OL]. [2023-05-19]. https://arxiv.org/pdf/2305.11791.pdf.
  25. Chen S G, Aguilar G, Neves L, et al. Data augmentation for cross-domain named entity recognition[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 5346-5356.
  26. Zhou R, Li X, He R D, et al. MELM: Data augmentation with masked entity language modeling for low-resource NER[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 2251-2262.
  27. Zhang M Z, Yan H, Zhou Y Q, et al. PromptNER: A prompting method for few-shot named entity recognition via k nearest neighbor search[EB/OL]. [2023-05-19]. https://arxiv.org/pdf/2305.12217.pdf.
  28. Yu J T, Bohnet B, Poesio M. Named entity recognition as dependency parsing[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 6470-6476.
  29. Ding N, Chen Y L, Cui G Q, et al. Few-shot classification with hypersphere modeling of prototypes[C]//Proceedings of the 2023 Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 895-917.
  30. Ding N, Xu G W, Chen Y L, et al. Few-NERD: A few-shot named entity recognition dataset[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2021: 3198-3213.
  31. Tjong Kim Sang E F, De Meulder F. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition[C]//Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. Stroudsburg: ACL, 2003: 142-147.
  32. Derczynski L, Nichols E, Van Erp M, et al. Results of the WNUT2017 shared task on novel and emerging entity recognition[C]//Proceedings of the 3rd Workshop on Noisy User-generated Text. Stroudsburg: ACL, 2017: 140-147.
  33. Pradhan S, Moschitti A, Xue N W, et al. Towards robust linguistic analysis using OntoNotes[C]//Proceedings of the 17th Conference on Computational Natural Language Learning. Stroudsburg: ACL, 2013: 143-152.
  34. Zeldes A. The GUM corpus: Creating multilayer resources in the classroom[J]. Language Resources and Evaluation, 2017, 51(3): 581-612.
  35. Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2019: 4171-4186.
  36. Loshchilov I, Hutter F. Decoupled weight decay regularization[EB/OL]. [2017-11-14]. https://arxiv.org/pdf/1711.05101.pdf.
  37. Vinyals O, Blundell C, Lillicrap T, et al. Matching networks for one shot learning[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2016: 3630-3638.
  38. Van der Maaten L J P, Hinton G E. Visualizing data using t-SNE[J]. Journal of Machine Learning Research, 2008, 9: 2579-2605.
