Open Access
Wuhan Univ. J. Nat. Sci.
Volume 30, Number 1, February 2025
Page(s): 21-31
DOI: https://doi.org/10.1051/wujns/2025301021
Published online: 12 March 2025