Wuhan Univ. J. Nat. Sci.
Volume 27, Number 4, August 2022
Page(s) 331 - 340
DOI https://doi.org/10.1051/wujns/2022274331
Published online 26 September 2022

© Wuhan University 2022

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

The Education Informatization 2.0 Action Plan emphasizes establishing and improving a sustainable development mechanism for education informatization and building a networked, digital, intelligent, personalized and lifelong education system. Artificial intelligence is an important driving force for the future innovation and development of education, and its new applications in education will strongly promote the formation of educational Artificial Intelligence (eAI), a new research and application paradigm[1]. Artificial intelligence technology, represented by the knowledge graph, provides a technical guarantee for model construction in the field of education. Knowledge construction, knowledge fusion, knowledge discovery and knowledge reasoning are important methods for realizing intelligent education.

For teachers, artificial intelligence will greatly change their working mode[2]. In traditional teaching, the teacher is the absolute core of the classroom and the transmitter of knowledge, while students are its recipients. Between teacher and students there is a one-to-many, one-way radiation relationship, so it is difficult for students to get timely feedback from teachers. In recent years, the mobile Internet has driven the application and development of swarm intelligence, which is characterized by solving problems with the mass of people on the Internet and their collective wisdom. Integrating the massive, diverse and high-quality open-source resources generated by the wisdom of crowds into the teaching process can address the outstanding problems of insufficient quantity and low quality of resources in current teaching. Open-source data, technology trends and knowledge questions from the Internet can help students answer the technical problems they encounter while learning and accumulate teaching and learning experience and skills[3, 4].

The teaching method based on open source and swarm intelligence is highly effective. Its essence is to help students overcome the shortcomings of current methods and solve practical problems with the help of the massive crowds and open-source resources on the Internet. Based on open-source swarm intelligence and the knowledge graph, we build an intelligent brain model to take over part of teachers' repetitive knowledge-teaching work.

1 Research Status and Development Trends

Li et al[5] pointed out that, as an important enabling technology for artificial intelligence in education, the educational knowledge graph can represent the multi-level, multi-granularity knowledge pedigree and cognitive process of each discipline based on the rich multi-source heterogeneous data and resources generated in teaching, which makes it possible to meet the modeling requirements of discipline knowledge ontologies in education and teaching. Current research on knowledge graphs in curriculum construction can be summarized as follows: constructing knowledge graphs for specific courses based on electronic teaching materials, using various deep learning or machine learning technologies, and realizing visualization, retrieval and recommendation on top of the graphs. Zhong[6] established an educational knowledge atlas model supported by artificial intelligence. However, the current educational knowledge graph still has the following deficiencies in data sources, knowledge discovery and knowledge fusion:

1) At present, many researchers construct curriculum knowledge graphs from electronic textbooks, which leads to deficiencies in frontier coverage, timeliness and scale. When constructing knowledge graphs for programming courses, we found that these deficiencies result from the following aspects:

(ⅰ) Wide data sources

How can data sources be coordinated so that each plays its role without conflict? Most predecessors built the curriculum knowledge graph from a single data source. Hu[7] built the graph from electronic teaching materials and directories in PDF and Word format or from vertical websites. Guo[8] constructed the graph from an open knowledge base and online encyclopedias.

(ⅱ) Large data scale

How to design a variety of recognition methods that coordinate with each other according to the characteristics of entities and relations is worth exploring.

(ⅲ) Multiple data conflicts

Zhou[9] pointed out that the heterogeneous characteristics of big data can easily cause conflicts among data sources. Therefore, how to sort out the data from the various sources, evaluate their advantages and disadvantages, and handle the conflicts must be addressed.

(ⅳ) High data update frequency

Resources evolve dynamically and are updated continuously; the Q&A documents in open-source software resources receive huge daily updates. How to use the new data to update the existing knowledge graph in real time is a pressing concern in academia.

Therefore, we construct the course and open-source knowledge graphs from the bottom up, taking into account four types of relations: synonymous, upper-lower, attribute and others. Multi-data-source, multi-feature and multi-method coordination, conflict handling and dynamic updating are included in the framework. The characteristics of data-source coordination, automated data identification and automated training-set labeling make the framework well suited to bootstrapping[10]. The idea is to recognize new data automatically and then use the new data to re-label the recognizer's training set, improving the quality of its training.

2) The difficulties of subgraph fusion under a unified semantic representation of heterogeneous knowledge graphs are as follows:

(ⅰ) Semantic unification is difficult. Subgraph fusion is not a rough accumulation of subgraph entities but the fusion of semantically similar entities and relations. Wu[11] considered the text semantics of entity labels when fusing student subgraphs of the same course. Kristiadi et al[12] proposed that it is important to study the semantic representation of heterogeneous knowledge subgraphs and semantics-based similarity calculation.

(ⅱ) The weights are subjective. Subgraph fusion depends largely on the weights of the entities and relations in each knowledge subgraph. Our study found that the more important a course knowledge point is, the more frequently it appears in open-source software resources; the hot topics in open-source software resources often reflect the practical importance of knowledge points. Guo[13] allowed students to assign weights to entities according to their perceived difficulty and to evaluate other subgraphs and their entities. Wu[11] allowed students to score the relations in the subgraph subjectively. Therefore, it is feasible to assign objective weights to subgraphs using open-source software resources.

(ⅲ) The fusion is monotonous. Gu[14] used a bi-adjacency matrix to consider both the structural information and the semantic information of label text. Because the subgraphs to be processed were originally under an isomorphic semantic representation, the label-text semantics only remained at the level of a general dictionary. Li et al[15] argued that integrating structural and semantic information is a problem that must be solved. The domain semantic information of heterogeneous subgraphs should therefore be considered when using a bi-adjacency matrix to fuse subgraphs.

(ⅳ) The fusion granularity is coarse. The integration of curriculum subgraphs connects the curriculum group horizontally. How to explore the overlapping areas between curriculum subgraphs at a finer granularity is a practical problem. We explore the potential relations between knowledge points within the curriculum at a finer granularity, i.e., the overlapping areas, and identify the backbone shared by all courses.

3) The difficulties of knowledge discovery and natural-language representation based on the knowledge graph are as follows:

(ⅰ) Low creativity. Most predecessors found new entities and relations through formal logical reasoning. We should study how to realize knowledge discovery through prediction under data thinking: first assume the existence of a new entity or relation, then predict the probability of its existence with a prediction model; if the probability is high, new knowledge is discovered.

(ⅱ) Low naturalness. In knowledge discovery for the program design course, Wu[16] searched for the 10 Java knowledge points with the highest similarity, and Huang[17] retrieved the top 10 Python knowledge points. Retrieval of discrete entities and relations belongs only to the field of information retrieval; if we want to integrate the retrieval results and express them in natural language, natural language processing is required.

2 The Construction of the Curriculum Intelligent Brain Model

In order to fuse the knowledge graphs of multiple open-source swarm-intelligence-supported courses and realize knowledge discovery, this study designs the curriculum intelligent brain model, as shown in Fig. 1. The model is constructed in three stages: knowledge learning, knowledge fusion and knowledge application.

Fig. 1 Course intelligent brain model supported by group intelligence

The knowledge learning stage is shown in Fig. 1-①. Data are obtained from the open knowledge base, Wikipedia, interactive encyclopedia, webpage text and electronic documents to construct the knowledge graph of each course; the intelligent brain thus learns knowledge in a way that mimics humans. As shown in Fig. 1-②, the open-source knowledge graph is constructed by obtaining relevant open-source data from open-source software source code, defect reports, mailing lists, Q&A documents and other open-source software resources according to the content of the course graph. The brain model can simulate how humans track the frontier, establish relations between the curriculum knowledge graph and the open-source knowledge graph, and assign weights to each other. It can also simulate human learning for application: contributing to the open-source community and applying the learned knowledge to programming, error correction, communication and discussion.

The knowledge fusion stage is shown in Fig. 1-③. The knowledge graphs of the sub-courses are fused into a general knowledge graph, which simulates connecting the curriculum group horizontally.

The knowledge application stage is shown in Fig. 1-④. Knowledge discovery is performed on the above general knowledge graph to discover new knowledge points or establish new knowledge associations; the evolution of the general graph simulates independent human thinking and innovation. When the intelligent brain is asked a question, as shown in Fig. 1-⑤, the knowledge related to the question is retrieved from the reinforced general graph and summarized through the memory attention network to generate natural-language feedback, simulating how humans answer questions based on experience.

We built a bootstrapping framework with dynamic updating for the curriculum and open-source knowledge graphs. The specific steps are as follows:

1) Explore an automatic construction strategy for the curriculum knowledge graph, forming an automated ecosystem of tasks such as data mining, pre-processing, data table design (including the open-source knowledge table, encyclopedia, domain dictionary, corpus and relational knowledge base), entity and relation recognition, conflict handling, and storage.

2) Optimize big data resources and technologies based on bootstrapping. Facing the challenges brought by the diversity of big data resources and technologies, bootstrapping is used to coordinate data and technologies so that they not only play to their own strengths but also cooperate with each other.

3) Put forward conflict-handling and dynamic update strategies for the big data environment. Detect conflicts between concepts and entities, between upper and lower positions, and between attributes, and study the corresponding treatment methods and update schemes for the concept, entity and rule bases, respectively.

4) Construct the open-source knowledge graph. The construction framework of the software knowledge graph is designed, and construction and fusion methods for the open-source knowledge graph are proposed. The graph is constructed by identifying entities and relations in different types of open-source software resources. With the code structure as the core and reference association and user association as auxiliary means, entity relations are established across the four types of resources.

3 Realization Technology of Course Intelligent Brain Model Supported by Group Intelligence

3.1 Bootstrapping Framework for Curriculum and Open-Source Knowledge Graph Construction

The framework is mainly used to construct the curriculum knowledge graph and the open-source knowledge graph. Following the principle of "adjust from bottom to top and center", the automatic construction process is designed as shown in Fig. 2. The whole process is divided into four steps. Step 1: data mining — automatically download data from the above-mentioned data sources, preprocess them, and store them into the open-source knowledge table, encyclopedia, domain dictionary, corpus and relational knowledge base. Step 2: automatically identify entities, synonymous relations, upper-lower relations, attribute relations and other relations. Step 3: automatically handle conflicts and perform dynamic updates. Step 4: automatically store the data into the Neo4j graph database.

Fig. 2 Course knowledge map automatic construction path
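As a minimal sketch of Step 4 above, the following snippet shows how recognized triples could be written into Neo4j with the official Python driver; the connection settings, node label and relation names are illustrative assumptions rather than the authors' actual schema.

```python
from neo4j import GraphDatabase

# Hypothetical output of Step 2: (head entity, relation type, tail entity)
triples = [
    ("loop structure", "upper-lower", "for loop"),
    ("for loop", "synonymous", "for statement"),
]

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def write_triple(tx, head, rel, tail):
    # MERGE keeps the write idempotent, so repeated bootstrapping iterations
    # do not create duplicate nodes or edges.
    tx.run(
        "MERGE (h:KnowledgePoint {name: $head}) "
        "MERGE (t:KnowledgePoint {name: $tail}) "
        "MERGE (h)-[:RELATED {type: $rel}]->(t)",
        head=head, rel=rel, tail=tail,
    )

with driver.session() as session:
    for head, rel, tail in triples:
        session.execute_write(write_triple, head, rel, tail)
driver.close()
```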

1) Design of bootstrapping framework

Bootstrapping is a typical weakly supervised machine learning idea in which the model is driven by "seed" data[18]. Specifically, a small number of instances (rules or relation words) are manually selected as seed data. The seed data are then used to label the training set automatically, a machine learning model is trained, and the model is used to identify new instances. The seed bank is expanded with the new instances, the training set is re-labeled automatically, the model is retrained, and further new instances are identified. This iteration repeats until no new instance is generated. The idea helps obtain a wide range of instances and improves the recognition rate of the model. After incorporating it into the framework, data acquisition is automated, as shown in Fig. 3.

Fig. 3 Bootstrapping technology framework
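The iteration described above can be sketched in a few lines; the string-pattern "recognizer" below merely stands in for the SVM/CRF models used in the framework, and the seed and sentences are toy examples.

```python
import re

def bootstrap_entities(seeds, sentences, max_iter=5):
    """Weakly supervised bootstrapping sketch: seed entities label the corpus,
    simple patterns are induced from the labeled sentences, and the patterns
    recognize new entities that are fed back into the seed bank."""
    seed_bank = set(seeds)
    for _ in range(max_iter):
        # 1) Automatic labeling: turn every sentence containing a known seed
        #    into a pattern by replacing the seed with a capture group.
        patterns = {s.replace(seed, r"(\w+)")
                    for s in sentences for seed in seed_bank if seed in s}
        # 2) Recognition: apply the induced patterns to find candidate entities.
        candidates = {m for s in sentences for p in patterns
                      for m in re.findall(p, s)}
        new = candidates - seed_bank
        if not new:                 # 3) Iterate until no new instance appears.
            break
        seed_bank |= new            # 4) Expand the seed bank and re-train.
    return seed_bank

print(bootstrap_entities({"while"}, ["while is a loop statement",
                                     "for is a loop statement"]))
```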

The bootstrapping framework supports the coordination of multiple data sources and divides data into four types according to usage: a) data for the relational knowledge base, such as the conceptual entity table, entity relation table and attribute table stored in MySQL, obtained from the open knowledge base and online encyclopedias; b) seed data for the bootstrapping algorithm, from which machine learning algorithms such as Support Vector Machine (SVM) or Conditional Random Fields (CRF) automatically learn new rules and identify more entities and relations — this kind of data mainly comes from the open knowledge base and online encyclopedias; c) a domain dictionary, built from all the data, which improves the word segmentation of jieba, SnowNLP, NLPIR and LTP; d) a corpus for training word2vec, also drawn from all the data.
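For types c) and d), a possible sketch using jieba and gensim is given below; the file names are placeholders and the hyperparameters are not taken from the paper.

```python
import jieba
from gensim.models import Word2Vec

# c) Load the domain dictionary so that course terms (e.g. "面向对象") are
#    kept as single tokens during word segmentation.
jieba.load_userdict("domain_dict.txt")   # one term per line (placeholder file)

# d) Build the word2vec corpus from the segmented documents.
with open("corpus.txt", encoding="utf-8") as f:
    sentences = [list(jieba.cut(line.strip())) for line in f if line.strip()]

# Train word2vec on the segmented corpus (gensim 4.x API).
model = Word2Vec(sentences, vector_size=100, window=5, min_count=2, workers=4)
model.save("course_word2vec.model")
```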

As shown in Fig. 3, data are obtained from the open knowledge base and encyclopedias and stored in the relational knowledge base. At the start, the training set is labeled manually and a weakly supervised learning algorithm is used. Thereafter, the data in the relational knowledge base automatically serve as the "seeds": the seeds are used to label the training set automatically, the model is trained, and new data are identified with the model. Finally, the new data are written back into the relational knowledge base. At the same time, lexical analysis is performed on the sentences in which the new data occur, and part-of-speech combinations are extracted as new rules, achieving automatic rule generation. In the next round of automatic training-set labeling, both the seed data and the rules can be used, so the rule set keeps growing.
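The automatic rule generation step could look like the following sketch, which uses jieba's part-of-speech tagger; the slot-based rule format is our assumption for illustration, not the authors' exact rule representation.

```python
import jieba
import jieba.posseg as pseg

def pos_rule(sentence, entity):
    """Turn the sentence containing a newly recognized entity into a
    part-of-speech pattern with the entity replaced by a slot, so the
    pattern can be reused to label further training sentences."""
    jieba.add_word(entity)               # ensure the entity is segmented as one token
    return " ".join("<ENTITY>" if word == entity else flag
                    for word, flag in pseg.cut(sentence))

# e.g. a rule induced from a sentence in which "冒泡排序" was newly recognized
print(pos_rule("冒泡排序是一种排序算法", "冒泡排序"))
```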

2) Recognition method based on the framework

Based on the bootstrapping framework, the downloaded data are used to construct the knowledge graph of the programming courses. After repeated experiments and adjustments, all methods can be coordinated under the unified guidance of the bootstrapping framework.

3) Conflict handling

According to the analysis of the sequence of conflict occurrence points, the conflict detection and resolution methods, and the conflict-handling countermeasures, the update strategies are studied in the order of concept update, entity update, and seed or rule dynamic update.

4) Construction of open-source knowledge graph

The construction of the open-source knowledge graph is shown in Fig. 4. Entities are extracted mainly from open-source software source code, defect reports, mailing lists, Q&A documents and other open-source software resources, and entity relations are established to construct four software knowledge graphs. The software knowledge fusion module then organically organizes the independent software knowledge graphs from the different types of open-source software resources into a general software knowledge graph.

Fig. 4 Technical route of software knowledge graph

3.2 Fusion Method of Knowledge Subgraph Based on Open-Source Software Resources and Domain Semantics

In this stage, the above five course knowledge graphs and one open-source knowledge graph are processed and refined into a general knowledge graph. This strategy, as shown in Fig. 5, consists of five key steps:

Fig. 5 Technical route of subgraph fusion

1) Semantic representation. A knowledge semantic representation and similarity calculation based on the knowledge graph are studied. Inter-entity relation words are used to mine the semantic information of entities, and the vectorized representation of entities and relations in the knowledge graph and the similarity calculation based on entity semantics are investigated.

2) Pre-processing. Knowledge points with low word frequency are removed, and entities with the same semantics are merged to determine the entities and relations that can participate in the fusion.

3) Weight evaluation. This paper designs a weight evaluation method for the knowledge graph under the open-source software resource mode; this method is the focus of the research. On the one hand, open-source software resources are used to evaluate the influence degree of a curriculum subgraph and of the entity knowledge points in the subgraph. On the other hand, the participation degree and the internal and external correlation of a knowledge subgraph are evaluated by comparing the number of entities in the subgraph with the number of entities participating in the fusion.

4) Subgraph fusion. We implement the integration and weight renewal of entities, focusing on the dynamic update characteristics. Entity association fusion and weight updating are then implemented, balancing structural information and semantic information. First, the knowledge-graph-based semantic representation method is used to represent the entities in the five curriculum graphs and one open-source graph. Then a semantic matrix is constructed from the entity semantic information by the similarity calculation method based on knowledge semantics, and a structural connection matrix is constructed from the connection information of each node. Finally, combined with the bi-adjacency matrix, the individual knowledge subgraphs are fused into a general knowledge graph containing "collective wisdom" (a small numerical sketch follows this list).

5) Mining potential relations between course knowledge points. First, the membership degree of a curriculum community is formally defined. Then the fused knowledge graph is converted into a graph structure, the knowledge points belonging to two or more curriculum communities are found by a community detection algorithm, and the membership degree between knowledge points and communities is calculated. The potential relations between courses are quantified in terms of membership.
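The following sketch illustrates step 4 under simplifying assumptions: entity embeddings stand in for the knowledge semantic representation, cosine similarity gives the semantic matrix, a known cross-graph link matrix plays the role of the structural information, and the mixing weight and fusion threshold are illustrative values.

```python
import numpy as np

def fuse_subgraphs(emb_a, emb_b, adj_cross, alpha=0.5, threshold=0.8):
    """Sketch of step 4: combine a semantic similarity matrix (cosine similarity
    of entity embeddings from two subgraphs) with a structural cross-connection
    matrix, then merge the entity pairs whose combined score clears a threshold."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    semantic = a @ b.T                                       # semantic matrix
    combined = alpha * semantic + (1 - alpha) * adj_cross    # bi-adjacency-style mix
    return np.argwhere(combined >= threshold)                # pairs fused into one node

rng = np.random.default_rng(0)
emb_course = rng.random((3, 16))                 # 3 entities in a course subgraph
emb_open = rng.random((2, 16))                   # 2 entities in the open-source subgraph
adj_cross = np.array([[1, 0], [0, 0], [0, 1]])   # known cross-graph links
print(fuse_subgraphs(emb_course, emb_open, adj_cross))
```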

Of the five steps above, the most important is the weight evaluation method for the knowledge graph under the open-source software resource mode. This method involves two key indicators: the influence degree of a curriculum subgraph on open-source knowledge, and the influence degree of open-source knowledge on a curriculum subgraph. These two indicators represent the first attempt to introduce open-source software resources into the construction and fusion of curriculum knowledge graphs and are described in detail below.

1) Influence degree of course subgraph on open-source knowledge

The more important a course knowledge point is, the more frequently it appears in open-source software resources; the hot topics in open-source software resources often reflect the practical importance of knowledge points. Our study found that a knowledge point of a course might appear in a code section of the code base, be discussed in forums or e-mails, or be pointed out in bug reports. There is a typical many-to-many relationship between curriculum knowledge and open-source knowledge.

As shown in Fig. 6, course knowledge points are stored in the course graph knowledge point table, and open-source data are stored in the open-source knowledge tables. When data come from mail, defect reports, source code, and Q&A documents, the weights are set to 1, 2, 3 and 4, respectively. If the previously designed similarity calculation method based on knowledge semantics judges that the label semantics of open-source knowledge point A and course knowledge point B are similar, A is considered to influence B, and the weight of A is assigned to B.

Fig. 6 MySQL relational table

The set of weight scores of all open-source knowledge points related to the course knowledge subgraph $G_k$ is denoted $P_{G_k}=\{P_{G_k}^1, P_{G_k}^2, \ldots, P_{G_k}^n\}$, where $P_{G_k}^j$ indicates the weight of open-source knowledge point $j$ with respect to $G_k$. The influence degree of the curriculum subgraph on open-source knowledge is the average of the weights of all open-source knowledge points related to the course graph, as shown in Formula (1).

$W_{G_k} = \bar{P}_{G_k}$   (1)

Here, $W_{G_k}$ represents the influence degree score of knowledge subgraph $G_k$ on open-source software resources, and $\bar{P}_{G_k}$ represents the average weight of all open-source knowledge points related to course graph $k$.
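Formula (1) amounts to a simple average; the sketch below computes it from the source-type weights of Fig. 6 (mail 1, defect report 2, source code 3, Q&A document 4), with an illustrative record layout rather than the paper's actual MySQL schema.

```python
SOURCE_WEIGHT = {"mail": 1, "defect_report": 2, "source_code": 3, "qa_document": 4}

def subgraph_influence(related_open_source_points):
    """Formula (1): W_Gk is the average weight of all open-source knowledge
    points whose labels are semantically similar to points of subgraph Gk."""
    weights = [SOURCE_WEIGHT[p["source"]] for p in related_open_source_points]
    return sum(weights) / len(weights) if weights else 0.0

related = [
    {"name": "for loop",   "source": "qa_document"},
    {"name": "while loop", "source": "source_code"},
    {"name": "iterator",   "source": "mail"},
]
print(subgraph_influence(related))   # (4 + 3 + 1) / 3 ≈ 2.67
```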

2) Influence degree of open-source knowledge on curriculum subgraph

This index reflects the extent to which an open-source knowledge point is related to courses and affects its contribution to the construction of the group knowledge graph. The more courses an open-source knowledge point involves, and the higher the quality of those course subgraphs, the more valuable the open-source knowledge point is, since it is involved by multiple high-quality courses. The influence degree of open-source knowledge on the course subgraph is calculated as shown in Formula (2).

$R_{i,0} = \mu, \qquad R_{i,t} = R_{i,t-1} + \bar{P}_{G_i'}, \quad G_i' \text{ generated by } i$   (2)

where $R_{i,t}$ is the influence score of open-source knowledge point $i$ at time $t$, and $R_{i,t-1}$ is its score at the previous moment $t-1$. It is assumed that every open-source knowledge point has the same influence at the initial moment, with initial value $\mu = 0$; $\bar{P}_{G_i'}$ represents the average quality score of all course subgraphs involved in open-source knowledge point $i$.
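A minimal sketch of the iterative update in Formula (2), with made-up quality scores:

```python
def update_open_source_influence(r_prev, course_subgraph_scores):
    """Formula (2): R_{i,t} = R_{i,t-1} + mean quality score of the course
    subgraphs that involve open-source knowledge point i at time t."""
    if not course_subgraph_scores:
        return r_prev
    return r_prev + sum(course_subgraph_scores) / len(course_subgraph_scores)

r = 0.0                                   # mu = 0 at the initial moment
for scores_at_t in [[2.0, 3.0], [3.0]]:   # toy subgraph quality scores per step
    r = update_open_source_influence(r, scores_at_t)
print(r)                                  # 0 + 2.5 + 3.0 = 5.5
```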

3.3 Knowledge Discovery and Natural Representation Based on Knowledge Graph

The research is implemented mainly in two ways: (ⅰ) new knowledge associations, i.e., new triples, are predicted from the basic knowledge graphs by combining the graph attention representation and the context text attention representation; (ⅱ) based on the memory attention network, relevant entities are predicted from an input question and natural-language answers are generated (Fig. 7).

Fig. 7 Knowledge discovery technology roadmap

1) Curriculum knowledge discovery based on domain semantics

After the knowledge graphs have been fused, the general graph is strengthened by predicting new entity-relation triples. As shown in Fig. 7-①, the fused knowledge graph is represented as a list of tuples numbered from 0 to k. Each tuple $(e_i^h, r_i, e_i^t)$ consists of a head entity $e_i^h$, a tail entity $e_i^t$ and their relation $r_i$. Each entity $e_i$ can be involved in multiple tuples; its one-hop neighbor entities are denoted $N_{e_i}=[n_{i1}, n_{i2}, \ldots]$. Each entity $e_i$ is also associated with a context description $s_i$, which can be randomly selected from the sentences in which $e_i$ is found; the vector representations of $e_i$ and $s_i$ are randomly initialized.

Graph structure encoder: as shown in Fig. 7-②, to capture the importance of each neighbor feature pair, the weight distribution is calculated by a self-attention mechanism:

$e_i' = W_e e_i, \qquad n_{ij}' = W_e n_{ij}$

$c_{ij} = \mathrm{LeakyReLU}\left(W_f \left(e_i' \,\|\, n_{ij}'\right)\right)$

$c_i' = \mathrm{Softmax}(c_i)$

where $W_e$ is the linear transformation matrix applied to each entity, $W_f$ is the parameter of a single-layer feedforward network, and $\|$ denotes the concatenation of two matrices. The structure-based context representation is then computed using $c_i'$ and $N_{e_i}$, where $n_{ij} \in N_{e_i}$. In order to capture the various types of relations between an entity $e_i$ and its neighbors, a multi-head attention mechanism based on multiple linear transformation matrices is further adopted for each entity, yielding a structure-based context representation $\tilde{e}_i=[\varepsilon_i^0 \| \cdots \| \varepsilon_i^M]$, where $\varepsilon_i^M$ is the context representation obtained with the M-th head and $\tilde{e}_i$ concatenates the attention outputs of all M heads.
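A single-head version of this attention can be sketched with NumPy; the dimensions and random parameters are illustrative, and the multi-head concatenation is omitted.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def neighbor_attention(e_i, neighbors, W_e, W_f):
    """Single-head sketch of the graph structure encoder: attention weights over
    one-hop neighbors yield a structure-based context representation."""
    e_t = W_e @ e_i                                   # e'_i = W_e e_i
    n_t = np.array([W_e @ n for n in neighbors])      # n'_ij = W_e n_ij
    scores = np.array([leaky_relu(W_f @ np.concatenate([e_t, n]))
                       for n in n_t])                 # c_ij
    alpha = softmax(scores)                           # c'_i = Softmax(c_i)
    return alpha @ n_t                                # weighted neighbor sum

d = 8
rng = np.random.default_rng(0)
e_i = rng.normal(size=d)
neighbors = rng.normal(size=(3, d))                   # three one-hop neighbors
W_e = rng.normal(size=(d, d))
W_f = rng.normal(size=2 * d)                          # single-layer feedforward weights
print(neighbor_attention(e_i, neighbors, W_e, W_f).shape)   # (8,)
```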

Context text encoder: as shown in Fig. 7-③, each entity $e$ is associated with a context sentence $[w_1,\ldots,w_l]$. To merge the context information, a bidirectional long short-term memory (LSTM)[19] network is applied to obtain the encoder hidden states $H_s=[h_1,\ldots,h_l]$, where $h_i$ is the hidden state of $w_i$. We then compute a bilinear attention weight for each word $w_i$: $\mu_i = e^{\mathrm{T}} W_s h_i$, $\mu' = \mathrm{Softmax}(\mu)$, where $W_s$ is the bilinear term. The context representation is $\hat{e} = \sum_i \mu_i' h_i$.

Gated combination: as shown in Fig. 7-④, in order to combine the graph-based representation and the local context-based representation, a gate function is designed to balance the two types of information:

$g_e = \sigma(\tilde{g}_e), \qquad e = g_e \odot \tilde{e} + (1 - g_e) \odot \hat{e}$

where $g_e$ is the entity-dependent gate function whose elements lie in $[0,1]$, $\tilde{g}_e$ is a learnable parameter of each entity $e$, $\sigma$ is the Sigmoid function, and $\odot$ is element-wise multiplication.
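The gate can be written directly from the formula; a toy NumPy sketch with random vectors (all values are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_combine(g_tilde, e_graph, e_context):
    """Gate balancing the graph-based representation e~ and the local
    context-based representation ê (element-wise, gate values in [0, 1])."""
    g = sigmoid(g_tilde)                      # g_e = sigma(g~_e)
    return g * e_graph + (1.0 - g) * e_context

rng = np.random.default_rng(1)
g_tilde = rng.normal(size=8)                  # learnable, one vector per entity
print(gated_combine(g_tilde, rng.normal(size=8), rng.normal(size=8)).shape)  # (8,)
```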

Training and prediction: as shown in Fig. 7-⑤, the TransE algorithm is adopted to optimize the entity and relation representations. It assumes that the relation between two entities can be interpreted as a translation on the entity representations, i.e., a triplet $(e_i^h, r_i, e_i^t)$ satisfies $e_i^h + r_i \approx e_i^t$. Therefore, for each triplet, the distance score is calculated as follows:

$F(e_i^h, r_i, e_i^t) = \left\| e_i^h + r_i - e_i^t \right\|_2^2$

The model is trained with a margin-based loss:

$\mathrm{Loss} = \sum_{(e_i^h, r_i, e_i^t) \in K} \sum_{(\bar{e}_i^h, \bar{r}_i, \bar{e}_i^t) \in \bar{K}} \max\left(0,\ \gamma + F(e_i^h, r_i, e_i^t) - F(\bar{e}_i^h, \bar{r}_i, \bar{e}_i^t)\right)$

where $(e_i^h, r_i, e_i^t)$ is a positive tuple, $(\bar{e}_i^h, \bar{r}_i, \bar{e}_i^t)$ is a negative tuple, and $\gamma$ is the margin. Negative tuples are generated by replacing the head or tail entity of a positive tuple with a different, randomly selected entity. As shown in Fig. 7-⑥, after training, for each pair of indirectly connected entities $e_i$, $e_j$ and relation type $r$, a score $y$ is calculated to indicate the probability that $(e_i, r, e_j)$ holds, and the enhanced knowledge graph $K=[\ldots,(e_{k+1}^h, r_{k+1}, e_{k+1}^t, y_{k+1}),\ldots]$ is obtained.
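A NumPy sketch of the TransE distance score and the margin loss on one positive/negative pair follows; the embedding dimension and margin are arbitrary choices for illustration.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE distance F = ||h + r - t||_2^2; lower means more plausible."""
    return np.sum((h + r - t) ** 2)

def margin_loss(pos, neg, gamma=1.0):
    """Margin-based ranking loss over positive and corrupted (negative) triples."""
    return sum(max(0.0, gamma + transe_score(*p) - transe_score(*n))
               for p, n in zip(pos, neg))

d = 16
rng = np.random.default_rng(2)
h, r, t = rng.normal(size=(3, d))         # head, relation, tail embeddings
t_corrupt = rng.normal(size=d)            # tail replaced by a random entity
print(margin_loss([(h, r, t)], [(h, r, t_corrupt)]))
```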

2) Memory attention network oriented to knowledge association

Given an input question $\tau=[w_1,\ldots,w_l]$, the entities in $\tau$ are extracted using the entity and relation recognition method mentioned earlier. As shown in Fig. 8, for each entity, a group of related entities is retrieved from the enhanced knowledge graph according to the knowledge-graph-based semantic similarity calculation.

Fig. 8 Technical roadmap of memory attention network

All relevant entities $E_\tau=[e_1^\tau,\ldots,e_y^\tau]$ are ranked by confidence score and the 10 most relevant entities are selected. Then $\tau$ and $E_\tau$ are fed into the memory attention network together, which at each decoding step balances three types of sources: ① the probability of generating tokens from the whole vocabulary based on the language model; ② the probability of copying words from the question; ③ the probability of merging related entities. The output is a paragraph $Y=[y_1,\ldots,y_o]$.
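One decoding step of this three-source balance could be sketched as follows; the gate weights and toy distributions are illustrative, not the network's learned values.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def mix_step(p_vocab, p_copy, p_entity, gates):
    """One decoding step: mix the language-model distribution, the copy
    distribution over question words, and the related-entity distribution,
    using gate weights normalized to sum to 1."""
    g = softmax(np.asarray(gates, dtype=float))
    return g[0] * p_vocab + g[1] * p_copy + g[2] * p_entity

vocab_size = 6
rng = np.random.default_rng(3)
p_vocab = softmax(rng.normal(size=vocab_size))    # ① language-model distribution
p_copy = np.array([0, 0.5, 0.5, 0, 0, 0])         # ② words copied from the question
p_entity = np.array([0, 0, 0, 0.3, 0.7, 0])       # ③ top related entities from the graph
print(mix_step(p_vocab, p_copy, p_entity, gates=[0.2, 0.1, 0.5]))
```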

4 Conclusion

Facing challenges such as diverse data sources and methods in big data, the bootstrapping framework is used to place all kinds of data and methods (based on statistics, rules, supervised and weakly supervised learning) in appropriate positions to form a sustainable, recyclable ecology. The "intelligent brain" can track the frontier like a human being and learn the knowledge of multiple courses vertically, providing new ideas for the scientific problem of "how to use big data to construct a knowledge graph".

The method of knowledge subgraph fusion and potential relation mining based on open-source software resources and domain semantics, and knowledge discovery and natural-language representation based on the knowledge graph, are studied. In the process of subgraph fusion, we consider the domain semantic representation of heterogeneous subgraphs, the influence of open-source software resources on the weights of course graphs, and the mining of potential relations at the knowledge-point level. In this way, the "intelligent brain" can digest knowledge like a human, connect the curriculum group horizontally, integrate the knowledge points of the various courses, and provide new ideas for the scientific problem of "how to fuse heterogeneous knowledge graphs". When constructing the knowledge discovery model, both graph attention and context attention are considered, and the knowledge graph is transformed into the memory attention network model. This enables the contents of the knowledge graph to be expressed in natural language and enables the "intelligent brain" to discover knowledge and work like a human being, making a new attempt to address the scientific issue of how to lift the knowledge graph from intelligence to wisdom. Related research results are available at https://gitee.com/eighteam/Demo.

References

1. Zhu Z T, Han Z M, Huang C Q. eAI: A new paradigm of human-oriented artificial intelligence [J]. Electronic Education Research, 2021, 42(1): 5-15(Ch).
2. Ren B. Strategies for teacher development in open education under the background of "artificial intelligence +" [J]. Journal of Hebei Radio & TV University, 2020, 25(1): 40-43(Ch).
3. Mei H, Zhou M H. Challenges brought by open source to software talent training [J]. Computer Education, 2017, 5(1): 2-5(Ch).
4. Mao X J. Software engineering course practice: A method based on crowds and open source software [J]. Software Guide, 2020, 19(1): 1-6(Ch).
5. Li Z, Zhou D D, Wang Y. Research of educational knowledge graph from the perspective of "artificial intelligence+": Connotation, technical framework and application [J]. Journal of Distance Education, 2019(4): 42-53(Ch).
6. Zhong Z. Research on the construction of educational knowledge atlas model supported by artificial intelligence [J]. Electronic Education Research, 2020, 41(4): 62-70(Ch).
7. Hu F H. Chinese Knowledge Graph Construction Method Based on Multiple Data Sources [D]. Shanghai: East China University of Science and Technology, 2015(Ch).
8. Guo X Y. Entity Relation Extraction for Open Domain Text [D]. Wuhan: Central China Normal University, 2016(Ch).
9. Zhou A Y. Understanding on the big data: Beyond the data management and analytics [J]. Big Data Research, 2017, 3(2): 3-18(Ch).
10. He X Y. Key Technology Research on Knowledge Entity Recognition and Its Relation Extraction for Specific Domains Text [D]. Shijiazhuang: Hebei University of Science and Technology, 2018(Ch).
11. Wu J P. Research on Fusion and Evaluation Algorithms of Curriculum Knowledge Graph for Learners [D]. Zhengzhou: Zhengzhou University, 2019(Ch).
12. Kristiadi A, Khan M A, Lukovnikov D, et al. Incorporating Literals into Knowledge Graph Embeddings [M]. Berlin: Springer-Verlag, 2019: 347-363.
13. Guo F. Research on Construction of Education Knowledge Graph Based on Crowdsourcing [D]. Zhengzhou: Zhengzhou University, 2017(Ch).
14. Ji S W. Analysis of literature cited in Journal of Wuhan University (Natural Science Edition) [J]. Wuhan University Journal (Natural Science Edition), 1998, 44(4): 525-528(Ch).
15. Li Y F, Jia C Y, Kong X N, et al. Locally weighted fusion of structural and attribute information in graph clustering [J]. IEEE Transactions on Cybernetics, 2019, 49(1): 247-260.
16. Wu S J. Design and Application Research of the JAVA Programming Course Answering System Based on Knowledge Graph [D]. Chengdu: Sichuan Normal University, 2019(Ch).
17. Huang J. The Construction and Application Research of Knowledge Graph of Middle School Python Course [D]. Wuhan: Central China Normal University, 2019(Ch).
18. Zhan Z J, Yang X P. Measuring semantic similarity in short texts through complex network [J]. Journal of Chinese Information Processing, 2016, 30(4): 71-80(Ch).
19. Wang Q. The Research of Biomedical Name Entity Recognition by Combining Dictionary Based and Machine Learning Based Method [D]. Dalian: Dalian University of Technology, 2009(Ch).
