Open Access
Wuhan Univ. J. Nat. Sci.
Volume 30, Number 6, December 2025
Page(s) 589 - 599
DOI https://doi.org/10.1051/wujns/2025306589
Published online 09 January 2026

© Wuhan University 2025

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

Research evaluation is a crucial component of scientific activities. Scientific, reasonable, and fair evaluation outcomes not only effectively motivate researchers and foster innovation but also enhance the efficiency and accuracy of allocating limited research resources[1-2]. As the primary drivers and core forces behind scientific innovation, scholars are undoubtedly the most important subjects of research evaluation. Consequently, the academic community has long attached great importance to—and continuously explored—how to construct a more scientific, reasonable, and equitable evaluation system for scholars, with the aim of objectively and comprehensively assessing their actual academic contributions and level of innovation[3-5].

The commonly used indicators for evaluating scholars include total number of publications, total citations, average citations per paper, number of important papers, scholar impact factor, and the h-index[6-8]. The widely adopted h-index, for instance, takes into account both the quantity and quality of publications and is easy to compute, making it popular across disciplines[9]. However, the h-index also has several notable shortcomings: (i) It disadvantages scholars who publish fewer papers but receive high citations per paper; (ii) It is difficult to apply in cross-disciplinary comparisons; (iii) It ignores the value of low-citation papers; (iv) It lacks sufficient granularity, often resulting in identical scores for different scholars[10-12]. Moreover, these commonly used metrics generally fail to account for disciplinary differences in publication and citation. For example, publication rates in biochemistry are significantly higher than those in mathematics, and citation patterns differ greatly between clinical medicine and basic medical sciences[13-14]. Therefore, relying solely on absolute publication and citation counts makes it difficult to achieve fair comparisons across disciplines.

To address these issues, the academic community has proposed a series of improved indicators, such as the Relative Citation Ratio (RCR), the f-index, and the Normalized Citation Score (NCS)[15-18]. These indicators attempt to correct for disciplinary citation differences through normalization methods. However, they still present limitations. Most do not adjust for publication volume and fail to distinguish between the academic value of publication venues and citation sources. In response to these shortcomings, this paper proposes the concept of academic equilibrium value and, based on this concept, constructs the Academic Equilibrium Value index (AEV-index). By incorporating weights for publication and citation tiers and integrating average publication and citation levels within disciplines, the AEV-index aims to provide a more comprehensive and equitable assessment of scholarly performance. This study conducts an empirical analysis using three subfields within computer science to validate the feasibility and advantages of the AEV-index.

The remainder of this paper is organized as follows: Section 1 describes the research methods, including the datasets, determination of research subjects, and the design of the AEV-index. Section 2 presents the analytical results and discusses the applicability of the AEV-index. Section 3 provides the conclusion.

1 Materials and Methods

This paper selects the discipline of computer science as the focus of empirical analysis for two main reasons. First, there are substantial differences in publication volume, publication difficulty, and academic influence between top-tier journals or conferences and regular ones within the field of computer science. Second, the discipline encompasses approximately ten subfields, which vary significantly in terms of total publication volume, publication cycle and difficulty, number of journals or conferences, and citation patterns. From the perspective of publication alone, certain subfields produce several times more publications than others, making it inappropriate to evaluate scholars from different subfields solely based on absolute publication and citation counts. The research approach is illustrated in Fig. 1.

Fig. 1 Research approach

1.1 Datasets

In 2019, the China Computer Federation (CCF) released the 5th edition of the "Catalog of International Academic Conferences and Journals", which classified computer science into ten subfields and recommended A-, B-, and C-level journals and conferences for each subfield. Preliminary research revealed substantial differences in the developmental stages and research scales of these subfields. For example, the "Artificial Intelligence" subfield produces about 10 times more publications annually than the "Network and Information Security" subfield. Statistics show that academic output in computer science is predominantly composed of conference papers, and the academic value of such papers is widely recognized by institutions and individuals. High-level conference papers have an impact comparable to that of journals of the same caliber, and can serve as an important basis for academic evaluation. To reduce the bias caused by citation lag, this study considers only papers published up to 2021. Specifically, this study selects conference papers from 2012 to 2021 in the subfields of "Network and Information Security", "Computer Graphics and Multimedia", and "Artificial Intelligence", as listed in the Catalog of International Academic Conferences and Journals. These papers were sourced from the Scopus database. Additionally, citation references for each paper were exported, and following data matching, deduplication, and cleaning, comprehensive citation datasets for each subfield over the past decade were constructed.

1.2 Determination of Core Authors

In general, the evaluation of core authors is an important part of scholar evaluation research. Therefore, this paper uses the core authors from each subfield as examples to verify the feasibility of the AEV-index. While publication is a basic prerequisite for becoming a core author, a high publication count alone is insufficient. Core authors must also have outstanding importance and influence within the discipline, reflected in the quality of their publications. Citation analysis is an effective means of measuring publication quality. Therefore, this paper draws on Price's law to determine the minimum publication and citation thresholds for core authors[19]. The calculation formula is as follows:

$M_p = 0.749\sqrt{N_{p\,\mathrm{max}}}, \quad M_c = 0.749\sqrt{N_{c\,\mathrm{max}}}$  (1)

where Mp is the minimum number of publications for core authors, Mc is the minimum number of citations for core authors, Np max is the publication count of the author with the most publications, and Nc max is the citation count of the author whose papers are most cited.

In each subfield, the first three authors in terms of conference paper publications were selected as candidates. Based on Eq. (1), the minimum publication and citation for core authors were determined, thereby identifying the core authors for each subfield, as shown in Table 1. The dataset exported from the Scopus database includes a unique author ID for each author, which can be used to disambiguate authors with the same name.
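The threshold calculation in Eq. (1) can be sketched in a few lines of Python. Rounding the thresholds up to the next integer is our assumption, since Price's law yields non-integer values; the input figures in the example are illustrative, not data from this study.

```python
import math

def core_author_thresholds(max_publications: int, max_citations: int):
    """Minimum publication and citation thresholds for core authors,
    following Price's law (Eq. (1)): M = 0.749 * sqrt(N_max)."""
    m_p = 0.749 * math.sqrt(max_publications)
    m_c = 0.749 * math.sqrt(max_citations)
    # Assumption: thresholds are rounded up, so a core author must
    # meet or exceed both integer values.
    return math.ceil(m_p), math.ceil(m_c)

# Example: if the most prolific candidate has 100 papers and the most
# cited candidate has 2 500 citations:
print(core_author_thresholds(100, 2500))  # -> (8, 38)
```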

Table 1

Core authors in each subfield

1.3 Design of the AEV-index

1.3.1 Guiding principles

The design of the AEV-index in this paper adheres to the following principles:

(i) Principle of considering both quantity and quality. The index considers the numbers of publications and citations as well as their quality, specifically the academic level of the venues in which papers are published and the scholarly value of the citing literature.

(ii) Principle of disciplinary consistency. The index accounts for differences in the difficulty of publishing papers and being cited across disciplinary fields. It aims to ensure fair and meaningful comparisons among scholars from different disciplines.

(iii) Principle of maintaining discriminative power. The index is designed to maintain a certain degree of discriminative power and strive to avoid identical or similar evaluation scores that are difficult to distinguish.

1.3.2 Algorithm design for AEV-index

The average number of publications and citations varies significantly across different disciplines. Therefore, any cross-disciplinary scholar evaluation must account for these differences. For a particular disciplinary field, if X¯ and Y¯ represent the average number of publications and citations of scholars, respectively, and a particular scholar's publications and citations are X and Y, then X/X¯ and Y/Y¯ can be used to represent the scholar's relative publication level and relative citation level. By assigning appropriate weights to publication and citation, the relative academic level of a scholar in the field can be calculated. The same approach applies to scholars in any disciplinary field. By normalizing against discipline-specific averages for publications and citations, the relative academic level within a field can not only facilitate ranking scholars within the same discipline but also enable cross-disciplinary evaluation of scholars.

The academic value and influence of publications and citations differ across levels and should be distinguished. Therefore, the AEV-index proposed in this paper uses scholars' average numbers of publications and citations to normalize the differences across disciplinary fields, and assigns different value weights to different levels of publications and citations. The relative publication levels within a particular disciplinary field are calculated separately for each venue level, multiplied by the corresponding value weights, and then summed[20-21]. The same approach is applied to citations. Finally, by assigning weights to both publication and citation, the "new" relative academic level of the scholar in a particular disciplinary field can be obtained, which is the AEV-index proposed in this paper. The calculation formula is as follows:

$\mathrm{AEV} = \sum_i \left( \dfrac{X_i}{\bar{X}_i} \times V_i \right) \times q + \sum_i \left( \dfrac{Y_i}{\bar{Y}_i} \times V_i \right) \times (1-q)$  (2)

where i represents different journal or conference levels. Xi and Yi are the number of publications and the number of citations received by the scholar's papers at level i, respectively. X̄i is the average publication index at level i, i.e., the average number of publications of scholars' papers at that level in the field. Similarly, Ȳi is the average citation index at level i, i.e., the average number of citations received by scholars' papers at that level in the field. Vi is the value weight of papers at level i. q is the weight for publication, and 1-q is the weight for citation.
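Eq. (2) can be sketched as a short Python function. In the example call, the value weights (VA = 4.324, VB = 2.637, VC = 1) and q = 0.5 follow the values determined later in Section 2.1, and the A/B/C publication averages are the Network and Information Security figures reported there; the scholar's counts and the citation averages are hypothetical illustration values, not data from this study.

```python
def aev_index(pubs, cites, avg_pubs, avg_cites, value_weights, q=0.5):
    """AEV-index per Eq. (2).

    pubs[i], cites[i]          -- scholar's publications / citations at level i
    avg_pubs[i], avg_cites[i]  -- field-average indices at level i
    value_weights[i]           -- value weight V_i of level i
    q                          -- weight on publication (1 - q on citation)
    """
    pub_part = sum(pubs[i] / avg_pubs[i] * value_weights[i] for i in pubs)
    cite_part = sum(cites[i] / avg_cites[i] * value_weights[i] for i in cites)
    return pub_part * q + cite_part * (1 - q)

# Value weights from Section 2.1; other inputs below are illustrative.
V = {"A": 4.324, "B": 2.637, "C": 1.0}
score = aev_index(
    pubs={"A": 6, "B": 4, "C": 2},            # hypothetical scholar
    cites={"A": 120, "B": 40, "C": 10},       # hypothetical scholar
    avg_pubs={"A": 7.353, "B": 4.266, "C": 5.680},  # NIS averages, Fig. 2(a)
    avg_cites={"A": 60.0, "B": 30.0, "C": 15.0},    # hypothetical averages
    value_weights=V,
)
print(round(score, 4))  # -> 9.5919
```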

2 Results and Discussion

2.1 Determination of Relevant Indicators

2.1.1 Value of conference level

Although the "Catalog of International Academic Conferences and Journals" recommended by CCF rates conferences in different subfields, it is difficult to determine the value of papers in each conference level using this rating method. This paper adopted the Delphi method, and the questionnaire respondents were selected from the computer science departments of 14 universities in China that were selected for the "Double First-Class" discipline construction project. A total of 106 valid questionnaires were collected. The research fields of the teachers covered all subfields in the "Catalog of International Academic Conferences and Journals", and their academic ranks included professor/researcher, associate professor/associate researcher/senior engineer, and lecturer/assistant researcher. Using the Spearman coefficient for correlation analysis, the influence of research field and position on the teachers' evaluation of conference level value was excluded. The average value was used for value evaluation, and the conclusion was that if the value of a C-level conference paper (VC) is 1, then the value of a B-level conference paper (VB) is 2.637, and the value of an A-level conference paper (VA) is 4.324.

2.1.2 Weight of publication and citation

Publication and citation are two of the most basic indicators in the scholar evaluation system. If only publication were used for evaluation, it would focus on the accumulation of quantity and ignore the assessment of paper quality. Therefore, the weights of these two indicators must be allocated objectively to reduce bias. From a statistical perspective, the two form a bivariate population, and whether, and how strongly, they are correlated is worth examining. If they were highly correlated, then publication alone would suffice for the calculation of the AEV-index, and examining citation would be redundant. Based on the method in Ref. [21], this paper calculated the correlation coefficients between publications and citations of the core authors in the three subfields from 2012 to 2021, which were 0.247, 0.324, and 0.198 respectively. This indicates that the publication and citation of core authors in the three subfields are weakly correlated; specifically, high publication output does not necessarily correspond to high citation counts, nor does low output imply low counts. We believe that publication and citation reflect the academic level of scholars from the aspects of "quantity" and "quality" respectively and hold equal importance. Therefore, when allocating weights, we assign equal importance to publication quantity and citation count, setting both weights to 0.5.
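The correlation check above can be reproduced without external libraries: Spearman's coefficient is the Pearson correlation of the ranks. The implementation below is a generic sketch with tie handling, and the five author profiles in the example are illustrative, not this study's data.

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Tied values receive the average of the rank positions they occupy."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative only: publications vs. citations of five hypothetical authors
pubs = [74, 36, 19, 10, 9]
cites = [555, 788, 37, 1973, 1097]
print(round(spearman_rho(pubs, cites), 3))  # -> -0.6
```

A weak or negative coefficient, as in this toy example, supports keeping citation as a separate indicator rather than folding it into publication count.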

2.1.3 Average publication and citation indices for different subfields and levels

When calculating the average publication and citation indices across different subfields and conference levels, it is observed that the distributions are heavily skewed. A small number of authors contribute the majority of publications, and likewise, a small number of authors receive the majority of citations. Each subfield and level includes a significant number of "tail scholars" whose publication and citation counts are almost negligible. Therefore, this study focuses primarily on the core authors, drawing on Price's law (Eq. (1)) to determine the average publication and citation indices, as shown in Fig. 2.

Fig. 2 Average publication and citation indices for different subfields and levels

Figure 2 reveals significant differences in the average publication and citation indices of core authors across different computer science subfields and conference levels. These differences reflect not only the varying degrees of recognition for the importance and impact of academic research in different fields and levels, but also the unique characteristics of each discipline. In terms of the average publication indices in Fig. 2(a), the subfield of Artificial Intelligence demonstrates strong overall performance, with average publication indices of 11.613, 9.123, and 10.821 for A, B, and C-level conferences. This indicates that research outcomes in Artificial Intelligence maintain high quality regardless of conference tier. In contrast, the subfield of Computer Graphics and Multimedia exhibits an unusual pattern, where the average publication index at B-level conferences (10.288) surpasses those at C-level (8.216) and A-level (5.414) conferences, suggesting that B-level venues in this field attract more impactful work. For the subfield of Network and Information Security, the average publication indices are relatively lower overall, with A-level (7.353) exceeding C-level (5.680), which in turn exceeds B-level (4.266), indicating some variability in publication quality across tiers. Regarding the average citation index shown in Fig. 2(b), both the Artificial Intelligence and Network and Information Security subfields exhibit citation trends consistent with their respective publication index rankings. However, in the subfield of Computer Graphics and Multimedia, a reverse trend emerges: the lower the citing tier, the higher the average citation index—63.725 for C-level, 47.677 for B-level, and only 6.807 for A-level. This suggests that research in this subfield receives more attention in mid- and lower-level citing literature.

In summary, the subfield and conference tier have a significant impact on the distribution of publications and citations. The Artificial Intelligence subfield holds a clear advantage in both publication quality and citation impact, particularly in high-tier conferences. The Computer Graphics and Multimedia subfield demonstrates unique influence in mid- and lower-level venues, while the Network and Information Security subfield shows relatively stable but modest performance. Therefore, when evaluating scholars' contributions, it is essential to account for disciplinary characteristics and pay close attention to their publication and citation performance across different conference tiers.

2.2 Results of the AEV-Index

We used Eq. (2) to calculate the AEV-index for the core authors in each subfield; the numbers of publications, citations, and the h-index of the top ten scholars by AEV-index are shown in Tables 2-4.

Based on the results shown in Tables 2-4, we compared the number of publications, citations, h-index, and AEV-index of the scholars. The results indicate that the AEV-index has significant advantages over the h-index. Specifically, the advantages are as follows:

(i) The AEV-index can more comprehensively reflect the academic contributions and influence of scholars. Although the h-index considers both the number of papers and the number of citations, it only focuses on the performance of the "core paper set" and ignores those papers outside the core set that also have a certain impact. In contrast, the AEV-index assigns value weights to the publication tier and citation tier of the scholar's papers, making the evaluation more comprehensive. For example, in the Network and Information Security field, Daniel Gruss has an h-index of only 5, which is not very high, but his total citations reach 191, mostly from a few papers with outstanding influence. The traditional h-index cannot fully reflect this situation, while the AEV-index can well demonstrate his academic value.

(ii) The AEV-index can better distinguish scholars with the same h-index but different academic contributions. The h-index is just a threshold, and cannot effectively differentiate the various situations under the same h-index. However, the AEV-index, through the differences in publication counts and total citations, can perform more refined ranking and evaluation of the academic achievements of these scholars. For instance, in the Computer Graphics and Multimedia field, Shinji Watanabe and Tara N. Sainath both have an h-index of 13, but Shinji Watanabe's publication count (74) is significantly higher, while his total citations (555) are slightly lower than Tara N. Sainath's (597), so Shinji Watanabe's AEV-index (17.956 9) is higher than Tara N. Sainath's (16.846 6), better reflecting the gap between them.

(iii) The AEV-index can better evaluate scholars with a small number of publications but high impact. The h-index often underestimates the academic value of such scholars, as it overly emphasizes the quantity of papers. In contrast, the AEV-index, by comprehensively considering the total publication count and total citations, can more fairly evaluate their contributions. For example, in the Artificial Intelligence field, Wei Liu has only 10 papers but 1 973 citations, resulting in a relatively high AEV-index of 21.610 2. If relying solely on the h-index of 3, it would be impossible to accurately assess his academic achievements. Dong Yu in the Computer Graphics and Multimedia field is a similar case—his publication count of 36 is not particularly outstanding in the field, but his total citations of 788 are much higher than other scholars, and the AEV-index well captures this, allowing Dong Yu to rank among the top in the field with an AEV-index of 17.826 7.

(iv) The AEV-index can better reflect the characteristics and preferences of different subject areas as well as different paper tiers. For example, in the Computer Graphics and Multimedia subfield, scholar Wu Liu has relatively low publication and citation counts (19 and 37 respectively), resulting in a low h-index of 4. However, among his 19 papers, 13 were published in A-level conferences, and the remaining 6 were published in B-level conferences. Moreover, his A-level conference papers have been cited 22 times, while his B-level conference papers have been cited 7 times. These factors allow Wu Liu to rank 8th in the field with an AEV-index of 13.204 2.

In summary, the AEV-index has significant advantages over the traditional h-index in terms of comprehensiveness, discriminative power, balance, and applicability. In academic evaluation, the AEV-index can more objectively and comprehensively reflect the academic achievements of scholars, helping to further refine and improve the quality and fairness of academic evaluation.

Table 2

The top ten scholars of AEV-index in the subfield of Network and Information Security

Table 3

The top ten scholars of AEV-index in the subfield of Computer Graphics and Multimedia

Table 4

The top ten scholars of AEV-index in the subfield of Artificial Intelligence

2.3 Correlation Test

To verify the feasibility of the AEV-index, this paper conducted correlation analysis between it and other scholar evaluation indicators such as total publication, total citation, and h-index. Spearman correlation coefficients were used because bibliometric indicators are unlikely to follow a normal distribution[22]. These coefficients for the correlations between the AEV-index and other indicators in the three subfields are shown in Table 5.

The p-values (Sig) for the correlations with publication, citation, and the h-index are all below 0.05, indicating that the AEV-index is significantly and positively correlated with these indicators, and all the indicators are strongly correlated with one another. This suggests that the AEV-index is, overall, a feasible scholar evaluation indicator.

Table 5

Correlation coefficient between the AEV-index and other indicators

2.4 Applicability Analysis of the AEV-Index

To address the shortcomings of the h-index, the academic community has conducted extensive research and introduced a series of h-index variants, further improving and optimizing the h-index. The g-index[23], hg-index[24], A-index[25], and R-index[26] are used to overcome the problem of the h-index being overly influenced by the number of publications; the hT-index[27] can reduce the impact of the h-index neglecting lowly cited papers outside the h-core; the hrat-index[28] and hm-index[29] address the h-index's relatively low discriminative power. Most existing studies have improved the h-index in a single aspect; less research comprehensively addresses the h-index's issues of being overly influenced by publication count, neglecting lowly cited papers, and having low discriminative power. The AEV-index proposed in this paper is positively correlated with the h-index, indicating that it is not a radical departure from existing indicators but rather an attempt to optimize the h-index from multiple aspects.

2.4.1 The AEV-index reduces the impact of lower publication output on scholar evaluation

The h-index is not highly sensitive to highly cited papers: even if a paper's citation count doubles (or more), the h-index remains unchanged unless the scholar also accumulates enough additional publications. In contrast, the AEV-index considers the subsequent citations of papers and also incorporates the quality of citations (the level of the citing literature), which reduces the impact of low publication output on scholar evaluation.

For example, in the field of Artificial Intelligence, Tsungyi Lin has published only 9 papers in the past decade. However, one of them, "Feature pyramid networks for object detection", has been cited 1 097 times, including 844 citations from A-level conference papers. Most of his other papers also have at least 30 citations (Table 6). However, due to his low publication output, his h-index is only 8 (ranked 131st). In contrast, the AEV-index fully considers the citation frequency and citation quality of this paper. Tsungyi Lin's AEV-index is 17.117 2 (ranked 13th), significantly higher than that of some scholars in the subfield who have published more than 30 papers with an average citation count per paper of less than 10 and an h-index above 10.

Table 6

Paper titles and citation frequencies for Tsungyi Lin

2.4.2 The AEV-index emphasizes low-cited papers

The h-index neglects lowly cited literature. Some lowly cited papers may have accumulated fewer citations due to a shorter time span. However, these lowly cited papers may contain high-quality papers that will gradually be discovered and become highly cited over time. The AEV-index addresses this by weighting both high-level conference papers and citations from high-impact literature, thus highlighting papers with potential for future impact.

For example, in the subfield of Computer Graphics and Multimedia, two papers with their publication levels and citation information are shown in Table 7. When excluding the impact of publication tiers on the AEV-index calculation and focusing solely on the citation component, the first low-citation paper has an AEV-index more than twice that of the second (0.698 3 vs. 0.302 7). Moreover, based on their publication tiers, the first paper is more likely to become highly cited in the future than the second.

Table 7

Publication level and citation status of two papers in the subfield of Computer Graphics and Multimedia

2.4.3 The AEV-index has high discriminative power

Hirsch[9] argues that two people with similar h-indices are comparable in overall scientific impact even if their total publications or total citations differ greatly. In practice, however, when candidates must be ranked for purposes such as promotion or research funding allocation, using the h-index alone may produce a large number of ties, rendering the evaluation results uninformative.

In the calculation of the AEV-index, since the publication level and citation level are introduced, even if the publication volume and total citations of two researchers are very close, the evaluation results may still have a large gap. Table 8 shows the relevant indicators of two scholars in the Network and Information Security subfield. Although their publications, total citations, and h-index are completely the same, the AEV-index can still distinguish them. In addition, the AEV-index retains three decimal places during calculation, making the differences between each scholar more obvious, and rarely results in the same index values, which facilitates comparison and ranking.

Table 8

Metrics of two scholars in the subfield of Network and Information Security

2.4.4 The AEV-index is applicable for scholar evaluation across different academic disciplines

The h-indices of scholars from different academic disciplines are not suitable for direct comparison. Based on the calculations from the dataset in this paper, the average h-index in the Artificial Intelligence subfield is around twice that of the other two subfields. Particularly in the Network and Information Security subfield, the highest h-index among top scholars does not exceed 8, while in the Artificial Intelligence subfield, there are 203 authors with an h-index greater than or equal to 8. Evaluating scholars solely by h-index without accounting for disciplinary characteristics may overlook the scholars who are leading in disciplines with lower average publication output and average citations.

The AEV-index can better address these issues arising from disciplinary differences. On one hand, the calculation of the AEV-index has a mitigating effect on disciplinary differences. The average AEV-index for the three subfields is 5.497 4, 5.581 4, and 5.671 6 respectively, which are relatively close. Additionally, when performing the Kolmogorov-Smirnov (K-S) test on the AEV-index of scholars from the Network and Information Security subfield and the Artificial Intelligence subfield[30], the two-sided significance is 0.073 (greater than 0.05), indicating no significant difference in the AEV-index distributions between these two subfields. The AEV-index has the effect of narrowing the differences across academic disciplines. On the other hand, the AEV-index also supports joint evaluation of top scholars across different disciplines. Table 9 shows the scholars in the top 1% of AEV-index in the three computer science subfields in this paper. The AEV-index of the top scholars in each subfield are relatively close, allowing them to be evaluated together.
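The K-S comparison above can be sketched as follows. The function computes the standard two-sample K-S statistic (the maximum gap between the two empirical CDFs); the two samples in the example are toy values, not the study's AEV distributions, and the p-value computation (for which SciPy's `ks_2samp` would normally be used) is omitted.

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))
    d = 0.0
    for x in points:
        fa = sum(1 for v in a if v <= x) / len(a)  # ECDF of sample a at x
        fb = sum(1 for v in b if v <= x) / len(b)  # ECDF of sample b at x
        d = max(d, abs(fa - fb))
    return d

# Toy AEV-index samples for two subfields (illustrative values only)
nis = [2.1, 3.4, 5.5, 6.0, 7.2]
ai = [2.0, 3.8, 5.6, 6.1, 7.5]
print(round(ks_statistic(nis, ai), 3))  # small statistic: similar distributions
```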

Table 9

Top 1% scholars by AEV-index in each subfield

3 Conclusion

From the perspective of academic equilibrium value, this paper proposes the AEV-index as an improved scholar evaluation index. Based on the average publication output and average citation counts in a scholar's academic discipline, it introduces two indicators: publication level and citation level. Using scholars from three subfields of computer science as research subjects, an empirical study shows that the AEV-index correlates strongly with conventional metrics (e.g., publication count, citation frequency, and h-index). Meanwhile, it mitigates cross-disciplinary variations, thereby facilitating more equitable comparisons of scholarly impact across different fields. Additionally, the AEV-index mitigates the negative impact of lower publication output on scholar evaluation, provides recognition for lowly cited but high-quality papers, and has better discriminative power, improving existing scholar evaluation indices in multiple aspects. This also demonstrates that the AEV-index proposed in this paper can, to a certain extent, promote the development of quantitative academic evaluation theory in a more scientific and reasonable direction.

The theoretical framework developed in this paper has broad potential applications beyond computer science. In natural sciences such as physics, chemistry, and biology, where publication patterns and citation behaviors vary dramatically between theoretical and experimental subfields, the AEV-index could provide more balanced evaluations by accounting for these structural differences. For medical sciences, where translational research may produce fewer but highly impactful publications compared to clinical studies, the equilibrium approach could better recognize contributions across the research spectrum. Furthermore, in social sciences and humanities, where monographs and books often carry significant weight alongside journal articles, the theoretical principles of the AEV-index could be extended to accommodate these diverse publication types by assigning appropriate value weights.

The theoretical advancements proposed in this paper contribute to moving quantitative academic evaluation theory in a more scientifically rigorous and equitable direction. However, several limitations warrant acknowledgment and provide directions for future theoretical development: First, the empirical validation in this study covers only three subfields of computer science, and more comprehensive data across diverse fields is needed to fully validate the theoretical constructs. Second, the current framework fails to differentiate the contribution levels among all authors, as it only takes into account the first three authors of a paper. However, different papers may feature co-first authors and corresponding authors in varying author positions. Future research could incorporate all authors (including the identification of co-first authors and corresponding authors) along with their contribution weights into the index design, thereby enhancing fairness and accuracy.

References

  1. Xu Z H, Li X L, Shi J, et al. Research on key core technology identification based on multi-source heterogeneous data: Taking lithography technology as an example[J]. China Science and Technology Forum, 2024(12): 127-136, 164(Ch).
  2. Wei F Y, Yuan M J, Yang L, et al. Redefining research metrics: Introducing the inverse-H-index and efficacy equation in scholarly publication analysis[EB/OL]. [2025-03-16]. https://doi.org/10.1016/j.dim.2025.100100.
  3. Xu Z H, Cai H Y, Zhang W, et al. The construction of the index system and model for assessing the national science and technology security risk[J]. Information Science, 2023, 41(12): 165-173, 182(Ch).
  4. Wang X. Evaluation of the discourse power in Chinese academic journals: A multi-fusion perspective[J]. Data and Information Management, 2023, 7(4): 100026.
  5. Lahiani R. Recreating relevance: Translated Arabic idioms through a relevance theory lens[J]. Humanities and Social Sciences Communications, 2024, 11(1): 459.
  6. Keshavarz-Fathi M, Yazdanpanah N, Kolahchi S, et al. Universal research index: An inclusive metric to quantify scientific research output[J]. The Journal of Academic Librarianship, 2023, 49(3): 102714.
  7. Lyu W J, Huang Y H, Liu J. The multifaceted influence of multidisciplinary background on placement and academic progression of faculty[J]. Humanities and Social Sciences Communications, 2024, 11(1): 350.
  8. Wang R Z, Lewis M, Zheng-Pywell R, et al. Using the H-index as a factor in the promotion of surgical faculty[J]. Heliyon, 2022, 8(4): e09319.
  9. Hirsch J E. An index to quantify an individual's scientific research output[J]. Proceedings of the National Academy of Sciences of the United States of America, 2005, 102(46): 16569-16572.
  10. Anand B, Sudhakar T, Akshay D. A review on h-index and its alternative indices[J]. Journal of Information Science, 2023, 49(3): 624-665.
  11. Cova T F G G, Jarmelo S, Nunes S C C, et al. Seeing is believing: A graphical reference framework for multi-criteria evaluation[J]. Evaluation, 2017, 23(4): 479-494.
  12. Gingras Y, Khelfaoui M. Do we need a book citation index for research evaluation?[J]. Research Evaluation, 2019, 28(4): 383-393.
  13. Thelwall M, Sud P. Mendeley readership counts: An investigation of temporal and disciplinary differences[J]. Journal of the Association for Information Science and Technology, 2016, 67(12): 3036-3050.
  14. Seglen P O. Why the impact factor of journals should not be used for evaluating research[J]. BMJ, 1997, 314(7079): 498-502.
  15. Schubert A, Braun T. Relative indicators and relational charts for comparative assessment of publication output and citation impact[J]. Scientometrics, 1986, 9(5): 281-291.
  16. Katsaros D, Akritidis L, Bozanis P. The f index: Quantifying the impact of coterminal citations on scientists' ranking[J]. Journal of the American Society for Information Science and Technology, 2009, 60(5): 1051-1056.
  17. Waltman L, van Eck N J, van Leeuwen T N, et al. Towards a new crown indicator: Some theoretical considerations[J]. Journal of Informetrics, 2011, 5(1): 37-47.
  18. Fassin Y. The compound F2-index and the compound H-index as extension of the f2 and h-indexes from a dynamic perspective[J]. Journal of Data and Information Science, 2020, 5(3): 71-83.
  19. Ruocco G, Daraio C, Folli V, et al. Bibliometric indicators: The origin of their log-normal distribution and why they are not a reliable proxy for an individual scholar's talent[J]. Palgrave Communications, 2017, 3(1): 17064.
  20. Xu Z H. Coupling coordination development and driving factors of new energy vehicles and ecological environment in China[J]. Wuhan University Journal of Natural Sciences, 2025, 30(1): 79-90.
  21. Xu Z H. Machine learning-based quantitative structure-activity relationship and ADMET prediction models for ERα activity of anti-breast cancer drug candidates[J]. Wuhan University Journal of Natural Sciences, 2023, 28(3): 257-270.
  22. Xu Z H, Lin Y, Cai H Y, et al. Risk assessment and categorization of terrorist attacks based on the Global Terrorism Database from 1970 to 2020[J]. Humanities and Social Sciences Communications, 2024, 11(1): 1103.
  23. Manjareeka M. Evaluation of researchers: H-index or G-index which is better?[J]. Journal of Integrative Medicine and Research, 2023, 1(1): 34-36.
  24. Alonso S, Cabrerizo F J, Herrera-Viedma E, et al. hg-index: A new index to characterize the scientific output of researchers based on the h- and g-indices[J]. Scientometrics, 2010, 82(2): 391-400.
  25. Cheng Q J, Kwok C L, Cheung F T W, et al. Construction and validation of the Hong Kong altruism index (A-index)[J]. Personality and Individual Differences, 2017, 113: 201-208.
  26. Bannai H, Gagie T, Tomohiro I. Refining the r-index[J]. Theoretical Computer Science, 2020, 812: 96-108.
  27. Wang H Y, Chien T W, Kan W C, et al. Authors who contributed most to the fields of hemodialysis and peritoneal dialysis since 2011 using the hT-index: Bibliometric analysis[J]. Medicine, 2022, 101(38): e30375.
  28. Frittelli M, Mancini L, Peri I. Scientific research measures[J]. Journal of the Association for Information Science and Technology, 2016, 67(12): 3051-3063.
  29. Tietze A, Hofmann P. The h-index and multi-author hm-index for individual researchers in condensed matter physics[J]. Scientometrics, 2019, 119(1): 171-185.
  30. Khatib A, Ahmed R, Niaz S, et al. Sticky floor, broken ladder, and glass ceiling in internal medicine academic ranking, leadership, and research productivity[J]. Journal of General Internal Medicine, 2025, 40(2): 354-360.

All Tables

Table 1  Core authors in each subfield
Table 2  The top ten scholars of AEV-index in the subfield of Network and Information Security
Table 3  The top ten scholars of AEV-index in the subfield of Computer Graphics and Multimedia
Table 4  The top ten scholars of AEV-index in the subfield of Artificial Intelligence
Table 5  Correlation coefficients between the AEV-index and other indicators
Table 6  Paper titles and citation frequencies for Tsungyi Lin
Table 7  Publication level and citation status of two papers in the subfield of Computer Graphics and Multimedia
Table 8  Metrics of two scholars in the subfield of Network and Information Security
Table 9  Top 1% scholars by AEV-index in each subfield

All Figures

Fig. 1  Research approach
Fig. 2  Average publication and citation indices for different subfields and levels
