STI 2018 Conference Proceedings. Proceedings of the 23rd International Conference on Science and Technology Indicators
Several recent bibliometric studies have reignited the well-known debates initiated more than twenty years ago by the seminal works of Per Seglen (1992; 1994; 1997). The question is whether the impact factor can represent not only the citedness of a journal as a whole, but also provide some estimate of the quality of the individual papers published in it (for different views, see Larivière et al., 2016; Zhang, Rousseau & Sivertsen, 2017; Waltman & Traag, 2017; Pudovkin, 2018). This is an important and profound theme: the interrelation between a part and a whole, their mutual dependency, and the limits of this dependency.
To explore this research question, we analyze the correlation between the average (in our case, the journal impact factor, IF) and the amplitude of oscillations/deviations around this average (citations received by individual papers in the journal). These are, so to speak, "indicators of the second order": we measure the deviation of the citations received by individual papers from the journal's average.
We find that evaluating research activity by RISC Core publications works well for all types of Russian universities and prevents misleading data-manipulation practices in RISC, such as uploading poorly cited and rarely viewed article collections and proceedings of student conferences to the database via Science Index. On the other hand, Science Index is a useful instrument for correcting errors in the database and linking missing references, so using the RISC Core for the research assessment of Russian universities and research institutions can encourage its proper use and enhance the data quality of the Russian Index of Science Citation.
The problem of identifying the leading universities in a country is rather easy to solve: one may focus, for example, on highly cited papers (e.g. Tijssen, Visser & van Leeuwen, 2002; Pislyakov & Shukshina, 2014; Abramo & D'Angelo, 2015) or other indicators of excellence. It is sometimes more challenging to find the universities of the "second wave", which deserve additional governmental support and budget because they may then become the most prominent ones and, so to speak, enter the "Eredivisie", the highest football league. It is a more difficult task to find the first among the seconds than to find the firsts among all.
We apply five majority-rule-based ordinal ranking methods to data on economics, management and political science journals in order to produce aggregate journal rankings. First, we calculate aggregates for the set of rankings based on seven popular bibliometric indicators (impact factor, 5-year impact factor, immediacy index, article influence score, h-index, SNIP and SJR). Then, we exclude the Hirsch index and repeat the calculations. We perform a comparative correlation analysis of the aggregates and the initial rankings, using two rank correlation measures: Kendall's tau-b and the share of coinciding pairs r. The analysis demonstrates that the aggregate rankings represent the set of single-indicator-based rankings better than any of the seven rankings themselves. Among the single-indicator-based rankings, the best representation of the set is produced by the 5-year impact factor, while the least representative are the rankings based on the immediacy index. The exclusion of the Hirsch index from the set of indicators does not change these results.
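Of the two correlation measures mentioned above, Kendall's tau-b (the variant with the standard tie correction) can be sketched from first principles. The function below is a plain-Python illustration of the statistic, not the authors' code; the input rankings are hypothetical.

```python
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall's tau-b between two rankings of the same journals.

    Counts concordant, discordant, and tied pairs; pairs tied in
    both rankings are ignored, and ties enter only through the
    denominator (the standard tie correction)."""
    concordant = discordant = ties_x = ties_y = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        dx, dy = xi - xj, yi - yj
        if dx == 0 and dy == 0:
            continue                # tied in both rankings
        elif dx == 0:
            ties_x += 1             # tied in x only
        elif dy == 0:
            ties_y += 1             # tied in y only
        elif dx * dy > 0:
            concordant += 1
        else:
            discordant += 1
    denom = sqrt((concordant + discordant + ties_x) *
                 (concordant + discordant + ties_y))
    return (concordant - discordant) / denom

# Two hypothetical rankings of four journals: identical rankings
# give tau-b = 1, fully reversed rankings give tau-b = -1.
tau = kendall_tau_b([1, 2, 3, 4], [1, 3, 2, 4])
```

For the two example rankings, four of the six journal pairs agree in order after one swap, so the statistic falls between 0 and 1; in the study the same computation is run between each single-indicator ranking and each aggregate.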