Russian Index of Science Citation: Overview and Review
In early 2016 a new index was launched on the Web of Science platform: the Russian Science Citation Index (RSCI). The database is free for all Web of Science subscribers except those from the former Soviet Union countries. It includes publications from 652 of the best Russian journals and is based on data from the Russian national citation index, the Russian Index of Science Citation (RISC). RISC was launched in 2005, but to date very little information about it is available in the English-language scholarly literature. The aim of this paper is to describe the history, current structure and user-facing capabilities of RISC. We focus on the novel features of RISC that are crucial for bibliometrics and unavailable in the international citation indices.
This paper introduces a systematic technology trend monitoring (TTM) methodology based on an analysis of bibliometric data. Among the key premises for developing the methodology are: (1) the increasing number of data sources addressing different phases of science, technology and innovation (STI) development, which requires a more holistic and integrated analysis; (2) the need for more customized clustering approaches, particularly for the purpose of identifying trends; and (3) augmenting the policy impact of trends through gathering future-oriented intelligence on emerging developments and potential disruptive changes. Thus, the TTM methodology developed combines and jointly analyzes different datasets to gain intelligence covering different phases of technological evolution, starting from the ‘emergence’ of a technology towards ‘supporting’ and ‘solution’ applications and more ‘practical’ business- and market-oriented uses. Furthermore, the study presents a new algorithm for data clustering in order to overcome the weaknesses of readily available clustering tools for the purpose of identifying technology trends. The study places TTM activities into a wider policy context so that the outcomes can be used in STI policy formulation and R&D strategy-making processes. The methodology developed is demonstrated in the domain of “semantic technologies”.
Several recent bibliometric studies have reignited the well-known debates initiated more than twenty years ago by the influential works of Per Seglen (1992; 1994; 1997). The question is whether the impact factor may represent not only the citedness of a journal as a whole, but also give some estimate of the quality of the individual papers published in it (for different views, see Larivière et al., 2016; Zhang, Rousseau & Sivertsen, 2017; Waltman & Traag, 2017; Pudovkin, 2018). This is an important and profound theme: the interrelation between a part and a whole, their mutual dependency and the limits of this dependency.
To explore this research question, we analyze the correlation between the average (in our case, the journal impact factor, IF) and the amplitude of oscillations/deviations around this average (citations received by individual papers in the journal). These are, so to speak, “second-order indicators”: we measure the deviation of the citations received by individual papers from the journal’s average.
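The computation described above can be sketched as follows. This is a minimal illustration, not the paper’s actual procedure: the journal names and citation counts are invented, the journal average stands in for the impact factor, and the population standard deviation stands in for the “amplitude of deviations”.

```python
# Hypothetical sketch: relate each journal's average citedness (a stand-in
# for the impact factor) to the spread of citations around that average.
from statistics import mean, pstdev

# Toy data: citation counts of individual papers in four invented journals.
journals = {
    "J1": [0, 1, 2, 3, 50],
    "J2": [2, 2, 3, 3, 4],
    "J3": [0, 0, 1, 10, 30],
    "J4": [1, 1, 1, 2, 2],
}

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

averages = [mean(c) for c in journals.values()]   # "impact factor" proxy
spreads = [pstdev(c) for c in journals.values()]  # deviation around it

r = pearson(averages, spreads)
print(round(r, 3))
```

On skewed citation distributions such as these toy ones, journals with higher averages also tend to show larger spreads, which is exactly the kind of dependence the analysis probes.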
We apply five majority-rule-based ordinal ranking methods to data on economics, management and political science journals in order to produce aggregate journal rankings. First, we calculate aggregates for the set of rankings based on seven popular bibliometric indicators (impact factor, 5-year impact factor, immediacy index, article influence score, h-index, SNIP and SJR). Then, we exclude the h-index and repeat the calculations. We perform a comparative correlation analysis of the aggregates and the initial rankings, using two rank correlation measures: Kendall’s tau-b and the share of coinciding pairs r. The analysis demonstrates that aggregate rankings represent the set of single-indicator-based rankings better than any of the seven rankings themselves. Among the single-indicator-based rankings, the best representation of the set is produced by the 5-year impact factor; the least representative are rankings based on the immediacy index. The exclusion of the h-index from the set of indicators does not change these results.
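The general scheme can be illustrated in miniature. The sketch below is not one of the paper’s five methods: it uses the Copeland rule, a well-known majority-rule-based aggregation method, on invented rankings of five journals, and then compares the aggregate with each input ranking via Kendall’s tau (which equals tau-b when there are no ties, as here).

```python
# Illustrative sketch: aggregate several ordinal journal rankings with the
# Copeland rule (one majority-rule-based method), then compare rankings
# with Kendall's tau. All journal names and rankings are invented.
from itertools import combinations

# Toy rankings of five journals (position 0 = best) under three hypothetical
# indicators; real input would be rankings by IF, SNIP, SJR, etc.
rankings = [
    ["A", "B", "C", "D", "E"],
    ["B", "A", "C", "E", "D"],
    ["A", "C", "B", "D", "E"],
]

def copeland(rankings):
    """Copeland score: wins minus losses in pairwise majority contests."""
    items = rankings[0]
    score = {j: 0 for j in items}
    for a, b in combinations(items, 2):
        a_wins = sum(r.index(a) < r.index(b) for r in rankings)
        b_wins = len(rankings) - a_wins
        if a_wins > b_wins:
            score[a] += 1
            score[b] -= 1
        elif b_wins > a_wins:
            score[b] += 1
            score[a] -= 1
    return sorted(items, key=lambda j: -score[j])

def kendall_tau(r1, r2):
    """Kendall's tau for strict rankings (equals tau-b when no ties)."""
    concordant = sum(
        (r1.index(a) < r1.index(b)) == (r2.index(a) < r2.index(b))
        for a, b in combinations(r1, 2)
    )
    pairs = len(r1) * (len(r1) - 1) // 2
    return (2 * concordant - pairs) / pairs

aggregate = copeland(rankings)
print(aggregate)
for r in rankings:
    print(round(kendall_tau(aggregate, r), 3))
```

In this toy case the aggregate sides with the pairwise majority on every disputed pair, which is the defining property of majority-rule-based procedures.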
Proceedings of the Science and Technology Indicators Conference 2018, Leiden.
Proceedings of the 21st International Conference on Science and Technology Indicators: Peripheries, Frontiers and Beyond.
14-16 September 2016
Universitat Politècnica de València
Interregional comparison of innovation development, and identification of the causes of differentiation among Russian regions in this respect, is currently a high-priority question. A section devoted to publication activity is a crucial part of many innovation ratings and scoreboards. Publication activity is a significant factor that prepares the ground for innovation at the regional and national levels. It also has special value because it serves as one of the important criteria for government investment in this sphere, which, as is well known, is not always highly effective. There is therefore a need for a special rating focused primarily on the publication activity of the Russian regions, and this article is devoted to that task. To this end, indicators of publication activity for 79 Russian regions over the period 2010–2015 were gathered and analyzed. The largest electronic library in Russia (eLIBRARY) was used as the main data source. The rating of publication activity is based on two sections, “size of region” and “quality of publications”, so that both quantitative and qualitative indexes are taken into account. This approach makes it possible to analyze the aggregate publication activity of regions, which is one of the important factors in estimating their innovation potential. Previous ratings concentrated mainly on the publication activity of higher education institutions and individual scientists. Moreover, this methodology provides additional understanding of regional innovation processes and creates opportunities to monitor and evaluate local governments’ actions in stimulating innovation development in the regions.
An analysis of journal rankings based on five commonly used bibliometric indicators (impact factor, article influence score, SNIP, SJR, and h-index) has been conducted. It is shown that, despite high correlation, these single-indicator-based rankings are not identical. Therefore, a new approach to ranking academic journals is proposed, based on the aggregation of single bibliometric indicators using several ordinal aggregation procedures. In particular, we use the threshold procedure, which reduces opportunities for manipulation.
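A threshold-style rule can be sketched as follows. This is a hedged illustration in the spirit of the threshold procedure mentioned above, not the paper’s exact implementation: journals receive discrete grades under each indicator and are ordered lexicographically by how many low grades they have, so a single inflated indicator cannot lift a journal past others — which is what limits manipulation. All journal names and grades are invented.

```python
# Hypothetical sketch of a threshold-style aggregation rule: compare
# journals by their counts of worst grades first, then next-worst, etc.
from collections import Counter

# Toy grades (1 = worst, 3 = best) of four invented journals under three
# indicators; real input would be discretized bibliometric scores.
grades = {
    "J1": [3, 3, 1],
    "J2": [2, 2, 2],
    "J3": [3, 2, 2],
    "J4": [1, 2, 3],
}

def threshold_key(gs, worst=1, best=3):
    # Count occurrences of each grade from worst upward; fewer worst
    # grades wins, with ties broken by the next grade, and so on.
    c = Counter(gs)
    return tuple(c[g] for g in range(worst, best))

ranking = sorted(grades, key=lambda j: threshold_key(grades[j]))
print(ranking)
```

Note that J2, with uniformly middling grades, outranks J1, whose two top grades cannot compensate for a single worst grade; this insensitivity to inflating one indicator is the anti-manipulation property.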