I have posted a new bibliometric working paper, which investigates how well future cumulative citations can be predicted from the first citations received by papers in economics and political science.
It is usually assumed that citations accumulate too slowly in the social sciences, apart from psychology, to be useful for short-term research assessment. For this reason, the Australian Government’s Excellence in Research for Australia (ERA) exercise, which assesses the research quality of universities over the previous five years, uses peer review for social science disciplines other than psychology but citation analysis for psychology and all the natural sciences. This peer review seems to me a wasteful duplication of effort: it re-reviews research outputs that have already passed through a peer review process once.
I show that, surprisingly, citations received by social science journal articles in the first one to two years after publication are strongly predictive of citations received in future years. By contrast, journal impact factors are mostly useful in the year of publication, and their contribution to predicting citations declines rapidly thereafter.
If citations can be predicted fairly reliably in social science disciplines, then it should be even easier to predict them in the natural sciences. This means that it should be possible to expand bibliometric analysis in research evaluation exercises to all disciplines apart from the humanities and arts. It also means that we should pay attention to the early citations received by papers when we evaluate individual academics for hiring and promotion. Impact factors reflect journal selectivity, on which we frequently do not have readily available data. But they explain only about 16-17% of the variation in rankings of papers six years later, conditional on the citations already received in the year of publication; the latter explain 13-14% of the variation. By the end of the year following publication, however, accumulated citations explain 52-53% of the variation in cumulative citations after six years, and 73% by the end of the second year after publication.
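The comparison above is essentially one of R² values from regressions of long-run citations on different early predictors. As a minimal sketch of that kind of exercise — using synthetic data, not the paper's sample, with all parameter values invented for illustration — one could compare how much variance impact factors and first-year citations each explain:

```python
import numpy as np

# Synthetic illustration only: simulate impact factors, first-year citations,
# and 6-year cumulative citations, then compare one-predictor OLS R² values.
# The data-generating process and all coefficients are assumptions.
rng = np.random.default_rng(0)
n = 1000

impact_factor = rng.gamma(shape=2.0, scale=1.5, size=n)
quality = rng.normal(size=n)  # unobserved article quality
# Early citations depend on journal visibility (impact factor) and quality.
cites_year1 = np.maximum(0, 0.5 * impact_factor + quality + rng.normal(size=n)).round()
# Long-run citations depend mostly on the same underlying quality signal.
cites_6yr = np.maximum(0, 4 * quality + impact_factor + rng.normal(scale=2, size=n)).round()

def r_squared(x, y):
    """R-squared of a one-predictor OLS fit of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_if = r_squared(impact_factor, cites_6yr)
r2_early = r_squared(cites_year1, cites_6yr)
print(f"R2 (impact factor):   {r2_if:.2f}")
print(f"R2 (first-year cites): {r2_early:.2f}")
```

In this toy setup, first-year citations explain more of the long-run variation than impact factors do, because they carry a direct signal of article quality rather than only journal-level selectivity — which is the qualitative pattern the paper reports.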
These models could be improved by adding information on the characteristics of the articles themselves and their authors, but that was too time-consuming to do for the almost 12,000 articles in my sample.
I have submitted a copy of my paper to the HEFCE inquiry on the use of metrics in research assessment.