A new paper with Stephan Bruns on carrying out research assessment like the UK REF and the Australian ERA using citations data rather than peer review. We did a lot of the work of processing the data (doing fancy things with R and manually checking names of universities in Excel) when I visited Stephan in Kassel in November.
The problem with research assessment as carried out in Britain and in the social sciences in Australia is that publications that have already passed through a peer review process are peer reviewed again by the assessment panels. This imposes a significant workload on the many academics who are supposed to read these papers, on top of the effort each university puts into selecting the publications to be reviewed. Yet this second round of peer review is inferior to the first. If citation-based metrics were used instead, the whole process could be done much faster and more cheaply. In Australia, the natural sciences and psychology are already assessed using citation analysis. I think this could be extended to at least some other social sciences, including economics.
UK REF panels can also put some weight on citations data in some disciplines including most natural sciences and economics, but only as a positive indicator of academic significance and in very much a secondary role to peer review. This represents a change from the previous RAE, which prohibited the use of citations data by panels. This paper provides additional evidence on the potential effectiveness of citation analysis as a method of research assessment. We hope our results can inform the future development of assessment exercises such as the REF and ERA.
One reason why citation analysis is less accepted in the social sciences than in the natural sciences is the belief that citations accumulate too slowly in most social sciences, such as economics, to be useful for short-term research assessment.
My 2014 paper in PLoS ONE shows that long-run citations to articles in economics and political science are fairly predictable from the first few years of citations to those articles. However, research assessment evaluates universities rather than single articles. In this new paper, we show that rank correlations increase greatly when we aggregate over the economics publications of a university, and also when we aggregate publications over time. For UK universities, the rank correlation between citations received by the end of 2004 (2005) by economics articles published in 2003 and 2004 and total citations to those articles received through 2014 is 0.91 (0.97). These are high correlations. Correlations for Australia are a bit lower.
Our results show that, at the department or university level, citations accumulate fast enough in economics to predict the longer-run citation outcomes of recent publications. It is simply not true that citations accumulate too slowly in the social sciences to be used in research assessment.
On the other hand, the rank correlation between our early citation indicators and the outcomes of research assessment exercises in the UK and Australia ranges from 0.67 to 0.76. These results suggest that citation analysis is useful for research assessment in economics if the assessor is willing to use cumulative citations as a measure of research strength, though there do appear to be some systematic differences between peer-review-based research assessment and our citation analysis, especially in the UK. Part of the difference is likely due to differences between the sample of publications we selected to assess and the publications actually submitted to the 2010 ERA and 2008 RAE.