Following up on a post a few months ago, here are some more comments on recent articles on journal impact factors:
Van Raan (2012) shows a strong correlation (0.87), across 157 chemistry research groups in the Netherlands, between the log of a group's average citations per article and the average impact factor of the journals it publishes in. But the correlation between citations to individual articles and the impact factors of the journals they appear in is much lower.
Vanclay (2012) strongly criticizes the impact factor in the lead paper of a special issue of Scientometrics on journal impact factors. In the opening paragraph he likens it to phrenology. Many of the remaining papers are invited responses, including one from Eugene Garfield (Pudovkin and Garfield, 2012), the originator of the impact factor, and one from David Pendlebury and Jonathan Adams of Thomson Reuters, the current publishers of the Web of Science and the Journal Citation Reports (Pendlebury and Adams, 2012).
Vanclay does suggest that impact factors would be improved if confidence intervals were reported. Pudovkin and Garfield (2012) argue, contrary to Vanclay (2012), that because the impact factor uses the full sample of data it has no attendant uncertainty in its calculation. But this depends on how we frame the question. If we model the number of citations that the articles published in a journal receive in a given subsequent year using a probability distribution, then the impact factor is simply an estimate of the expected value, or first moment, of that distribution. As an estimate of an unknown underlying parameter it is uncertain, and an uncertainty measure should be reported. By contrast, Moed et al. (2012, 372) “agree with Vanclay that ‘error bars’ are urgently needed in journal metrics.”
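To make the framing concrete: if we treat a journal's published articles as draws from an underlying citation distribution, the impact factor is a sample mean and its sampling uncertainty can be estimated, for instance by bootstrapping. The sketch below uses made-up citation counts purely for illustration:

```python
import random
import statistics

random.seed(42)

# Hypothetical citation counts for the articles a journal published in the
# two preceding years (these numbers are invented for illustration).
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 15, 22, 40]

# The impact factor is just the sample mean: total citations / articles.
impact_factor = statistics.mean(citations)

# Bootstrap standard error: resample articles with replacement and
# recompute the mean, treating the observed counts as draws from the
# underlying citation distribution.
boot_means = []
for _ in range(10_000):
    resample = [random.choice(citations) for _ in citations]
    boot_means.append(statistics.mean(resample))

se = statistics.stdev(boot_means)
lo, hi = impact_factor - 1.96 * se, impact_factor + 1.96 * se
print(f"Impact factor: {impact_factor:.2f} "
      f"(approx. 95% interval {lo:.2f} to {hi:.2f})")
```

The skewness of typical citation distributions means the bootstrap interval is only a rough guide, but even a crude error bar of this kind would convey how unstable a small journal's impact factor can be from year to year.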
As for the nature of the citation distribution function, Stringer et al. (2008) show that the distribution of citations to papers published in a given year in a given journal is lognormal for papers cited at least once. A journal’s proportion of uncited papers is tightly negatively correlated with the mean of its log citations. Though other distributions also fit the data well (e.g. Glänzel, 2009), reporting the mean of log citations, its standard deviation, and the uncited fraction would be very informative.
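The three summary statistics suggested above are straightforward to compute. A minimal sketch, again with invented citation counts, following Stringer et al.'s convention that the lognormal model applies only to the cited papers:

```python
import math
import statistics

# Hypothetical citation counts for one journal-year (invented numbers).
citations = [0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 10, 13, 18, 25, 41]

# Fraction of papers never cited.
uncited_fraction = sum(c == 0 for c in citations) / len(citations)

# The lognormal model is fit to cited papers only, so take logs of the
# positive counts.
log_cites = [math.log(c) for c in citations if c > 0]
mu = statistics.mean(log_cites)       # mean of log citations
sigma = statistics.stdev(log_cites)   # standard deviation of log citations

print(f"uncited fraction: {uncited_fraction:.2f}")
print(f"mean log citations: {mu:.2f}, sd: {sigma:.2f}")
```

Publishing (mu, sigma, uncited_fraction) for each journal-year would characterize the whole citation distribution far better than the single mean that the impact factor reports.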
References:
Glänzel, W. (2009) The multi-dimensionality of journal impact, Scientometrics 78(2): 355-374.
Moed, H. F., L. Colledge, J. Reedijk, F. Moya-Anegon, V. Guerrero-Bote, A. Plume, and M. Amin (2012) Citation-based metrics are appropriate tools in journal assessment provided that they are accurate and used in an informed way, Scientometrics 92: 367-376.
Pendlebury, D. A. and J. Adams (2012) Comment on a critique of the Thomson Reuters journal impact factor, Scientometrics 92: 395-401.
Pudovkin, A. I. and E. Garfield (2012) Rank normalization of impact factors will resolve Vanclay’s dilemma with TRIF: Comments on the paper by Jerome Vanclay, Scientometrics 92: 409-412.
Stringer, M. J., M. Sales-Pardo, and L. A. Nunes Amaral (2008) Effectiveness of journal ranking schemes as a tool for locating information, PLoS One 3(2): e1683.
van Raan, A. F. J. (2012) Properties of journal impact in relation to bibliometric research group performance indicators, Scientometrics 92: 457-469.
Vanclay, J. K. (2012) Impact factor: Outdated artefact or stepping-stone to journal certification?, Scientometrics 92: 211-238.