I wrote a long comment on this blog post by Ludo Waltman, but it got eaten by their system, so I'm rewriting it in expanded form as a blog post of my own. Waltman argues, I think, that for those who reject the use of journal impact factors to evaluate individual papers, such as Lariviere et al., there should then be no legitimate uses for impact factors at all. I don't think this is true.
The impact factor was first used by Eugene Garfield to decide which additional journals to add to the Science Citation Index he created. Similarly, librarians can use impact factors to decide which journals to subscribe to or unsubscribe from, and publishers and editors can use such metrics to track the impact of their journals. These are all sensible uses of the impact factor that I think no-one would disagree with. Of course, we can argue about whether the mean number of citations that articles in a journal receive is the best metric, and I think that standard errors - as I suggested in my Journal of Economic Literature article - or the complete distribution, as suggested by Lariviere et al., should be provided alongside it.
I actually think that impact factors or similar metrics are useful for assessing very recently published articles, as I show in my PLoS One paper, before they have had time to accrue many citations. Also, impact factors seem to be a proxy for journal acceptance rates or selectivity, on which we have only limited data. Even if one rules these out as legitimate uses, that doesn't mean rejecting the use of such metrics entirely.
I disagree with the comment by David Colquhoun that no working scientists look at journal impact factors when assessing individual papers or scientists. Maybe this is the case in his corner of the research universe, but it definitely is not the case in mine. Most economists pay much, much more attention to where a paper was published than to how many citations it has received. And researchers in the other fields I interact with also pay a lot of attention to journal reputations, though they usually pay more attention to citations as well. Of course, I think that economists should pay much more attention to citations too.