Two articles in the August issue of the *Economic Journal* make similar points to my *Journal of Economic Literature* article on the uncertainty in economics journal rankings, which I discussed in this blog here.

David Laband's article "On the use and abuse of economics journal rankings" (also see his recent blogpost) explicitly treats the dispersion of citations within a journal as important information in addition to the mean or median citation count. The main table in his paper reports the mean, median, and standard deviation of citations received up to 2011 by articles published in 248 economics journals in 2001-5. Also provided are each journal's Hirsch index, the fraction of the journal's articles included in its Hirsch index, and the percentages of articles with zero citations, with at least 15 citations, and with at least 40 citations. Finally, an interesting concept is the number of articles from each journal among the 408 articles that make up the "global Hirsch index" of all 248 journals combined. Most journals have no articles in this highly cited group, while a small number of the "usual suspects" have many.
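For readers unfamiliar with the measure, a journal's Hirsch index is the largest number h such that h of its articles have at least h citations each; the "global" version simply pools the articles of all journals before computing h. A minimal sketch (the citation counts below are invented for illustration, not Laband's data):

```python
def h_index(citations):
    """Largest h such that h of the citation counts are >= h."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for the articles of two hypothetical journals
journal_a = [45, 30, 22, 15, 9, 4, 2, 0, 0]
journal_b = [12, 7, 3, 1, 0]

print(h_index(journal_a))               # 5: five articles have >= 5 citations
print(h_index(journal_b))               # 3
print(h_index(journal_a + journal_b))   # pooled "global" index across both
```

The pooled call shows why most journals contribute nothing to the global index: only the most-cited articles across the whole pool can clear the higher global threshold.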

John Hudson's article "Ranking journals" compares four different subjective journal rankings, including the Keele list and the ARC ERA 2010 list. He regresses their rankings on various objective citation-based measures and on measures intended to capture bias factors such as home-country bias. The model is then used to predict the probability that a given journal belongs in a given quality tier, of which there are four (for example, the ABDC uses A*, A, B, and C). There is considerable ambiguity: many journals that the objective criteria place in a category with only a low probability are nonetheless assigned to it by the subjective list-makers. The message is that there is a lot of uncertainty in the rankings such lists assign. A simpler way of making the same point is the table of correlations that Hudson presents: the correlation between the ARC and Keele lists is only 0.67.
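A correlation of this kind can be computed by scoring each list's letter grades numerically and correlating the two score vectors. The sketch below (journal grades are invented, and the ABDC-style scoring of A* = 4 down to C = 1 is an assumption, not Hudson's exact method) computes a Pearson correlation on such grades:

```python
def grade_correlation(scores_a, scores_b):
    """Pearson correlation between two equal-length numeric grade vectors."""
    n = len(scores_a)
    mean_a = sum(scores_a) / n
    mean_b = sum(scores_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(scores_a, scores_b))
    var_a = sum((a - mean_a) ** 2 for a in scores_a)
    var_b = sum((b - mean_b) ** 2 for b in scores_b)
    return cov / (var_a * var_b) ** 0.5

SCORE = {"A*": 4, "A": 3, "B": 2, "C": 1}

# Invented grades for six journals under two hypothetical subjective lists
list_1 = ["A*", "A", "A", "B", "C", "B"]
list_2 = ["A*", "A", "B", "B", "B", "C"]

r = grade_correlation([SCORE[g] for g in list_1],
                      [SCORE[g] for g in list_2])
print(round(r, 2))  # roughly 0.74 for these invented grades
```

Even two lists that agree on most journals, as these invented ones do, can produce a correlation well below 1, which is the same disagreement that the 0.67 figure between ARC and Keele reflects.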
