Friday, November 24, 2017

Data and Code for "Energy and Economic Growth: The Stylized Facts" and an Erratum

Following a request for our estimation code, we have now completed a full replication package for our 2016 Energy Journal paper and uploaded it to Figshare.

While we were putting this together we noticed some minor errors in the tables in the published paper. The reported standard errors of the coefficients of lnY/P in Tables 2 and 3 for the results without outliers are incorrect. We accidentally pasted the standard errors from Table 5 into Tables 2 and 3. The correct versions of Tables 2 and 3 should look like this:

The standard errors for unconditional convergence in Tables 4 and 6 are also incorrect. The reported standard errors are not robust, and one of them is completely wrong. The corrected tables should look like this:

None of these errors changes the significance level (1%, 5%, etc.) of any of the estimates.

Thursday, November 9, 2017

Distribution of SNIP and SJR

My colleague Matt Dornan asked me about the distribution of the journal impact factors SNIP and SJR. Crawford School has used SNIP to compare journal rankings across disciplines. It is a journal impact factor that is normalized for the differences in citation potential in different fields. This makes it reasonable to compare Nature and the Quarterly Journal of Economics, for example. Nature looks like it has much higher impact using the simple Journal Impact Factor, which just counts how many citations articles in a journal get. But taking citation potential into account, these journals look much more similar. SJR is an iterative indicator that takes into account how highly cited the journals citing the journal of interest are. It is similar to Google's PageRank algorithm, and it too can be compared across disciplines. SJR is more an indicator of journal prestige or importance than of simple popularity. I've advocated SJR as a better measure of journal impact because some journals in my area receive many citations, but mostly from journals that are not themselves highly cited.
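The "cited by prestigious journals counts more" idea behind SJR can be sketched with a toy PageRank-style power iteration. This is not the actual SJR formula, which adds several per-journal normalizations and restrictions; the citation matrix below is entirely hypothetical and just illustrates the fixed-point mechanism.

```python
import numpy as np

# Hypothetical citation matrix: C[i, j] = citations from journal j to journal i
C = np.array([[0., 3., 1.],
              [2., 0., 5.],
              [1., 1., 0.]])

# Column-normalize so each journal distributes one unit of influence
W = C / C.sum(axis=0)

d = 0.85                      # damping factor, as in PageRank
n = C.shape[0]
prestige = np.full(n, 1.0 / n)
for _ in range(100):          # iterate toward the fixed point
    prestige = (1 - d) / n + d * W @ prestige

print(prestige)  # higher value = cited by more prestigious journals
```

Because influence flows through the whole network, a journal cited mainly by high-prestige journals ends up ranked above one with the same raw citation count from low-prestige sources, which is exactly the distinction between SJR and a plain impact factor.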

I assumed the distribution of these indicators was highly skewed, with most journals having low impact. But I also assumed that the log of these indicators might be normally distributed, as citations to individual papers are roughly log-normally distributed. It turns out that the log of SNIP is still a bit skewed, but not that far from normal:

On the other hand the log of SJR remains highly non-normal:

There is a small tail of high-prestige journals, then a big bulk of low-prestige journals, and a huge spike at SJR = 0.1. It makes sense that it is harder to be prestigious than merely popular, but I am still surprised by how extreme the skew is. The distribution appears closer to an exponential distribution than to a normal distribution.
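A quick way to check the kind of claim made above is to compare the skewness of the logged indicators: near-zero skew of the logs suggests approximate log-normality. The arrays below are illustrative stand-ins (a log-normal-ish "SNIP" and a floored-exponential "SJR"), not the actual Scopus data, so the numbers are only a sketch of the method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative stand-ins for the real indicator data:
snip = rng.lognormal(mean=0.0, sigma=0.6, size=5000)        # log-normal-ish
sjr = np.maximum(rng.exponential(scale=0.8, size=5000), 0.1)  # skewed, spike at 0.1

for name, x in [("SNIP", snip), ("SJR", sjr)]:
    logx = np.log(x)
    # Skewness of the logs: close to 0 is consistent with log-normality
    print(f"{name}: skew(log) = {stats.skew(logx):.2f}")
```

On data like this the logged "SNIP" values show skewness near zero, while the logged "SJR" values remain clearly skewed, mirroring the pattern in the histograms.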