Wednesday, October 19, 2022

What Changed in the World Bank's Adjusted Net Saving Measure?

In August, I showed that, using the World Development Indicators' current Adjusted Net Saving (ANS) data, there is no relationship between ANS and the share of mining rents in GDP. I now know the main reason why this relationship appeared to change, but I don't yet know why the World Bank made the changes that they did.

In 2006 and earlier, the World Bank measured mineral and energy depletion using mining rents – the difference between mining revenues and the cost of production, not including a return to the resource stock. This is based on Hartwick's Rule: to achieve sustainability, all resource rents should be invested in produced capital.

In recent years, they have used a different method. First, they estimate the net present value of resource rents (assuming that they remain constant in the future) using a 4% discount rate, r. Then they divide that amount by the number of years, T, that they assume the resource will last. The ratio of this depletion estimate to the current rent is given by:

depletion/rent = (1 − (1 + r)^(−T)) / (rT)

So, for example, if the resource has an expected 30-year lifetime, then resource depletion is about 58% of current rents. Energy depletion for Saudi Arabia is around 1/3 of reported rents, which would imply that the lifetime of the resource is around 70 years.* This could explain, in general, why adjusted net saving is now estimated to be much higher for resource-rich countries than it used to be.**

What I don't know yet is why they made this change. I haven't been able to find a rationale in the relevant World Bank publications. It is similar to, but different from, the El-Serafy (1989) method of measuring depletion. According to El-Serafy, the ratio of depletion to rent should be (1/(1+r))^(T+1). For a 30-year life span and a 4% discount rate, this is about 30%.
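To make the comparison concrete, here is a small Python check of both depletion-to-rent ratios (the function names are mine, and the numbers just restate the examples above):

```python
def world_bank_ratio(r, T):
    """Depletion as a share of current rent under the newer World Bank
    method: the NPV of a constant rent over T years, divided by T."""
    return (1 - (1 + r) ** -T) / (r * T)

def el_serafy_ratio(r, T):
    """Depletion as a share of rent under El-Serafy (1989)."""
    return (1 + r) ** -(T + 1)

r = 0.04
print(world_bank_ratio(r, 30))  # ~0.58
print(el_serafy_ratio(r, 30))   # ~0.30

# Implied lifetime when depletion is about 1/3 of rent (the Saudi case):
T = 1
while world_bank_ratio(r, T) > 1 / 3:
    T += 1
print(T)  # 71 - roughly the 70 years inferred above
```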

* The notes downloaded with the WDI data say that the lifetime is capped at 25 years. But this isn't mentioned in the relevant reports and makes the gap between rents and depletion harder to explain.

** There are a lot of other issues with assuming that the lifetime of a resource equals the expected lifetime of reserves and that rents will not change over time. There are also apparent inconsistencies between the stated methods and the results...

Thursday, August 11, 2022

Do Mining Economies Save Too Little?

I'm currently teaching Agricultural and Resource Economics for the first time. This week we started covering non-renewable resources, focusing on minerals. One of the topics I covered is the resource curse. One of my sources is van der Ploeg's article "Natural Resources: Curse or Blessing?", published in the Journal of Economic Literature in 2011. In the paper, he reproduces this graph from a 2006 World Bank publication that apparently uses 2003 data from the World Development Indicators:

Genuine saving – now known as "adjusted net saving" – is equal to saving minus capital depreciation and various forms of resource depletion, with expenditure on education added on. The idea is to measure the net change in all forms of "capital" in an economy. Mineral and energy rents are the pre-tax economic profits of mining. They are supposed to represent the return to the resource stock. The graph tells a clear story: countries whose GDP depends heavily on mining tend to have negative genuine saving, so they are not adequately replacing their non-renewable resources with other forms of capital. Van der Ploeg states that this is one of the characteristics of the resource curse.
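As a minimal sketch of this accounting identity (my own illustrative function and numbers, with all items as shares of GNI; the full WDI version also nets out items such as pollution damages):

```python
def adjusted_net_saving(gross_saving, depreciation, depletion, education):
    """Adjusted (genuine) net saving: gross saving minus capital
    depreciation and resource depletion, plus education expenditure."""
    return gross_saving - depreciation - depletion + education

# A hypothetical mining economy that saves 25% of GNI but depletes heavily:
print(adjusted_net_saving(0.25, 0.10, 0.20, 0.04))  # -0.01: negative genuine saving
```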

Preparing for an upcoming tutorial on adjusted net saving and sustainability, I downloaded WDI data for recent years for some mining-intensive countries, expecting to show the students how those countries still aren't saving enough. But this wasn't the case. Most of the mining economies had positive adjusted net saving. So, I wondered whether they had improved over time and downloaded the data for all available countries for 2003:

I've added a linear regression line.* There seems to be little relationship between these variables. The correlation coefficient is -0.017. Presumably, this is because of revisions to the data since 2006.
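For anyone wanting to replicate this kind of plot, the regression line and correlation can be computed like this (the numbers below are made up for illustration, not the actual WDI values):

```python
import numpy as np

# Hypothetical data: mining rents (% of GDP) and adjusted net saving (% of GNI)
rents = np.array([0.5, 2.0, 5.0, 10.0, 20.0, 40.0])
ans = np.array([8.0, 5.0, 12.0, 3.0, 15.0, 6.0])

slope, intercept = np.polyfit(rents, ans, 1)  # linear regression line
corr = np.corrcoef(rents, ans)[0, 1]          # correlation coefficient
```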

* I dropped countries with zero mining rents from the graph. The three countries at the top right with positive adjusted net saving are Saudi Arabia, Kuwait, and Libya. Oman and the Democratic Republic of Congo have the next highest levels of mining rents and negative adjusted net saving.

Sunday, August 7, 2022

Trends in RePEc Downloads and Abstract Views

For the first time in a decade, I updated my spreadsheet on downloads and abstract views per person and per item on RePEc.

The downward trends I identified ten years ago have continued, though there was an uptick during the pandemic, which has now dissipated. There was more of an increase in abstract views than in downloads in the pandemic.

Since the end of 2011, both abstract views and downloads per paper have fallen by about 80%. Total papers rose by around 260%, while total downloads fell 38% and total abstract views fell 27%.
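These numbers are mutually consistent, as a quick back-of-the-envelope check shows:

```python
papers_factor = 1 + 2.60     # total papers up ~260%
downloads_factor = 1 - 0.38  # total downloads down 38%
views_factor = 1 - 0.27      # total abstract views down 27%

fall_dl_per_paper = 1 - downloads_factor / papers_factor
fall_av_per_paper = 1 - views_factor / papers_factor
print(fall_dl_per_paper)  # ~0.83 fall in downloads per paper
print(fall_av_per_paper)  # ~0.80 fall in abstract views per paper
```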

I'd guess that a mixture of the explanatory factors I suggested last time has continued to be in play.

Thursday, June 2, 2022

Confidence Intervals for Recursive Journal Impact Factors

I have a new working paper, coauthored with Johannes König and Richard Tol. It's a follow-up to my 2013 paper in the Journal of Economic Literature, where I computed standard errors for simple journal impact factors for all economics journals and tried to evaluate whether the differences between journals were significant.* In the new paper, we develop standard errors and confidence intervals for recursive journal impact factors, which take into account that some citations are more prestigious than others, as well as for the associated ranks of journals. We again apply these methods to all the economics journals included in the Web of Science.

Recursive impact factors include the popular Scimago Journal Rank, or SJR, and Clarivate's Article Influence score. We use Pinski and Narin's invariant method, which has been used in some rankings of economics journals.

As simple impact factors are just the mean citations an article published in a journal in a given period receives in a later year, it is easy to compute standard errors for them using the formula for the standard error of the mean. But the vector of recursive impact factors is the positive eigenvector of a matrix and its variance does not have a simple analytical form.
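To see why, here is a toy version of the eigenvector computation using power iteration – hypothetical citation counts and a simplified normalization, not the exact Pinski–Narin weighting we use in the paper:

```python
import numpy as np

# Hypothetical 3-journal example: C[i, j] = citations received by journal i
# from articles published in journal j (made-up numbers).
C = np.array([[10.0, 4.0, 1.0],
              [6.0, 8.0, 2.0],
              [1.0, 2.0, 5.0]])
articles = np.array([50.0, 80.0, 30.0])  # articles published per journal

M = C / articles  # citations per citing article (toy normalization)

# Power iteration converges to the positive (Perron) eigenvector of M.
w = np.ones(3)
for _ in range(200):
    w = M @ w
    w /= w.sum()

print(w)  # recursive impact-factor weights, summing to one
```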

So, we use bootstrapping to estimate the distribution of each impact factor. Taking all 88,928 articles published in 2014-18 in the economics journals included in the Web of Science, we resample from this dataset and compute the vector of recursive impact factors from the new dataset.** Repeating this 1,000 times, we take the 2.5% and 97.5% percentiles of the values for each journal to get a 95% confidence interval:
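For intuition, here is a minimal sketch of the percentile bootstrap, illustrated with a simple impact factor (mean citations per article) and fake Poisson citation counts; in the paper, the same resampling wraps the full recursive calculation:

```python
import numpy as np

rng = np.random.default_rng(0)
citations = rng.poisson(2.0, size=500)  # hypothetical per-article citation counts

boot_means = []
for _ in range(1000):
    resample = rng.choice(citations, size=citations.size, replace=True)
    boot_means.append(resample.mean())

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo, hi)  # 95% confidence interval for the mean citation rate
```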

95% confidence intervals of the recursive impact factor, arithmetic scale (left axis) and logarithmic scale (right axis).

The graph repeats the same data twice with different scales so that it's possible to see some detail for both high- and low-ranked journals. Also, notice that while the confidence intervals for the highest ranked journals are quite symmetric, confidence intervals become increasingly asymmetric as we go down the ranks. 

The top-ranked journal, the Quarterly Journal of Economics, clearly stands out above all others. The confidence interval of every other journal overlaps with those of some other journals, and so the ranks of these journals are somewhat uncertain.*** So, next we construct confidence intervals for the journals' ranks.

It turns out that there are a few ways to do this. We could just construct a journal ranking for each iteration of the bootstrap and then derive the distribution of ranks for each individual journal across the 1,000 iterations. However, Hall and Miller (2009), Xie et al. (2009), and Mogstad et al. (2020) show that this procedure may not be consistent when some of the groups (here, journals) being ranked are tied or close to tied. Their corrected confidence intervals are generally broader than those from the naive bootstrap approach.
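The naive bootstrap option is straightforward to code (the impact-factor draws below are fake, for illustration; the Xie et al. and Mogstad et al. corrections are more involved):

```python
import numpy as np

rng = np.random.default_rng(1)
# Fake bootstrap output: 1,000 iterations x 5 journals of impact factors.
# The second and third journals (means 3.0 and 2.8) are nearly tied.
boot_if = rng.normal(loc=[5.0, 3.0, 2.8, 1.5, 1.0], scale=0.3, size=(1000, 5))

# Rank journals within each iteration (1 = highest impact factor).
ranks = (-boot_if).argsort(axis=1).argsort(axis=1) + 1

lo, hi = np.percentile(ranks, [2.5, 97.5], axis=0)
print(lo)  # lower rank bound per journal
print(hi)  # upper rank bound; the nearly tied pair spans ranks 2-3
```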

We compute confidence intervals for ranks using the simple bootstrap, the Xie et al. method, and the Mogstad et al. method:


95% confidence intervals of the rank based on the recursive impact factor. The inner intervals are based on Goldstein’s bootstrap method, the middle intervals use Xie’s correction to the bootstrap, and the outer intervals follow Mogstad’s pairwise comparison.

The simple bootstrap under-estimates the true range of ranks, while it seems that the Mogstad et al. method might be overly conservative. On the other hand, the Xie et al. approach depends on choosing a couple of "tuning parameters".

All methods agree that the confidence interval of the rank of the Quarterly Journal of Economics includes only first place. Based on the simple bootstrap, the remainder of the "Top 5" journals are in the top 6, together with the Journal of Finance, while the Xie et al. and Mogstad et al. methods generally broaden the estimated confidence intervals, particularly for mid-ranking journals. All methods agree that most apparent differences in journal quality are, in fact, insignificant. We think that impact factors, whether simple or recursive, should always be published together with confidence intervals.

* The latter exercises were a bit naive. As pointed out by Horrace and Parmeter (2017), we need to account for the issue of multiple comparisons. 

** Previous research on this topic resampled at the journal level, missing most of the variation in citation counts.

*** Overlapping confidence intervals don't necessarily mean that there is no significant difference between two means. Goldstein and Healy (1995) show that the correct confidence intervals for such a test of the difference between two means are narrower than the conventional 95% confidence intervals. On the other hand, for multiple comparisons we would want wider confidence intervals.

Wednesday, January 26, 2022

Typo in Directed Technical Change and the British Industrial Revolution

I hate reading my papers after they're published, as there is usually some mistake somewhere. Unfortunately, I have to read them to do more research. I just found a typo in our 2021 paper in JAERE. Equation (8) should look like this:

In the published paper, there is a missing Gamma in the second term. 

I also noticed a couple of issues in the text of "Energy quality", published in Ecological Economics in 2010. One is in the introduction and is debatable: "Fuel and energy quality is not necessarily fixed". This should be "or" instead of "and", or "are" instead of "is". But it really isn't important. Then on p. 1475 we have "How does these measures". Again, not important.

Of course, the error in JAERE is not very important as the third term above is correct in the published paper.