Thursday, April 5, 2018

Buying Emissions Reductions

This semester I am teaching environmental economics, a course I last taught in 2006 at RPI. Last week we covered environmental valuation, and I gave my class an in-class contingent valuation survey. I tried to construct the survey according to the recommendations of the NOAA panel. Here is the text of the survey:

Emissions Reduction Fund Survey

In order to meet Australia’s international commitments under the Paris Treaty, the government is seeking to significantly expand the Emissions Reduction Fund, which pays bidders such as farmers to reduce carbon emissions. To fully meet Australia’s commitment to reduce emissions by 26-28% below 2005 levels by 2030, the government estimates that the fund needs to be expanded to $2 billion per year. The government proposes to fund this by increasing the Medicare Levy.

1. Considering other things you need to spend money on, and other things the government can do with taxes, do you agree to a 0.125% increase in the Medicare levy, which is equivalent to $100 per year in extra tax for someone on average wages? This is expected to meet only half of Australia’s commitment, reducing emissions to 13-14% below 2005 levels, or by a cumulative 370 million tonnes, by 2030.

Yes No

2. Considering other things you need to spend money on, and other things the government can do with taxes, do you agree to a 0.25% increase in the Medicare levy, which is equivalent to $200 per year in extra tax for someone on average wages? This is expected to meet Australia’s commitment, reducing emissions to 26-28% below 2005 levels, or by a cumulative 740 million tonnes, by 2030.

Yes No

3. If you said yes to either 1 or 2, why? And how did you decide whether to agree to the 0.125% or the 0.25% tax?

4. If you said no to both 1 and 2, why?

***********************************************************************************


85% voted in favour of the 0.125% Medicare levy option and 54% in favour of the 0.25% option, so both would have passed. A few people voted against 0.125% but for 0.25%, so I counted their votes as being for 0.125% as well as 0.25%.


Reasons for voting for both:

  • $200 not much, willing to do more than just pay that tax
  • We should meet the target
  • Tax is low compared to other taxes - can reduce government spending on health in future
  • Can improve my health
  • Benefit is much greater than cost to me
  • I pay low tax as I'm retired, so can pay more
  • I'm willing to pay so Australia can meet commitment
  • Only $17 a month
  • Tax is small
  • Because reducing emissions is the most important environmental issue

Reasons for voting for 0.125 but against 0.25:

  • Can afford 0.125 but not 0.25
  • Government can cover the rest with other measures like incentives

Reasons for voting against both:

  • There are other ways to reduce emissions - give incentives to firms rather than tax the middle class...
  • Government should tax firms
  • Don't believe in emissions reduction fund because it is inefficient
  • I prefer to spend my money rather than pay tax and reduction in emissions is not very big for tax paid

Mostly, the reasons for voting for both are the ones we would want to see if we are really measuring willingness to pay (WTP): respondents can afford to pay and they see it as an important issue. Those who thought the tax would improve their personal health or reduce future health spending were probably prompted to think about health by the payment vehicle. I chose the Medicare Levy as the payment vehicle because the Australian government has a track record of increasing it for all kinds of things, like repairing flood damage in Brisbane! I chose the Emissions Reduction Fund because it actually exists and actually buys emissions reductions.

Most people who voted for 0.125% but against 0.25% gave valid reasons: they can't afford the higher tax. However, one person said the government should cover the rest by other means, so that person might really be willing to pay 0.25% if the government won't do that.


When we get to the people who voted against both tax rates, most are against the policy vehicle rather than unwilling to pay for climate change mitigation. So, from the point of view of measuring WTP, these votes would result in an underestimate. These "protest votes" are a big problem for the contingent valuation method (CVM). Only one person said they weren't willing to pay anything given the bang for the buck.

Saturday, February 10, 2018

A Multicointegration Model of Global Climate Change

We have a new working paper out on time series econometric modeling of global climate change. We use a multicointegration model to estimate the relationship between radiative forcing and global surface temperature since 1850. We estimate that the equilibrium climate sensitivity to doubling CO2 is 2.8°C – which is close to the consensus in the climate science community – with a “likely” range from 2.1°C to 3.5°C.* This is remarkably close to the recently published estimate of Cox et al. (2018).

Our paper builds on my previous research on this topic. Together with Robert Kaufmann, I pioneered the application of econometric methods to climate science – Richard Tol was another early researcher in this field. Though we managed to publish a paper in Nature early on (Kaufmann and Stern, 1997), I became discouraged by the resistance we faced from the climate science community. But now our work has been cited in the IPCC 5th Assessment Report, and recently there has also been a lot of interest in the topic among econometricians. This has encouraged me to get involved in this research again.

We wrote the first draft of this paper for a conference in Aarhus, Denmark on the econometrics of climate change in late 2016 and hope it will be included in a special issue of the Journal of Econometrics based on papers from the conference. I posted some of our literature review on this blog back in 2016.

Multicointegration models, first introduced by Granger and Lee (1989), are designed to model long-run equilibrium relationships between non-stationary variables where there is a second equilibrium relationship between accumulated deviations from the first relationship and one or more of the original variables. Such a relationship is typically found for flow and stock variables. For example, Granger and Lee (1989) examine production, sales, and inventory in manufacturing, Engsted and Haldrup (1999) housing starts and unfinished stock, Siliverstovs (2006) consumption and wealth, and Berenguer-Rico and Carrion-i-Silvestre (2011) government deficits and debt. Multicointegration models allow for slower adjustment to long-run equilibrium than do typical econometric time series models because of the buffering effect of the stock variable.

In our model there is a long-run equilibrium between radiative forcing, f, and surface temperature, s:

f(t) = lambda*s(t) + q(t)

The equilibrium climate sensitivity is given by 5.35*ln(2)/lambda. But because of the buffering effect of the ocean, surface temperature takes a long time to reach equilibrium. The deviations from equilibrium, q, represent a flow of heat from the surface to (mostly) the ocean. The accumulated flows are the stock of heat in the Earth system, Q. Surface temperature also tends towards equilibrium with this stock of heat:

s(t) = phi*Q(t) + u(t)

where u is a (serially correlated but stationary) random error. Granger and Lee simply embedded both these long-run relations in a vector autoregressive (VAR) time series model for s and f. A somewhat more recent and much more powerful approach (e.g. Engsted and Haldrup, 1999) notes that:

Q(t) = F(t) - lambda*S(t)

where F is accumulated f and S is accumulated s. In other words, S(2) = s(1) + s(2), S(3) = s(1) + s(2) + s(3), etc. This means that we can estimate a model that takes into account the accumulation of heat in the ocean without using any actual data on ocean heat content (OHC)! One reason this is exciting is that OHC data are only available since 1940 and the data for the early decades are very uncertain; only since 2000 has a good measurement network been in place. So we can use temperature and forcing data back to 1850 to estimate the heat content. Another reason this is exciting is that F and S are so-called second-order integrated (I(2)) variables, and estimation with I(2) variables, though complicated, is super-super consistent – it is easier to get an accurate estimate of a parameter in a relatively small sample despite noise and measurement error. The I(2) approach combines the 2nd and 3rd equations above into a single relationship, which it embeds in a VAR model that we estimate using Johansen's maximum likelihood method. The CATS package, which runs on top of RATS, can estimate such models, as can the OxMetrics econometrics suite. The data we used in the paper is available here.
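To make the construction concrete, here is a minimal Python sketch of how the accumulated variables F and S, and the implied heat content Q, can be built from annual forcing and temperature series. It is purely illustrative: the toy data and parameter values are assumptions of mine, and the paper's actual estimation uses Johansen's maximum likelihood method in CATS/RATS rather than anything shown here.

```python
# Illustrative only: build the I(2) accumulated variables and an implied ocean heat
# content series from an assumed value of lambda. Toy data stand in for the real
# 1850-2014 forcing and temperature series used in the paper.
import numpy as np
import pandas as pd

def implied_heat_content(f: pd.Series, s: pd.Series, lam: float) -> pd.Series:
    """Accumulate the disequilibrium q(t) = f(t) - lam*s(t) into a stock Q(t)."""
    q = f - lam * s      # annual flow of heat into the ocean
    return q.cumsum()    # Q(t): accumulated heat content

# Hypothetical example data (replace with the actual forcing/temperature series).
years = np.arange(1850, 2015)
rng = np.random.default_rng(0)
f = pd.Series(np.linspace(0.0, 2.5, years.size) + 0.1 * rng.standard_normal(years.size), index=years)
s = pd.Series(np.linspace(0.0, 1.0, years.size) + 0.1 * rng.standard_normal(years.size), index=years)

lam = 5.35 * np.log(2) / 2.8           # lambda implied by an ECS of 2.8 degrees C
F, S = f.cumsum(), s.cumsum()          # accumulated (I(2)) forcing and temperature
Q = implied_heat_content(f, s, lam)    # no OHC data needed
print(np.allclose(Q, F - lam * S))     # True: the identity behind the combined relation
```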

This graph compares our estimate of OHC (our preferred estimate is the partial efficacy estimate) with an estimate from an energy balance model (Marvel et al., 2016) and observations of ocean heat content (Cheng et al., 2017):


We think that the results are quite good, given that we didn't use any data on OHC to estimate it and that the observed OHC is very uncertain in the early decades. In fact, our estimate cointegrates with these observations and the estimated coefficient is close to what is expected from theory. The next graph shows the energy balance:


The red area is radiative forcing relative to the base year. This is now more than 2.5 watts per square meter – doubling CO2 is equivalent to a 3.7 watt per square meter increase. The grey line is surface temperature. The difference between the top of the red area and the grey line is the disequilibrium between surface temperature and radiative forcing according to the model. This is now between 1 and 1.5 watts per square meter and implies that, if radiative forcing were held constant from now on, temperature would increase by around 1°C to reach equilibrium.** This gap is exactly balanced by the blue area, which is heat uptake. As you can see, heat uptake kept surface temperature fairly constant during the last decade and a half despite increasing forcing. It's also interesting to see what happens during large volcanic eruptions such as Krakatoa in 1883. Heat leaves the ocean, largely, but not entirely, offsetting the fall in radiative forcing due to the eruption. This means that though the impact of large volcanic eruptions on radiative forcing is short-lived, as the stratospheric sulfates emitted are removed after 2 to 3 years, they have much longer-lasting effects on the climate, as shown by the long period of depressed heat content after the Krakatoa eruption in the previous graph.
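As a rough back-of-the-envelope check using the central estimates above: an equilibrium climate sensitivity of 2.8°C for a 3.7 watt per square meter forcing implies lambda = 3.7/2.8 ≈ 1.3 watts per square meter per degree, so a disequilibrium of 1 to 1.5 watts per square meter corresponds to roughly 1/1.3 to 1.5/1.3, or about 0.8°C to 1.1°C, of further warming.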

We also compare the multicointegration model to more conventional (I(1)) VAR models. In the following graph, Models I and II are multicointegration models and Models IV to VI are I(1) VAR models. Model IV actually includes observed ocean heat content as one of its variables, but Models V and VI just include surface temperature and forcing. The graph shows the temperature response to permanent doubling of radiative forcing:


The multicointegration models have both a higher climate sensitivity and respond more slowly due to the buffering effect. This mimics, to some degree, the response of a general circulation model. The performance of Model IV is actually worse than that of the bivariate I(1) VARs. This is because it uses a much shorter sample period than Models V and VI. In simulations that are not reported in the paper, we found that a simple bivariate I(1) VAR estimates the climate sensitivity correctly if the time series is sufficiently long – much longer than the 165 years of annual observations that we have. This means that ignoring the ocean doesn't strictly result in omitted variables bias, contrary to what I previously claimed: estimates are biased in a small sample, but not in a sufficiently large one. That is probably going to be another paper :)

* "Likely" is the IPCC term for a 66% confidence interval. This confidence interval is computed using the delta method and is a little different to the one reported in the paper.
** This is called committed warming. But, if emissions were actually reduced to zero, it's expected that forcing would decline and that the decline in forcing would about balance the increase in temperature towards equilibrium.

References

Berenguer-Rico, V., Carrion-i-Silvestre, J. L., 2011. Regime shifts in stock-flow I(2)-I(1) systems: The case of US fiscal sustainability. Journal of Applied Econometrics 26, 298–321.

Cheng, L., Trenberth, K. E., Fasullo, J., Boyer, T., Abraham, J., Zhu, J., 2017. Improved estimates of ocean heat content from 1960 to 2015. Science Advances 3(3), e1601545.

Cox, P. M., Huntingford, C., Williamson, M. S., 2018. Emergent constraint on equilibrium climate sensitivity from global temperature variability. Nature 553, 319–322.

Engsted, T., Haldrup, N., 1999. Multicointegration in stock-flow models. Oxford Bulletin of Economics and Statistics 61, 237–254.

Granger, C. W. J., Lee, T. H., 1989. Investigation of production, sales and inventory relationships using multicointegration and non-symmetric error correction models. Journal of Applied Econometrics 4, S145–S159.

Kaufmann, R. K., Stern, D. I., 1997. Evidence for human influence on climate from hemispheric temperature relations. Nature 388, 39–44.

Marvel, K., Schmidt, G. A., Miller, R. L., Nazarenko, L., 2016. Implications for climate sensitivity from the response to individual forcings. Nature Climate Change 6(4), 386–389.

Siliverstovs, B., 2006. Multicointegration in US consumption data. Applied Economics 38(7), 819–833.

Saturday, February 3, 2018

Data and Code for "Modeling the Emissions-Income Relationship Using Long-run Growth Rates"

I've posted on my website the data and code used in our paper "Modeling the Emissions-Income Relationship Using Long-run Growth Rates", which was recently published in Environment and Development Economics. The data is in .xls format and the econometrics code is in RATS. If you don't have RATS, I think it should be fairly easy to translate the commands into another package like Stata. If anything is unclear, please ask me. I managed to replicate all the regression results and standard errors in the paper, but some of the diagnostic statistics are different. I think this only makes a difference once, and then in a positive way. I hope that providing the data and code will encourage people to use our approach to model the emissions-income relationship.

Tuesday, January 16, 2018

Explaining Malthusian Sluggishness

I'm adding some more intuition to our paper on the Industrial Revolution. I have a sketch of the math but still need to work out the details. Here is the argument:

"In this section, we show that both Malthusian Sluggishness and Modern Economic Growth are characterized by a strong equilibrium bias of technical change (Acemoglu, 2009). This means that in both regimes the long-run demand curve for the resource is upward sloping – higher relative prices are associated with higher demand. Under Malthusian Sluggishness the price of wood is rising relative to coal and technical change is relatively wood-augmenting. At the same time, wood use is rising relative to coal use. Because when the elasticity of substitution is greater than one the market size effect dominates the price effect, technical change becomes increasingly wood-augmenting. As a result, the economy increasingly diverges from the region where an industrial revolution is possible or inevitable. Modern Economic Growth is the mirror image. The price of coal rises relative to the price of wood, but coal use increases relative to wood use and technical change is increasingly coal-augmenting."

Saturday, January 6, 2018

How to Count Citations If You Must

That is the title of a paper in the American Economic Review by Motty Perry and Philip Reny. They present five axioms that they argue a good index of individual citation performance should conform to. They show that the only index that satisfies all five axioms is the Euclidean length of the list of citations to each of a researcher's publications – in other words, the square root of the sum of squares of the citations to each of their papers.* This index puts much more weight on highly cited papers and much less on little cited papers than simply adding up a researcher's total citations would. This is a result of their "depth relevance" axiom. A citation index that is depth relevant always increases when some of the citations of a researcher's less cited papers are instead transferred to some of the researcher's more cited papers. In the extreme, it rewards "one-hit wonders" who have a single highly cited paper over consistent performers who have a more extensive body of work with the same total number of citations.
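As a quick illustration (the two citation records below are invented, not from the paper), here is a minimal Python sketch of the Euclidean index and its depth relevance:

```python
# The Euclidean index: square root of the sum of squared per-paper citation counts.
import math

def euclidean_index(citations):
    return math.sqrt(sum(c * c for c in citations))

one_hit_wonder = [100, 1, 1, 1, 1, 1]      # one highly cited paper, 105 citations in total
consistent     = [20, 20, 20, 15, 15, 15]  # the same 105 citations spread over six papers

print(euclidean_index(one_hit_wonder))  # ~100.0
print(euclidean_index(consistent))      # ~43.3

# Depth relevance: moving citations from a less cited paper to a more cited one
# always raises the index.
shifted = [101, 0, 1, 1, 1, 1]
print(euclidean_index(shifted) > euclidean_index(one_hit_wonder))  # True
```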

The Euclidean index is an example of what economists call constant elasticity of substitution, or CES, functions. Instead of squaring each citation number, we could raise it to a different power, such as 1.5, 0.5, or anything else. Perry and Reny show that the rank correlation between the National Research Council peer-reviewed ranks of the top 50 U.S. economics departments and the CES citation indices of the faculty employed in those departments is at a maximum for a power of 1.85:



This is close to 2 and suggests that the market for economists values citations in a similar way to the Euclidean index.

RePEc acted unusually quickly to add this index to their rankings. Richard Tol and I have a new working paper that discusses this new citation metric. We introduce an alternative axiom: "breadth relevance", which rewards consistent achievers. This axiom states that a citation index always increases when some citations from highly cited papers are shifted to less cited papers. We also reanalyze the dataset of economists at the top 50 U.S. departments that Perry and Reny looked at and a much larger dataset that we scraped from CitEc for economists at the 400 international universities ranked by QS. Unlike Perry and Reny, we take into account the fact that citations accumulate over a researcher's career and so junior researchers with few citations aren't necessarily weaker researchers than senior researchers with more citations. Instead, we need to compare citation performance within each cohort of researchers measured by the years since they got their PhD or published their first paper.

We show that a breadth relevant index that also satisfies Perry and Reny's other axioms is a CES function with an exponent of less than one. Our empirical analysis finds that the distribution of economists across departments is in fact explained best by the simple sum of their citations, which is equivalent to a CES function with an exponent of one, favoring neither depth nor breadth. However, at lower-ranked departments – departments ranked by QS from 51 to 400 – the Euclidean index does explain the distribution of economists better than total citations does.
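To illustrate the difference (with an invented citation record, and treating the CES index simply as the sum of per-paper citations each raised to the power p, then taken to the power 1/p), here is a small Python sketch showing how a p < 1 (breadth-relevant) index and a p > 1 (depth-relevant) index respond to shifting citations from a highly cited paper to a less cited one:

```python
# Illustrative CES-style citation index with exponent p.
import numpy as np

def ces_index(citations, p):
    c = np.asarray(citations, dtype=float)
    return float(np.sum(c ** p) ** (1.0 / p))

# Shift 10 citations from the most cited paper to a less cited one (total unchanged).
before = [100, 1, 1, 1, 1, 1]
after  = [90, 11, 1, 1, 1, 1]

for p in (0.5, 1.0, 2.0):
    print(f"p={p}: before={ces_index(before, p):.1f}  after={ces_index(after, p):.1f}")
# The p=0.5 index rises after the shift (breadth relevance), the p=2 (Euclidean)
# index falls (depth relevance), and p=1 (total citations) is unchanged.
```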


In this graph, the full sample is the same dataset that Perry and Reny used in their graph. The peak correlation is at a lower exponent – tau or sigma** – simply because we take into account cohort effects by computing the correlation for a researcher's citation index relative to the cohort mean.*** While the distribution across the top 25 departments is similar to the full sample, with a peak at a slightly lower exponent that is very close to one, we don't find any correlation between citations and department rank for the next 25 departments. It seems that there aren't big differences between them.

Here are the correlations for the larger dataset that uses CitEc citations for the 400 universities ranked by QS:


For the top 50 universities, the peak correlation is for an exponent of 1.39, but for the next 350 universities the peak correlation is for an exponent of 2.22. The paper also includes parametric maximum likelihood estimates that come to similar conclusions.

Breadth per se does not explain the distribution of researchers in our sample, but the highest-ranked universities appear to weight breadth and depth equally, while lower-ranked universities do focus on depth, giving more weight to a few highly cited papers.

A speculative explanation of behavior across the spectrum of universities could be as follows. The lowest-ranked universities, outside the 400 universities ranked by QS, might simply care about publication without worrying about impact. At these institutions, having more publications would be better than having fewer, suggesting a breadth relevant citation index. Our exploratory analysis that includes universities outside those ranked by QS supports this: we found that breadth was inversely correlated with average citations in the lower percentiles.

Middle-ranked universities, such as those ranked between 50 and 400 by QS, care about impact; having some high-impact publications is better than having none, and a depth-relevant index describes behavior in this interval. Finally, among the top-ranked universities, such as the QS top 50 or NRC top 25, hiring and tenure committees wish to see high-impact research across all of a researcher's publications, and the best-fit index moves back towards an exponent of one. Here, adding lower-impact publications to a publication list that contains high-impact ones is seen as a negative.

* As monotonic transformations of the index also satisfy the same axioms, the simplest index that satisfies the axioms is simply the sum of squares.

** In the paper, we refer to an exponent of less than one as tau and an exponent greater than one as sigma.

*** The Ellison dataset that Perry and Reny use is based on Google Scholar data and truncates each researcher's publication list at 100 papers. With all the working paper variants, it's not hard to exceed 100 items. This could bias the analysis in favor of depth rather than breadth. We think that the correlation computed only for researchers with 100 papers or fewer is a better way to test whether depth or breadth best explains the distribution of economists across departments. The correlation peaks very close to one for this dataset.