Wednesday, August 5, 2020

Abandoning a Paper

Now and then it's time to give up on a project. In September 2018, I attended a climate econometrics conference at Frascati near Rome. For my presentation, I did some research on the performance of different econometric estimators of the equilibrium climate sensitivity (ECS), including the multicointegrating vector autoregression (MVAR) that we used in our paper in the Journal of Econometrics. The paper included estimates using historical time series observations (from 1850 to 2014), a Monte Carlo analysis, estimates using output of 16 Global Circulation Models (GCMs), and a meta-analysis of the GCM results.


The historical results, which are mostly also in the Journal of Econometrics paper, appear to show that taking energy balance into account increases the estimated climate sensitivity. By energy balance, we mean that if there is disequilibrium between radiative forcing and surface temperature, the ocean must be heating or cooling. Surface temperature is in equilibrium with ocean heat, and in fact follows ocean heat much more closely than it follows radiative forcing. Not taking this into account results in omitted variables bias. Multicointegrating estimators model this flow and stock equilibrium. The residuals from a cointegrating relationship between the temperature and radiative forcing flows are accumulated into a heat stock, which in turn cointegrates with surface temperature. If we have actual observations on ocean heat content or radiative imbalances we can use them, but the available time series are much shorter than those for surface temperature or radiative forcing. The results also suggested that using a longer time series increases the estimated climate sensitivity.
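
Schematically, the multicointegrating structure looks like this (a sketch in the notation of our multicointegration paper, discussed further below, where f is radiative forcing, s is surface temperature, λ is the feedback parameter, and the coefficient φ and error u_t are symbols I introduce here for illustration):

q_t = f_t - λ s_t (cointegrating relationship between the flows)

Q_t = q_1 + q_2 + … + q_t (accumulated heat stock)

s_t = φ Q_t + u_t (cointegration between the stock and temperature)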

The Monte Carlo analysis was supposed to investigate these hypotheses more formally. I used the estimated MVAR as the model of the climate system and simulated the radiative forcing series as a random walk. I made 2000 different random walks and estimated the climate sensitivity with each of the estimators. This showed that, not surprisingly, the MVAR was an unbiased estimator. The other estimators were biased with a random walk of just 165 periods. But when I used a 1000-year series, all the estimators were unbiased. In other words, they were all consistent estimators of the ECS. This makes sense, because in the end equilibrium is reached between forcing and surface temperature. But it takes a long time.
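
A minimal sketch of this Monte Carlo design, assuming a random-walk forcing; the climate response and the estimator below are placeholders, not the actual MVAR:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_forcing(T):
    # random-walk radiative forcing, as in the experiment described above
    return np.cumsum(rng.normal(0.0, 0.1, T))

estimates = []
for rep in range(2000):
    f = simulate_forcing(165)  # or 1000 for the long-sample case
    # placeholder temperature response; the real design simulates
    # temperature from the estimated MVAR instead
    s = 0.8 * f + rng.normal(0.0, 0.1, len(f))
    # placeholder estimator: OLS slope of temperature on forcing
    estimates.append(np.polyfit(f, s, 1)[0])

print(np.mean(estimates))  # compare the average estimate with the truth
```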

Each of the GCMs I used has an estimated ECS ("reported ECS") from an experiment where carbon dioxide is suddenly increased fourfold. I was using data from a historical simulation of each GCM, which uses the estimated historical forcings over the period 1850 to 2014. A major problem in this analysis is that the modelling teams do not report the forcing that they used. This is because the global forcing that results from applying aerosols etc. depends on the model and the simulation run. So, I used the same forcing series that we used to estimate our historical models. This isn't unprecedented: Marvel et al. (2018) do the same.

In general, the estimated ECS were biased down relative to the reported ECS for the GCMs, but again, the estimators that took energy balance into account seemed to do better. In a meta-analysis of the results, I compared how much the reported radiative imbalance (roughly equal to ocean heat uptake) from each GCM increased to how much the energy balance equation said it should increase, using the reported temperature series, reported ECS, and my radiative forcing series. A regression analysis showed that, where the two matched, the estimators that took energy balance into account were unbiased, while for the GCMs where the two did not match, these estimators under-estimated the ECS.

These results seemed pretty nice and I submitted the paper for publication. Earlier this year, I got a revise and resubmit. But when I finally got around to working on the paper, post-lockdown and post-teaching, things began to fall apart.

First, I came across the Forster method of estimating the radiative forcing in GCMs. This uses the energy balance equation:

ΔF = λΔT + ΔN

where F is radiative forcing, T is surface temperature, and N is the radiative imbalance. Lambda (λ) is the feedback parameter; the ECS is inversely proportional to it. The deltas indicate the change since some baseline period. Then, if we know N and T, both of which are provided in GCM results, we can find F! So, I used this to get the forcing specific to each GCM. The results actually looked nicer than in the originally submitted paper. These are the results for the MVAR for 15 CMIP5 GCMs:


The rising line is a 45-degree line, which marks equality between reported and estimated ECSs. The multicointegrating estimators were still better than the other estimators. But there wasn't any systematic variation in the degree of underestimation that would allow us to use a meta-analysis to derive an adjusted estimate of the ECS.
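
To make the Forster calculation above concrete, here is a minimal sketch; the function name and the sample values are mine, not from the paper, and λ is backed out from the reported ECS using the relationship ECS = 5.35·ln(2)/λ that appears in our multicointegration paper:

```python
import numpy as np

def forster_forcing(delta_T, delta_N, reported_ecs):
    """Recover radiative forcing from GCM output via the energy balance
    equation dF = lambda*dT + dN, with lambda = 5.35*ln(2)/ECS."""
    lam = 5.35 * np.log(2) / reported_ecs
    return lam * delta_T + delta_N

# illustrative values: 1.0 K warming, 0.5 W/m^2 imbalance, ECS of 3 K
print(forster_forcing(1.0, 0.5, 3.0))  # ~1.74 W/m^2
```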

This is still OK. But then I read and re-read more research on under-estimation of the ECS from historical observations. The recent consensus is that estimates from recent historical data will inevitably under-estimate the ECS, because feedbacks change from the early stages after an increase in forcing to the later stages as a new equilibrium is reached. The effective climate sensitivity is lower at first and greater later.

OK, even if we have to give up on estimating the long-run ECS, my estimates are estimates of the historical sensitivity. Aren't they? The problem is that I used the long-run ECS to derive the forcing from the energy balance equation. So, the forcing I derived is wrong. It is too low. I could go back to using the forcing I used previously, I guess. But now I don't believe the meta-analysis of that data is meaningful. So, I have a bunch of estimates using the wrong forcing with no way to further analyse them.

I also revisited the Monte Carlo analysis. By the way, I had an on-again, off-again coauthor through this research. He helped me a lot with understanding how to analyse the data. But he didn't like my overly bullish conclusions in the submitted paper and so withdrew his name from it. But he was maybe going to get back on the revised submission. He thought that the existing analysis, which used an MVAR to produce the simulated data, was perhaps unfairly biased in favour of the MVAR.

So, I came up with a new data-generating process. Instead of starting with a forcing series, I would start with the heat content series. From that I would derive temperature, which needs to be in equilibrium with heat content, and then derive the forcing using the energy balance equation. To model the heat content, I fitted a unit root autoregressive model (stochastic trend) to the heat content reported from the Community GCM, with the addition of a volcanic forcing explanatory variable. The stochastic trend represents anthropogenic forcing. The Community GCM is one of the 15 GCMs I was using, and it has temperature and heat content series that look a lot like the observations. I then fitted a stationary autoregressive model for temperature, with the addition of the heat content as an explanatory variable. The simulation used normally distributed shocks with the same variances as these fitted models, plus volcanic shocks.

As an aside, the volcanic shocks were produced by a model of the form:

v_t = ρ v_{t-1} - rangamma(0.05)

where rangamma(0.05) are random numbers drawn from a standard gamma distribution with shape parameter 0.05 and ρ is a decay parameter. This is supposed to produce the stratospheric sulfur radiative forcing, which decays over a few years following an eruption. Here is an example realisation:

The dotted line is historical volcanic forcing and the solid line a simulated forcing. My coauthor said it looked "awesome".
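
A small sketch of this simulation, assuming the AR(1)-with-gamma-shocks form above; the decay rate and scale are illustrative guesses, not the values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

T = 165      # sample length in years
rho = 0.7    # assumed decay rate: forcing dies out over a few years
v = np.zeros(T)
for t in range(1, T):
    # negative gamma shocks: occasional large eruptions, mostly
    # near-zero draws because of the small shape parameter
    v[t] = rho * v[t - 1] - rng.gamma(shape=0.05, scale=1.0)
```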

So, again, I produced two sets of 2000 datasets: one with a sample size of 165 and one with a sample size of 1000. Now, even in the smaller sample, all four estimators I was testing produced essentially identical and unbiased results! I ran this yesterday. So, our Monte Carlo result disappears. I can't see anything unreasonable about this data generating process, which produces completely different results from the one in the submitted paper, and I don't see anything that justifies one over the other. This was the point where I gave up on the project.

My coauthor, who is based in Europe, is on vacation. Maybe he'll see a way to save it when he comes back, but I am sceptical.

Friday, July 17, 2020

How Large is the Economy-Wide Rebound Effect?


Last year, I published a blogpost about our research on the economy-wide rebound effect. The post covers the basics of what the rebound effect is and presents our results. We found that energy efficiency improvements do not save energy. In other words, the rebound effect is 100%. This doesn't mean that improving energy efficiency is a bad thing. It's a good thing, because consumers get more energy services as a result. But it probably doesn't help the environment very much.

I now have a new CAMA working paper, which surveys the literature on this question. Contributions to the literature are broadly theoretical or quantitative. Theory provides some guidance on the factors affecting rebound but does not impose much constraint on the range of possible responses. There aren't very many econometric studies. Most quantitative studies are either calculations using previously estimated parameters and variables or simulations.

Theory shows that the more substitutable other inputs are for energy in production, the greater the rebound effect. This means that demand for energy services by producers is more elastic, and so reducing the unit cost of energy services increases the amount used by more.

The most comprehensive theoretical examination of the question is Derek Lemoine's new paper in the European Economic Review: "General Equilibrium Rebound from Energy Efficiency Innovation." Lemoine provides the first mathematically consistent analysis of general equilibrium rebound, where all prices across the economy can adjust to a change in energy efficiency in a specific production sector. He shows that the elasticity of substitution in consumption plays the same role as the elasticity of substitution in production: the greater the elasticity, the greater the rebound, ceteris paribus.

Beyond that, the predictions of the model depend on parameter values. The most likely case, assuming a weak response of labor supply to changes in the wage rate, is that the general equilibrium effects increase energy use relative to the partial equilibrium direct rebound effect for energy intensive sectors and reduce it for labor intensive sectors.

Lemoine uses his framework and previously estimated elasticities and other parameters to compute the rebound to an economy-wide energy efficiency improvement in the US. The result is 38%. There are two main reasons why the real rebound might be higher than this. First, most of the elasticities of substitution in production that he uses are quite low because of how they were estimated. Second, in his model an energy efficiency improvement in any sector apart from the energy supply sector does not trigger a fall in the price of energy, because there are no fixed inputs and there are constant returns to scale in energy production. A fall in the price of energy would boost rebound.

There are similar issues with simulations from computable general equilibrium (CGE) models. The assumptions that modellers make and the parameter values they choose make a huge difference to the results. Depending on these choices, any result is possible, from super-conservation, where more energy is saved than the energy efficiency improvement alone would save, to backfire, where energy use increases.

Rausch and Schwerin estimate the rebound using a small general equilibrium model calibrated to US data. This sits somewhere between the typical CGE model and econometric models. They use the putty-clay approach to measuring and modeling energy efficiency. Increases in the price of energy relative to capital are 100% translated into improvements in the energy efficiency of new capital equipment. Once capital is installed, energy and capital must be used in fixed proportions. Rebound in this model depends on why the relative price changes. If the price of energy rises, energy use falls. However, if the price of capital falls, energy use increases. These are very strong assumptions, which determine how the data are interpreted. Are they realistic? Rausch and Schwerin find that historically rebound has been around 100% in the US.

Historical evidence also hints that the economy-wide rebound effect could be near 100%. Energy intensity in developing countries today isn't lower than it was in the developed countries when they were at the same level of income. This is despite huge gains in energy efficiency in all kinds of technologies from lighting to car engines. This makes sense if consumers have shifted to more energy intensive consumption goods and services over time. Commuters and tourists on trains in the 19th and early 20th centuries have been replaced by commuters and tourists in cars and on planes in the late 20th and early 21st centuries.

I only found three fully empirical econometric analyses. One of them is our own paper. The others are by Adetutu et al. (2016) and Orea et al. (2015). Both use stochastic production frontiers to estimate energy efficiency. This is a potentially promising approach. Adetutu et al. then model the effect of this energy efficiency on energy use, using an autoregressive model. This includes the lagged value of energy use as an explanatory variable, which means that the long-run effect of all variables is greater in absolute value than the short-run effect (see the sketch below). As energy efficiency reduces energy use in the short run, in the long run it reduces energy use even more. The result is super-conservation even though short-run rebound is 90%.

In Orea et al.'s model, the purely stochastic inefficiency term is multiplied by [1-R(γ'z)], where z is a vector of variables including GDP per capita, the price of energy, and average household size. R(γ'z) is then supposed to be an estimate of the rebound effect. But really this is just a reformulation of the inefficiency term – nothing specifically identifies R(γ'z) as the rebound effect.
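
To see why the autoregressive specification implies this, consider a stylized version of such a model (a sketch, not Adetutu et al.'s exact specification), where E_t is energy use and A_t is energy efficiency:

E_t = ρ E_{t-1} + β A_t + ε_t, with 0 < ρ < 1

The short-run effect of efficiency on energy use is β, but the long-run effect is β/(1-ρ), which is larger in absolute value. So a negative short-run effect of efficiency on energy use becomes even more negative in the long run, which is how a 90% short-run rebound can coexist with long-run super-conservation.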

In conclusion, the economy-wide rebound effect might be near 100%. But I wouldn't describe the evidence as conclusive. Both our research and the historical investigations might be missing some important factor that has moved energy use in a way that makes us think it is due to changes in energy efficiency, and Rausch and Schwerin make very strong assumptions about how to interpret the data.

Saturday, April 18, 2020

Designing Future Electricity Markets: Evolution Rather than Revolution

We have a new working paper on designing wholesale electricity markets with high levels of zero marginal cost intermittent energy sources. The paper reports on the findings of a session on this topic at the Future of Electricity Markets Summit that was held in November last year in Sydney. Gordon Leslie led the writing with contributions from myself, Akshay Shanker, and Mike Hogan. The session itself featured papers by Paul Simshauser, Mike Hogan, Chandra Krishnamurthy (a paper coauthored with Akshay and myself), and Simon Wilkie.

Imagine a future where fossil fuel generation has disappeared and generation consists of intermittent sources like solar and wind. The cost of producing more electricity with these technologies is essentially zero – the marginal cost of production is zero – once we have invested in the solar panels or wind turbines. But that investment is costly. How would markets for electricity work in this situation? The naive view is that introductory economics tells us that the price in competitive markets, like many wholesale electricity markets (e.g. Australia's National Electricity Market – the "NEM"), is equal to marginal cost. If renewables have zero marginal cost, then the price would be zero. So, an "energy only market", where generators are paid for the electricity they provide to the grid, won't be viable and we need a total redesign.


But the consensus in our session was that zero marginal cost generation doesn't mean that energy only markets can't work. Marginal cost of production isn't as important as marginal opportunity cost. Hydropower plants have a near zero marginal cost of production but if they release water when the price of electricity is low, they forego generating electricity when the price is high. Releasing water has an opportunity cost. Mike Hogan pointed out that the same is true of reserves more generally. There will also be electricity storage including pumped hydropower as well as batteries and other technologies in the future. The Krishnamurthy paper focused in particular on the role of storage. The price these receive should also be determined by opportunity cost. Even without storage and reserves there should be a market equilibrium with a non-zero price as long as there is sufficient flexibility of demand for electricity:

Intermittent renewables supply to the market however much power, Rt, they can generate. They can't supply unlimited power at zero marginal cost... The market clearing price is p, where the demand curve, Dt, crosses the vertical supply curve. Price does not equal marginal cost of production. Of course, the quantity and price are continually fluctuating. But if, on average, the revenues are insufficient to cover the costs of investment, generation capacity will shut down and exit the market until long-run equilibrium is restored. Storage has the effect of making the supply curve more elastic, like the St supply curve in this graph:

In this example, storage lowers the price and increases the quantity of electricity consumed. This is when storage is discharging electricity. When it is charging, it will raise the price relative to the case with no storage. Storage has the effect of reducing volatility.
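
As a toy illustration of this price formation (the linear demand curve and all numbers here are invented for the example):

```python
def clearing_price(R, a=100.0, b=0.5):
    """Inverse demand p = a - b*q crossed with a vertical supply curve
    at the intermittent output q = R; price floors at zero."""
    return max(a - b * R, 0.0)

# despite zero marginal cost, the market-clearing price is positive
print(clearing_price(R=120))  # 40.0
```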

This is of course very simplified. The real world is more complicated. The paper shows that increasing intermittent generator penetration increases the importance of adequately pricing scarcity and all network constraints and services. Wind generation tends to colocate in the best wind resource areas, overloading the grid. Locational marginal pricing is required to deliver investment incentives for the right technologies to locate at the right locations to efficiently maintain a stable and reliable electrical network, as discussed by Katzen and Leslie (2020). The NEM currently only has a spot market that schedules generation for each 5 minute interval. A day-ahead market could help in valuing the provision of flexible generation and storage. The current charges for transmission costs in the NEM also disincentivize grid-scale storage. Electricity users pay for these costs. This makes sense when the user is an end-user. But storage is treated as a customer and also pays these charges. So "fresh electricity", where only one set of charges is paid, has a cost advantage over stored electricity.

In conclusion, electricity markets need to evolve to provide the correct incentives to generation and storage. A total rethink isn't needed.

Wednesday, December 25, 2019

Annual Review 2019

I've been doing these annual reviews since 2011. They're mainly an exercise for me to see what I accomplished and what I didn't in the previous year. This year continued to feel like a struggle at times, so it's a good idea to remind myself of what I did manage to accomplish. It felt like I was just trying to finish things this year and not succeeding, but we actually started new things too. The big personal news of the year is that our second child Isaac Daniel was born:


As a result I didn't travel much. I gave seminars at Monash and Macquarie Universities and went to the Future of Electricity Markets Summit in Sydney.

We only published two papers with a 2019 date:

Bruns S. B., J. König, and D. I. Stern (2019) Replication and robustness analysis of 'Energy and economic growth in the USA: a multivariate approach', Energy Economics 82, 100-113. Working Paper Version | Blogpost

Bruns S. B. and D. I. Stern (2019) Lag length selection and p-hacking in Granger causality testing: Prevalence and performance of meta-regression models, Empirical Economics 56(3), 797-830. Working Paper Version | Blogpost

and one with a 2020 date:

Bruns S. B., Z. Csereklyei, and D. I. Stern (2020) A multicointegration model of global climate change, Journal of Econometrics 214(1), 175-197. Working Paper Version | Blogpost

We posted three working papers, but only one that is really new:

Estimating the economy-wide rebound effect using empirically identified structural vector autoregressions. August 2019. With Stephan Bruns and Alessio Moneta.

We have three papers under review at the moment (one a resubmission), two revise and resubmits we are working on, and three or four we are trying to finish. So, hopefully the number of publications in the next couple of years will increase. 

Google Scholar citations exceeded 17,000 with an h-index of 52. The trend to fewer blogposts continued – this is only the 3rd blogpost this year. Twitter followers rose from 950 to 1250 over the year.

I taught environmental economics and the masters research essay course again. This was the second time teaching the environmental economics course and things went a lot smoother.

Debasish Das started as my PhD student. He is a lecturer at Khulna University in Bangladesh. We are exploring different research topics like electricity use in Bangladesh and infrastructure and growth.



Looking forward to 2020, a few things can be predicted:
  • In February I am going to the IAEE conference in Auckland, New Zealand. I will be giving a plenary on energy efficiency and the rebound effect.
  • Xueting Zhang will start as a PhD student. Like Debasish, she won an RTP scholarship, which is very competitive for foreign students. 
  • We will be submitting a paper based on the session on zero marginal cost electricity at the Future of Electricity Markets Summit to a special issue of the Electricity Journal. There are some other likely submissions and resubmissions early in the new year, but nothing is 100%.
  • I'll be teaching environmental economics and the masters research essay course again in the first semester.

Tuesday, April 16, 2019

Emissions Reduction Survey 2019

I again carried out a contingent valuation study of climate change using my environmental economics class as respondents. The survey was exactly as in 2018. Participants could vote yes or no on proposals to raise the Medicare levy by 0.125% or 0.25% to help fund the Emissions Reduction Fund. I designed the survey to follow the NOAA panel guidelines. I also asked the students to explain why they voted the way they did.


The results differ from 2018. Only 42% voted for a 0.125% increase in the Medicare levy, while 53% voted for a 0.25% increase. Five people voted against the smaller tax and for the larger tax. So there was quite a lot of irrational behavior where the perfect could have been the enemy of the good if one person had voted differently on the higher tax. This kind of thinking is in large part, IMO, why Australia doesn't now have a carbon price...

Of those voting no on both proposals, there was a mix of responses. Only one seemed to be saying that they couldn't afford the tax given the benefit! And that is what such a survey is supposed to measure. Others objected to the payment vehicle, suggesting that the government should price carbon or reduce the diesel rebate etc., or borrow/print money instead. I agree with the first two of these, but again that leads to nothing happening on the climate front, if that is what you care about. Others worried about the distributional impact. That is a valid criticism of the Medicare levy proposal, which is a tax on all of one's income rather than a progressive or marginal tax. One person thought the Medicare levy was unethical, on the incorrect belief that it is a tax on healthcare. Actually, it is just an extra income tax.

Of those voting yes to the lower tax and no to the higher tax, only one mentioned the cost. The others said that the government should find other funding (borrowing?) or polluters should pay – of course in the end it is the consumer who will pay to the degree that polluters can pass on costs…

Those voting yes on both proposals all said the tax increase was affordable, so they did consider actual willingness/ability to pay.

The bottom line is that there is a lot of behavior in the responses to this survey that doesn't fit the model of paying for a public good, where people state their honest WTP, even with a supposedly state-of-the-art design. There is some free-riding – other people should pay or the government should borrow – and on the other hand some altruism, as well as protest votes about the policy design. There is also irrational behavior, represented by voting no to the lower tax but yes to the higher one, though we can probably assume that some of these respondents didn't understand the potential implication of voting against the lower tax rate.

Tuesday, February 19, 2019

Energy Efficiency Improvements Do Not Save Energy

I have a new working paper out, coauthored with Stephan Bruns and Alessio Moneta, titled: "Macroeconomic Time-Series Evidence That Energy Efficiency Improvements Do Not Save Energy". It's another paper from our ARC funded project: "Energy Efficiency Innovation: Diffusion, Policy and the Rebound Effect". We estimate the economy-wide effect on energy use of energy efficiency improvements in the U.S. We find that the rebound is around 100%, implying that in the long run energy efficiency improvements do not save energy or reduce greenhouse gas emissions.


At the micro level, we might naïvely expect a 1% improvement in energy efficiency to reduce energy use by 1%. But people adjust their behavior. Efficiency improvements reduce the cost of energy services like heating, transport, or lighting. Because these are now cheaper to produce, people consume more of them, and so the percentage reduction in energy use is less than the improvement in efficiency. This is known as the direct rebound effect.

People might also redirect their spending to consume more of complementary goods, like larger houses in the case of residential heating improvements, and reduce their consumption of substitute goods and services, like bus rides or cycling, in the case of car fuel economy improvements. These changes have implications for the energy used to produce these goods and services. Additionally, the reduction in demand for energy should lower the price of energy further boosting the rebound in energy use. Finally, the improvement in energy efficiency is an increase in productivity, which should result in economic growth. Higher incomes mean higher demand for energy. Adding these indirect rebound effects to the direct rebound effect we get the economy-wide rebound effect.
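
In terms of a simple formula (a standard definition, stated here informally): if η is the elasticity of energy use with respect to energy efficiency, then the rebound effect is

R = 1 + η

So, for example, if a 1% efficiency improvement reduces energy use by only 0.2%, then η = -0.2 and rebound is 80%; η = 0 corresponds to the 100% rebound that we estimate.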

The size of the economy-wide rebound effect is crucial for estimating the contribution that energy efficiency improvements can make to reducing energy use and greenhouse gas emissions. Our study provides the first empirical general equilibrium estimate of the economy-wide rebound effect. Previous studies use simulation models, known as computable general equilibrium models, or partial equilibrium econometric models that don't allow the price of energy to adjust. Some of the latter studies also measure rebound incorrectly, for example assuming that energy intensity – energy used per dollar of GDP – measures energy efficiency. In fact, the majority of the rebound effect happens when energy intensity rebounds as people shift to more energy intensive consumption after an energy efficiency improvement. Economic growth induced by the efficiency improvement is expected to contribute less to total rebound.

We use a structural vector autoregressive model, or SVAR, that is estimated using search methods developed in machine learning. We apply the SVAR to U.S. monthly and quarterly data. An SVAR explains changes in the vector of variables, x, in terms of its past values and a vector of serially and mutually uncorrelated shocks, ε:

x_t = Π_1 x_{t-1} + … + Π_p x_{t-p} + B ε_t

In our basic model, the vector, x, contains three variables: primary energy use, GDP, and the price of energy. The first of the shocks is a shock to energy use, holding constant shocks to GDP and the price of energy and the past values of all three variables. We think this is a reasonable definition of an energy efficiency shock. The other two shocks are income and price shocks.

The matrix, B, which transmits the shocks to the dependent variables, cannot be estimated without imposing some restrictions or conditions on the model. Usually, economists use economic theory to impose restrictions on the coefficients in B (short-run restrictions) and the Π_i (long-run restrictions). Alternatively, they sample a range of models, rejecting only those that don't meet qualitative "sign restrictions" on the matrix B. Instead, we use independent component analysis, an approach that is relatively new to econometrics. This imposes conditions on the nature of the shocks instead and estimates B without direct restrictions. Unlike the short- and long-run restrictions approach, it doesn't impose a priori restrictions on the data, and unlike the sign restrictions approach, it estimates a unique model.
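
A minimal sketch of ICA-based SVAR identification, assuming statsmodels and scikit-learn; the data here are random stand-ins for our [energy use, GDP, energy price] vector, and this illustrates the general approach rather than the paper's actual procedure:

```python
import numpy as np
from statsmodels.tsa.api import VAR
from sklearn.decomposition import FastICA

x = np.random.randn(500, 3)  # stand-in for [energy use, GDP, energy price]

# reduced-form VAR: x_t = sum_i Pi_i x_{t-i} + u_t, with u_t = B eps_t
var_res = VAR(x).fit(maxlags=12, ic='aic')
u = var_res.resid

# ICA recovers mutually independent shocks eps_t and the mixing matrix B,
# identified up to the sign, scale, and ordering of the shocks
ica = FastICA(n_components=3, whiten='unit-variance', random_state=0)
eps = ica.fit_transform(u)
B = ica.mixing_
```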

Using the estimated SVAR model we compute the impulse response functions of the dependent variables to the shocks:


The top left graph shows the effect of an energy efficiency shock on energy use. The grey shading is a 90% confidence interval, the x-axis is in months, and the y-axis in log units.

Initially, an energy efficiency shock strongly reduces energy use, but this effect wears off over the following years as consumers and the economy adjust. Eventually, there is no change in energy use, so that rebound is 100%.

The other graphs in the first column show the effect of the energy efficiency shock on GDP and the price of energy. The second column shows the effect of a shock to GDP, and the final column an energy price shock.

The implications for policy are that encouraging energy efficiency innovation is unlikely to make a contribution to reducing greenhouse gas emissions. This is one reason why I am skeptical of projections that predict that energy intensity will fall much faster in the future than in the past because of energy efficiency policies.

On the other hand, if these policies raise rather than reduce the costs of producing energy services, then the direct rebound (and presumably the economy-wide rebound) will be negative rather than positive. Since, apart from their environmental effects, such policies would reduce economic welfare, it seems that there would be better options for reducing emissions, such as switching to low carbon energy.

Sunday, December 23, 2018

Annual Review 2018

I've been doing these annual reviews since 2011. They're mainly an exercise for me to see what I accomplished and what I didn't in the previous year. This year was a bit of a struggle at times, so it's a good idea to remind myself of what I did manage to accomplish.

Me and my mother holding my brother in 1967

Going into this year, I had high expectations for getting more research done, as I finished my term as economics program director at Crawford at the end of 2017. In the first semester, I was teaching a new course, or rather a subject I last taught more than a decade ago – environmental economics – but I thought that should be manageable and had three weeks of class prepared at the beginning of the semester. I definitely don't have a comparative advantage in teaching, it takes me a lot of time and effort to prepare. Then my mother died in the week that class began. This was quite expected – she was not doing well when I visited in December – but of course the exact timing is never known. I didn't travel to Israel for the funeral. I had already agreed with my brother up front to travel for the "stone-setting", which in Israel is 30 days after the death. It is the custom to bury someone on the day they die, if possible, so I didn't want to delay that. After I got back, I got ill with a flu/cold, which resulted in me completely losing my voice so I couldn't teach at all. So this was a difficult semester. In October/November I again got ill with flu/lung infection of some sort and lost a month of research time.

Noah and me in Sweden 

But there were also happier travels during the year. In June and July, I traveled with my wife, Shuang, and son, Noah, to the Netherlands, Finland, Sweden, and Japan. I went to three conferences: the IAEE meeting in Groningen and the IEW and World Congress in Gothenburg. Shuang also attended the World Congress. The visits to Finland and Japan were just for fun. Stephan Bruns was also at the IAEE meeting and actually presented our paper on rebound, which got very positive feedback. Stephan and Alessio Moneta did most of the econometric work on the paper, which we are about to submit now.

In September I went to Rome and Singapore for two workshops. At the Villa Mondragone, near Frascati, outside of Rome, was the Climate Econometrics Conference. I presented a paper that compared different estimators of the climate sensitivity. This produced some unexpected results, and it looks like it needs a lot more work some time! I met lots of people, including my coauthor Richard Tol, whom I met in person for the first time.

Villa Mondragone near Frascati

In Singapore, I attended the 5th Asian Energy Modelling Workshop, which mostly focuses on integrated assessment modeling. By then, I was confident enough to present the rebound paper myself.

I also went to the Monash Environmental Economics Workshop in Melbourne in November. This is a small meeting with just one stream of papers, but they are all focused on environmental economics, whereas the larger annual AARES conference mostly focuses on agriculture.

Akshay Shanker and I finally put out a working paper that was our contribution to a Handelsbanken Foundation funded project headed by Astrid Kander. We are also branding this as part of our ARC funded DP16 project, as we have also been using ARC funding on it. We also completed work this year on the major part of the work on rebound that was part of the DP16 proposal. Zsuzsanna Csereklyei, who was working on the DP16 project, moved to a lecturer position at RMIT.

The Energy Change Institute at ANU won the annual ANU Grand Challenge Competition with a proposal on Zero-Carbon Energy for the Asia-Pacific. Actually, the project already received several hundred thousand dollars of interim funding from the university in 2018 and I have been working with Akshay on the topic of electricity markets as part of this project. We'll continue research on the topic during 2019.

ECI Grand Challenge Presentation Team: Paul Burke, Kylie Catchpole, and Emma Aisbett

We only managed to publish two papers with a 2018 date:

Burke P. J., D. I. Stern, and S. B. Bruns (2018) The impact of electricity on economic development: A macroeconomic perspective, International Review of Environmental and Resource Economics 12(1) 85-127. Working Paper Version | Blogpost

Csereklyei Z. and D. I. Stern (2018) Technology choices in the U.S. electricity industry before and after market restructuring, Energy Journal 39(5), 157-182. Working Paper Version | Blogpost

But we have several papers in press:

Bruns S. B., J. König, and D. I. Stern (in press) Replication and robustness analysis of 'Energy and economic growth in the USA: a multivariate approach', Energy Economics. Working Paper Version | Blogpost

Bruns S. B., Z. Csereklyei, and D. I. Stern (in press) A multicointegration model of global climate change, Journal of Econometrics. Working Paper Version | Blogpost

Bruns S. B. and D. I. Stern (in press) Lag length selection and p-hacking in Granger causality testing: Prevalence and performance of meta-regression models, Empirical Economics. Working Paper Version | Blogpost

We posted five new working papers, three of which haven't been published yet:

Flying More Efficiently: Joint Impacts of Fuel Prices, Capital Costs and Fleet Size on Airline Fleet Fuel Economy Blogpost
November 2018. With Zsuzsanna Csereklyei.

Energy Intensity, Growth and Technical Change
September 2018. With Akshay Shanker. Blogpost

How to Count Citations If You Must: Comment
January 2018. With Richard Tol. Blogpost

Google Scholar citations approached 16,000 with an h-index of 51.

The trend to fewer blogposts continued – this is only the 9th blogpost this year. Twitter followers rose from 750 to 950 over the year.

Akshay Shanker – his primary adviser was Warwick McKibbin and I was on his supervisory panel – received his PhD with very positive feedback from the examiners. He has a part time position at ANU working on the Grand Challenge Project and I am supervising him on that.

There doesn't seem to have been any major progress on the issues surrounding economics at ANU, that I mentioned in last year's post. Arndt Corden seems to be heading towards being a specialist program dealing with developing Asia and there is no overall identity for economics at Crawford. I increasingly identify with the Centre for Applied Macroeconomic Analysis.

On a related theme, I applied for three jobs on three different continents. One of these – the one in Australia – went as far as an onsite interview, but the more I learnt about the job the less enthusiastic I was, and I wasn't offered it. It was a 50/50 admin/leadership and research position.

Looking forward to 2019, a few things can be predicted:
  • We're about to submit our first paper on the rebound effect and should also put out a working paper or two on the topic.
  • I'll continue research with Akshay on the Grand Challenge project.
  • I'm not planning to go to any conferences this year. I have one seminar presentation lined up at Macquarie University in the second half of the year.
  • My PhD student Panittra Ninpanit will submit her thesis at the beginning of the year, and I have a new student, Debasish Kumar Das, starting. The plan is for him to work on electricity reliability.
  • I'll be teaching environmental economics and the masters research essay course again in the first semester.
 Trying to understand the menu in Finland

Monday, November 26, 2018

Flying More Efficiently

I have another new working paper out, coauthored with Zsuzsanna Csereklyei on airline fleet fuel economy. Zsuzsanna worked as research fellow here at the Crawford School on my Australian Research Council funded DP16 project on energy efficiency and the rebound effect. This paper reports on some of our research in the project. We also looked at energy efficiency in electric power generation in the US.

The nice thing about this paper is that we have plane-level data on the aircraft in service at 1267 airlines in 174 countries. This data is from the World Airliner Census from Flight Global. We then estimated the fuel economy of 143 aircraft types using a variety of data sources. We assumed that the plane would fly its stated range with the maximum number of passengers and use all its fuel capacity. This gives us litres of fuel per passenger kilometre. Of course, many flights are shorter or are not full, and so actual fuel consumption per passenger kilometre will vary a lot, but this gives us a technical metric which we can use to compare models.
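
As a tiny sketch of this metric (the helper function and the sample numbers are illustrative, not from our dataset):

```python
def litres_per_passenger_km(fuel_capacity_l, range_km, max_passengers):
    """Technical fuel economy: assume the stated range is flown with a
    full load of passengers, using all of the fuel capacity."""
    return fuel_capacity_l / (range_km * max_passengers)

# illustrative narrow-body numbers: ~0.022 litres per passenger km
print(litres_per_passenger_km(24_210, 5_765, 189))
```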


The graph shows that the fuel economy of new aircraft has steadily improved over time. One of the reasons for the scatter around the trendline is that large aircraft with longer ranges tend to have better fuel economy than small aircraft:


This is also one of the reasons why fuel economy has improved over time. Still, adjusted for size, aircraft introduced in earlier decades had (statistically) significantly worse fuel economy than more recent models. We used these regressions to compute age and size adjusted measures of fuel economy, which we used in our main econometric analysis.

The main analysis assumes that airlines choose the level of fuel economy that minimizes costs given input prices and the type of flying that they do. There is a trade off here between doing an analysis with very wide scope and doing an analysis with only the most certain data. We decided to use as much of the technical aircraft data as we could, even though this meant using less certain and extrapolated data for some of the explanatory variables.

We have data on wages in airlines and on the real interest rates in each country. The wage data is very patchy and noisy and we extrapolated a lot of values from the observations we had in the same way that, for example, the Penn World Table extrapolates from surveys. There are no taxes on aircraft fuel for international travel and the price of fuel reported by Platts does not vary a lot around the world. But countries can tax fuel for domestic aviation. We could only find data on these specific taxes for a small number of countries in a single year. So, we used proxies, such as the price of road gasoline and oil rents, for this variable. We proxy the type of flying airlines do using the characteristics of their home countries.

The most robust results from the analysis – that hold whether we use crude fuel economy or fuel economy adjusted for size and age – are that – all things constant – larger airlines select planes with higher fuel economy, higher interest rates are associated with poorer fuel economy, higher fuel prices are associated with higher fuel economy (but the elasticity is small), and fuel economy is worse in Europe and Central Asia than other regions.

It seems that for a given model age and size, more fuel efficient planes cost more. This would explain why, even holding age and size factors constant, higher interest rates are correlated with worse fuel economy. Also, if larger airlines have more access to finance or a lower cost of capital they will be able to afford the more fuel efficient planes.

What effect could carbon prices have on fleet fuel economy? The most relevant elasticity is the response of unadjusted fuel economy to the price of fuel. This allows airlines to adjust the size and model age of planes in response to an increase in the price of fuel. We estimate that this elasticity is -0.09 to -0.13, which suggests the effect won't be very big. Because we use proxies for the price of fuel, we expect that the true value of this elasticity is actually larger in absolute value. The elasticity also assumes that there is a given set of available aircraft models. Induced innovation might result in more efficient models being developed. There might also be changes in the types of airlines and flights. So the effect could be quite a bit larger in the long run.
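
For a rough sense of magnitude (an illustrative calculation, not a result from the paper): with an elasticity of -0.1, a carbon price that raised jet fuel prices by 25% would improve fleet fuel economy by only about 0.25 × 0.1 = 2.5%.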

Wednesday, October 3, 2018

Energy Intensity, Growth, and Technical Change

I have a new working paper out, coauthored with Akshay Shanker. Akshay recently completed his PhD at the Crawford School and is currently working on the Energy Change Institute's Grand Challenge Project among other things. This paper was one of the chapters in Akshay's thesis. Akshay originally came to see me a few years ago about doing some research assistance work. I said: "The best thing you could do is to write a paper with me – I want to explain why energy intensity has declined using endogenous growth theory." This paper is the result. Along the way, we got additional funding from the College of Asia and the Pacific, the Handelsbanken Foundation, and the Australian Research Council.

World and U.S. energy intensities have declined over the past century, falling at an average rate of approximately 1.2–1.5 percent a year. As Csereklyei et al. (2016) showed, the relationship has been very stable. The decline has persisted through periods of stagnating or even falling energy prices, suggesting the decline is driven in large part by autonomous factors, independent of price changes.

In this paper, we use directed technical change theory to understand the autonomous decline in energy intensity and investigate whether the decline will continue. The results depend on whether the growing stock of knowledge makes R&D easier over time – known as state-dependent innovation – or whether R&D becomes harder over time.

Along a growth path where real energy prices are constant, energy use increases, energy-augmenting technologies – technologies that improve the productivity of energy ceteris paribus – advance, and the price of energy services falls. The fall in the price of energy services reduces profitability and incentives for energy-augmenting research. However, since the use of energy increases, the "market size" of energy services expands, improving the incentives to perform research that advances energy-augmenting technologies. In the scenario with no state dependence, the growing incentives from the expanding market size are enough to sustain energy-augmenting research. Energy intensity continues to decline, albeit at a slower rate than output growth, due to energy-augmenting innovation. There is asymptotic convergence to a growth path where energy intensity falls at a constant rate due to investment in energy-augmenting technologies. Consistent with the data, energy intensity declines more slowly than output grows, implying that energy use continues to increase.

This graph shows two growth paths – for countries that are initially more or less energy intensive – that converge to the balanced growth path G(Y) as their economies grow:


This is very consistent with the empirical evidence presented by Csereklyei et al. (2016).

However, the rate of labor-augmenting research is more rapid along the balanced growth path and there will be a shift from energy-augmenting research to labor-augmenting research for a country that starts out relatively energy intensive. This explains Stern and Kander's (2012) finding that the rate of labor-augmenting technical change increased over time in Sweden as the rate of energy-augmenting technical change declined.

The following graph shows the ratio of the energy-augmenting technology to the labor-augmenting technology over time in the US, assuming that the elasticity of substitution between energy and labor services is 0.5:

Up till about 1960, energy-augmenting technical change was more rapid than labor-augmenting technical change and the ratio rose. After this point labor-augmenting technical change was more rapid, but the rise in energy prices in the 1970s induced another period of more rapid energy-augmenting technical change.

In an economy with extreme state-dependence, energy intensity will eventually stop declining because labor-augmenting innovation crowds out energy-augmenting innovation. Our empirical analysis of energy intensity in 100 countries between 1970 and 2010 suggests a scenario without extreme state dependence where energy intensity continues to decline.

Tuesday, April 24, 2018

Replicating Stern (1993)

Last year, Energy Economics announced a call for papers for a special issue on replication in energy economics. Together with Stephan Bruns and Johannes König, we decided to do a replication of my 1993 paper in Energy Economics on Granger causality between energy use and GDP. That paper was the first chapter in my PhD dissertation. It is my fourth most cited paper and, given the number of citations, could be considered "classic" enough to do an updated robustness analysis on it. In fact, another replication of my paper has already been published as part of the special issue. The main results of my 1993 paper were that in order to find Granger causality from energy use to GDP we need to use both a quality-adjusted measure of energy and control for capital and labor inputs.

It is a bit unusual to include the original author as an author on a replication study, and my role was a bit unusual. Before the research commenced, I discussed with Stephan the issues in doing a replication of this paper, giving feedback on the proposed design of the replication and robustness analysis. The research plan was published on a website dedicated to pre-analysis plans. Publishing a research plan is similar to registering a clinical trial and is supposed to help reduce the prevalence of p-hacking. Then, after Stephan and Johannes carried out the analysis, I gave feedback and helped edit the final paper.

Unfortunately, I had lost the original dataset, and the various time series I used have been updated by the US government agencies that produce them. The only way to reconstruct the original data would have been to find hard copies of all the original data sources. Instead, we used the data from my 2000 paper in Energy Economics, which is quite similar to the original data. Using this close-to-original data, Stephan and Johannes could reproduce all my original results in terms of the direction of Granger causality and the same qualitative significance levels. In this sense, the replication was a success.

But the test I did in 1993 on the log levels of the variables is inappropriate if the variables have stochastic trends (unit roots). The more appropriate test is the Toda-Yamamoto test. So, the next step was to redo the 1993 analysis using the Toda-Yamamoto test. Surprisingly, these results are also very similar to those in Stern (1993). But, when Stephan and Johannes used the data for 1949-1990 that are currently available on US government websites, the Granger causality test of the effect of energy on GDP was no longer statistically significant at the 10% level. Revisions to past GDP have been very extensive, as we show in the paper:

Results were similar when they extended the data to 2015. However, when they allowed for structural breaks in the intercept to account for oil price shocks and the 2008-9 financial crisis, the results were again quite similar to Stern (1993) both for 1949-1990 and for 1949-2015.
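
For readers unfamiliar with the Toda-Yamamoto procedure, here is a minimal sketch, assuming statsmodels; the data are random stand-ins, and the real analysis also controls for capital and labor:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
y = pd.Series(rng.standard_normal(60).cumsum(), name='y')  # log GDP
e = pd.Series(rng.standard_normal(60).cumsum(), name='e')  # log energy use

p, d = 2, 1  # VAR lag order and maximum order of integration

# regress y on lags 1..p+d of both variables (the y equation of the VAR)
lags = {f'y_l{i}': y.shift(i) for i in range(1, p + d + 1)}
lags.update({f'e_l{i}': e.shift(i) for i in range(1, p + d + 1)})
data = pd.concat([y, pd.DataFrame(lags)], axis=1).dropna()

res = sm.OLS(data['y'], sm.add_constant(data.drop(columns='y'))).fit()

# Wald/F test that only the first p energy lags are zero; the d extra
# augmentation lags are included in the regression but left unrestricted
print(res.f_test(', '.join(f'e_l{i} = 0' for i in range(1, p + 1))))
```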

They then carried out an extensive robustness check using different control variables and variable specifications and a meta-analysis of those tests to see which factors had the greatest influence on the results.

They conclude that p-values tend to be substantially smaller (test statistics are more significant) if energy use is quality adjusted rather than measured by total joules and if capital is included. Including labor has mixed results. These findings largely support Stern’s (1993) two main conclusions and emphasize the importance of accounting for changes in the energy mix in time series modeling of the energy-GDP relationship and controlling for other factors of production.

I am pretty happy with the outcome of this analysis! Usually it is hard to publish replication studies that confirm the results of previous research. We have just resubmitted the paper to Energy Economics and I am hoping that this mostly confirmatory replication will be published. In this case, the referees added a lot of value to the paper, as they suggested to do the analysis with structural breaks.

Thursday, April 5, 2018

Buying Emissions Reductions

This semester I am teaching environmental economics, a course I haven't taught since 2006 at RPI. Last week we covered environmental valuation. I gave my class an in-class contingent valuation survey. I tried to construct the survey according to the recommendations of the NOAA panel. Here is the text of the survey:

Emissions Reduction Fund Survey

In order to meet Australia’s international commitments under the Paris Treaty, the government is seeking to significantly expand the Emissions Reduction Fund, which pays bidders such as farmers to reduce carbon emissions. To fully meet Australia’s commitment to reduce emissions by 26-28% below 2005 levels by 2030 the government estimates that the fund needs to be expanded to $2 billion per year. The government proposes to fund this by increasing the Medicare Levy.

1. Considering other things you need to spend money, and other things the government can do with taxes do you agree to a 0.125% increase in the Medicare levy, which is equivalent to $100 per year in extra tax for someone on average wages. This is expected to only meet half of Australia’s commitment, reducing emissions to 13-14% below 2005 levels or by a cumulative 370 million tonnes by 2030.

Yes No

2. Considering other things you need to spend money, and other things the government can do with taxes do you agree to a 0.25% increase in the Medicare levy, which is equivalent to $200 per year in extra tax for someone on average wages. This is expected to meet Australia’s commitment, reducing emissions to 26-28% below 2005 levels or by a cumulative 740 million tonnes by 2030.

Yes No

3. If you said yes to either 1 or 2, why? And how did you decide on whether to agree to the 0.125% or 0.25% tax?

4. If you said no to both 1. and 2. why?

***********************************************************************************


85% voted in favour of the 0.125% Medicare tax option and 54% voted in favour of 0.25% - So both would have passed. A few people voted against 0.125 and for 0.25, so I changed their votes to for 0.125 as well as 0.25.


Reasons for voting for both:

  • $200 not much, willing to do more than just pay that tax
  • We should meet the target
  • Tax is low compared to other taxes - can reduce government spending on health in future
  • Can improve my health
  • Benefit is much greater than cost to me
  • I pay low tax as I'm retired, so can pay more
  • I'm willing to pay so Australia can meet commitment
  • Only $17 a month
  • Tax is small
  • Because reducing emissions is the most important environmental issue
 

Reasons for voting for 0.125 but against 0.25:

  • Can afford 0.125 but not 0.25
  • Government can cover the rest with other measures like incentives

Reasons for voting against both:

  • There are other ways to reduce emissions - give incentives to firms rather than tax the middle class...
  • Government should tax firms
  • Don't believe in emissions reduction fund because it is inefficient
  • I prefer to spend my money rather than pay tax and reduction in emissions is not very big for tax paid


Mostly the reasons for voting for both are ones we would want to see if we are really measuring WTP - can afford to pay and it is a big issue. Those thinking it will increase their personal health or reduce health spending were made to think about health by the payment vehicle. I chose the Medicare Levy as the payment vehicle as the Australian government has a track record of increasing the Medicare Levy for all kinds of things, like repairing flood damage in Brisbane!
 I chose the emissions reduction fund because it actually exists and actually buys emissions reductions.

Most people who voted for 0.125% but against 0.25% have valid reasons - they can't afford the higher tax. However, one person said the government should cover the rest by other means. So that person may really be willing to pay 0.25% if the government won't do that.


When we get to the people who voted against both tax rates, most are against the policy vehicle rather than unwilling to pay for climate change mitigation. So, from the point of view of measuring WTP, these votes would result in an underestimate. These "protest votes" are a big problem for CVM. Only one person said they weren't willing to pay anything given the bang for the buck.

Saturday, February 10, 2018

A Multicointegration Model of Global Climate Change

We have a new working paper out on time series econometric modeling of global climate change. We use a multicointegration model to estimate the relationship between radiative forcing and global surface temperature since 1850. We estimate that the equilibrium climate sensitivity to doubling CO2 is 2.8ºC – which is close to the consensus in the climate science community – with a “likely” range from 2.1ºC to 3.5ºC.* This is remarkably close to the recently published estimate of Cox et al. (2018).

Our paper builds on my previous research on this topic. Together with Robert Kaufmann, I pioneered the application of econometric methods to climate science – Richard Tol was another early researcher in this field. Though we managed to publish a paper in Nature early on (Kaufmann and Stern, 1997), I became discouraged by the resistance we faced from the climate science community. But now our work has been cited in the IPCC 5th Assessment Report and recently there is also a lot of interest in the topic among econometricians. This has encouraged me to get involved in this research again.

We wrote the first draft of this paper for a conference in Aarhus, Denmark on the econometrics of climate change in late 2016 and hope it will be included in a special issue of the Journal of Econometrics based on papers from the conference. I posted some of our literature review on this blog back in 2016.

Multicointegration models, first introduced by Granger and Lee (1989), are designed to model long-run equilibrium relationships between non-stationary variables where there is a second equilibrium relationship between accumulated deviations from the first relationship and one or more of the original variables. Such a relationship is typically found for flow and stock variables. For example, Granger and Lee (1989) examine production, sales, and inventory in manufacturing, Engsted and Haldrup (1999) housing starts and unfinished stock, Siliverstovs (2006) consumption and wealth, and Berenguer-Rico and Carrion-i-Silvestre (2011) government deficits and debt. Multicointegration models allow for slower adjustment to long-run equilibrium than do typical econometric time series models because of the buffering effect of the stock variable.
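To make that structure concrete, here it is in generic notation (the symbols are my own, not Granger and Lee's): two I(1) flows x and y cointegrate, and the running sum of the deviations, which is the stock, cointegrates with the flows again:

```latex
\begin{align*}
u_t &= y_t - \beta x_t \sim I(0)       && \text{first level: flow cointegration}\\
U_t &= \sum\nolimits_{i=1}^{t} u_i     && \text{the stock: accumulated deviations, } I(1)\\
v_t &= U_t - \gamma y_t \sim I(0)      && \text{second level: stock-flow cointegration}
\end{align*}
```

In the inventory example, u is production minus sales, U is the stock of inventories, and inventories in turn move with sales over the long run.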

In our model there is a long-run equilibrium between radiative forcing, f, and surface temperature, s:

f(t) = lambda*s(t) + q(t)

The equilibrium climate sensitivity is given by 5.35*ln(2)/lambda. But because of the buffering effect of the ocean, surface temperature takes a long time to reach equilibrium. The deviations from equilibrium, q, represent a flow of heat from the surface to (mostly) the ocean. The accumulated flows are the stock of heat in the Earth system, Q. Surface temperature also tends towards equilibrium with this stock of heat:

s(t) = phi*Q(t) + u(t)
where u is a (serially correlated but stationary) random error and phi is a coefficient to be estimated. Granger and Lee simply embedded both these long-run relations in a vector autoregressive (VAR) time series model for s and f. A somewhat more recent and much more powerful approach (e.g. Engsted and Haldrup, 1999) notes that:

Q(t) = F(t) - lambda*S(t)

where F is accumulated f and S is accumulated s. In other words, S(2) = s(1)+s(2), S(3) = s(1)+s(2)+s(3), etc. This means that we can estimate a model that takes into account the accumulation of heat in the ocean without using any actual data on ocean heat content (OHC)! One reason that this is exciting is that OHC data are only available since 1940 and the data for the early decades are very uncertain. Only since 2000 has a good measurement network been in place. This means that we can use temperature and forcing data back to 1850 to estimate the heat content.

Another reason that this is exciting is that F and S are so-called second order integrated (I(2)) variables, and estimation with I(2) variables, though complicated, is super-super consistent – it is easier to get an accurate estimate of a parameter despite noise and measurement error in a relatively small sample. The I(2) approach combines the 2nd and 3rd equations above into a single relationship – s(t) = phi*(F(t) - lambda*S(t)) + u(t) – which it embeds in a VAR model that we estimate using Johansen's maximum likelihood method. The CATS package, which runs on top of RATS, can estimate such models, as can the OxMetrics econometrics suite. The data we used in the paper is available here.
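For intuition on why accumulating the flows buys us the heat stock for free, here is a minimal Python sketch. It is a toy: the parameter values are made up (lambda is set near the value implied by our ECS estimate), and it uses plain OLS rather than the Johansen I(2) maximum likelihood procedure we actually use.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2000
lam_true, phi_true = 1.3, 0.05   # illustrative values, not the paper's estimates

# Simulate: forcing f is a random walk; the heat stock Q accumulates the
# disequilibrium q = f - lambda*s; temperature s tracks phi*Q plus noise.
f = np.cumsum(rng.normal(0.0, 0.1, T))
s = np.zeros(T)
Q = np.zeros(T)
for t in range(1, T):
    q = f[t-1] - lam_true * s[t-1]          # heat uptake (flow)
    Q[t] = Q[t-1] + q                       # heat content (stock)
    s[t] = phi_true * Q[t] + rng.normal(0.0, 0.05)

# The I(2) step: accumulate f and s, so that F - lambda*S reproduces Q
# without ever observing Q directly.
F, S = np.cumsum(f), np.cumsum(s)

# OLS of s on (F, S): s = phi*F - phi*lambda*S + error, so lambda = -b_S/b_F.
b_F, b_S = np.linalg.lstsq(np.column_stack([F, S]), s, rcond=None)[0]
lam_hat = -b_S / b_F
print(f"lambda_hat = {lam_hat:.3f} (true {lam_true})")
print(f"implied ECS = {5.35 * np.log(2) / lam_hat:.2f} degrees C")
```

The point is only that a regression on the accumulated I(2) series recovers lambda, and hence the heat stock, from temperature and forcing data alone.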

This graph compares our estimate of OHC (our preferred estimate is the partial efficacy estimate) with an estimate from an energy balance model (Marvel et al., 2016) and observations of ocean heat content (Cheng et al., 2017):


We think that the results are quite good, given that we didn't use any data on OHC to estimate it and that the observed OHC is very uncertain in the early decades. In fact, our estimate cointegrates with these observations and the estimated coefficient is close to what is expected from theory. The next graph shows the energy balance:
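For readers who want to reproduce that kind of check, a sketch of an Engle-Granger cointegration test between the model-implied and observed OHC series might look like the following (the test choice and the stand-in data are mine, not necessarily what we do in the paper; replace the two arrays with the real series over their overlapping years):

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
# Stand-in data sharing one stochastic trend, purely so the sketch runs;
# substitute the model-implied and observed OHC series here.
trend = np.cumsum(rng.normal(0.0, 1.0, 60))
ohc_model = trend + rng.normal(0.0, 0.5, 60)
ohc_obs = trend + rng.normal(0.0, 0.5, 60)

tstat, pvalue, _ = coint(ohc_model, ohc_obs)     # Engle-Granger test
slope = np.polyfit(ohc_obs, ohc_model, 1)[0]     # near 1 if the scales agree
print(f"EG t-stat {tstat:.2f}, p-value {pvalue:.3f}, slope {slope:.2f}")
```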


The red area is radiative forcing relative to the base year. This is now more than 2.5 watts per square meter – doubling CO2 is equivalent to a 3.7 watt per square meter increase. The grey line is surface temperature. The difference between the top of the red area and the grey line is the disequilibrium between surface temperature and radiative forcing according to the model. This is now between 1 and 1.5 watts per square meter and implies that, if radiative forcing were held constant from now on, temperature would increase by around 1°C to reach equilibrium.** This gap is exactly balanced by the blue area, which is heat uptake. As you can see, heat uptake kept surface temperature fairly constant during the last decade and a half despite increasing forcing. It's also interesting to see what happens during large volcanic eruptions such as Krakatoa in 1883. Heat leaves the ocean, largely but not entirely offsetting the fall in radiative forcing due to the eruption. This means that though the impact of large volcanic eruptions on radiative forcing is short-lived, as the stratospheric sulfates emitted are removed after 2 to 3 years, they have much longer-lasting effects on the climate, as shown by the long period of depressed heat content after the Krakatoa eruption in the previous graph.
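The "around 1°C" is just the disequilibrium divided by the feedback parameter implied by our central ECS estimate. A back-of-the-envelope version (numbers rounded):

```python
import numpy as np

ecs = 2.8                       # central ECS estimate (degrees C per doubling)
f2x = 5.35 * np.log(2)          # forcing from doubling CO2, about 3.7 W/m^2
lam = f2x / ecs                 # feedback parameter, about 1.3 W/m^2 per degree C
for gap in (1.0, 1.5):          # current disequilibrium range (W/m^2)
    print(f"gap {gap:.1f} W/m^2 -> committed warming {gap / lam:.2f} degrees C")
# prints roughly 0.76 and 1.13 degrees C, i.e. "around 1 degree C"
```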

We also compare the multicointegration model to more conventional (I(1)) VAR models. In the following graph, Models I and II are multicointegration models and Models IV to VI are I(1) VAR models. Model IV actually includes observed ocean heat content as one of its variables, but Models V and VI just include surface temperature and forcing. The graph shows the temperature response to a permanent doubling of radiative forcing:


The multicointegration models have both a higher climate sensitivity and respond more slowly due to the buffering effect. This mimics, to some degree, the response of a general circulation model. The performance of Model IV is actually worse than that of the bivariate I(1) VARs. This is because it uses a much shorter sample period than Models V and VI. In simulations that are not reported in the paper, we found that a simple bivariate I(1) VAR estimates the climate sensitivity correctly if the time series is sufficiently long – much longer than the 165 years of annual observations that we have. This means that ignoring the ocean doesn't strictly result in omitted variables bias, as I previously claimed: estimates are biased in a small sample, but not in a sufficiently large sample. That is probably going to be another paper :)
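Those simulations aren't in the paper, but a toy version of the idea, reusing the data-generating process sketched earlier with a slower-adjusting ocean (all parameter values are mine), is below. How large the small-sample bias is depends heavily on the assumed adjustment speed.

```python
import numpy as np

def simulate(T, lam=1.3, phi=0.02, rng=None):
    """Forcing f is a random walk; temperature s adjusts slowly via the heat stock Q."""
    if rng is None:
        rng = np.random.default_rng(0)
    f = np.cumsum(rng.normal(0.0, 0.1, T))
    s, Q = np.zeros(T), np.zeros(T)
    for t in range(1, T):
        Q[t] = Q[t-1] + f[t-1] - lam * s[t-1]
        s[t] = phi * Q[t] + rng.normal(0.0, 0.05)
    return f, s

# Estimate lambda from the static regression f = lambda*s, ignoring the ocean,
# at a historical-length sample and at a much longer one.
rng = np.random.default_rng(1)
for T in (165, 5000):
    lam_hats = []
    for _ in range(200):
        f, s = simulate(T, rng=rng)
        lam_hats.append(np.sum(f * s) / np.sum(s * s))   # OLS slope, no intercept
    lam_bar = np.mean(lam_hats)
    print(f"T={T}: mean lambda_hat={lam_bar:.2f}, "
          f"implied ECS={5.35 * np.log(2) / lam_bar:.2f}")
```

Because temperature lags forcing in short samples, the slope estimate tends to be too large there, so the implied ECS is biased down; with a long enough sample the cointegrating regression recovers lambda.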

* "Likely" is the IPCC term for a 66% confidence interval. This confidence interval is computed using the delta method and is a little different to the one reported in the paper.
** This is called committed warming. But, if emissions were actually reduced to zero, it's expected that forcing would decline and that the decline in forcing would about balance the increase in temperature towards equilibrium.

References

Berenguer-Rico, V., Carrion-i-Silvestre, J. L., 2011. Regime shifts in stock-flow I(2)-I(1) systems: The case of US fiscal sustainability. Journal of Applied Econometrics 26, 298–321.

Cheng, L., Trenberth, K. E., Fasullo, J., Boyer, T., Abraham, J., Zhu, J., 2017. Improved estimates of ocean heat content from 1960 to 2015. Science Advances 3(3), e1601545.

Cox, P. M., Huntingford, C., Williamson, M. S., 2018. Emergent constraint on equilibrium climate sensitivity from global temperature variability. Nature 553, 319–322.

Engsted, T., Haldrup, N., 1999. Multicointegration in stock-flow models. Oxford Bulletin of Economics and Statistics 61, 237–254.

Granger, C. W. J., Lee, T. H., 1989. Investigation of production, sales and inventory relationships using multicointegration and non-symmetric error correction models. Journal of Applied Econometrics 4, S145–S159.

Kaufmann, R. K., Stern, D. I., 1997. Evidence for human influence on climate from hemispheric temperature relations. Nature 388, 39–44.

Marvel, K., Schmidt, G. A., Miller, R. L., Nazarenko, L., 2016. Implications for climate sensitivity from the response to individual forcings. Nature Climate Change 6(4), 386–389.

Siliverstovs, B., 2006. Multicointegration in US consumption data. Applied Economics 38(7), 819–833.

Saturday, February 3, 2018

Data and Code for "Modeling the Emissions-Income Relationship Using Long-run Growth Rates"

I've posted on my website the data and code used in our paper "Modeling the Emissions-Income Relationship Using Long-run Growth Rates", which was recently published in Environment and Development Economics. The data is in .xls format and the econometrics code is in RATS. If you don't have RATS, I think it should be fairly easy to translate the commands into another package like Stata. If anything is unclear, please ask me. I managed to replicate all the regression results and standard errors in the paper, but some of the diagnostic statistics are different. I think that only makes a difference in one case, and there it is in a positive way. I hope that providing this data and code will encourage people to use our approach to model the emissions-income relationship.