Monday, July 25, 2016

Data and Code for Our 1997 Paper in Nature

I got a request for the data in our 1997 paper in Nature on climate change. I didn't think I'd be able to send the actual data we used, as I used to follow the practice of continually updating the datasets that I most used rather than keeping an archival copy of the data actually used in a paper. But I found a version from February 1997, which was the month we submitted the final version of the paper. I got the RATS code to read the file, and with a few tweaks it was producing the results that are in the paper. These are the results for observational data in the paper, not those using data from the Hadley climate model. I have now put up the files on my website. In the process I found this website - zamzar.com - that can convert .wks to .xls files. Apparently, recent versions of Excel can't read the .wks Lotus 1-2-3 files that were a standard format 20 or more years ago. For those who don't know, Lotus 1-2-3 was the most popular spreadsheet program before Microsoft introduced Excel. I used it in the late 80s and early 90s when I was in grad school.

The EKC in a Nutshell

Introduction
The environmental Kuznets curve (EKC) is a hypothesized relationship between various indicators of environmental degradation and countries’ gross domestic product (GDP) per capita. In the early stages of economic growth, environmental impacts and pollution increase, but beyond some level of GDP per capita (which will vary for different environmental impacts) economic growth leads to environmental improvement. This implies that environmental impacts or emissions per capita are an inverted U-shaped function of GDP per capita, whose parameters can be statistically estimated. Figure 1 shows a very early example of an EKC. A vast number of studies have estimated such curves for a wide variety of environmental impacts, ranging from threatened species to nitrogen fertilizers, though atmospheric pollutants such as sulfur dioxide and carbon dioxide have been most commonly investigated. The name Kuznets refers to the similar relationship between income inequality and economic development proposed by Nobel Laureate Simon Kuznets and known as the Kuznets curve.


The EKC has been the dominant approach among economists to modeling ambient pollution concentrations and aggregate emissions since Grossman and Krueger (1991) introduced it in an analysis of the potential environmental effects of the North American Free Trade Agreement. The EKC also featured prominently in the 1992 World Development Report published by the World Bank and has since become very popular in policy and academic circles and is even found in introductory economics textbooks.

Critique
Despite this, the EKC was criticized almost from the start on empirical and policy grounds, and debate continues. It is undoubtedly true that some dimensions of environmental quality have improved in developed countries as they have become richer. City air and rivers in these countries have become cleaner since the mid-20th Century and in some countries forests have expanded. Emissions of some pollutants such as sulfur dioxide have clearly declined in most developed countries in recent decades. But there is less evidence that other pollutants such as carbon dioxide ultimately decline as a result of economic growth. There is also evidence that emerging countries take action to reduce severe pollution. For example, Japan cut sulfur dioxide emissions in the early 1970s following a rapid increase in pollution when its income was still below that of the developed countries and China has also acted to reduce sulfur emissions in recent years.

As further studies were conducted and better data accumulated, many of the econometric studies that supported the EKC were found to be statistically fragile. Figure 2 presents much higher-quality data with much more comprehensive coverage of countries than that used in Figure 1. In both 1971 and 2005, sulfur emissions tended to be higher in richer countries, and the curve seems to have shifted down and to the right. A cluster of mostly European countries had succeeded in sharply cutting emissions by 2005, but other wealthy countries reduced their emissions by much less.


Initially, many understood the EKC to imply that environmental problems might be due to a lack of sufficient economic development rather than the reverse, as was conventionally thought, and some argued that the best way for developing countries to improve their environment was to get rich. This alarmed others, as while this might address some issues like deforestation or local air pollution, it would likely exacerbate other environmental problems such as climate change.

Explanations
The existence of an EKC can be explained either in terms of deep determinants such as technology and preferences or in terms of scale, composition, and technique effects, also known as “proximate factors”. Scale refers to the effect of an increase in the size of the economy, holding the other effects constant, and would be expected to increase environmental impacts. The composition and technique effects must outweigh this scale effect for pollution to fall in a growing economy. The composition effect refers to the economy’s mix of different industries and products, which differ in pollution intensities. Finally, the technique effect refers to the remaining change in pollution intensity. This will include contributions from changes in the input mix – e.g. substituting natural gas for coal; changes in productivity that result in less use, everything else constant, of polluting inputs per unit of output; and pollution control technologies that result in less pollutant being emitted per unit of input.
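As a minimal formal sketch (my notation, not taken from any particular paper), total emissions can be written as the product of the scale of the economy and a share-weighted average of sectoral emissions intensities:

```latex
% Illustrative proximate-factors decomposition:
% E = emissions, Y = output (scale effect), s_i = output share of
% sector i (composition effect), e_i = emissions per unit of output
% in sector i (technique effect)
\[
  E \;=\; Y \sum_i s_i e_i
\]
```

Taking growth rates, emissions growth equals output growth plus the growth of the intensity-weighted output mix, so emissions can only fall in a growing economy if the composition and technique terms are negative enough to outweigh the scale term.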

Over the course of economic development the mix of energy sources and economic outputs tends to evolve in predictable ways. Economies start out mostly agricultural, and the share of industry in economic activity first rises and then falls as the share of agriculture declines and the share of services increases. We might expect the impacts associated with agriculture, such as deforestation, to decline, and naively expect that the impacts associated with industry, such as pollution, would first rise and then fall. However, the absolute size of industry rarely declines; rather, it is improvements in industrial productivity, a shift to cleaner energy sources, such as natural gas and hydro-electricity, and pollution control that eventually reduce some industrial emissions.

Static theoretical economic models of deep determinants, which do not try to also model the economic growth process, can be summarized in terms of two parameters: the elasticity of substitution between dirty and clean inputs or between pollution control and pollution, which summarizes how difficult it is to cut pollution; and the elasticity of marginal utility, which summarizes how hard it is to increase consumer well-being with more consumption. It is usually assumed that these consumer preferences are translated into policy action. Pollution is then more likely to increase as the economy expands the harder it is to substitute other inputs for polluting ones and the easier it is to increase consumer well-being with more consumption. If these parameters are constant then pollution either rises or falls monotonically with economic growth. Only if they change over time will pollution first rise and then fall. The various theoretical models can be classified as ones where the EKC is driven by changes in the elasticity of substitution as the economy grows or ones where the EKC is primarily driven by changes in the elasticity of marginal utility.

Dynamic models that model the economic growth process alongside changes in pollution are harder to classify. The best known is the Green Solow Model developed by Brock and Taylor (2010), which explains changes in pollution as the result of the competing effects of economic growth and a constant rate of improvement in pollution control. Fast-growing middle-income countries, such as China, then have rising pollution, while slower-growing developed economies have falling pollution. An alternative model developed by Ordás Criado et al. (2011) also suggests that pollution rises faster in faster-growing economies but that there is also convergence, so that countries with higher levels of pollution are likely to reduce pollution faster than countries with low levels of pollution.

Recent Empirical Research and Conclusion 
Recent empirical research builds on these dynamic models, painting a subtler picture than early EKC studies did. We can distinguish between the impact of economic growth on the environment and the effect of the level of GDP per capita, irrespective of whether an economy is growing, on reducing environmental impacts. Economic growth usually increases environmental impacts, but the size of this effect varies across impacts, and the effect of growth often declines as countries get richer. However, richer countries often make more rapid progress in reducing environmental impacts. Finally, there is often convergence among countries, so that countries that have relatively high levels of impacts reduce them faster or increase them more slowly. These combined effects explain more of the variation in pollution emissions or concentrations than either the classic EKC model or models that assume that only convergence or only growth effects are important. Therefore, while being rich means a country might do more to clean up its environment, getting rich is likely to be environmentally damaging, and the simplistic policy prescriptions that some early proponents of the EKC put forward should be disregarded.

References
Brock, W. A. and Taylor, M. S. (2010). The green Solow model. Journal of Economic Growth 15, 127–153.

Grossman, G. M. and Krueger, A. B. (1991). Environmental impacts of a North American Free Trade Agreement. NBER Working Papers 3914.

Ordás Criado, C., Valente, S., and Stengos, T. (2011). Growth and pollution convergence: Theory and evidence. Journal of Environmental Economics and Management 62, 199-214.

Panayotou, T. (1993). Empirical tests and policy analysis of environmental degradation at different stages of economic development. Working Paper, Technology and Employment Programme, International Labour Office, Geneva, WP238.

Smith, S. J., van Ardenne, J., Klimont, Z., Andres, R. J., Volke, A., and Delgado Arias S. (2011). Anthropogenic sulfur dioxide emissions: 1850-2005. Atmospheric Chemistry and Physics 11, 1101-1116.

Stern, D. I. (2015). The environmental Kuznets curve after 25 years. CCEP Working Papers 1514.

Stern, D. I., Common, M. S., and Barbier, E. B. (1996). Economic growth and environmental degradation: the environmental Kuznets curve and sustainable development. World Development 24, 1151–1160.

Thursday, July 21, 2016

Dynamics of the Environmental Kuznets Curve

Just finished writing a survey of the environmental Kuznets curve (EKC) for the Oxford Research Encyclopedia of Environmental Economics. Though I updated all sections, of course, there is quite a bit of overlap with my previous reviews. But the review of the empirical evidence is mostly new, covering the literature and presenting original graphs in the spirit of IPCC reports :) I came up with this new graph of the EKC for sulfur emissions:


The graph plots the growth rate from 1971 to 2005 of per capita sulfur emissions in the sample used in the Anjum et al. (2014) paper against GDP per capita in 1971. There is a correlation of -0.32 between the growth rates and initial log GDP per capita. This shows that emissions did tend to decline or grow more slowly in richer countries, but the relationship is very weak - only 10% of the variation in growth rates is explained by initial GDP per capita. Emissions grew in many wealthier countries and fell in many poorer ones, though GDP per capita also fell in a few of the poorest of those. So, this does not provide strong support for the EKC being the best or only explanation of either the distribution of emissions across countries or the evolution of emissions within countries over time. On the other hand, we shouldn't be restricted to a single explanation of the data, and the EKC can be treated as one possible explanation, as in Anjum et al. (2014). In that paper, we find that when we consider other explanations such as convergence, the EKC effect is statistically significant but the turning point is out of sample - growth has less effect on emissions in richer countries but it still has a positive effect.
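For anyone who wants to reproduce this kind of calculation, a minimal sketch in Python follows; the file and column names are hypothetical placeholders, not the actual Anjum et al. (2014) dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical input: per capita sulfur emissions and GDP per capita by
# country and year (file and column names are placeholders)
df = pd.read_csv("sulfur_gdp.csv")  # columns: country, year, so2_pc, gdp_pc

wide = df.pivot(index="country", columns="year", values=["so2_pc", "gdp_pc"])

# Average annual growth rate of per capita emissions, 1971-2005
growth = np.log(wide[("so2_pc", 2005)] / wide[("so2_pc", 1971)]) / (2005 - 1971)
initial = np.log(wide[("gdp_pc", 1971)])  # initial log GDP per capita

r = growth.corr(initial)
print(r, r ** 2)  # the post reports r = -0.32, so r^2 is about 0.10
```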

The graph below compares the growth rates of sulfur emissions with the initial level of emissions intensity. The negative correlation is much stronger here: -0.67 for the log of emissions intensity. This relationship is one of the key motivations for pursuing a convergence approach to modelling emissions. Note that the tight cluster of mostly European countries that cut emissions the most appears to have had both high income and high emissions intensity at the beginning of the period.


Tuesday, July 12, 2016

Legitimate Uses for Impact Factors

I wrote a long comment on this blogpost by Ludo Waltman but it got eaten by their system, so I'm rewriting it in a more expanded form as a blogpost of my own. Waltman argues, I think, that those who reject the use of journal impact factors to evaluate individual papers, as Lariviere et al. do, must then hold that there are no legitimate uses for impact factors. I don't think this is true.

The impact factor was first used by Eugene Garfield to decide which additional journals to add to the Science Citation Index he created. Similarly, librarians can use impact factors to decide which journals to subscribe to or unsubscribe from, and publishers and editors can use such metrics to track the impact of their journals. These are all sensible uses of the impact factor that I think no-one would disagree with. Of course, we can argue about whether the mean number of citations that articles in a journal receive is the best metric, and I think that standard errors - as I suggested in my Journal of Economic Literature article - or the complete distribution, as suggested by Lariviere et al., should be provided alongside them.

I actually think that impact factors or similar metrics are useful to assess very recently published articles, as I show in my PLoS One paper, before they manage to accrue many citations. Also, impact factors seem to be a proxy for journal acceptance rates or selectivity, which we only have limited data on. But ruling these out as legitimate uses doesn't mean rejecting the use of such metrics entirely.

I disagree with the comment by David Colquhoun that no working scientists look at journal impact factors when assessing individual papers or scientists. Maybe this is the case in his corner of the research universe but it definitely is not the case in my corner. Most economists pay much, much more attention to where a paper was published than how many citations it has received. And researchers in the other fields I interact with also pay a lot of attention to journal reputations, though they usually also pay more attention to citations as well. Of course, I think that economists should pay much more attention to citations too.


Wednesday, June 15, 2016

p-Curve: Replicable vs. Non-Replicable Findings

Recently, Stephan Bruns published a paper with John Ioannidis in PLoS ONE critiquing the p-curve. I've blogged about the p-curve previously. Their argument is that the p-curve cannot distinguish "true effects" from "null effects" in the presence of omitted variables bias. Simonsohn et al., the originators of the p-curve, have responded on their blog, which I have added to the blogroll here. They say that, of course, the p-curve cannot distinguish between causal effects and other effects, but it can distinguish between "false positives", which are non-replicable effects, and "replicable effects", which include both "confounded effects" (correlation but not causation) and "causal effects". Bruns and Ioannidis have responded to this comment too.

In my previous blogpost on the p-curve, I showed that the Granger causality tests we meta-analysed in our Energy Journal paper in 2014 form a right-skewed p-curve. This would mean that there was a "true effect" according to the p-curve methodology. However, our meta-regression analysis where we regressed the test statistics on the square root of degrees of freedom in the underlying regressions showed no "genuine effect". Now I understand what is going on. The large number of highly significant results in the Granger causality meta-dataset is generated by "overfitting bias". This result is "replicable". If we fit VAR models to more such short time series we will again get large numbers of significant results. However, regression analysis shows that this result is bogus as the p-values are not negatively correlated with degrees of freedom. Therefore, the power trace meta-regression is a superior method to the p-curve. In addition, we can modify this regression model to account for omitted variables bias by adding dummy variables and interaction terms (as we do in our paper). This can help to identify a causal effect. Of course, if no researchers actually estimate the true causal model then this method too cannot identify the causal effect. But there are always limits to our ability to be sure of causality. Meta-regression can help rule out some cases of confounded effects.
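To illustrate the power trace idea with simulated data (this is a sketch, not our actual meta-dataset or specification):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated meta-dataset: test statistics and degrees of freedom from
# 100 hypothetical primary studies with a genuine underlying effect
dof = rng.integers(20, 200, size=100)
t_stat = 0.2 * np.sqrt(dof) + rng.normal(0, 1, size=100)

# Power trace meta-regression: under a genuine effect, test statistics
# should grow with the square root of degrees of freedom; for false
# positives or overfitting bias the slope should be zero
res = sm.OLS(t_stat, sm.add_constant(np.sqrt(dof))).fit()
print(res.params, res.pvalues)
```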

So, to sum up there are the following dichotomies:
  • Replicable vs. non-replicable - can use p-curve.
  • True or genuine effect (a correlation in the data-generating process) vs. false positive - meta-regression model is more likely to give correct inference.*
  • Causal vs. confounded effect - extended meta-regression model can rule out some confounded effects.
The bottom line is that you should use meta-regression analysis rather than the p-curve.

* In the case of unit root spurious regressions mentioned in Bruns and Ioannidis' response, things are a bit complicated. In the case of a bivariate spurious regression where there is a drift in the same direction in both variables, it is likely that Stanley's FAT-PET and similar methods will show that there is a true effect. Even though there is no relationship at all between the two variables, the nature of the data-generating process for each means that they will be correlated. Where there is no drift, or the direction of drift varies randomly, there should be equal numbers of positive and negative t-statistics in underlying studies and no relationship between the value of the t-statistic and degrees of freedom, though there is a relationship between the absolute value of the t-statistic and degrees of freedom. Here meta-regression does better than the p-curve. I'm not sure whether the meta-regression model in our Energy Journal paper might be fooled by Granger causality tests in levels of unrelated unit root variables. These would likely be spuriously significant, but the significance might not rise strongly with sample size.

Wednesday, June 1, 2016

Mid-Year Update


It's the first official day of winter today here in Australia, though it has felt wintry here in Canberra for about a week already. The 1st Semester finished last Friday and as I didn't teach I don't have any exams or papers to grade and the flow of admin stuff and meetings seems to have sharply declined. So, most of this week I can just dedicate to catching up and getting on with my research. It almost feels like I am on vacation :) Looking at my diary, the pace will begin to pick up again from next week.

I'm working on two main things this week. One is the Energy for Economic Growth Project that has now been funded by the UK Department for International Development. I mentioned our brainstorming meeting last July in Oxford in my 2015 Annual Report. I am the theme leader for Theme 1 in the first year of the project. In the middle of this month we have a virtual workshop for the theme to discuss the outlines for our proposed papers. I am coauthoring a survey paper with Paul Burke and Stephan Bruns on the macro-economic evidence as part of Theme 1. There are two other papers in the theme: one by Catherine Wolfram and Ted Miguel on the micro-economic evidence and one by Neil McCulloch on the binding constraints approach to the problem.

The other is my paper with Jack Pezzey on the Industrial Revolution, which we have presented at various conferences and seminars over the last couple of years. I'm ploughing through the math and tidying the presentation up. It's slow going but I think I can see the light at the end of the tunnel! This paper was supposed to be a key element in the ARC Discovery Projects grant that started in 2012.

In the meantime, work has started on our 2016 Discovery Projects grant. Zsuzsanna Csereklyei has now started work at Crawford as a research fellow funded by the grant. She has been scoping the potential sources of data for tracing the diffusion of energy efficient innovations and processing the first potential data source that we have identified. It is hard to find good data sources that are usable for our purpose.

There is a lot of change in the air at ANU as we have a new vice-chancellor on board since the beginning of the year and now a new director for the Crawford School has been appointed and will start later this year. We are also working out again how the various economics units at ANU relate to each other... I originally agreed to be director of the Crawford economic program for a year. That will certainly continue now to the end of this year. It's not clear whether I'll need to continue in the role longer than that.

Finally, here is a list of all papers published so far this year or now in press. I can't remember how many of them I mentioned on the blog, though I probably mentioned all on Twitter:

Bruns S. B. and D. I. Stern (in press) Research assessment using early citation information, Scientometrics. Working Paper Version | Blogpost

Stern D. I. and D. Zha (in press) Economic growth and particulate pollution concentrations in China, Environmental Economics and Policy Studies. Working Paper Version | Blogpost
 
Lu Y. and D. I. Stern (2016) Substitutability and the cost of climate mitigation policy, Environmental and Resource Economics. Working Paper Version | Blogpost

Sanchez L. F. and D. I. Stern (2016) Drivers of industrial and non-industrial greenhouse gas emissions, Ecological Economics 124, 17-24. Working Paper Version | Blogpost 1 | Blogpost 2

Costanza R., R. B. Howarth, I. Kubiszewski, S. Liu, C. Ma, G. Plumecocq, and D. I. Stern (2016) Influential publications in ecological economics revisited, Ecological Economics. Working Paper Version | Blogpost

Csereklyei Z., M. d. M. Rubio Varas, and D. I. Stern (2016) Energy and economic growth: The stylized facts, Energy Journal 37(2), 223-255. Working Paper Version | Blogpost

Halkos G. E., D. I. Stern, and N. G. Tzeremes (2016) Population, economic growth and regional environmental inefficiency: Evidence from U.S. states, Journal of Cleaner Production 112(5), 4288-4295. Blogpost


Monday, May 23, 2016

Should We Test for Cointegration Using the Johansen Procedure If We Want to Estimate a Single Equation Static Regression?

A student from Cuba asked me:

"I want to apply the DOLS methodology... I have read several books and research works about DOLS but none of them explain clearly how to test cointegration in this case.... I asked some professors about this issue and one of them told me that I should apply the Johansen cointegration test."

It's quite easy to find papers that do this - first test for cointegration using the Johansen procedure, report only the cointegration test statistics, and if they can be used to reject the null hypothesis of non-cointegration then use some other method such as Dynamic Ordinary Least Squares (DOLS) to estimate a static single equation regression model. These researchers aren't actually interested in the complete vector autoregression (VAR) system, which is OK. I've reviewed quite a lot of papers that use this approach.

If your model has more than two variables (one dependent variable and one explanatory variable) then this is a very bad idea. The cointegration test statistics from the Johansen procedure (if they reject the null) say nothing about the cointegration properties of your single equation regression model.

The following simple example shows why. Imagine we have three variables, X1, X2, and X3 with the following "data generation process":

\[
  X_{1t} = \beta_1 X_{2t} + \epsilon_{1t}, \qquad
  X_{2t} = X_{2,t-1} + \epsilon_{2t}, \qquad
  X_{3t} = X_{3,t-1} + \epsilon_{3t}
\]

where epsilon 1 is a stationary stochastic process and epsilons 2 and 3 are simply white noise. Variables X2 and X3 follow simple random walks. Variable X1 cointegrates with X2. But X3 is a random walk that has nothing to do with the other two variables. If you estimate a VAR with these variables and do the Johansen cointegration test, you should expect to find that there is one cointegrating vector. But the following regression:

\[
  X_{1t} = \gamma_1 + \gamma_2 X_{2t} + \gamma_3 X_{3t} + u_t
\]

will not cointegrate. It is a spurious regression because it includes X3, which is an unrelated random walk. We cannot rely on finding that the VAR "cointegrates" to assume that this regression also cointegrates. Only X1 and X2 cointegrate in this example. Of course, it is possible that X1, X2, and X3 are jointly cointegrated but, as this example shows, that doesn't have to be the case.

How can we avoid this? The cointegrating vector in this case is [1, -beta1]. We could test within the Johansen procedure whether we can restrict the cointegrating vector not to include a coefficient for X3. Unlike gamma3 in the static regression, if X3 does not belong in the cointegrating relationship then this coefficient is expected to be zero. We can and should also test the residuals of the static regression to check that they are stationary.
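A minimal simulation sketch of this example (assuming beta1 = 0.5 and 500 observations; the Johansen test is available in statsmodels):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T = 500
x2 = np.cumsum(rng.normal(size=T))    # X2: random walk
x3 = np.cumsum(rng.normal(size=T))    # X3: unrelated random walk
x1 = 0.5 * x2 + rng.normal(size=T)    # X1 cointegrates with X2 (beta1 = 0.5)

# Johansen test on the three-variable VAR: the trace statistics will
# typically indicate one cointegrating vector
jres = coint_johansen(np.column_stack([x1, x2, x3]), det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("5% critical values:", jres.cvt[:, 1])

# Residual-based check of the static single-equation regression of X1 on
# X2 and X3 (note: adfuller's p-values are not strictly valid for
# estimated residuals - Engle-Granger critical values should be used)
ols = sm.OLS(x1, sm.add_constant(np.column_stack([x2, x3]))).fit()
print("ADF statistic for the residuals:", adfuller(ols.resid)[0])
```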

Tuesday, May 17, 2016

Stochastic Trend Included in Top 100 Economics Blogs!


I'm honored that Stochastic Trend has made it into a list of the top 100 economics blogs, albeit at position 99. It's a good list of possible blogs to follow.

Friday, May 13, 2016

"Replicating" the Climate Contest

My previous post discussed Doug Keenan's climate contest. I wondered how accurate we could actually expect to be in such a situation. I assume that the temperature series is a simple random walk, possibly with a constant drift term. We want to see how accurately we can determine whether there is a drift term in the random walk or not.

So, again just using Excel, I created 1000 series of 134 observations each, distributed as Normal(mu, 0.11), where mu is the drift term. These represent the annual first differences of 135-observation random walks. For 250 series I set mu to 0.01, for 250 series I set it to -0.01, and for 500 to 0. I then computed the usual t-test for the significance of the sample mean for each series.

Only 127 t-tests were significant at the 5% level and 201 at the 10% level. Using a 10% significance level, statistical power - correct rejection of the incorrect null hypothesis of no drift - is 29%. Using a 5% significance level, power is 20%. There is no distortion of the actual "size" of the test - the number of incorrect rejections of the true null.
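Here is a sketch of the same simulation in Python (I used Excel, but the logic is identical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_series, n_obs = 1000, 134

# Drift terms: 250 series at +0.01, 250 at -0.01, 500 with no drift
mu = np.concatenate([np.full(250, 0.01), np.full(250, -0.01), np.zeros(500)])

# Annual first differences of each random walk: Normal(mu, sd = 0.11)
diffs = rng.normal(mu[:, None], 0.11, size=(n_series, n_obs))

# t-test of a zero mean difference (i.e. no drift) for each series
t, p = stats.ttest_1samp(diffs, 0.0, axis=1)
reject = p < 0.10

power = reject[:500].mean()    # drift series correctly rejected
size = reject[500:].mean()     # no-drift series falsely rejected
correct = reject[:500].sum() + (~reject[500:]).sum()
print(power, size, correct)    # roughly 0.29, 0.10, and 595
```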

So, combining this information, if you use this method and a 10% significance level you can expect about 595 correct classifications of whether a random walk has a drift or not, which is far below the 900 required to win the contest.

Of course, it seems that Keenan's data is a bit more complicated than this and may or may not have any relevance to the actual nature of climate data or the nature of the climate change problem.

You can download my data here. The first column is the drift term used and the first row indicates the years and the statistics columns.

More Mathiness in Climate Econometrics: Doug Keenan's Climate Change Contest

My colleague Robert Kaufmann got an e-mail from Doug Keenan inviting him to participate in his "climate change contest" without the usual $10 submission fee. I hadn't heard about this contest and went to the site to investigate. Keenan has produced 1000 time series of 135 observations each that are somehow derived from random numbers, and he then added a trend of plus or minus 1 per 100 observations to some of these. The series have been calibrated so that they could potentially reproduce in some way the observed global temperature time series from 1880 to the present without an added trend. The task of the contestant is to determine for each series whether it has an added trend or not. If any submission gets 90% or more of these right by 30 November this year, it will win $100,000.

Keenan's idea is that no-one can validly detect with 90% accuracy whether there is a trend in temperature or not. Therefore, the IPCC's claim that temperature has definitely increased over the last century and it is very likely that this is due to human activity must be wrong.

I downloaded the data. Looking at some of the series, it's pretty clear that they are some sort of random walk (stochastic trend). They are not simply series of random numbers (white noise) with a linear trend added. I haven't bothered to write a program to test this. Assuming that they are simple random walks, I tested in Excel whether the mean of the first differences was different from zero for each of the thousand series. Only 8 of the series have a mean first difference that is significantly different from zero at the 5% level using the standard calculation of the standard error of the mean, which assumes that the first differences are white noise. If they were normally distributed white noise and none of the original series had an added trend, then we would expect about 50 of the means of the first differences to be significantly different from zero, by the definition of statistical significance. So, something else seems to be going on here. I expect that statistical power to detect a non-zero drift term of 0.01 or -0.01, when the standard deviation of the first differences is 0.11, is in any case rather low. Perhaps we could use structural time series methods, but statistical power of 90% at a significance level of 10% is a lot to ask for in this situation. I created my own dataset to see how many series one could expect to correctly classify - statistical power using a simple data generating process and a simple test was 29% for a 10% significance level test. This means that we can only expect to correctly classify 595 of the 1000 series.

The real question to ask is whether Keenan's thought experiment makes sense. I would argue that it doesn't. His argument is that if temperature follows some kind of integrated process, then it is very hard to determine whether it has a drift component or will sooner or later just stochastically trend down again. Therefore, we can't know whether temperature has statistically significantly increased or not. But theory and climate models predict that global temperature should be stationary if radiative forcing is constant. If we detect a random walk or a close-to-random-walk signal in the temperature data, then something else is happening. Research can then try to determine whether it is likely to be due to anthropogenic factors or not. It is possible that we make a type 1 error - falsely rejecting the null hypothesis - but we can determine how likely that is. So, in my opinion, Keenan's contest is another case of mathiness in climate econometrics.

Saturday, April 16, 2016

The Time Effect in the Growth Rates Approach

After a very long process our original paper on the growth rates approach was rejected by JEEM about a month ago. I think the referees struggled to see what it added to more conventional approaches. A new referee in the 2nd round hadn't even read the important Vollebergh et al. paper, so it's not surprising they missed how we were trying to build on that paper. That, discussion with my coauthor, Paul Burke, and preparing a guest lecture for CRWF8000 on the environmental Kuznets curve, got me thinking about a clearer way to present what we are trying to do. It really is true that teaching can improve your research!

Vollebergh et al. divide the total variation in emissions in environmental Kuznets curve models into time and income effects:

\[
  E_{it} = T_{it} + F(G_{it})
\]

where G is GDP per capita, E is emissions per capita, and i indexes countries and t time. They point out that the standard fixed effects panel estimation of the EKC imposes very strong restrictions on the first term:

\[
  T_{it} = \alpha_i + \mu_t
\]

Each country has a constant "country effect" that doesn't vary over time while all countries share a common "time effect" that varies over time. They think that the latter is unreasonable. Their solution is to find pairs of similar countries and assume that just those two countries each share a common time effect.

In my paper on "Between estimates of the emissions-income elasticity" I solved this problem by allowing the time effect to take any arbitrary path in any country by simply not modeling the time effect at all and extracting it as a residual. The downside of the between estimator is that it is more vulnerable to omitted variables bias than other estimators.

We introduced the growth rates approach to deal with several issues in EKC models, one of which is this time effects problem. The growth rates approach partitions the variation in the growth rate of emissions like this:

\[
  \hat{E}_i = \beta \hat{G}_i + \gamma' X_i + \varepsilon_i
\]

where "hats" indicate proportional growth rates, and X is a vector of exogenous variables including the constant. The time effect - here γ'X_i - is the expected emissions growth rate in each country when economic growth is zero. This is a clear definition. The formulation allows us to model the time effect in each individual country i as a function of a set of country characteristics including the country's emissions intensity, legal origin, level of income, fossil fuel endowment etc. I don't think this is that clear in the papers I've written so far. We focused more on testing alternative emissions growth models and, in particular, comparing the EKC to the Green Solow and other convergence models.

So what do these time effects look like? Here are the time effects for the most general model for the CDIAC CO2 data plotted against GDP per capita in 1971:


Yes, I also computed standard errors for these, but it's a lot of hassle to do a chart with confidence intervals and a continuous variable on the X-axis in Excel.... There is a slight tendency for the time effect to decline with increased income but there is a big variation across countries at the same income level. And here are the results for SO2:

These are fairly similar but more negative, as would be expected. Clearly the time effects story is not a simple one, and it is one that has largely been ignored in the EKC literature.

Thursday, April 7, 2016

Should We Stop Investing in Carbon-Free Energy So That We Will Be Able to Afford CCS?

Myles Allen has an interesting new paper in Nature Climate Change: "Drivers of Peak Warming in a Consumption-Maximizing World", which has attracted media attention. The article in The Australian is framed as: "If we spend money now on renewable energy we won't be able to afford carbon sequestration later". This didn't sound right to me, as I'm an "all of the above" kind of guy when it comes to climate policy, and if there is less carbon in the air that needs scrubbing in the future, it should cost less to scrub it.

I haven't done a thorough read of the mathematics in Allen's paper and this isn't going to be a proper critique of his article. I just wanted to understand where the journalist got this idea from.

Allen uses a very simple cost-benefit framework in which there is a "backstop technology" - a technology that can remove carbon dioxide from the atmosphere at constant cost. The key assumption, I think, is that the "social cost of carbon" depends linearly on the level of income per capita. The following graph illustrates the main result:
If economic growth is rapid, then the social cost of carbon will rise much faster than if economic growth is slow. Therefore, it will pay off earlier to employ the backstop technology. This means that, paradoxically, peak warming will be less than under slower economic growth.
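A stylized way to see the mechanism (my own simplification, not Allen's actual model): suppose the social cost of carbon is proportional to income per capita, income grows exponentially, and the backstop removes carbon at constant cost. Then the backstop becomes worthwhile when the social cost of carbon reaches the backstop cost:

```latex
% Illustrative backstop deployment date: S(t) = k y_0 e^{g t} is the
% social cost of carbon, c is the constant backstop cost
% (k, c, y_0, g are assumed parameters, not taken from the paper)
\[
  k y_0 e^{g t^*} = c
  \quad\Longrightarrow\quad
  t^* = \frac{1}{g}\,\ln\frac{c}{k y_0}
\]
```

Faster growth g brings the deployment date t* forward, which is the sense in which rapid growth can lower peak warming in this framework.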

It is a long leap from this to arguing that we shouldn't be investing in renewable energy. Allen's model allows for an efficient level of abatement until the marginal cost of abatement hits the backstop cost. Also the model has no feedback from abatement cost to the rate of economic growth, which is exogenous. Almost all economic research, including my own, finds that the growth costs of climate mitigation are very small, at least until extreme levels of abatement are reached. So, the model is an interesting thought exercise about CCS but doesn't have as strong policy implications as the media suggests.

Friday, March 11, 2016

Economic Growth and Global Particulate Pollution Concentrations

I have just posted another working paper in the Trends and Drivers series, this time coauthored with recent Crawford masters student Jeremy van Dijk.

Particulate pollution, especially PM2.5, is thought to be the form of pollution with the most serious human health impacts. It is estimated that PM2.5 exposure causes 3.1 million deaths a year globally, and any level above zero is deemed unsafe - there is no threshold below which negative health effects do not occur. Black carbon is an important fraction of PM2.5 pollution that may contribute significantly to anthropogenic radiative forcing and, therefore, there may be significant co-benefits to reducing its concentration. In our paper, we use recently developed population-weighted estimates of national average concentrations of PM2.5 pollution that are available from the World Bank Development Indicators. These combine satellite and ground-based observations.

Though the environmental Kuznets curve (EKC) was originally developed to model the ambient concentrations of pollutants, most subsequent applications focused on pollution emissions. Yet, previous research suggests that it is more likely that economic growth could eventually reduce the concentrations of local pollutants than emissions. We examine the role of income, convergence, and time related factors in explaining changes in PM2.5 pollution in a global panel of 158 countries between 1990 and 2010. We find that economic growth has positive but relatively small effects, time effects are also small but larger in wealthier and formerly centrally planned economies, and, for our main dataset, convergence effects are small and not statistically significant.

Crucially, when we control for other relevant variables, even for this particulate pollution concentration data there is no environmental Kuznets curve, if what we mean by that is that environmental impacts decline with increasing income once a given in-sample level of income - the turning point - is passed.

The following graph shows the relationship between the average growth rates over 20 years of particulate pollution concentrations and per capita GDP:

The two big circles are of course China and India, where both GDP and particulate pollution grew strongly. We can see that there is a positive relationship between these two growth rates, especially when we focus on the larger countries. The main econometric estimate in the paper shows that a 1% increase in the rate of economic growth is associated with a 0.2% increase in the growth rate of particulate pollution. This is much weaker than the effects we found for emissions of carbon dioxide and sulfur dioxide. The estimated income turning point is $66k with a large standard error. On the other hand, when we estimate a model without the control variables, we obtain a turning point of only $3.3k with a standard error of only $1.2k. To check the robustness of this result, we estimate models with other datasets and time periods. These yield quite similar results.

We conclude that growth has smaller effects on the concentrations of particulate pollution than it does on emissions of carbon or sulfur. However, the EKC model does not appear to apply here either, casting further doubt on its general usefulness.


Thursday, March 3, 2016

My Submission to Stern Review of the REF

The Stern Review of the REF (Research Excellence Framework) is the latest British government review of research assessment in the UK, following on from the Metric Tide assessment. I have just made a submission to the enquiry. My main comment in response to the first question (1. What changes to existing processes could more efficiently or more accurately assess the outputs, impacts and contexts of research in order to allocate QR? Should the definition of impact be broadened or refined? Is there scope for more or different use of metrics in any areas?) follows:

"I think that there is substantial scope for using bibliometrics in the conduct of the REF. In Australia the Australian Research Council uses metrics to assess natural science disciplines and psychology. Research that I have conducted with my coauthor, Stephan Bruns, shows that this approach could be extended to economics and probably political science and perhaps other social sciences. We have written a working paper presenting our results that is currently under review by Scientometrics.

The paper shows that university rankings in economics based on long-run citation counts can be easily predicted using early citations. The rank correlation between universities' cumulative citations received over ten years for economics articles published in 2003 and 2004 and citations received in 2003 to 2004 alone is 0.91 in the UK and 0.82 in Australia. We compare these citation-based university rankings with the rankings of the 2008 Research Assessment Exercise in the UK and the 2010 Excellence in Research assessment in Australia. Rank correlations are quite strong but there are differences between rankings based on this type of peer review and rankings based on citation counts. However, if assessors are willing to consider citation analysis to assess some disciplines as is the case for the natural sciences and psychology in Australia there seems no reason to not include economics in this set.

Previously, I published a paper in PLoS One showing that the predictability of citations at the article level is similar in economics and political science. This supports the view that metrics-based research assessment can cover both economics and political science in addition to the natural sciences and psychology.

I believe the REF review should seriously consider these findings in producing recommendations for a lighter touch future REF."

I also made briefer responses to some of their other questions. In particular:

5. How might the REF be further refined or used by Government to incentivise constructive and creative behaviours such as promoting interdisciplinary research, collaboration between universities, and/or collaboration between universities and other public or private sector bodies?


"A major issue with the REF and the ERA in Australia is the pigeon-holing of research into disciplines, which might not match well the nature of the research conducted. This clearly will discourage publication in interdisciplinary venues that may not be as respected by mainstream reviewers. The situation is less acute in Australia where a single output can be allocated across different assessment disciplines, but I still think that assessment by pure disciplinary panels discourages interdisciplinary work in Australia. So, I imagine this is exacerbated in the UK.

7. In your view how does the REF process influence the development of academic disciplines or impact upon other areas of scholarly activity relative to other factors? What changes would create or sustain positive influences in the future?

Johnston et al. (2014) show that the total number of economics students has increased in the UK more rapidly than the total number of all students, but the number of departments offering economics degrees has declined, particularly among post-1992 universities. Also, the number of universities submitting to the REF under economics has declined sharply, with only 3 post-1992 universities submitting in the latest round. This suggests that the REF has driven a concentration of economics research in the more elite universities in the UK.

Johnston, J., Reeves, A. and Talbot, S. (2014). ‘Has economics become an elite subject for elite UK universities?’ Oxford Review of Education, vol. 40(5), pp. 590-609.

Thursday, February 25, 2016

Economic Growth and Particulate Pollution Concentrations in China

A new working paper, coauthored with Donglan Zha, who is visiting the Crawford School, which will be published in a special issue of Environmental Economics and Policy Studies. Our paper tries to explain recent changes in PM 2.5 and PM 10 particulate pollution in 50 Chinese cities using new measures of ambient air quality that the Chinese government has published only since the beginning of 2013. These data are not comparable to earlier official statistics and, we believe, are more reliable. We use our recently developed model that relates the rate of change of pollution to the growth of the economy and other factors, as well as estimating the traditional environmental Kuznets curve (EKC) model.

Though the environmental Kuznets curve (EKC) was originally developed to model the ambient concentrations of pollutants, most subsequent applications have focused on pollution emissions. Yet, it would seem more likely that economic growth could eventually reduce the concentrations of local pollutants than emissions. This is the first application of our new model to such concentration data.

The data show that there isn't much correlation between the growth rate of GDP between 2013 and 2014 and the growth rate of PM 2.5 pollution over the same period:



What is obvious is that pollution fell sharply from 2013 to 2014, as almost all the data points have negative pollution growth. We have to be really cautious in interpreting a two year sample. Subsequent events suggest that this trend did not continue in 2015.

In fact, the simple linear relationship between these variables is negative, though statistically insignificant. The traditional EKC model and its growth rate equivalent both have a U-shaped curve - the effect of growth is negative at lower income per capita levels and positive at high ones. But the (imprecisely estimated, so not statistically significant) turning point for PM 2.5 is way out of sample at more than RMB 400k.* So, growth has a negative effect on pollution in the relevant range. When we add the initial levels of income per capita and pollution concentrations to the growth rates regression equation, the turning point is in-sample and statistically significant. The initial level of pollution has a negative and highly statistically significant effect. So, there is "beta convergence" - cities with initially high pollution concentrations reduced their level of pollution faster than cleaner cities did.

So what does all this mean? These results are very different from those we found for emissions of CO2, total GHGs, and sulfur dioxide. In all those cases, we found that growth had a positive and quite large effect on emissions. In some cases, the effect was close to 1:1. Of course, we should be cautious about interpreting this small Chinese dataset. But our soon-to-be-released research on global PM 2.5 concentrations will again show that the effect of growth is smaller for these data than it is for the key pollution emissions data. This confirms early research that suggested that pollution concentrations turn down before emissions do, though it doesn't seem to support the traditional EKC interpretation of the data.

BTW, it is really important in this research to use the actual population of cities and not just the registered population (with hukou). If you divide the local GDP by the registered population you can get very inflated estimates of GDP per capita for cities like Shenzhen.

* The turning point is in-sample for PM 10.

Tuesday, February 23, 2016

Mathiness in Climate Change Econometrics

Terence Mills has a "white paper" on the Global Warming Policy Foundation Website. It predicts little future increase in temperature. Not surprisingly, The Australian has published a totally positive article about it. I commented in the comments there:

"Mills assumes that past fluctuations in temperature are purely random and of unknown causes and ignores greenhouse gases, or the sun, or volcanic eruptions, or any other specific factor that might drive climate change. He then fits simple statistical models based on this assumption to the data. Not surprisingly, if you assume that there isn't any specific factor driving the climate, your best forecast for the future is for not much change because you don't know what random shocks will show up to change the climate in the future. A more sensible approach is to test which of the various proposed drivers might actually have an effect and how large that effect has been. There are a lot of refereed academic papers that do just that including some I published myself. It's pretty easy to show that greenhouse gases have an effect on the climate, it's quite big (but fairly uncertain how big), and if emissions continue on a business as usual path there will be a lot of increase in temperature."

More technically: Mills fits univariate ARIMA models to the HADCRUT series, the RSS global lower troposphere series (only available since 1980), and the Central England Temperature series. These include models with no deterministic component (an ARIMA(0,1,3) model of HADCRUT) and a model with a deterministic trend with breakpoints chosen based on "eyeballing" the temperature graph. None of these models predicts any future warming, because there is no trend in the trendless model and because the "hiatus" means there is no recent trend in the segmented trend model. Of course, a model with just a single linear deterministic trend fitted to the HADCRUT data would forecast a lot of warming in the 21st Century, though with a very wide forecast error envelope. But that model isn't estimated, for some reason...
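To see why such a model cannot predict warming, here is a minimal sketch using statsmodels on a synthetic stand-in for a temperature anomaly series (I have not tried to reproduce Mills' actual estimates):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for an annual temperature anomaly record: a
# driftless random walk plus noise (HADCRUT itself would be loaded
# from file in practice)
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.1, 165)) + rng.normal(0, 0.05, 165)

# ARIMA(0,1,3) with no deterministic component: the differenced series
# has mean zero, so the model contains no trend at all
res = ARIMA(y, order=(0, 1, 3), trend="n").fit()

# The forecast settles at a constant level after three steps:
# no predicted warming, by construction
print(res.forecast(steps=20))
```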

This is a prime case of "mathiness" I think - lots of math that will look sophisticated to many people used to build a model on silly assumptions with equally silly conclusions.

In other news, my paper coauthored with Luis Sanchez on drivers of greenhouse gas emissions is now published in Ecological Economics. It is open access till 12th April.

P.S. This post was cited in the Daily Mail.

Saturday, February 13, 2016

Family Portrait

This will slow things down for a while :) Noah was born two days ago. He is a very large baby - 4.71kg and 55cm long. If he were a t-statistic, he'd have 2 or 3 stars. He'll have to make do with one, his surname.*

* Stern = Star in German.

Saturday, January 23, 2016

PhD Applications Again

Three and a half years ago, I wrote a post about PhD applications. Since then, I have received a huge number of enquiries from prospective students. I now have two PhD students (Alrick Campbell and Panittra Ninpanit) and am on the committee/panel of two others (Anil Kavuri and Rohan Best). There are a couple of other good students who have applied but haven't come here because either their English test scores didn't meet our requirement or they couldn't get funding. We don't offer any new internal Crawford School scholarships at this point and I don't have any grant funds for PhD students. So, it is quite unlike applying for a PhD in the US, where most students are funded by university-sourced money in social sciences like economics. Here it is most likely that you will be funded by the Australian government one way or another, by your own government, or by an intergovernmental organization like the Asian Development Bank.*

As I mentioned in my previous post, also, unlike North America, applying to do a PhD in the social sciences and humanities here in Australia requires lining up a supervisor (=advisor) up front. Therefore, it is more like applying to do a PhD in the natural sciences and engineering in the US. Our formal process here also requires that potential students submit a research proposal, despite the fact that at ANU there is up to a year of coursework required in the economics program, which makes it seem more like a US PhD than most Australian PhD programs where you start doing research more or less straight away.

This is where I have been a bit frustrated by potential students submitting proposals that aren't at all related to the kind of research I do (despite this blog and my research webpage), proposals that are not very good, or being surprised that they need to submit a proposal because that isn't required to apply for a PhD in the US. Some of the latter seem like potentially good students. When I ask them for a proposal, the usual reaction is to write something rather quickly. I can't blame these students - when many programs around the world don't require a proposal, why should they invest a lot in writing one? One of the main reasons I did my PhD in the US rather than Britain was that I didn't know what to write a proposal about at the time. Another downside of a student submitting an upfront proposal is that they might then feel somewhat locked into that subject, even though the effort of writing the proposal is a sunk cost. Alrick and Panittra were exceptions, having pretty good proposals up front that were related to my research, which is why I agreed to supervise them.**

So, after receiving another off-the-wall topic from a prospective student this morning, I'm thinking of taking a radically new approach. Maybe I should require students to submit a completed research paper (like we did when I was at RPI) instead of a proposal for future research, and then discuss this paper with the student to see how they think etc. I would require students to work on one of the broad areas I work on ("economic growth", "meta-analysis" etc.) and develop an actual proposal with them after they arrive here.

Or maybe the process is working exactly as it should? After all, I have had a few good applications and probably as many students as I should have. Any thoughts?

* Australian students can get an APA. Foreign students' main option is the Australia Awards program. There are very few scholarships for students who are not from developing countries that Australia is interested in giving aid to. According to the government's Innovation Package, this will change dramatically.

** Students only need to line up the primary supervisor ahead of time. The other panel members usually join after the student has finished their coursework.

Friday, January 22, 2016

Follow-up on Anti-Vax

So, apparently my impression that anti-vax was a right wing cause (anti-government mandates/one-world government or whatever) was unusual and most people think it is a left-wing cause (anti big-pharma/pro natural remedies etc). Turns out that neither is the case and that there are people on both the left and the right (at least in the US) who are anti-vax. Actually, it seems that there is a slight maximum of concern about vaccines near the center of the political spectrum, as shown on this graph from the linked article:



Thursday, January 21, 2016

Drivers of Industrial and Non-Industrial Greenhouse Gas Emissions to be Published in Ecological Economics

My paper with my former master's student Luis Sanchez has been accepted by Ecological Economics. This is one of the papers in the series using growth rates estimators of the income-emissions relationship that came out of my work on the IPCC 5th Assessment Report. This is the second paper I have published based on work done in our course: IDEC8011 Master's Research Essay. The previous one was with Jack Gregory who is now a PhD student at University of California, Davis. BTW, we previously submitted this paper to Nature Climate Change, Global Environmental Change, and Climatic Change in that order, with the first submission on 5 January 2015.

Wednesday, January 20, 2016

Long-run Estimates of Interfuel and Interfactor Elasticities

A new working paper coauthored with Chunbo Ma on estimating long-run elasticities. This is one of the major parts of our ARC DP12 project, the "Present" part of the title: "Energy Transitions: Past, Present, and Future". We just resubmitted the paper to a journal and I thought that was a good time to post a working paper with the benefit of some referee comments.

Both my meta-analysis of interfuel elasticities of substitution and Koetse et al.'s meta-analysis of the capital-energy elasticities of substitution show that elasticity estimates are dependent on the type of data – time series, panel, or cross-section – and the estimators used. Estimates that use time series data tend to be smallest in absolute value and those using cross-section data tend to be largest.

We review the econometric research that discusses how best to get long-run elasticity estimates from panel data. One suggestion is to use the between estimator, which is equivalent to an OLS regression on the average values over time for each country, firm etc. in the panel. Alternatively, Chirinko et al. (2011) argued in favor of estimating long-run elasticities of substitution using a long-run difference estimator, which is very similar to the "growth rates estimator" we have used recently.
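To make the two approaches concrete, here is a minimal sketch in Python. The data and variable names are purely illustrative, and our actual estimation uses a full cost function system rather than a bivariate regression:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
N, T = 30, 11                                # e.g. 30 provinces, 11 years
df = pd.DataFrame({"province": np.repeat(np.arange(N), T),
                   "year": np.tile(np.arange(T), N)})
effect = rng.normal(size=N)                  # province fixed effects
df["x"] = effect[df["province"]] + rng.normal(size=N * T)   # e.g. a log price
df["y"] = 2 * effect[df["province"]] - 0.5 * df["x"] + rng.normal(size=N * T)

def slope(x, y):
    """Slope from a bivariate OLS regression with an intercept."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, np.asarray(y), rcond=None)[0][1]

# Between estimator: OLS on the average values over time for each province.
means = df.groupby("province")[["x", "y"]].mean()
b_between = slope(means["x"], means["y"])

# Long-run difference estimator: OLS on the change between the first and
# last year for each province (with variables in logs, this is close in
# spirit to the growth rates estimator).
first = df[df["year"] == 0].set_index("province")
last = df[df["year"] == T - 1].set_index("province")
b_diff = slope(last["x"] - first["x"], last["y"] - first["y"])

# In this toy setup the province effect is correlated with x, so the
# between estimate is biased away from the true -0.5 while differencing
# sweeps the effect out - the omitted variables problem discussed below.
print(f"between: {b_between:.3f}, long difference: {b_diff:.3f}")
```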

We apply both these estimators to a Chinese dataset we have put together from both public and non-public data sources. We have data for 30 Chinese provinces over 11 years from 2000 to 2010. We estimate models for choice of fuels - interfuel substitution - and for the choice between capital, labor, and energy - interfactor substitution.

A big issue with the between estimator, which has made it relatively unpopular, is that it is particularly vulnerable to omitted variables bias. The big omitted variable in most production analysis is the state of technology. There is a lot of variation across provinces in productivity and prices and it seems that the two are correlated:


The first graph shows the price index for aggregate coal input that we constructed. Generally, coal is more expensive in Eastern China. The second graph shows an index of provincial total factor productivity relative to Shanghai, the most productive province. Coastal provinces are the most productive - their distance to the technological frontier is small. To address this potential omitted variables bias, we add province-level inefficiency and national technological change terms to the cost function equation. Chirinko et al. (2011) instead used instrumental variables estimation, but we found that their proposed instruments in many cases have very low or negative correlations with the targeted variables. We do use instrumental variables estimation too, but because of the endogeneity inherent in our constructed coal and energy price indices; we use Pindyck's (1979) approach to this. We also impose concavity on the cost function where necessary.
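For readers unfamiliar with the mechanics of that last step, curvature is checked (and, if violated, imposed) on the Hessian of the cost function in input prices. Here is a minimal sketch of the standard check for a translog cost function - the coefficient matrix and cost shares below are made up for illustration, and this is not our estimation code:

```python
import numpy as np

# Illustrative second-order price coefficients of a translog cost
# function (symmetric, with rows summing to zero under linear
# homogeneity) and fitted cost shares for three inputs. Made-up values.
B = np.array([[ 0.10, -0.06, -0.04],
              [-0.06,  0.12, -0.06],
              [-0.04, -0.06,  0.10]])
s = np.array([0.40, 0.45, 0.15])

# The cost function is concave in prices at a data point if and only if
# H = B + s s' - diag(s) is negative semidefinite at that point.
H = B + np.outer(s, s) - np.diag(s)
eigenvalues = np.linalg.eigvalsh(H)
print("eigenvalues:", eigenvalues)
print("concave here:", bool(np.all(eigenvalues <= 1e-10)))
```

When the check fails, concavity can be imposed by reparameterizing B, for example along the lines of Diewert and Wales (1987), and re-estimating.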

The results show that demand for coal and electricity in China is very inelastic, while demand for diesel and gasoline is elastic. With the exception of gasoline and diesel, there are limited substitution possibilities among the fuels. Substitution possibilities are greater between energy and labor than between energy and capital. These results seem very intuitive to us. However, they are quite different from those of some previous studies for China, in particular the estimates in the paper by Hengyun Ma et al. (2008). Their estimates of the elasticities of substitution are negatively correlated with ours. Their study uses similar but older data, though we have improved the calculation of some variables, and they use fixed effects estimation and don't impose concavity. These might be some of the reasons why our results differ. We also provide traditional fixed effects estimates with concavity imposed; these are mostly close to zero, which suggests that the between and difference estimators are picking up longer-run behavior.

Which of these two estimators should we use in future? We can't give a definitive answer, but the difference estimator does seem to have some advantages. In particular, it allows cross-equation restrictions on the bias of technical change, which should result in better estimates of those parameters. So, that would be my first preference, though I am reluctant to simply ignore the between variation in the data.

Tuesday, January 19, 2016

Influential Publications in Ecological Economics Revisited to be Published in... Ecological Economics

Our paper on the changes over the last decade in patterns of influence in ecological economics has been accepted for publication. Not very surprisingly, the journal where it will be published is Ecological Economics. Elsevier have already sent me an e-mail saying that I should expect the proofs on 21 January! That is fast.

Thursday, January 14, 2016

People's Ability to Delude Themselves is Amazing

The Australian reported a couple of days ago that the University of Wollongong gave a PhD for a thesis by an anti-vaccination activist, Judy Wilyman. It's in the comments on the article that the delusion is amazing. Many people comment that it is totally outrageous for the University of Wollongong to have awarded this PhD, because anti-vax is obviously total nonsense and a conspiracy theory. At the same time, some of them complain that climate scepticism doesn't get sufficient respect from academia. Of course, climate scepticism is just as much nonsense and a conspiracy theory as anti-vax.* But these people believe that one of these theories is totally correct and the other totally bogus. I'm a bit surprised, as I thought that both were right-wing anti-government theories. Apparently not in Australia?

This is, of course, exactly the same as people who are convinced that their religion is true and all other religions are false.

* I am open-minded about both anti-vaccination and climate change sceptical hypotheses. Also UFOs, yetis...

Wednesday, January 13, 2016

Between and Within

This will be obvious to anyone with a good understanding of econometrics, but it is quite stunning to think that all the information you see in the first set of graphs in my previous post on the EKC is thrown away by fixed effects panel estimators. That is because those graphs plot the mean value over time in each country of the dependent variable against the mean value over time in each country of the explanatory variable. Fixed effects estimation first deducts these means from the data and then regresses the one set of residuals on the other using ordinary least squares. This is why fixed effects is also called the "within estimator": the "between (country) variation" you see in these graphs is ignored. Of course, you can estimate a model that exploits just this between variation, using the between estimator.*

The reason the latter estimator is rarely used is that researchers are worried about omitted variables bias. Any omitted variables are subsumed in the error term, while the fixed effects estimator eliminates their country-specific means and so reduces the potential bias. Hauk and Wacziarg (2009), however, found that when there is also measurement error in the explanatory variables (which can likewise bias the regression estimates) the between estimator performs well compared to the alternatives. Fixed effects estimation tends to inflate the effect of the measurement error.

Differenced estimators sweep out any country fixed effects in the differencing operation.** So they also remove all the between variation in the data. However, they do allow us to include country characteristics that are constant over time to explain differences in growth rates across countries, which standard fixed effects does not allow.***

* The linked paper was eventually published in Ecological Economics.
** For a two period panel, fixed effects and first differences produce identical results.
*** There are variations of fixed effects that can allow this.
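To make the decomposition concrete, here is a minimal sketch in Python (purely illustrative; the data and names are made up) that computes the within, between, and first-difference slopes by hand on a toy two-period panel, confirming the point in the second footnote:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
N, T = 100, 2                                # a two-period panel
df = pd.DataFrame({"country": np.repeat(np.arange(N), T),
                   "year": np.tile(np.arange(T), N)})
effect = rng.normal(size=N)                  # country fixed effects
df["x"] = effect[df["country"]] + rng.normal(size=N * T)
df["y"] = effect[df["country"]] + 1.0 * df["x"] + rng.normal(size=N * T)

def slope(x, y, intercept=True):
    """OLS slope, optionally without an intercept."""
    X = np.column_stack([np.ones(len(x)), x]) if intercept else np.asarray(x)[:, None]
    return np.linalg.lstsq(X, np.asarray(y), rcond=None)[0][-1]

# Within (fixed effects): deduct the country means, then OLS on residuals.
demeaned = df[["x", "y"]] - df.groupby("country")[["x", "y"]].transform("mean")
b_within = slope(demeaned["x"], demeaned["y"], intercept=False)

# Between: OLS on the country means themselves.
means = df.groupby("country")[["x", "y"]].mean()
b_between = slope(means["x"], means["y"])

# First differences: OLS on the changes between the two years.
diffs = df.groupby("country")[["x", "y"]].diff().dropna()
b_fd = slope(diffs["x"], diffs["y"], intercept=False)

# With T = 2 the within and first-difference slopes coincide exactly
# (here with no period effects; with time dummies the equivalence holds
# for first differences with an intercept).
print(f"within: {b_within:.6f}, between: {b_between:.6f}, FD: {b_fd:.6f}")
```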

Monday, January 4, 2016

The Environmental Kuznets Curve after 25 Years

This year marks the 25th anniversary of the working paper "Environmental Impacts of a North American Free Trade Agreement" by Gene Grossman and Alan Krueger, which launched the environmental Kuznets curve industry. I have a new working paper out whose title capitalizes on this milestone. It is my contribution to the special issue of the Journal of Bioeconomics based on the workshop at Griffith University that I attended in October, and is a mix of a survey of the literature and a summary of my recent research on the topic with various coauthors.

Despite the pretty pictures of the EKC in many economics textbooks, there isn't a lot of evidence for an inverted U-shaped curve when you look at a cross-section of global data:


Carbon emissions from energy use and cement production and sulfur dioxide emissions both seem to be monotonically increasing in income per capita. Greenhouse gas emissions from agriculture and land clearing (AFOLU, lower left) and particulate concentrations (lower right) just seem to be amorphous clouds. In fact, we do find an EKC with an in-sample income turning point for PM 2.5 pollution, but only when we look at changes over time in individual countries. Interestingly, Grossman and Krueger originally applied the EKC to ambient concentrations of pollutants, and it is there that it seems to work best.

The paper promotes our new "growth rates" approach to modeling emissions. Here are graphs of the growth rates of pollution and income per capita that exactly match the traditional EKC graphs above:



There is a general tendency for declining economies to have declining pollution and vice versa, though this effect is strongest for energy-related carbon emissions. The graphs for sulfur and AFOLU GHG emissions are both shifted down by comparison: there is a general tendency, unrelated to growth, for these pollutants to decline over time - a negative "time effect". Growth still has a positive effect on all three, though. PM 2.5 (lower right) is a different story: here economic growth eventually brings down pollution, and we don't find a significant negative time effect.
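For those curious about the mechanics, the growth rates approach boils down to regressing each country's long-run average growth rate of emissions on its long-run average growth rate of income per capita; the intercept is the time effect and the slope the growth effect. A minimal sketch, with made-up variable names and numbers (the actual estimation adds further covariates):

```python
import numpy as np
import pandas as pd

def growth_rates_regression(panel: pd.DataFrame):
    """Regress average log growth of emissions on average log growth of GDP.

    Expects a long-format panel with columns country, year,
    emissions_pc, and gdp_pc (illustrative names)."""
    def avg_growth(g):
        g = g.sort_values("year")
        years = g["year"].iloc[-1] - g["year"].iloc[0]
        return pd.Series({
            "g_emissions": np.log(g["emissions_pc"].iloc[-1] / g["emissions_pc"].iloc[0]) / years,
            "g_gdp": np.log(g["gdp_pc"].iloc[-1] / g["gdp_pc"].iloc[0]) / years,
        })
    rates = panel.groupby("country").apply(avg_growth)
    X = np.column_stack([np.ones(len(rates)), rates["g_gdp"]])
    alpha, beta = np.linalg.lstsq(X, np.asarray(rates["g_emissions"]), rcond=None)[0]
    return alpha, beta  # alpha: time effect, beta: growth effect

# Tiny made-up example: three countries observed in 2000 and 2010.
demo = pd.DataFrame({
    "country": ["A", "A", "B", "B", "C", "C"],
    "year": [2000, 2010] * 3,
    "gdp_pc": [10.0, 15.0, 20.0, 22.0, 5.0, 9.0],
    "emissions_pc": [1.0, 1.4, 2.0, 1.9, 0.5, 0.8],
})
print(growth_rates_regression(demo))
```

A negative alpha combined with a positive beta would correspond to the sulfur and AFOLU story above.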

I first got interested in the EKC in November 1993 when I was sitting in Mick Common's office at the University of York where I'd recently started as a post-doc (though I was still working on my PhD). He literally drew the EKC on the back of an envelope and asked whether more growth would really improve the environment even if the EKC was true. I did the basic analysis really quickly but then it took us another couple of years to get the paper published in World Development.

Sunday, December 27, 2015

Annual Review 2015

I've been doing these annual reviews since 2011. They're mainly an exercise for me to see what I accomplished, and what I didn't, in the previous year. I ended last year noting that I had had a full year since stepping down as research director at the Crawford School. I didn't know then that I would take on another administration/leadership role before the year was over. But, in July, I took over as director of the International and Development Economics Program. So far, this seems to be less work than being research director was, and so more compatible with research productivity! In the first part of the year I chaired the ANU submission to ERA 2015 in economics. The result was disappointing for the ANU: economics overall fell from a 5 to a 4, as did econometrics (FoR 1403), and applied economics (FoR 1402) fell from a 4 to a 3. We have developed a strategy to turn things around and I am pretty confident we will get a 4 next time. The positive news was that economic theory (FoR 1401) went up from 4 to 5. Also, policy and administration (FoR 1605) went from 3 to 5, which was very good news for the Crawford School.


Abu Dhabi
 
Perhaps the best professional news this year was that we were awarded an ARC Discovery Projects grant for research on "Energy Efficiency Innovation, Diffusion and the Rebound Effect". We expect that Zsuzsanna Csereklyei will join us next year to work as a post-doc on the project. My colleague Paul Burke also got a DECRA fellowship.

I am also part of a team, together with Astrid Kander of Lund University and Sophia Henriques and Paul Sharp at the University of Southern Denmark, that won a grant from the Handelsbanken Research Foundation on "Energy Use and Economic Growth: a Long-run European Study (1870-2013)". The money will mostly fund Sophia, with some additional travel money. As I won't be traveling to Sweden in the near future (see below), it looks like Akshay Shanker, one of our PhD students, with whom I am working on a directed technological change paper, will use the money to travel to Sweden early in 2016.

Punting on the River Cherwell, Oxford
 
In July, after returning to Australia from conferencing in the Middle East (see below), I traveled back to the UK. I attended a brainstorming workshop at Oxford Policy Management (in Oxford, of course, at Pembroke College) to prepare a proposal for funding from the UK Department for International Development for research on electricity and economic growth and development. At this point, it looks likely that our consortium will get the grant, but this isn't confirmed yet.

Pembroke College, Oxford

We got four journal articles accepted for publication, including our papers on carbon dioxide emissions in the short run in Global Environmental Change and on global energy trends in Energy Economics, and articles in Environmental and Resource Economics (still "in press") and the Journal of Cleaner Production (January 2016 publication date). In the meantime, our Energy Journal paper accepted in 2014 is still in press and doesn't yet show up on the journal website.... I also released four working papers that aren't yet published: two with Stephan Bruns (one on research assessment using citations and another on meta-analysis of Granger causality test statistics), a third with my former master's student Luis Sanchez, and a fourth with a long list of coauthors headed by Bob Costanza. We received revise and resubmits for the latter two and have resubmitted the papers. I have another paper, coauthored with Chunbo Ma, where we also received a revise and resubmit that we are now working on; when we resubmit it, we'll also put out a working paper. We also did a revise and resubmit on a paper submitted in mid-2014, which is now under second review. I now have a spreadsheet to help keep track of all these projects!

Right now, I have ten publications in various stages of the review and publication process: two are in press, three have been resubmitted after revision, two we are revising, and three are in first review - first review at that journal, that is; we've already sent one of them to a bunch of other journals.

I also published two book chapters. One is an article on the energy GDP relationship in the New Palgrave Dictionary of Economics and the other a chapter in a Routledge book on energy and poverty.

One milestone this year was passing 10,000 citations on Google Scholar and reaching an H-index of 40. I also just went over 4,000 citations on Scopus.


Champagne Pool, Waiotapu Thermal Area, near Rotorua

I went to the AARES conference in Rotorua in February, the IAEE meeting in Antalya, and for the first time to the International Energy Workshop, which was in Abu Dhabi. I also gave a seminar at University of Queensland in September and was invited to a workshop at Griffith University in October. Finally, I went to the Economics and Business PhD Conference at UQ in early November to comment on a PhD student's paper. Locally, I moved house and suburb in Canberra, buying a house for the first time in my life.

Since August, Donglan Zha has been visiting Crawford from Nanjing University of Aeronautics and Astronautics. We are going to work on modeling concentrations of air pollution in China. I also have a new PhD student, Panittra Ninpanit, who started at the beginning of the year. Her first PhD paper will be on decomposition of carbon emissions in Thailand using input-output analysis.

I taught the quantitative methods section of our environmental management research methods course over five weeks in the first semester. This is a tough subject to teach in such a short time slot. I thought I did well, but I got my worst evaluations so far at Crawford School. I guess most people don't like doing statistics. I also taught my energy economics course again and will continue to teach it in the future. It has been rebranded as an economics course, IDEC8089, instead of a general Crawford School course (CRWF8017). This doesn't seem to have affected participation from across different ANU programs too much.

Ecological Economics has gone to a new editorial model in which several editors handle much of the incoming flow of submissions and associate editors like me play a lower-key role. I was invited to be one of the new editors, but I decided that the cost-benefit trade-off wasn't good enough and that, after 13 years (!) as an associate editor, it was time for others to play a bigger role. I have joined the editorial advisory board of Nature Energy, a new journal that will start publishing in 2016.

As I am getting more involved in Twitter, I posted fewer blogposts this year - only 38. The most popular was "The Industrial Revolution Remains One of History's Great Mysteries?", the second "The Extent and Consequences of P-Hacking in Science", and the third "Carbon Dioxide Emissions in the Short Run: The Rate and Sources of Economic Growth Matter".

As always, it is possible to predict some of the things that will happen in the coming year, though this year is more uncertain than most. First, I'm not sure how long I will be IDEC director for. Our main innovation in the program, which we hope happens in 2016, is a new degree targeting the private fee-paying market for master's degrees (rather than the scholarship-funded market). I'll let you know more if it is successful. Second, my wife is expecting a baby, due on 5 February. So, I haven't submitted any abstracts to conferences as I normally would.... We will see how things go.

On the predictable side, I hope to put out three new working papers early in the new year. Two will be the last two papers from our DP12 ARC grant: one is the paper coauthored with Chunbo on estimating elasticities of substitution, and the other the paper on the industrial revolution coauthored with Jack Pezzey. All the math for the latter paper is now nailed down and it is just a question of polishing. The third will be based on the paper I just submitted to the Journal of Bioeconomics for a special issue based on the Griffith workshop. As mentioned above, Zsuzsanna Csereklyei should be moving to Canberra in early 2016 to start work on the new ARC grant.

Backyard at our new house