Thursday, June 25, 2015

Changes at ANU

Yesterday we heard the surprising news that Brian Schmidt will be the next vice-chancellor (president) of the ANU. Here at Crawford there is change too, with the recent announcement that Tom Kompas is stepping down as School director after five years. We will be searching for a new school director, and in the interim Bob Breunig will be acting director. That means that the position of director of the International and Development Economics Program has become vacant. I have agreed to be acting director of the program while Bob is directing the School (until 15 July 2016). There is more to the program than just the Masters of International and Development Economics; we also have a Masters of Environmental and Resource Economics. Effectively, though, it is the department of economics located in the Crawford Building. Crawford also has another department of economics - the Arndt-Corden Department of Economics - located in the Coombs Building, where I was based in 2009-2010. In other changes at Crawford, Frank Jotzo is becoming deputy director of the School and John McCarthy is replacing him as READ director. I am leaving the READ group.


Monday, June 22, 2015

Population, Economic Growth and Regional Environmental Inefficiency: Evidence from U.S. States

I have a new paper in the Journal of Cleaner Production coauthored with George Halkos and Nickolaos Tzeremes. George was a lecturer at the University of York when I was a post-doc there. We haven't previously put out a working paper version of this paper.

In this paper, we apply a conditional directional distance function, which allows multiple exogenous factors to affect measured environmental performance, to evaluate the air pollution performance of U.S. states for the years 1998 and 2008. The overall results reveal substantial variation in environmental inefficiency among the U.S. states. A second-stage nonparametric analysis indicates a nonlinear relationship between states' population size, GDP per capita, and environmental inefficiency levels.
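For intuition, here is a minimal sketch of the unconditional building block of this method: a directional distance function computed by linear programming for a toy dataset with one input, one good output, and one bad output. The conditional version used in the paper additionally weights peer observations by kernels of the exogenous variables (population and GDP per capita), which is omitted here; the function name, direction choice, and toy data are mine for illustration, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(x, y, b, k):
    """Directional distance for unit k with a single input x (e.g. energy),
    good output y (e.g. GSP) and bad output b (e.g. emissions), using the
    direction (0, y_k, b_k): expand the good and contract the bad output
    proportionally. beta = 0 means unit k is on the frontier."""
    x, y, b = map(np.asarray, (x, y, b))
    n = len(x)
    # decision variables: intensity weights lam_1..lam_n, then beta
    c = np.zeros(n + 1); c[-1] = -1.0               # maximize beta
    A_ub = np.vstack([np.r_[x, 0.0],                # lam'x <= x_k
                      np.r_[-y, y[k]]])             # lam'y >= (1 + beta) y_k
    b_ub = np.array([x[k], -y[k]])
    A_eq = np.r_[b, b[k]].reshape(1, -1)            # lam'b  = (1 - beta) b_k
    b_eq = np.array([b[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.x[-1]

# toy data for three hypothetical "states"
energy = [10.0, 12.0, 9.0]
gsp    = [5.0, 7.0, 4.0]
co2    = [3.0, 2.5, 4.0]
for k in range(3):
    print(k, round(directional_distance(energy, gsp, co2, k), 3))
```

A score of zero means the state is on the joint production frontier; a positive score measures the proportion by which the good output could be expanded and the bad output contracted simultaneously.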

Our results indicate that environmental inefficiency on the whole decreases with increased population and income per capita, but there are limits to this improvement, and at high income and population levels the tendency may reverse. In particular, small poor states tend to be environmentally inefficient, whereas large states tend to be more efficient regardless of their level of income. The results show that there is not much of a trade-off between environmental quality and economic development in small and poor US states in the South and Mid-West. As these states grow in income and population they can improve their environmental efficiency. However, larger and richer states face more environmental challenges from growth.

This may explain the differences in policy across states. For example, California, which is already an environmentally efficient state, is also a state that has led in environmental regulation. There are fewer local environmental policies in states across the South and parts of the Mid-West. Politicians and populations in these states may see less of a trade-off between environmental quality and development and hence be reluctant to adopt specific environmental policies. These patterns also match recent trends in voting for the Republican and Democratic parties - the so-called Red and Blue states. However, there are exceptions to a simplistic analysis along these lines: Texas, for example, is an environmentally efficient state in our analysis, as would be expected from its large population size.

Wednesday, June 17, 2015

Meta-Granger Causality Testing

I have a new working paper with Stephan Bruns on the meta-analysis of Granger causality test statistics. This is a methodological paper that follows up on our meta-analysis of the energy-GDP Granger causality literature, published in the Energy Journal last year. There are several biases in the published literature on the energy-output relationship, which we document in the Energy Journal paper:

1. Publication bias due to sampling variability - the common tendency for statistically significant results to be preferentially published. This is either because journals reject papers that don't find anything significant or, more likely, because authors don't bother submitting papers without significant results: they either scrap studies that find nothing significant or data-mine until they do. This means that the published literature may over-represent the frequency of statistically significant tests. This is likely to be a problem in many areas of economics, but especially in a field where results are all about test statistics rather than effect sizes.

2. Omitted variables bias - Granger causality tests are very susceptible to omitted variables bias. For example, energy use might appear to cause output in a bivariate Granger causality test simply because energy use is highly correlated with capital. This is a very serious problem in the empirical Granger causality literature, as I noted in my PhD dissertation.

3. Over-fitting/over-rejection bias - In small samples, there is a tendency for vector autoregression model fitting procedures to select more lags of the variables than the true underlying data generation process has. There is also a tendency to over-reject the null hypothesis of no causality in these over-fitted models, which means that many Granger causality results from small-sample studies are spurious (see the simulation sketch below). We realized in our Energy Journal paper that this was also a serious problem in the empirical Granger causality literature. The following graph illustrates this using studies from the energy-output causality literature:


Each graph shows normalized test statistics for causality in one of the two directions. Rather than fitting models with more lags when they have larger samples, researchers tend to deplete their degrees of freedom by adding lags: studies using three lags tend to have fewer degrees of freedom than those using two, and those using two fewer than those using one. We also see that the average significance level increases as the number of lags rises and the degrees of freedom fall.
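To make the over-rejection mechanism concrete, here is a minimal Monte Carlo sketch - my own simplified setup, not the paper's simulation design. Two independent AR(1) series have no causal link, yet fitting a VAR with AIC lag selection in a small sample rejects the no-causality null well above the nominal 5% rate:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)

def spurious_rejection(T, maxlags=4):
    """Two independent AR(1) series - no Granger causality in either
    direction. Fit a VAR with AIC lag selection and return the p-value
    of the test of y2 -> y1."""
    x = np.zeros((T, 2))
    e = rng.standard_normal((T, 2))
    for t in range(1, T):
        x[t] = 0.5 * x[t - 1] + e[t]
    res = VAR(x).fit(maxlags=maxlags, ic="aic")
    if res.k_ar == 0:          # no lags selected: nothing to test
        return 1.0
    return res.test_causality("y1", ["y2"], kind="f").pvalue

# rejection rate at the nominal 5% level in small samples
pvals = [spurious_rejection(30) for _ in range(500)]
print("rejection rate:", np.mean(np.array(pvals) < 0.05))  # typically above 0.05
```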

Of course, the latter two biases give researchers additional opportunities to select statistically significant results for publication. So, more generally, "publication bias" includes the selection of statistically significant results whether they arise from sampling variability or from these other biases.
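The pure sampling-variability version of publication bias is easy to see in a toy simulation (illustrative numbers, not from the paper): even when every individual study is unbiased, keeping only the statistically significant estimates inflates the published mean effect.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1,000 studies estimate a small true effect with sampling noise
true_effect, se = 0.05, 0.05
estimates = true_effect + se * rng.standard_normal(1000)
t_stats = estimates / se
published = estimates[np.abs(t_stats) > 1.96]   # only "significant" results survive

print("mean of all estimates:      ", estimates.mean())   # ~0.05, unbiased
print("mean of published estimates:", published.mean())   # inflated upwards
```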

The standard meta-regression model used in economics deals with the first of the three biases by exploiting the idea that, if there is a genuine effect, studies with larger samples should produce more statistically significant test statistics than smaller studies do. If there is no real effect, there will be either no relation or even a negative relation between significance and sample size. Meta-analysis can test for the effects of omitted variables bias by including dummy variables and interaction terms for the different variables included in the primary studies. Finally, in our Energy Journal paper we controlled for the over-fitting/over-rejection bias by including the number of degrees of freedom lost in model fitting in our meta-regression.
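Schematically, on synthetic data (the toy data generating process and variable names are mine), the basic meta-regression and an extended variant with an over-fitting control look like this - the new paper uses the number of lags selected as that control, as described next:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# synthetic meta-dataset, one row per primary study (toy DGP, not real data)
n = 50
dof = rng.integers(10, 200, n).astype(float)   # residual degrees of freedom
lags = rng.integers(1, 5, n).astype(float)     # lags used by each study
# a genuine effect makes test statistics grow with sqrt(dof); over-fitted
# models (more lags) inflate the statistics on top of that
z = 0.15 * np.sqrt(dof) + 0.4 * lags + rng.standard_normal(n)

# basic model: significance rises with sqrt(dof) only if the effect is real
basic = sm.OLS(z, sm.add_constant(np.sqrt(dof))).fit()

# extended model: the lag count absorbs the over-fitting/over-rejection bias
extended = sm.OLS(z, sm.add_constant(np.column_stack([np.sqrt(dof), lags]))).fit()

print(basic.params)
print(extended.params)
```

In the basic model, a positive and significant coefficient on sqrt(dof) signals a genuine effect; the extended model adds the lag count so that over-fitted small-sample results are not mistaken for evidence of causality.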

The new paper focuses on the last of these issues, examining both the potential prevalence of over-fitting and over-rejection and the effectiveness of controlling for over-fitting. The approach used here differs a little from the Energy Journal paper: we include the number of lags selected, rather than the degrees of freedom lost, as the control variable. We show by means of Monte Carlo simulations that, even if the primary literature is dominated by false-positive findings of Granger causality, the meta-regression model correctly identifies the absence of genuine Granger causality. The following graphs show the key results:

Power is the probability of rejecting the null hypothesis of no causality when it is false - here we have set up a simulated VAR in which there is causality from energy to GDP. Mu is the mean sample size of the primary studies in our simulation and var is its variance. So, the left-hand graph is a simulation with mostly small-sample studies, the middle one has a mixture of small and large studies, and the right-hand graph has mostly large studies (but a few small ones too). The meta sample size is the number of studies that are brought together in the meta-analysis. DGP2a is a data generating process with a small effect size; DGP2b has a larger effect size.

So, what do these graphs show? When the samples in the primary studies are small and we only have a meta sample of 10 or 20 studies, it is hard to detect a genuine effect, whatever we do. When the effect size is small, it is still hard to detect an effect even with 80 primary studies using the traditional economics meta-regression model (the "basic model"). Our "extended model", which controls for the number of lags, helps a lot in this situation. With large primary study sizes it is quite easy to detect a true effect with only 20 studies in the meta-analysis, and our method adds little value. However, the energy-GDP causality literature mostly consists of small, similarly sized samples and is trying to detect what is quite a small effect in the energy-causes-GDP direction (an elasticity of 0.05 or 0.1). Our approach has much to offer in this context.
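For readers who want to experiment, here is a schematic end-to-end version of this power comparison. It is a sketch under my own simplified assumptions - the paper's DGPs (DGP2a/b), study-size distributions, and test-statistic normalization all differ in detail: simulate primary studies with genuine causality, run the basic and extended meta-regressions, and count how often the sqrt-degrees-of-freedom coefficient is significantly positive.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

def primary_study(T, effect=0.1):
    """One primary study: a VAR(1) in which x2 genuinely causes x1 with
    coefficient `effect`. Returns a normal score for the Granger test of
    x2 -> x1, the residual degrees of freedom, and the lags selected."""
    x = np.zeros((T, 2))
    e = rng.standard_normal((T, 2))
    for t in range(1, T):
        x[t, 0] = 0.5 * x[t - 1, 0] + effect * x[t - 1, 1] + e[t, 0]
        x[t, 1] = 0.5 * x[t - 1, 1] + e[t, 1]
    res = VAR(x).fit(maxlags=4, ic="aic")
    if res.k_ar == 0:                      # AIC chose no lags: refit a VAR(1)
        res = VAR(x).fit(1)
    pval = res.test_causality("y1", ["y2"], kind="f").pvalue
    z = stats.norm.isf(np.clip(pval, 1e-12, 1 - 1e-12))
    return z, res.df_resid, res.k_ar

def meta_power(n_studies, mu_T, reps=100, extended=True):
    """Share of simulated meta-analyses in which the sqrt(df) coefficient
    is significantly positive at the one-sided 5% level."""
    hits = 0
    for _ in range(reps):
        Ts = rng.poisson(mu_T, n_studies) + 15          # primary sample sizes
        z, df, lags = map(np.array,
                          zip(*[primary_study(T) for T in Ts]))
        X = np.column_stack([np.sqrt(df), lags]) if extended else np.sqrt(df)
        fit = sm.OLS(z, sm.add_constant(X)).fit()
        hits += fit.tvalues[1] > 1.645
    return hits / reps

print("basic:   ", meta_power(40, 50, extended=False))
print("extended:", meta_power(40, 50, extended=True))
```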

Wednesday, June 10, 2015

Two Papers Accepted for Publication

Two of our papers have just been accepted for publication. One is my paper with Yingying Lu on sensitivity analysis of climate policy computable general equilibrium models, which has been accepted for publication in Environmental and Resource Economics. The other, a paper with George Halkos and Nickolaos Tzeremes on environmental efficiency across the U.S. states, has been accepted by the Journal of Cleaner Production. We haven't put out a working paper version of this one, so I'll do a blog post on it when it is available online at the journal. George was a lecturer at the University of York when I was a post-doc there.

In the case of the first paper, we sent it to only one other journal (JEEM) before the one that finally accepted it. We sent the second paper to quite a few journals but managed to get it into one with a pretty high impact factor after significant revision.

Monday, June 8, 2015

Update

I haven't blogged for over a month. Partly this is because I am doing more tweeting, and partly because I have been busy with the end of semester and traveling to Turkey (IAEE conference), Israel, and Abu Dhabi (International Energy Workshop). This was my first time attending the IEW. I think the feedback in the parallel sessions is good - better than at the IAEE meeting. On the other hand, the IAEE plenaries are more consistent, I think. The IEW is strongly tied to the ETSAP modeling forum (TIMES/MARKAL models), which precedes it, and is attended by "modelers" rather than the mix of business, government, and academic communities found at the IAEE.

In other news, our paper on the behavior of carbon dioxide emissions in the short run has been published in Global Environmental Change. The article is open access until 26 July. Our survey paper in the Review of Economics is also open access.