David Stern's Blog on Energy, the Environment, Economics, and the Science of Science
Thursday, March 28, 2013
Academic Rank vs. Citations
I updated an analysis of Crawford faculty that I did a couple of years ago. It is a lot easier now to include citation data, as more than half of our faculty have Google Scholar profiles. This chart shows Google Scholar citations organized by academic rank. Level B is lecturer (equivalent to assistant professor in the US), Level C is senior lecturer (about equal to associate professor in the US), and Levels D and E are associate professor and professor (both probably equivalent to professor in the US). Though there are overlaps, the ranks can be distinguished quite clearly. If we controlled for discipline, the pattern would be even clearer. Economists have more citations than political scientists, and environmental studies people perhaps have even more - Bob Costanza, of course, is the highest point on the chart.
Wednesday, March 20, 2013
ERA Verifies that Research Funding is Going to High Quality Research Groups
Yesterday I went to a presentation by the head of the Australian Research Council, Aidan Byrne. I asked him to comment on the article in the Australian that reported his view that ERA results shouldn't be tied to larger amounts of funding. He said that ERA has already had significant effects without money being tied to it, and that there aren't necessarily meaningful differences between being ranked a 3 or a 4, or a 4 or a 5, as scores are rounded up or down to produce the final outcomes. He said that the most valuable use of ERA is to verify that research funding is going to high-quality research groups. There is a dramatic difference between the funding received by disciplines at institutions ranked 1 and 2 and those ranked 3, 4, and 5, with smaller differences among the latter three categories. Clearly the vast majority of funding is going to the groups ranked at world standard or above. It would help get this message across if the ARC presented funding per full-time-equivalent faculty member in each quality ranking, but they seem to want to avoid producing productivity measures. The ARC slides I saw just show total dollars for all 5's, all 4's, and all 3's, without adjusting for the number of institutions or individual researchers. As there are usually fewer institutions ranked 5, this downplays the relationship between quality and funding. It would also then make sense to leave funding out of the indicators used to assess quality. None of this information is in the ERA report, and I think including it would be valuable.
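To make this concrete, here is a toy version of the per-FTE calculation I have in mind. All the funding and staffing numbers below are invented, since the ARC reports only totals, so only the shape of the calculation matters:

```python
# A toy illustration of funding per FTE by ERA rank. The figures are
# entirely hypothetical; the point is the normalization, not the values.
import pandas as pd

df = pd.DataFrame({
    "era_rank":        [5, 4, 3, 2, 1],
    "total_funding_m": [400.0, 250.0, 120.0, 15.0, 5.0],  # hypothetical $m
    "fte_staff":       [800, 900, 700, 300, 150],         # hypothetical FTEs
})
df["funding_per_fte_k"] = df["total_funding_m"] * 1000 / df["fte_staff"]
print(df[["era_rank", "funding_per_fte_k"]])
```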
Saturday, March 16, 2013
Is There Really Granger Causality between Energy Use and Output?
Finally, the revised version of the meta-analysis paper that I presented in Perth in September is out as a working paper.
So what's it all about? There is a massive literature that uses Granger causality tests to ask whether energy use causes economic growth or vice versa. We collected more than 400 papers, yet the literature is very inconclusive. In fact, we found that about 40% of tests for each direction of causation in our sample of 70 or so papers are statistically significant at the 5% level. 40% is a lot more than 5%, so there must be either a real effect or some kind of bias. On the other hand, it's not overwhelming evidence.
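As a rough back-of-the-envelope check (this is not the method we use in the paper), if none of the null hypotheses were false and the tests were independent, rejections at the 5% level would follow a binomial distribution. A minimal sketch in Python, with illustrative numbers:

```python
# Back-of-the-envelope check: how surprising is a ~40% rejection rate if
# the true rejection probability is 5%? Counts below are illustrative.
from scipy.stats import binomtest

n_tests = 70      # roughly one test per paper in our sample (illustrative)
n_rejects = 28    # ~40% of 70

result = binomtest(n_rejects, n_tests, p=0.05, alternative="greater")
print(f"P(>= {n_rejects} rejections if true rate is 5%) = {result.pvalue:.2e}")
# Caveat: real test statistics are correlated across papers and
# specifications, so this calculation overstates the evidence.
```

Of course, the tests in the literature are far from independent, which is one reason a proper meta-regression is needed.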
A recent paper by Chen et al. conducted the first meta-analysis of this field of research. When we noticed it, we were initially worried that we had been "scooped", but it turned out that their analysis is fairly exploratory. Our paper tries to see whether the effects found in the literature are genuine or simply the result of various biases. We do this by exploiting the "statistical power trace": if an effect is non-zero, then the test statistic associated with restricting it to zero should be greater in absolute value the greater the degrees of freedom of the model estimate. So we regress the Granger causality test statistics - after converting them all to normal test statistics - on the square root of the degrees of freedom. The way we have set things up, if we can reject the null hypothesis that the regression coefficient on the square root of degrees of freedom is non-positive, then there is a genuine Granger causality effect in the underlying literature.
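Here is a minimal sketch of what this meta-regression looks like, run on synthetic data; the variable names and the assumed data-generating process are mine for illustration, not the paper's actual code:

```python
# Sketch of a "statistical power trace" meta-regression on synthetic data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import t as t_dist

rng = np.random.default_rng(42)
n_stats = 150
dof = rng.integers(20, 200, size=n_stats)   # degrees of freedom per estimate

# If the underlying effect is real, |z| grows with sqrt(dof);
# the 0.2 "effect size" is arbitrary, chosen only for illustration.
z = 0.2 * np.sqrt(dof) + rng.normal(size=n_stats)

X = sm.add_constant(np.sqrt(dof))
fit = sm.OLS(z, X).fit()

# One-sided test of H0: slope <= 0 against H1: slope > 0 (genuine effect).
slope, se = fit.params[1], fit.bse[1]
p_one_sided = t_dist.sf(slope / se, df=fit.df_resid)
print(f"slope on sqrt(dof) = {slope:.3f}, one-sided p = {p_one_sided:.4f}")
```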
We run this regression separately for tests of energy causing output and of output causing energy. Overall, there is no genuine effect in the literature, but there do seem to be some genuine effects in subsets of it. Specifically, if we control for energy prices, then income causes energy use. This is the energy demand function relationship, which Stern and Enflo also found was very strong in the Swedish data. Energy use might cause income, but only if we control for employment and the VAR model passes a cointegration test, so that finding is pretty tentative. There were some things we would have liked to test but for which we simply had too little data. For example, does adjusting for energy quality make a difference?
There is a whole other story in the paper, which is about dealing with the econometric pitfalls associated with these kinds of time series models. Initially, we found that the greater the degrees of freedom, the more negative the test statistics were - significantly so. It turns out that there is a tendency to include too many lags of the variables when sample sizes are small, and these over-fitted models result in spurious rejections of the null hypothesis of no Granger causality. We control for this issue by including the number of degrees of freedom lost in fitting the model as an additional independent variable. This is likely to be important in other meta-analyses of Granger causality tests, and we have a further econometric theory paper in preparation on the topic.
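Continuing the synthetic sketch from above, the fix amounts to adding one extra regressor; again, the numbers and names here are made up for illustration:

```python
# Extend the power-trace regression with the degrees of freedom lost in
# fitting each model, to soak up the over-fitting bias. Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_stats = 150
dof = rng.integers(20, 200, size=n_stats)     # residual degrees of freedom
dof_lost = rng.integers(4, 40, size=n_stats)  # parameters fitted (lags x variables)

# Simulate the bias: over-fitted models inflate the test statistics.
z = 0.2 * np.sqrt(dof) + 0.04 * dof_lost + rng.normal(size=n_stats)

X = sm.add_constant(np.column_stack([np.sqrt(dof), dof_lost]))
fit = sm.OLS(z, X).fit()
print(fit.summary(xname=["const", "sqrt_dof", "dof_lost"]))
# The slope on sqrt_dof is now estimated net of the over-fitting effect.
```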
In some ways, the question in our title is a silly one. We know that energy is used to produce things, and we know that in theory income is a determinant in the demand function for energy. But observing these relationships in the data in a consistent way doesn't seem to be that easy.
Friday, March 15, 2013
Australian Research Assessment Not Heading in UK Direction
That's the message I get from this interview in the Australian with the head of the ARC, Aidan Byrne (formerly a science dean at ANU). The assumption of many in the sector, myself included, was that the Australian research assessment exercise, ERA, and the funding attached to it would evolve in a way that generally followed UK practice with something of a time lag. In the UK, much more money is tied to the REF, the funding ratio associated with the three highest rankings of departments is 9:3:1, and case studies are being used heavily to assess broader impact. Prof. Byrne argues that case studies should be used sparingly if at all to measure impact, that not much money should be tied to ERA outcomes, and that the funding ratio should be flatter. On Wednesday, I saw a presentation by Tim Cahill of the ARC on the ERA 2012 process and outcomes. One key finding was that for the citation-based disciplines (most STEM disciplines, though not math or computer science, plus psychology) there is only a weak correlation between the ERA ranks assigned to universities and their citation performance relative to the benchmarks. A lot of subjectivity still seems to come into the rankings produced by the ERA committees. As ERA only counts the number of citations per paper and not where the citing papers appeared, I guess that makes sense. So should Australian universities pay as much attention to ERA as they have been? For example, ANU has tied indicators in its strategic plan to the number of disciplines that achieve given ERA rankings by 2020. If I were the minister and looking for budget cuts, would I want to continue with ERA on this basis?
Wednesday, March 13, 2013
Global Anthropogenic Sulfur Emissions Updated to 2011
A new article by Zbigniew Klimont, Steven Smith, and Janusz Cofala updates Smith et al.'s estimates of global sulfur emissions to 2011. The global downward trend that started around 1990 or earlier* continues. The small increase in the early part of the last decade was just a blip:
This chart also shows some previous estimates. In general the trend has been revised down over time. The trend in China is also now heading down:
Another paper by Smith and Bond declares "the end of the age of aerosols". Well, not quite yet. We'll have to wait till 2100 for that :)
The downside for me of this new data is that I will now have to redo all the econometrics in a paper I have in preparation (with Robert Kaufmann) that was almost ready for submission :(
* As shown in my 2006 paper, studies prior to Smith et al. (2001) showed emissions continuing to grow strongly through 1990. Smith et al. (2001) showed a flattening of the trend in the 1980s. My paper showed a plateau from the mid-1970s to 1990, and Smith et al. (2011) showed a slow downward trend from 1973 to 1990 followed by a steeper decline.
Monday, March 11, 2013
Saltwater vs. Freshwater
An interesting new working paper tries to determine whether economics departments are "saltwater" or "freshwater" based on citation networks. The authors find that the divide is strongest in macroeconomics and econometrics, which makes sense, as the saltwater-freshwater split is fundamentally about macroeconomics. So it does seem to be a real thing. It's amusing to see where non-US departments fall on the gradient. Looking at places I've studied or worked, ANU leans saltwater and Hebrew University leans freshwater (there isn't much water at all in Jerusalem, and ANU is located next to a lake). LSE is very salty. More widely, Tel Aviv U. is very freshwater and Cambridge is very saltwater; by comparison, ANU and Hebrew U. are brackish. So it doesn't really have anything to do with the water :)
Lake Burley Griffin - ANU Campus is at lower left