My colleague Robert Kaufmann got an e-mail from Doug Keenan inviting him to participate in his "climate change contest" without the usual $10 submission fee. I hadn't heard about this contest and went to the site to investigate. Keenan has produced 1000 time series of 135 observations each that are somehow derived from random numbers, and he has then added a trend of plus or minus 1 per 100 observations to some of them. The series have been calibrated so that they could potentially reproduce, in some way, the observed global temperature time series from 1880 to the present without an added trend. The task of the contestant is to determine, for each series, whether it has an added trend or not. Any submission received by 30 November this year that gets at least 90% of these right will win $100,000.
Keenan's idea is that no one can validly detect with 90% accuracy whether there is a trend in temperature or not. Therefore, the IPCC's claim that temperature has definitely increased over the last century, and that this is very likely due to human activity, must be wrong.
I downloaded the data. Looking at some of the series, it's pretty clear that they are some sort of random walk (stochastic trend) rather than simply a series of random numbers (white noise) with a linear trend added. I haven't bothered to write a program to test this, though the first sketch below shows how one might. Assuming that they are simple random walks, I tested in Excel whether the mean of the first differences was different to zero for each of the thousand series, using the standard calculation of the standard error of the mean, which assumes that the first differences are white noise (the second sketch below replicates this test). Only 8 of the series have a mean first difference that is significantly different to zero at the 5% level. If the first differences were normally distributed white noise and none of the original series had an added trend, then, by the definition of statistical significance, we would expect about 50 of the means to be significantly different to zero. So, something else seems to be going on here.

In any case, I expect that the statistical power to detect a drift of 0.01 or -0.01 when the standard deviation of the first differences is 0.11 is rather low. Perhaps we could use structural time series methods, but statistical power of 90% at a significance level of 10% is a lot to ask for in this situation. I created my own dataset to see how many series one could expect to correctly classify: statistical power using a simple data generating process and a simple test was 29% for a 10% significance level test (the third sketch below reproduces this). If half the series have an added trend, we would correctly classify 29% of those and 90% of the trendless series, or 0.5 × 29% + 0.5 × 90% = 59.5% overall. This means that we can expect to correctly classify only about 595 of the 1000 series.
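Here is a minimal sketch of how the random walk impression could be tested formally, using an augmented Dickey-Fuller unit root test in Python. The file name series.csv and its layout (one column per series, 135 rows) are my assumptions about the download, not its actual format.

    import pandas as pd
    from statsmodels.tsa.stattools import adfuller

    data = pd.read_csv("series.csv")  # assumed layout: 135 rows x 1000 series

    n_unit_root = 0
    for col in data.columns:
        # H0: the series has a unit root (is a random walk);
        # a large p-value means we cannot reject the random walk
        stat, pvalue, *rest = adfuller(data[col], regression="c")
        if pvalue >= 0.05:
            n_unit_root += 1

    print(f"Series where a unit root cannot be rejected at 5%: "
          f"{n_unit_root} of {data.shape[1]}")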
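The second sketch replicates the Excel exercise: a one-sample t-test of whether the mean first difference of each series is zero, under the same white noise assumption, with the same hypothetical file layout.

    import pandas as pd
    from scipy import stats

    data = pd.read_csv("series.csv")  # hypothetical file, as above
    diffs = data.diff().dropna()      # first differences: 134 values per series

    n_sig = 0
    for col in diffs.columns:
        # one-sample t-test of H0: mean first difference (drift) = 0,
        # treating the differences as white noise
        t, pvalue = stats.ttest_1samp(diffs[col], 0.0)
        if pvalue < 0.05:
            n_sig += 1

    print(f"Series with drift significant at the 5% level: {n_sig} of 1000")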
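The third sketch is a Monte Carlo version of my power calculation. It assumes a Gaussian random walk with drift 0.01 and innovation standard deviation 0.11 as the data generating process, which is my guess and not necessarily what Keenan used, and it gives a power of roughly 0.28-0.29 at the 10% level.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_obs, drift, sd, n_reps = 135, 0.01, 0.11, 10_000

    rejections = 0
    for _ in range(n_reps):
        # first differences of a drifting random walk: drift plus white noise
        d = rng.normal(loc=drift, scale=sd, size=n_obs - 1)
        t, pvalue = stats.ttest_1samp(d, 0.0)
        if pvalue < 0.10:
            rejections += 1

    print(f"Estimated power at the 10% level: {rejections / n_reps:.2f}")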
The real question to ask is whether Keenan's thought experiment makes sense. I would argue that it doesn't. His argument is that if temperature follows some kind of integrated process, then it is very hard to determine whether it has a drift component or whether it will sooner or later just stochastically trend down again. Therefore, we can't know whether temperature has increased statistically significantly or not. But theory and climate models predict that global temperature should be stationary if radiative forcing is constant. If we detect a random walk, or close to random walk, signal in the temperature data, then something else is happening, and research can then try to determine whether it is likely to be due to anthropogenic factors or not. It is possible that we make a Type I error - falsely rejecting the null hypothesis - but we can determine how likely that is. So, in my opinion, Keenan's contest is another case of mathiness in climate econometrics.