Friday, September 22, 2017

More on Millar et al.

Millar and Allen have an article in the Guardian explaining what their paper really says. They say that existing ESMs assume too little cumulative emissions by the 2020's, when atmospheric carbon dioxide, and therefore temperature, will be higher than now. But we have already emitted that much carbon, so the graph of cumulative emissions vs. temperature needs to be adjusted, while the graph of temperature vs. the current concentration of CO2 does not. The discrepancy arises because of uncertainty in cumulative emissions, which the models have backfilled from other variables. They then say that human-induced warming to date is 0.93C, and that is the temperature baseline of their shifted frame of reference:


The argument made by critics like Gavin Schmidt, Zeke Hausfather, and me, that the estimate of current human-induced warming is too low, still stands. And this means that the remaining carbon budget is smaller than Millar et al. argue. Any likely path to 1.5 degrees will require exceeding that temperature and then bringing radiative forcing down again.
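To get a feel for the size of this effect, here is a rough sketch that assumes warming is roughly proportional to cumulative emissions; the TCRE value is an illustrative assumption (roughly mid-range of the IPCC AR5 likely range of 0.8-2.5C per 1000 GtC), not a number from Millar et al.:

```python
# Rough sensitivity of the remaining 1.5C budget to the estimate of warming
# to date, assuming warming scales linearly with cumulative emissions.
# The TCRE value below is an illustrative assumption, not from the paper.

TCRE = 1.6 / 1000  # degrees C per GtC of cumulative emissions (assumed)

def remaining_budget(target, warming_to_date, tcre=TCRE):
    """Remaining cumulative CO2 emissions (GtC) before the target is crossed."""
    return (target - warming_to_date) / tcre

for warming in (0.93, 1.1):
    budget = remaining_budget(1.5, warming)
    print(f"warming to date {warming:.2f}C -> remaining budget ~{budget:.0f} GtC")

# With these numbers the budget shrinks by roughly 30% when the higher
# estimate of warming to date is used.
```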


Thursday, September 21, 2017

Is the Carbon Budget for 1.5 Degrees Much Larger than We Thought?

An article in Nature Geoscience by Millar et al. on carbon budgets has attracted a lot of attention and debate.* A blogpost by the lead author explains that human-induced warming in the 2010's is 0.93C above the 1861-1880 period, while the mean CMIP5 climate model run projects that, given cumulative carbon emissions to date, temperature should be 0.3C warmer than that. They argue that this means the remaining carbon emissions budget for staying within a 1.5C increase in temperature is larger than previously thought, as we have 0.6C to go at this point rather than 0.3C. I think there are a number of issues with this claim.
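Before getting to those issues, here is the arithmetic behind their claim made explicit (a back-of-the-envelope sketch, not the paper's actual calculation):

```python
# Back-of-the-envelope version of the argument (illustrative only).
target = 1.5        # target warming relative to 1861-1880 (C)
observed = 0.93     # estimated human-induced warming in the 2010's (C)
model_gap = 0.3     # extra warming the mean CMIP5 run implies for emissions to date (C)

headroom_from_models = target - (observed + model_gap)   # ~0.3C to go
headroom_from_observations = target - observed           # ~0.6C to go
print(headroom_from_models, headroom_from_observations)

# If warming scales roughly linearly with cumulative emissions, roughly
# doubling the remaining warming headroom roughly doubles the remaining budget.
```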

First, the value for human-induced warming is based on averaging the orange line in this graph:


The orange line is derived by a regression fitting estimated radiative forcing to observed temperature from the HADCRUT4 dataset. HADCRUT4 shows less of an increase in surface temperature than either the GISS or Berkeley Earth datasets, in particular because of its limited coverage of the polar regions.
Using the Berkeley Earth dataset, the temperature increase from the 1861-80 mean to the 2010's mean – shown by black lines in this graph:


is 1.1C. As you can see, even that increase assumes that conditions during the "hiatus" are more usual than those during the post-2014 increase in temperature. 0.93C is a very conservative estimate of warming to date. Though the recent period was affected by El Nino conditions, it's possible that it represents catching up to the long-term trend rather than an excursion above the trend. Throughout the hiatus period, ocean heat content was increasing. I do think it is likely that the jump in temperature in the last two years is a recoupling of surface temperature to this more fundamental trend. We have a paper under review that supports this view.**

Also, I think that averaging the orange trend line in the previous graph is definitely too conservative given its strongly non-stationary behavior. The most recent estimate of the trend would be a better guess.

Second, I think there are a few reasons*** why we might update the carbon budget (as measured from the beginning of the Industrial Revolution):

1. Our estimate of the transient climate sensitivity changes – we think that the short-run temperature for a given concentration of carbon dioxide in the atmosphere is higher or lower than we previously thought.

2. Our estimate of the airborne fraction changes – that is, our estimate of how much the carbon dioxide in the atmosphere increases in reaction to a given amount of emissions. So far, CO2 in the atmosphere has increased by about half of cumulative emissions.

3. Our estimate of non-CO2 forcing changes. There are other important sources of radiative forcing, such as methane, and sources of negative forcing, such as sulfate aerosols.

Observed warming to date isn't one of these. So the paper is implicitly saying that these observations lead them to reduce their estimate of the climate sensitivity.
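A minimal sketch of how these three ingredients combine into a budget, with purely illustrative numbers that are not from Millar et al.:

```python
# A minimal sketch of how the three factors enter a carbon budget calculation.
# All parameter values below are illustrative assumptions; the only hard-wired
# constant is the ~2.12 GtC of carbon per ppm of atmospheric CO2.

GTC_PER_PPM = 2.12  # approximate mass of carbon per ppm of atmospheric CO2

def budget_for_target(target_warming, warming_per_ppm, airborne_fraction,
                      non_co2_warming):
    """Cumulative CO2 emissions (GtC since pre-industrial) consistent with the target.

    warming_per_ppm   -- factor 1: transient warming per ppm rise in CO2
    airborne_fraction -- factor 2: share of emitted carbon that stays in the atmosphere
    non_co2_warming   -- factor 3: net warming from non-CO2 forcing at the target date
    """
    co2_warming = target_warming - non_co2_warming
    ppm_rise = co2_warming / warming_per_ppm
    return ppm_rise * GTC_PER_PPM / airborne_fraction

# Illustrative values: 0.01C per ppm, half of emissions remaining airborne,
# and 0.2C of net non-CO2 warming give a total budget of roughly 550 GtC.
print(budget_for_target(1.5, 0.01, 0.5, 0.2))
```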

Third, though the paper says that Earth System Models overestimated warming to date, it seems that the authors use the same models to estimate the remaining carbon budget.

* I have extensively revised this post following a comment from Myles Allen, one of the paper's authors. Also, I realized that the second part of the post didn't really make much sense, so I deleted it.

** The paper has been in review since February, but we haven't posted a working paper, as one of my coauthors didn't want to do so before receiving referee comments.

*** The emissions path also affects the carbon budget, as we can see from the mean values for the various RCP paths in the graph below from Millar et al., and from the difference between the red plume of RCP paths and the grey plume of paths where emissions grow at a constant 1% per annum rate. The slower we release carbon, the bigger the budget.



Tuesday, September 5, 2017

Confidence Intervals for Journal Impact Factors

Is the Poisson distribution a short-cut to getting standard errors for journal impact factors? The nice thing about the Poisson distribution is that the variance is equal to the mean. The journal impact factor is the mean number of citations received in a given year by articles published in a journal in the previous few years. So if citations followed a Poisson distribution it would be easy to compute a standard error for the impact factor. The only additional information you would need, besides the impact factor itself, is the number of articles published in the relevant previous years.
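For a single journal the Poisson calculation is trivial. Here is a minimal sketch with made-up numbers for the impact factor and article count:

```python
import math

# The Poisson short-cut: if each article's citation count were an independent
# draw from the same Poisson distribution, the variance would equal the mean,
# so the standard error of the impact factor is just sqrt(IF / N).
# The journal below is hypothetical.

impact_factor = 2.5   # mean citations per article
n_articles = 400      # articles published in the relevant previous years

se = math.sqrt(impact_factor / n_articles)
lower, upper = impact_factor - 1.96 * se, impact_factor + 1.96 * se
print(f"SE = {se:.3f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```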

This is the idea behind Darren Greenwood's 2007 paper on credible intervals for journal impact factors. As he takes a Bayesian approach, things are a little more complicated in practice. Now, earlier this year, Lutz Bornmann published a letter in Scientometrics that also proposes using the Poisson distribution to compute uncertainty bounds – this time, frequentist confidence intervals. Using the data from my 2013 paper in the Journal of Economic Literature, I investigated whether this proposal would work. My comment on Bornmann's letter is now published in Scientometrics.

It is not necessarily a good assumption that citations follow a Poisson process. First, it is well known that the number of citations received each year by an article first increases and then decreases (Fok and Franses, 2007; Stern, 2014), and so the simple Poisson assumption cannot be true for individual articles. For example, Fok and Franses argue that, for articles that receive at least some citations, the profile of citations over time follows the Bass model. Furthermore, articles in a journal vary in quality and do not all have the same expected number of citations. Previous research finds that the distribution of citations across a group of articles is related to the log-normal distribution (Stringer et al., 2010; Wang et al., 2013).
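To see why this matters, here is a small simulation with arbitrary parameters: when expected citation rates vary lognormally across articles, the resulting counts are overdispersed relative to a Poisson distribution with the same mean.

```python
import numpy as np

# Small simulation (arbitrary parameters): if expected citation rates vary
# across articles -- here lognormally -- the resulting citation counts are
# overdispersed relative to a Poisson distribution with the same mean.

rng = np.random.default_rng(1)
rates = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # article-level expected citations
counts = rng.poisson(rates)                               # realised citation counts

print(counts.mean(), counts.var())  # the variance is several times the mean
```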

Stern (2013) computed the actual observed standard deviation of citations in 2011 at the journal level for all articles published in the previous five years in all 230 journals in the economics subject category of the Journal Citation Reports, using the standard formula for the variance:

V_i = Σ_j (C_j − M_i)² / (N_i − 1)

where V_i is the variance of citations received in 2011 across all articles published in journal i between 2006 and 2010 inclusive, N_i is the number of articles published in the journal in that period, C_j is the number of citations received in 2011 by article j published in the relevant period, and M_i is the 5-year impact factor of the journal. The standard error of the impact factor is then √(V_i/N_i).
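In code, the calculation for a single journal is just a few lines. This is a sketch that assumes you already have the per-article citation counts; the numbers below are hypothetical:

```python
import math

def impact_factor_ci(citations, z=1.96):
    """Impact factor, its standard error, and a confidence interval.

    citations -- citation counts received in the census year by each article
                 the journal published in the relevant window.
    """
    n = len(citations)
    mean = sum(citations) / n                                  # the impact factor M_i
    var = sum((c - mean) ** 2 for c in citations) / (n - 1)    # the variance V_i
    se = math.sqrt(var / n)
    return mean, se, (mean - z * se, mean + z * se)

# A hypothetical journal with 12 articles and their citation counts:
print(impact_factor_ci([0, 0, 1, 1, 2, 2, 3, 4, 5, 7, 10, 25]))
```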

Table 1 in Stern (2013) presents the standard deviation of citations, the estimated 5-year impact factor, the standard error of that impact factor, and a 95% confidence interval for all 230 journals. Also included are the number of articles published in the five year window, the official impact factor published in the Journal Citation Reports and the median citations for each journal.

The following graph plots the variance against the mean for the 229 journals with non-zero impact factors:



There is a strong linear relationship between the logs of the mean and the variance, but it is obvious that the variance is not equal to the mean for this dataset. A simple regression of the log of the variance of citations on the log of the mean yields:

where standard errors are given in parentheses. The R-squared of this regression is 0.92. If citations followed the Poisson distribution, the constant would be zero and the slope would be equal to one. These hypotheses are clearly rejected. Using the Poisson assumption for these journals would result in underestimating the width of the confidence interval for almost all journals, especially those with higher impact factors. In fact, only four journals have variances equal to or smaller than their impact factors. As an example, the standard error of the impact factor estimated by Stern (2013) for the Quarterly Journal of Economics is 0.57. The Poisson approach yields 0.2.
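For anyone wanting to repeat this check on another field, here is a sketch of the regression and the joint test of the Poisson restriction. It uses statsmodels; the arrays are arbitrary placeholders standing in for the per-journal means and variances you would compute from your own citation data, not the values from my paper:

```python
import numpy as np
import statsmodels.api as sm

# Sketch of the mean-variance check: regress log variance on log mean across
# journals and test the Poisson restriction (intercept = 0, slope = 1).
# The placeholder data below are arbitrary and only make the sketch runnable.

rng = np.random.default_rng(0)
means = rng.lognormal(mean=0.5, sigma=0.8, size=229)
variances = np.exp(0.5 + 1.5 * np.log(means) + rng.normal(scale=0.3, size=229))

X = sm.add_constant(np.log(means))
fit = sm.OLS(np.log(variances), X).fit()

print(fit.params, fit.bse)              # intercept and slope with standard errors
print(fit.f_test("const = 0, x1 = 1"))  # joint test of the Poisson restriction
```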

Unfortunately, accurately computing standard errors and confidence intervals for journal impact factors appears to be harder than just referring to the impact factor and number of articles published. But it is not very difficult to download the citations to articles in a target set of journals from the Web of Science or Scopus and compute the confidence intervals from them. I downloaded the data and did the main computations in my 2013 paper in a single day. It would be trivially easy for Clarivate, Elsevier, or other providers to report standard errors.

References

Bornmann, L. (2017) Confidence intervals for Journal Impact Factors, Scientometrics 111: 1869–1871.

Fok, D. and Franses, P. H. (2007) Modeling the diffusion of scientific publications, Journal of Econometrics 139: 376–390.

Stern, D. I. (2013) Uncertainty measures for economics journal impact factors, Journal of Economic Literature 51(1): 173–189.

Stern, D. I. (2014) High-ranked social science journal articles can be identified from early citation information, PLoS ONE 9(11): e112520.

Stringer, M. J., Sales-Pardo, M. and Nunes Amaral, L. A. (2010) Statistical validation of a global model for the distribution of the ultimate number of citations accrued by papers published in a scientific journal, Journal of the American Society for Information Science and Technology 61(7): 1377–1385.

Wang, D., Song, C. and Barabási, A.-L. (2013) Quantifying long-term scientific impact, Science 342: 127–132.