David Stern's Blog on Energy, the Environment, Economics, and the Science of Science
Monday, August 31, 2009
EERH Research Reports Joining RePEc
As you may know, my position is funded by the Environmental Economics Research Hub based at the Crawford School at ANU. We have a working paper series reporting on research carried out in the Hub. We've now completed indexing the series so that it can be catalogued by RePEc, which is the biggest and best system of indexing working papers in economics. I've been working on this with Meredith Bacon who is the coordinator of the Hub. I'll make another post when things are complete at the RePEc end.
Revise and Resubmit
I got a "revise and resubmit" for the paper I submitted to the Journal of Productivity Analysis. There are, of course, no guarantees of eventual publication in these situations, and "major revisions" have been requested, but it is still good news.
Allen: The British Industrial Revolution in Global Perspective
Robert C. Allen tries to explain why the Industrial Revolution took place in Britain in his new book The British Industrial Revolution in Global Perspective.
Allen (2009) places energy innovation centre-stage in his theory. Like Tony Wrigley, he compares Britain to the Netherlands and Belgium. These were the most developed economies in the world in the early modern age with much higher wages than elsewhere due to their dominant position in world commerce. In both countries the price of fuelwood was rising in the early modern period relative to the price of coal. But the price ratio of traditional fuel to coal was higher in London than in the Low Countries and even higher in the coalfield areas of northeastern England and western Britain. Compared to France, India, or China, Britain had both cheap sources of fuel and high wages.
Coal was a lower-quality heating and cooking fuel than wood but became a "backstop technology" once the relative price of wood to coal rose sufficiently; in the Netherlands, peat played this role instead. But innovations were required to use coal effectively in new applications, from home heating and cooking - specially designed chimneys had to replace older chimneys and open hearths - to iron smelting and then steam engines. These induced innovations sparked the Industrial Revolution. Initially, though, the new technologies were only profitable in Britain, where wages relative to energy prices were the highest in the world. Continued innovation eventually made coal-using technologies profitable in other countries too.
He concludes: "The British were not more rational or prescient than the French in developing coal-based technologies. The British were simply luckier in their geology... In other words, there was only one route to the twentieth century - and it traversed northern Britain." (p275)
Sunday, August 30, 2009
Comments on Scientific Papers
Once upon a time, many comments were published on scientific papers, including in economics. Nowadays, relatively few are. Rick Trebino has written an article that suggests some of the reasons why not. The article is rather amusing/cathartic for anyone who has been frustrated with the academic publication process. In economics, comments have been replaced by increasingly lengthy refereeing processes at the top journals. I saw a paper somewhere on this but can't remember where. One of my colleagues/coauthors, Robert Kaufmann, simply published his comment on an American Economic Review paper in Ecological Economics after the AER refused to publish it on the grounds that it added too little to the original paper. The AER will only publish a comment that makes a substantial new contribution, rather than one that merely corrects a mistake.
Saturday, August 29, 2009
Time Allocation to Reading Journal Articles
A recent article in Science discusses the future of scientific publication and scholarship strategies. The chart above, showing a trend toward less time spent reading each of a growing number of papers, also shows that scientists spend about 5% of their time reading articles.
Wednesday, August 26, 2009
Estimating a Two-Equation Model of the Swedish Economy
My coauthor sent me some Swedish capital stock estimates for 1850-2000, which allow me to estimate the production function equation explaining economic output in terms of labor, capital, and energy, in addition to the energy cost ratio equation I estimated on its own before. Having two equations that share many of the same parameters makes it much easier to estimate those parameters accurately. I got some moderately sensible results pretty much straight away. But this is a pretty tricky econometric problem for the following reasons:
1. The production function we are using (CES function) is non-linear and non-linear econometrics is always harder than linear econometrics.
2. Variables like capital, GDP, and energy are "stochastically trending" i.e. they are random walks (now you know why this blog has the title it does :)). Econometrics with trending or random walk variables is tougher than with more classically behaved variables that just fluctuate around an average value.
3. The variables on the right hand side of our equations are almost certainly endogenous in the economic system. The capital stock is affected by the level of GDP and not just vice versa. Econometrics with endogenous variables is trickier too.
4. The rate of technological change has not necessarily been constant over the last 200 years. In fact it almost certainly hasn't. We need to deal with that too.
From the preliminary econometric results and simulations I've done in Excel, it seems that the elasticity of substitution between capital and energy is between 0.3 and 0.7. This is roughly the range that Koetse et al. found in their meta-analysis of the elasticity of substitution between capital and energy. The traditional Cobb-Douglas model, used very extensively in empirical and theoretical applications, instead assumes that it is 1.0, which would mean that energy availability isn't much of a constraint on economic output and growth. But it seems that it is.
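To give a flavor of problem 1, here is a minimal sketch of fitting a two-input CES function by nonlinear least squares on synthetic data. Everything here is illustrative - the data are invented, and `scipy.optimize.curve_fit` is just a stand-in, not our actual estimation procedure, which also has to confront points 2-4 above:

```python
import numpy as np
from scipy.optimize import curve_fit

def ces(X, A, delta, rho):
    # CES production function: Y = A * (delta*K^rho + (1-delta)*E^rho)^(1/rho)
    # The elasticity of substitution is sigma = 1/(1 - rho).
    K, E = X
    return A * (delta * K**rho + (1 - delta) * E**rho) ** (1 / rho)

rng = np.random.default_rng(0)
n = 150

# Illustrative capital and energy series that are random walks in logs
# ("stochastic trends"), loosely mimicking long-run macro data
K = np.exp(np.cumsum(rng.normal(0.02, 0.05, n)))
E = np.exp(np.cumsum(rng.normal(0.015, 0.05, n)))

# Generate output with rho = -1, i.e. a true sigma of 0.5, plus noise
Y = ces((K, E), 2.0, 0.6, -1.0) * np.exp(rng.normal(0.0, 0.02, n))

(A_hat, delta_hat, rho_hat), _ = curve_fit(ces, (K, E), Y, p0=[1.0, 0.5, -0.5])
sigma_hat = 1.0 / (1.0 - rho_hat)
print(f"estimated elasticity of substitution: {sigma_hat:.2f}")
```

With clean synthetic data like this, the fit should recover sigma near the true 0.5; the real difficulty, as the list above indicates, is that actual series are endogenous and stochastically trending, so simple nonlinear least squares is only a starting point.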
Tuesday, August 25, 2009
Collapse
I have been reading Collapse, Jared Diamond's account of the collapses of several past civilizations - Easter Island, the Maya, the Anasazi, and the Greenland Norse settlement prominent among them - and of environmental stresses and sustainability issues in modern societies. Included is some original research of his, with a coauthor, on the factors affecting success or failure in the Pacific Islands. He also discusses a few cultures that adapted and moved back from the brink, including Tokugawa Japan, the New Guinea Highlands, Tikopia, and medieval Iceland. In the case of Tikopia, success involved the wiping out of two of the clans by the one surviving clan, while in Iceland severe desertification occurred in the uplands before things stabilized. So success is relative.
My preconception was that he would be overly deterministic about the role of environmental degradation in these stories. But that isn't the case. In fact, for an economist things seem a bit too open ended. He tries to explain these examples by a five factor theory but I can summarize in fewer points, I think.
Societies tend to overshoot their carrying capacity either when they experience long periods of favorable climate (e.g. Greenland) or when they move into new areas whose carrying capacity they misperceive even in the short run (e.g. Iceland). In the latter case, environmental degradation results, causing a fall in carrying capacity. In the former, a change in climate for the worse causes the fall in carrying capacity. What happens next depends on the fragility of the environment and the rigidity of institutions. A more fragile environment (e.g. Easter Island compared to the islands that proved sustainable) or more rigid institutions increase the likelihood of collapse. For example, the Greenlanders seem, inexplicably, to have eaten no fish, and otherwise tried to maintain European-style agriculture rather than adopting ideas from the native Americans (they did hunt seals, but not all types). Rulers need to show their people that they can provide for them, both to legitimate their rule and to compete with rival rulers. It might make more sense to maintain the current system at continuing environmental cost until it finally collapses rather than admit that it has failed. At the same time, the temples (Maya) or statues (Easter Island) tend to get bigger and bigger.
We can certainly see the same symptoms in our world today. Rulers seek legitimacy by maintaining economic growth. There is a fear of accepting even small reductions in GDP in order to protect the climate. And conservative attitudes in institutions prevail. The Greenlanders didn't want to be like the Inuit, while conservative Americans today don't want to be like the French or the Swedes. As institutional economists noted long ago, technology changes faster than institutions do.
Monday, August 24, 2009
Presentation at ANZSEE Conference
My abstract was approved for the ANZSEE Conference in Darwin. If there are no hitches with the funding I hope to go and present. I've never been to the Northern Territory (or anywhere west of Melbourne in Australia for that matter) so that should be interesting.
Saturday, August 22, 2009
U.S. Electric Supply and Grid
Some great maps of U.S. electricity supply and the grid from NPR's website. The map above shows all power stations by size. Other maps show the share of different power sources by state (e.g. Vermont is the most nuclear state in the Union). Most importantly, the transmission lines are mostly not where the best locations for alternative energy are, with the exception of some areas of the southwest near Los Angeles. Not surprisingly, the latter are a hotbed of solar investment. Australia's situation is similar, from what I have heard.
Thursday, August 20, 2009
The Declining Relative Value of Energy
One of the little-known facts of energy economics is that, in the long-run data we have available, the value of energy relative to the value of output has fallen over time. The chart shows the data for Sweden; output here is gross output. Relative to GDP, the ratio was 1:1 or above in the early 19th century. That fact seems to rule out structural change as an explanation of the trend: agriculture today doesn't have such a high energy cost share. If the elasticity of substitution between energy and other inputs is one, then the cost share of energy in each industry should stay the same no matter how much technological change takes place. So the data suggest that the elasticity of substitution between energy and capital is less than one (if technological change has improved energy efficiency). That implies that there are limits to substitution between energy and capital, as most ecological economists, if not all economists, believe.
So I've been trying to get an estimate of the elasticity of substitution from this data, supplied by my co-author, Astrid Kander. That's not so easy, because we need to take into account the rate of technological change, and estimating the rate of technological change is always hard. That's because technological change is just everything we can't otherwise explain in a model, and when you throw a time trend or similar variable into a regression, the estimator doesn't know that the trend is supposed to represent technological change; it gets confused and comes up with nonsense :)
You have to give the model more information and so that is what we are going to have to do.
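The cost-share argument above can be illustrated with a small numerical sketch (the parameter values are invented for illustration): under a CES function with energy-augmenting technology, the competitive energy cost share falls as efficiency improves when the elasticity of substitution is below one, but stays constant in the Cobb-Douglas case.

```python
import numpy as np

def energy_cost_share(K, E, lam, delta, rho):
    # For Y = (delta*K^rho + (1-delta)*(lam*E)^rho)^(1/rho), the competitive
    # energy cost share E*(dY/dE)/Y simplifies to:
    num = (1 - delta) * (lam * E) ** rho
    return num / (delta * K ** rho + num)

K, E, delta = 1.0, 1.0, 0.6
lam = np.linspace(1.0, 10.0, 5)  # energy-augmenting technology improves tenfold

shares_low_sigma = energy_cost_share(K, E, lam, delta, -1.0)  # sigma = 0.5
shares_cobb = energy_cost_share(K, E, lam, delta, -1e-6)      # sigma ~ 1

print(shares_low_sigma)  # falls from 0.4 towards 0.06
print(shares_cobb)       # stays at roughly 0.4 = 1 - delta
```

So a declining energy cost share in the data, combined with energy-saving technological change, points toward an elasticity of substitution below one, which is the intuition behind the Swedish results.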
Wednesday, August 19, 2009
Energy Quality
Today I submitted another paper, this time on "energy quality" to Ecological Economics. The paper has been several years in the making. I've kept coming back and changing things, sometimes radically, until I finally came up with something that I thought was submittable. I also simultaneously submitted it to the Munich Personal RePEc Archive.
Energy quality is the idea that different fuels have different economic productivities. Each joule of electricity produces more additional output or utility than each joule of coal. This makes sense - consumers are willing to pay more for a joule of electricity (one watt-second) than for a joule's worth of coal. However, energy is usually just aggregated together according to the amount of heat energy available from each fuel. This assumes that the productivity of one joule is the same irrespective of which fuel that joule is associated with and that all fuels are infinitely substitutable one for the other. The latter is also clearly untrue. A rational consumer would only use the cheapest fuel if they were all perfect substitutes.
The question is then how to measure or model energy quality. We thought we'd figured this out back in 2000 in our paper in Ecological Economics. Just measure the quality of each fuel according to its (real) price which should be proportional to its marginal product. The quality of aggregate energy in the economy can then also be computed using standard indexation methods.
But since then, I've had increasing doubts that this is the only or the best way to measure energy quality. The obvious starting point is the literature on labor quality.
One approach is to treat the quality of each fuel as a coefficient attached to that energy type, whether in a production function, a demand function, etc. Unlike prices or marginal products, this concept of quality does not depend on the quantities of the other inputs used or on the amount of the fuel itself used. These coefficients might change over time as new ways of using energy are devised: electricity wasn't as useful before the invention of computers as it was afterwards. The problem is that very quickly you end up attributing all energy-augmenting technological change to changes in energy quality.
So the paper looks at some other approaches too. All have advantages and disadvantages, and, depending on the elasticity of substitution between fuels, they may not all be defined. In the special case of infinite substitutability, all the definitions are defined and produce exactly the same unambiguous result, though even then there is no obvious way to distinguish between technological change and increases in energy quality.
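The price-based aggregation from our 2000 paper can be sketched with a discrete Divisia (Törnqvist) index. The fuel quantities and prices below are invented for illustration; real prices would come from the data:

```python
import numpy as np

# Hypothetical quantities (joules) and real prices for two fuels over 4 periods;
# columns: coal, electricity. Prices proxy marginal products, as in the
# price-based approach described above.
quantities = np.array([[100.0, 10.0],
                       [ 95.0, 20.0],
                       [ 90.0, 35.0],
                       [ 85.0, 50.0]])
prices = np.array([[1.0, 5.0]] * 4)  # electricity valued 5x per joule

expenditure = quantities * prices
shares = expenditure / expenditure.sum(axis=1, keepdims=True)

# Tornqvist (discrete Divisia) index: chain log growth rates of each fuel,
# weighted by average cost shares in adjacent periods
dlog_q = np.diff(np.log(quantities), axis=0)
avg_shares = 0.5 * (shares[1:] + shares[:-1])
growth = (avg_shares * dlog_q).sum(axis=1)
index = np.exp(np.concatenate([[0.0], np.cumsum(growth)]))

raw_joules = quantities.sum(axis=1) / quantities[0].sum()
print(index)       # quality-adjusted aggregate energy
print(raw_joules)  # simple heat-content aggregate
```

Because the high-priced fuel gets a larger weight, the quality-adjusted aggregate grows faster than the raw joule total when consumption shifts toward electricity - exactly the effect that heat-content aggregation misses.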
Monday, August 17, 2009
Um?
What's the point of using carbon dioxide emissions to grow algae when we can just use the carbon dioxide in the atmosphere to grow plants? Maybe the plants can be grown with fewer other inputs if the carbon dioxide is concentrated? But if the algae are used as products, then no carbon is sequestered. So what is this guy "frustrated" about? I guess there may be reductions in other fossil fuel inputs that would be used in conventional agriculture, but an ETS or carbon tax provides the incentive to reduce the use of those inputs anyway. This raises another point: does direct sequestration of emissions make any sense in place of growing plants to absorb carbon from the atmosphere? Can it be cheaper?
In other news, it looks likely that the inefficient approach of mandating renewable energy will pass the Senate, while the more efficient (but not as efficient as it could be) ETS component has failed to do so. I'm all in favor of incentives or government programs for research and development into alternative energy, but mandating the use of a particular technology doesn't make sense. Under carbon pricing, whichever option is cheaper to cut emissions will be adopted first: if that is renewable energy, then a mandate is not needed, and if it is not, then the mandate is more costly than the pricing policy. Of course, this is not at all surprising.
Sunday, August 16, 2009
Crucial Assumptions
"All theory depends on assumptions which are not quite true. That is what makes it theory. The art of successful theorizing is to make the inevitable simplifying assumptions in such a way that the final results are not very sensitive. A "crucial" assumption is one on which the conclusions do depend sensitively, and it is important that crucial assumptions be reasonably realistic. When the results of a theory seem to flow specifically from a special crucial assumption, then if the assumption is dubious, the results are suspect."
(Robert M. Solow, "A Contribution to the Theory of Economic Growth", Quarterly Journal of Economics 70(1): 65-94, p. 65)
This is the opening paragraph of Solow's Nobel-Prize-winning paper on economic growth. His innovation was to replace Harrod and Domar's assumption that the aggregate production function was a Leontief function with the neoclassical assumption that labor and capital are substitutes. He then explored the implications of the model with a Cobb-Douglas function and with a CES function with elasticity of substitution greater than unity.
I've been working on the theory of growth, both for my Environmental Economics Research Hub project and for a longer-term project on the role of energy in economic growth (that was one of the themes of my PhD dissertation after all :)), and thinking about assumptions. I've been reading Galor and Weil's unified growth theory. It is an elegant model in which, before the industrial revolution, there is only one stable growth equilibrium, with low rates of technological change and education, and after it only a high-technological-change, high-education equilibrium. In the intervening period of the industrial revolution, both equilibria exist. The crucial assumption that leads to this result is that the rate of technological change is a rising function of the size of the population. This isn't an unreasonable assumption, but strangely it is the one thing that isn't clearly "micro-founded" in the model. The model also doesn't explain why the industrial revolution started in England. If population size is critical, why not China? Or even Italy?
The world's first iron bridge (1779) at Ironbridge. I visited there when I was 17 or 18.
Tuesday, August 11, 2009
Europe Research Links
Today I attended a workshop on research collaboration between Australia and Europe. My impression was that, in the areas I work in, it's probably pretty hard to get funded by the European Union. Ironically, it seemed that if you have specific expertise about Australia you are more likely to be able to join a funded project. Two-thirds of EU funding goes to FP7 (Framework Programme 7) "Cooperation" projects, which are large cross-country projects that were described as being similar to the ARC's Linkage Program. Other programs seemed to direct more attention to early-career researchers. Australian success in the ERC, the counterpart to the ARC's Discovery program, was apparently zero, as was Australian success in the social sciences and humanities area of "Cooperation".
Anyway, one easy thing to do is to nominate yourself as an international expert to assess grant proposals for the EU. It took me half an hour to submit my information on the CORDIS website. If selected they will pay you for your time and bring you to Brussels for a short period for the assessment panel. I thought it was worth a shot given the minimal upfront cost. If you are interested in more information on research collaboration with Europe visit the FEAST website, which is the Australian portal to everything European researchwise.
Sunday, August 9, 2009
Predicting Nobel Prize-Winners in Economics
I've been reading bits of "Lives of the Laureates" on and off recently. The book is based on a lecture series at Trinity University where Nobel laureates in economics are asked to describe their evolution as economists. Many of their lives do seem to have been "stochastic trends" :) Some, like Clive Granger, weren't even sure that they were ever economists. John Harsanyi left ANU because some people there didn't even know what game theory was (this was in the 1950s). Anyway, that got me thinking about who will win the prize in the future. Robert Barro is one economist who has been very highly cited but has not yet won the Nobel Prize. His top ten articles have each been cited more than 1000 times on Google Scholar. The same is true of Eugene Fama, but it is hard to imagine him winning the Nobel Prize in the wake of the GFC. Andrei Shleifer is still pretty young to win the Prize, though he is maybe the most cited economist of all. He won the John Bates Clark Medal, which is probably a good predictor. He also has some controversy associated with him regarding insider trading in Russian stocks.
You can easily generate more ideas from RePEc's ranking. But if your candidate does not have at least one article on Google Scholar with more than 1000 citations and their top ten papers do not each have hundreds of citations they probably don't have much chance. Actually there have been prediction markets in this, but currently I can't get anything at the website referred to.
Friday, August 7, 2009
Declining Abstract Views and Downloads Per Person (and per Paper) at RePEc
The chart shows monthly abstract views and paper downloads per registered member at RePEc. I can't go back any further as I only started collecting the necessary data in October 2004 and RePEc doesn't seem to publish a time series on the number of members. The number of abstract views and downloads per item has also declined. Hypotheses that could explain this trend:
1. There is less interest over time in economics papers.
2. An increasing proportion of the people signing up are junior academics with fewer papers to their credit.
3. There is more stuff online (relative to offline information) and so each available paper is getting less attention.
4. Higher-quality authors, working paper series, and journals were more likely to register for RePEc earlier in its history.
There isn't much support for 1, as total downloads and abstract views have risen over time. On 2, there was a fall in the number of abstracts per member until early 2006, since when the number has been stable. The number of downloadable papers per member has been pretty much constant over time, though the proportion of papers that is downloadable has increased. So there is limited support for this hypothesis. But a fall in papers per author could also support 4. And, as I mentioned, downloads per paper have also declined.
I suspect it's mainly a mixture of 3 and 4, but there isn't any way to check this with the publicly available data.
I'll be happy to send anyone who is interested my membership data, shown in this chart:
Thursday, August 6, 2009
Results of Emissions Poll
My poll on what global greenhouse gas emissions would be in 2050 is over and here are the results:
The final distribution is still somewhat bimodal. The only person who chose "no change" was me. I presume that there will be increased action to address climate change in the next 40 years, but, going by the precedent of the Kyoto Treaty, it won't be as effective as its proponents hope. Still, as most business-as-usual scenarios (apart from civilizational collapse) would see emissions increasing, and maybe doubling, the cuts that will be achieved relative to BAU are still very significant. OTOH, exactly "no change" has a probability of zero of occurring, but I didn't feel I had enough information to go for the slight-increase or slight-decrease categories.
To compute the mean value of the poll, I assigned the following values to the intervals:
100% or more = 125%, 50% to 100% = 75%, 25% to 50% = 37.5%, 0% to 25% = 12.5%, 0% to -25% = -12.5%, -25% to -50% = -37.5%, -50% to -100% = -75%
The mean is a 3.2% increase.
So opinion is fairly evenly divided. 13 voted for an increase and 12 for a decrease.
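For concreteness, the averaging procedure looks like this. The vote counts below are hypothetical, chosen only to match the 13-increase/12-decrease/1-no-change split rather than the actual bin-by-bin tallies, so the resulting mean differs from the 3.2% reported above:

```python
# Interval midpoints as assigned in the post, plus 0 for "no change"
midpoints = [125.0, 75.0, 37.5, 12.5, 0.0, -12.5, -37.5, -75.0]
# Hypothetical vote counts: 13 for an increase, 1 no change, 12 for a decrease
votes = [3, 4, 3, 3, 1, 4, 4, 4]

mean = sum(m * v for m, v in zip(midpoints, votes)) / sum(votes)
print(f"mean expected change in emissions: {mean:+.1f}%")  # +12.5% with these counts
```

The near-zero mean in the actual poll reflects the bimodal distribution: large expected increases and large expected decreases roughly cancel out.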
Wednesday, August 5, 2009
Community of Science
I just discovered the Community of Science database. The main reason to join is to get funding opportunity alerts. So far, I have found it to be more useful for Australia-based researchers than SPIN/SMARTS. You also get to post a profile, as in the link above. I don't know how useful that is - whether people use it to search for expertise - but it doesn't take much extra effort once you have set up a funding alert.
Tuesday, August 4, 2009
Journal of Economic Surveys
After a lightning-fast review and rejection (one very useful referee report and one unhelpful one) of my paper on meta-analysis of interfuel substitution at Energy Economics, I'm resubmitting it to the Journal of Economic Surveys. One of the editors is a coauthor of two of the (better) studies in my meta-analysis, and I noticed that Tom Stanley is on the editorial board. I've revised the presentation quite a bit based on the useful referee's comments, but the econometrics are unchanged. Hopefully, third time lucky.
Monday, August 3, 2009
Rank Inflation?
While looking for something else, I came across a report from DEEWR (Commonwealth Department of Education) on trends in employment in the higher education sector in Australia. Since 1999 there has been a nice increase in the number of academic staff (faculty) at Australian universities after a lengthy period of stagnation:
Currently 33% of staff are at level B (lecturer = assistant professor in the US), 19% are at level A (post-docs and associate lecturers), 23% are at level C (senior lecturer = associate professor in the US), and 24% are at levels D and E (associate professor and full professor). As I had suspected, the number of senior staff at levels D and E has risen since 1996. The average growth rate at ranks D and E has been 4.5% per year, vs. 1.9% at level C, 1.5% at level B, and 1.6% at level A. Is this a case of rank inflation (like grade inflation)? Or do more people now qualify for the higher ranks on the same basis as was used in the past?
Check out this paper for more information on the structure of the academic workforce in Australia (but not by rank).
Sunday, August 2, 2009
Pacific Decadal Oscillation
I just saw an interesting article about the Pacific Decadal Oscillation and the El Niño/La Niña cycle. If my understanding is correct, the PDO is much like a longer-wavelength version of El Niño/La Niña. The shorter El Niño waves are superimposed on the longer PDO waves:
This could be good news for our water situation here in Australia, though the PDO has been in its cold phase for a few years now and the situation isn't good. If that is the effect of climate change, then around 2050 is really not going to be good here in southern Australia, when the PDO is presumably back in a warm phase. Climate change predictions are for southern and western Australia (the Mediterranean zone) to get drier as the desert moves south. On the other hand, all the evidence I've seen indicates that during the ice ages this part of Australia was much drier than today, which points in the opposite direction, unless the Holocene has had optimal temperatures for rainfall and both hotter and colder climates are more arid. Anyway, with our dams at around 44% capacity, we can use all the help we can get.
Subscribe to:
Posts (Atom)