That's the message I get from this interview in The Australian with the head of the ARC, Aidan Byrne (formerly a science dean at ANU). The assumption of many in the sector, myself included, was that the Australian research assessment exercise, ERA, and the funding attached to it would evolve in a way that generally followed UK practice, with something of a time lag. In the UK, much more money is tied to the REF, the funding ratio associated with the three highest rankings of departments is 9:3:1, and case studies are used heavily to assess broader impact. Prof. Byrne argues instead that case studies should be used sparingly, if at all, to measure impact; that not much money should be tied to ERA outcomes; and that the funding ratio should be flatter.

On Wednesday I saw a presentation by Tim Cahill of the ARC on the ERA 2012 process and outcomes. One key finding was that for the citation-based disciplines (most STEM disciplines, though not math or computer science, plus psychology) there is only a weak correlation between the ERA ranks assigned to universities and their citation performance relative to the benchmarks. A lot of subjectivity still seems to enter the rankings made by the ERA committees. Given that the exercise only counts the number of citations per paper, and not where papers were cited, I guess that makes sense.

So should Australian universities pay as much attention to ERA as they have been? ANU, for example, has tied indicators in its strategic plan to the number of disciplines that achieve given ERA rankings by 2020. And if I were the minister, looking for budget cuts, would I want to continue with ERA on this basis?
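To get a feel for how much a 9:3:1 weighting concentrates funding compared with a flatter ratio, here is a minimal back-of-the-envelope sketch. The department counts and the flatter 3:2:1 alternative are invented for illustration; the REF's actual allocation formula is more involved than this.

```python
# Hypothetical illustration: funding concentration under a steep 9:3:1
# per-department weighting versus a flatter (invented) 3:2:1 weighting.

def funding_shares(counts, weights):
    """Share of the funding pool going to each rank, given the number of
    departments at each rank and the per-department weight for that rank."""
    totals = [c * w for c, w in zip(counts, weights)]
    pool = sum(totals)
    return [t / pool for t in totals]

# Suppose, purely for illustration, 10 departments at each of the top three ranks.
counts = [10, 10, 10]

steep = funding_shares(counts, [9, 3, 1])  # UK-style 9:3:1
flat = funding_shares(counts, [3, 2, 1])   # a flatter, hypothetical alternative

print([round(s, 3) for s in steep])  # top rank takes about 69% of the pool
print([round(s, 3) for s in flat])   # top rank takes 50% of the pool
```

Even with equal numbers of departments at each rank, the steep ratio sends roughly 9/13 of the pool to the top rank, which is why a flatter ratio matters so much for the universities just below the top band.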