Many of us in the social sciences and humanities were a little apprehensive about the Excellence in Research for Australia (ERA) exercise. Some of that apprehension came from the very idea of assessing research quality, which was relatively novel to us, but we accepted that we live in an age where accountability for public expenditure is a recurring and necessary theme.
Part of our apprehension also stemmed from the shift from the previous Research Quality Framework (RQF) system to ERA. We had got with the program on RQF, and worked hard to do as well as possible. I took part in a national consultation process to make sure the process was able to capture the nuances of various disciplines, and took part in numerous internal trials. We used search engines like Google and Google Scholar to make up for the deficiencies of the Social Sciences Citation Index, which, of course, does not cover citations in books - the highest quality research output to which we (perhaps economists excepted) all aspire. Natural scientists compare strange things like h-indices that remain largely mysterious to us.
Both systems eschewed any attempt to assess the quality of research at the individual level - something quite curious, considering that we spend much of our professional lives assessing the individual efforts of our students.
Search engines also helped us build evidence for portfolios demonstrating that elusive concept of impact: the use of one's research by policy-makers and in curricula around the world. It was an affirming experience to find references in policy documents and reading lists at universities around the world, where we had no idea our work was being used.
The natural scientists, we heard, didn't like RQF, and disciplines like astronomy particularly didn't like the impact element. This we understood. Their research makes few direct contributions to the lives of those taxed to support the enterprise, but we all support such pure science as underpinning much more applied and useful knowledge. Astronomers, though, have for years had to argue their case with policy-makers and the public. I can recall, many years ago, an astronomer doing his best to maximise favourable public perception in a television interview about a new discovery, being asked by the cub reporter, 'But what are the benefits of this for the public?'. The best he could do was to stammer that 'this was research at the frontiers of science.' Quite true, but it seemed a bit lame.
So we had invested a lot of effort in RQF. Then the government changed, and ERA appeared, with many changes that seemed to reflect the concerns of the natural scientists, and especially that putative concern of astronomers over demonstrating impact. The astronomers certainly seemed to be in the ascendancy, with one of their number, Penny Sackett, appointed as the new Chief Scientist.
ERA shifted the goal posts. Especially important for us were the removal of impact, which we had initially feared but come to quite like, and (significantly) the removal of the disciplinary rankings of book publishers. We were particularly concerned with the latter, because (as stated earlier) the publication of the research monograph is the summit for most social science disciplines, the research output by which we judge ourselves and our peers. We heard that the Australian Research Council (ARC), the custodian of ERA, considered that ranking publishers might have left it open to legal action. This seemed surprising. The American Political Science Association, for example, ranks publishers (Cambridge University Press is number one) without being subjected to litigation from those it considers to be third rate.
So, given these apprehensions, we awaited the announcement of the results with some trepidation, to see how we would measure up. My own School did well enough, not too far from expectations. In both 'Political Science' and 'Policy and Administration' we were scored at 3 – 'at world standard'. We had expected a 4 for 'Policy and Administration', in which we have particular strength, but we understood this was a first, inevitably inexact attempt to apply the system. Our 3 for Political Science was about par for my expectations. I (correctly) thought there would be only one 5, and that it would be ANU. It has a large number of high quality political scientists, many of whom have been full-time researchers, and it received much more generous research funding than most institutions, so it would be surprising if it did not do well.
Strangely, in 'Policy and Administration' there was only one 4 and no 5s. The 4 went to UNSW - initially surprising to those of us who are general policy and administration scholars, but understandable given that their very good social policy research centre would lie in this Field of Research (FOR) classification.
My expectations were quite realistic, rather than expressions of hope, because we knew with some accuracy where others placed us. In a global ranking of political science departments conducted earlier by a scholar at the London School of Economics (on the basis of journal publication), the University of Tasmania was ranked 167 - quite a good result globally, and about eighth nationally (Hix, 2004). As a mid-sized department, we did a little better when the scores were adjusted on an FTE basis, moving to 147 globally. Size matters: Western Australia and Murdoch, with small, productive departments, did well in absolute terms but moved up substantially in rankings adjusted for size. A more recent evaluation within Australia (Sharman and Weller, 2009), using journals and ARC grants, gave a similar picture in terms of relativities, though no size adjustment was attempted. There were some differences, but one could think of reasons for those changes in terms of personnel changes and the like.
After looking to see how one was ranked, thoughts turned naturally to other disciplines, first within one's own institution and then more broadly. The areas ranked at 5 at the University of Tasmania contained no real surprises, but the omission of the geologists working in ore body research (an ARC centre of excellence with a genuine world reputation) was, for me, a surprise. I suspect they were disadvantaged by not having a sufficiently narrow FOR code, whereas the similarly excellent separation scientists, with a neat fit to 'Analytical Chemistry', did expectedly well. One limitation of ERA would seem to be comparisons between disciplines with multiple FOR codes and those whole disciplines (like political science) that have only one.
Political Science as a single FOR category must rank the American Political Science Review as the top journal, yet this is not an international journal, with more than 90 percent of the authors it publishes coming from US institutions. The sheer number of political scientists in the US, and the maintenance of institutional subscriptions because it is number one, keep it at the top, despite the fact that few of us read it. In accepting the Mattei Dogan Prize, the closest thing to a Nobel in political science, at the World Congress in 2009, Philippe Schmitter announced proudly that he had never once published in the APSR - to thunderous applause. The FOR categories determine much of the ERA result, because they determine which journals can be considered excellent: a single FOR code covers political theory, international relations, comparative politics, political sociology, voting behaviour, and so on.
A complete version of this paper, with references, can be downloaded by clicking here.