ON LINE OPINION - Australia's e-journal of social and political debate
ERA: The dark matter in astronomically good research ranking?

By Aynsley Kellow - posted Tuesday, 1 March 2011


Tasmania's astronomers, I knew, were well regarded, and with a 4 ('above world standard') they did quite well, as I expected. Or did they?

I was surprised to learn that a rating of 4 in fact placed them below the national average. Astronomy achieved a quite remarkable national average score of 4.2. Of the thirteen institutions rated, six were rated at 5 ('well above world standard') and five at 4. Whereas political science, like most other social sciences, seemed to have the kind of bell curve one would expect, with a distribution about a mean that (surprisingly to me, the chair of an IPSA Research Committee) was below world standard, astronomy seemed to be, with only two exceptions, better than world standard. How could this be? The answer throws into question the whole ERA exercise as a means of comparing disciplines, because the result for astronomy seems to be largely an artefact of the methodology employed.

Part of the problem seems to be the peculiarly Australian penchant for emphasising research income (an input measure) when assessing research quality (an output measure). There is a strong case for using multiple indicators in assessing research performance. To assess the quality of research papers according to the quality of the journals in which they appear would come dangerously close to committing the ecological fallacy, so other measures are valuable, but there seems to be an absence in the international literature of reports using research income as an indicator of either research 'performance' or 'quality' (Martin, 1996). There seems to be a singular fixation in Australia with this measure, which commits the fallacy, well known to students of policy analysis, of confusing an input measure with outputs. Research income might well be important in providing research infrastructure funding to support research, but if we are interested in performance in terms of either effectiveness or efficiency we must focus on outputs and their relationship to inputs. We certainly would not regard a car as excellent simply because it cost a lot to buy and to operate.


Indeed, using research income introduces an acknowledged bias that is clearly demonstrated in astronomy. Large telescopes are expensive instruments, and research quality (unsurprisingly) is highly correlated with telescope size. Martin (1996: 351) emphasises that size-adjusted indicators are vital if smaller research units are to be compared fairly with larger ones, yet most scientometric studies rely solely on size-dependent indicators such as publication or citation totals, unadjusted even for number of staff or income. The basic point, that large budgets allow more researchers to be employed and that more researchers tend to produce more papers, seems to have been overlooked. The ERA rewards large budgets and fails to adjust for size; its results therefore inevitably conflate size and quality to some extent.
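The difference between size-dependent and size-adjusted indicators can be shown with a small sketch. The unit names and figures below are invented for illustration; they are not drawn from the ERA data:

```python
# Two hypothetical research units: one large, one small but more
# productive per researcher. All figures are invented.
units = {
    "Unit A": {"staff": 40, "papers": 200, "citations": 1600},
    "Unit B": {"staff": 10, "papers": 70, "citations": 630},
}

for name, u in units.items():
    papers_per_staff = u["papers"] / u["staff"]        # size-adjusted output
    citations_per_paper = u["citations"] / u["papers"]  # size-adjusted impact
    print(f"{name}: {u['papers']} papers total, "
          f"{papers_per_staff:.1f} papers/staff, "
          f"{citations_per_paper:.1f} citations/paper")
```

On raw totals Unit A dominates (200 papers against 70), but per staff member Unit B is more productive (7.0 against 5.0 papers), and its citations per paper are also higher (9.0 against 8.0). A ranking built on totals alone, like one built on research income, rewards sheer size.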

While multiple indicators are recommended, those most often suggested are things like numbers of publications or citations, peer evaluation or estimates of the quality, importance or impact of publications (perhaps assessed by peer review). Nobody seems to suggest counting inputs.

There are also acknowledged biases in any attempt to assess quality in astronomical research. One is a language bias: a requirement to publish in English advantages anglophones, who tend not to read or cite papers in other languages, and citation databases provide uneven coverage of foreign-language journals (Sánchez and Benn, 2004: 445). Another bias stems from the tendency of each community to over-cite its own results, so papers from large countries receive more citations than those from small countries. These biases are thought to favour citation of papers from the large North American and UK astronomy communities, but some also favour Australian researchers.

One possible reason for astronomy doing so well in a research quality assessment is that astronomers have long experience with the task, with published research on measuring performance going back more than 25 years (Martin and Irvine, 1983). Certainly, astronomers in other countries seem to be particularly adept at demonstrating their claims to research pre-eminence. While one might gain the impression from the ERA that Australian astronomy must lead the world, this claim would be disputed by the Canadians, who consider that they themselves occupy that position.

One of their number, Dennis Crabtree (2009: 1), recently claimed that

Their [Science Citation Index] August, 2005 report on Science in Canada, which covered papers published in a ten-year plus ten month period, January 1994 - October 31, 2004, showed that Canada ranked #1 in the world in average citations per paper in the "Space Science" field. An examination of the journals included in the Space Science field shows that the field is dominated by astronomy.


Perhaps Australia overtook Canada by the time the ERA took place? No. Crabtree (2009: 2) thinks not:

Canadian astronomy's excellence on the world stage continues. ScienceWatch's report on Science in Canada from May 31, 2009 indicates that of all science fields, astronomy had the highest impact relative to the world. Canadian astronomy papers published between 2004 and 2008 were cited 44% above the world average. For comparison, astronomy papers from the UK and France, for a similar period, were cited 41% and 21% above the world average.


A complete version of this paper, with references, can be downloaded by clicking here.




About the Author

Aynsley Kellow is Professor of Government at the University of Tasmania, and Chair of the Politics and Business Research Committee of the International Political Science Association. He has published widely on environment and resource issues and is completing a project on the international organisation of the mining industry. He holds shares in mining stocks, both directly and through his superannuation.

Other articles by this Author

All articles by Aynsley Kellow

This work is licensed under a Creative Commons License.
