
The flawed assumptions underpinning NAPLAN

By John Töns - posted Wednesday, 13 May 2015

We are again embarking on the ritual of testing children's literacy and numeracy skills. The data gained from the NAPLAN tests is useful for policy makers, giving an insight into how the system as a whole is performing. However, the debate in the media, within schools, and among parents and politicians indicates that the results are being misunderstood.

NAPLAN is a standardised test. All that means is that if you test a thousand people you will be able to plot the results on a bell-shaped curve. Roughly 10% will sit at the top and 10% at the bottom, a further 15% will sit just below the top and another 15% just above the bottom, with the remaining 50% forming the 'bell' of the bell-shaped curve.
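The mechanics of that banding can be sketched in a few lines of Python. The scores below are invented for illustration; the point is that the bands are defined purely by rank, so their sizes never change no matter how well the whole cohort performs.

```python
import random

random.seed(1)

def rank_bands(scores):
    """Assign the bands described above purely by rank:
    bottom 10%, next 15%, middle 50%, next 15%, top 10%."""
    n = len(scores)
    labels = (["bottom 10%"] * (n // 10) + ["next 15%"] * (n * 15 // 100) +
              ["middle 50%"] * (n // 2) + ["next 15%"] * (n * 15 // 100) +
              ["top 10%"] * (n // 10))
    return dict(zip(sorted(scores), labels))

# Hypothetical raw scores for 1,000 test-takers.
cohort = [random.gauss(500, 100) for _ in range(1000)]
by_band = rank_bands(cohort)

# Even if every student improves by 50 points, the band proportions
# are identical -- only the cut-off scores move.
better = rank_bands([s + 50 for s in cohort])
```

Whatever the raw marks, exactly 100 students land in the top band and 100 in the bottom band: the curve guarantees it.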

Given that we are testing virtually the entire Australian school population, the results will be an accurate reflection of how the system as a whole is performing. But they cannot and should not be used as a measure of an individual child's performance. Nor is it necessarily a concern if a particular school or state finds itself at the bottom of the pile.


This may seem counterintuitive, but it is nonetheless accurate.

Let us start with the easier of the two myths to debunk. Your state or school has come bottom in all the NAPLAN categories – disaster! There is no doubt that we should take a close look at the results and see if there is a problem. The first thing to remember is that because it is a standardised test, the results will always be plotted on the bell-shaped curve. Technically, this means that there is no pass or fail; it is merely a comparative assessment. To give a better idea of how flawed representing data in this manner can be, we need to go back to the 1960s.

In South Australia, students sitting for year 11 public exams were given a mark from 1 to 6. You received a 1 if you were in the top 10% in the state, a 2 if you were in the next 15%, and so on. This worked well in subjects like English, where there were enough students that one could be reasonably confident that if you got a 1 then on that particular day you were in the top 10%. However, there was a major problem with subjects with very small candidatures. Chinese had about 10 students sitting for the examination. The examiners were pleased with the standard that year; all ten students received marks over 90% – yet they were plotted on the same bell-shaped curve and received anything from a 1 to a 6, depending on where their result sat on the curve. Clearly the problem was that their final report did not accurately reflect their performance.
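The distortion in the Chinese class can be made concrete with a small sketch. The marks, and the 10/15/25/25/15/10 band widths, are illustrative assumptions (the article only specifies the first two bands), but the effect is the same: grades are handed out by rank alone.

```python
def curve_grade(mark, cohort):
    """Grade 1-6 by position in the cohort alone (the old SA scheme;
    band widths of 10/15/25/25/15/10 percent are assumed here)."""
    rank = sorted(cohort, reverse=True).index(mark) / len(cohort)  # 0.0 = best
    for cutoff, grade in [(0.10, 1), (0.25, 2), (0.50, 3), (0.75, 4), (0.90, 5)]:
        if rank < cutoff:
            return grade
    return 6

# Ten hypothetical candidates, every one of them above 90%.
marks = [98, 97, 96, 95, 94, 93, 93, 92, 91, 90]
grades = [curve_grade(m, marks) for m in marks]

# A uniformly strong cohort is still spread across the whole 1-6 range.
assert min(marks) >= 90 and 1 in grades and 6 in grades
```

The student on 90% gets the same grade 6 as a genuinely failing candidate in a large subject would.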

If we apply the same reasoning to NAPLAN, we can see that our first priority needs to be whether or not students are actually meeting some benchmark of performance. Suppose your school came top in Maths, but in that particular year overall competence in numeracy was well down; coming top is then really nothing to write home about. The NAPLAN website has this to say about the results:

Both NAPLAN and NAP sample assessment results are reported on scales which demonstrate how students have performed compared to established standards. Assessment scales also allow achievement to be mapped as students (sic) progress through schooling. It is important to note that the tests involved in the NAP are not pass/fail tests.

This raises at least two questions: how have these standards been established, and who has decided that these standards are appropriate? My concern with the established 'standards' is that they may well be a great deal lower than they should be. Perhaps I am being unduly cynical, but education is one of the biggest government outlays. Set the standards low enough that most students can comfortably meet them and you provide a justification for curtailing investment in education. On the other side of the equation are those who argue that the standards are far too high. It does not really matter which side of the fence you are on; the point is surely that there is no objective way of establishing what the standards ought to be.


The second point to note is that NAPLAN is not a pass/fail test; it simply describes where people sit on that bell-shaped curve. But that is precisely why you cannot get too upset about the results – if your state happens to be in the bottom percentile, this does not mean that it is underperforming; it could simply be that everyone is doing really well – just like our Chinese students!

The second myth is harder to dispel. Teachers, parents and commentators tend to use the results as indicators of individual student performance. When the results are published, it can seem as though they make a meaningful statement about a child's performance. Firstly, the margin of error in any standardised test is such that the result is close to meaningless when applied to a particular child. At best it gives a rough indicator; teachers and students should be guided by the child's day-to-day performance in class. There is a real danger when the test is used as a means of trumping the child's performance in class.
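The margin-of-error point can be illustrated with hypothetical numbers. NAPLAN reports results as scale scores, and any such score carries measurement error; the score of 480 and the standard error of 25 below are assumptions for illustration, not published figures.

```python
# Hypothetical scale score and standard error of measurement (SEM).
score = 480
sem = 25

# Approximate 90% confidence interval under a normal assumption
# (plus or minus 1.645 standard errors).
low = score - 1.645 * sem
high = score + 1.645 * sem

# The child's "true" score could plausibly sit anywhere in a band
# over 80 points wide -- far too coarse to overrule classroom evidence.
interval_width = high - low
```

Under these assumptions the interval runs from roughly 439 to 521, which is wide enough to straddle several reporting bands.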

Unfortunately, we do tend to read reports of parents regarding the NAPLAN results as more accurate than what the school might say about their child. Consider this example. Your child has done poorly on the NAPLAN test. Your child's teacher reassures you that your child is competent but has difficulty discriminating between addition, multiplication and division signs, so that when asked for the sum of 2 and 3 she gives six instead of five. There is nothing wrong with her cognitive ability, but there could well be a problem with her eyesight. This is why good teachers invest a great deal of time in error analysis. Anyone can mark a sum right or wrong, but it takes a good teacher to understand why the mistake was made.
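A crude version of that error analysis can even be automated. The helper below is a hypothetical sketch, not a real marking tool: for a wrong answer, it checks whether the answer matches a different operation on the same operands.

```python
import operator

# Operations a sign might be misread as (hypothetical, for illustration).
OPS = {"+": operator.add, "-": operator.sub, "x": operator.mul}

def diagnose(a, b, intended, answer):
    """Return the operations the child's answer is actually consistent with."""
    return [sym for sym, fn in OPS.items()
            if fn(a, b) == answer and sym != intended]

# Asked for 2 + 3, the child answers 6: consistent with multiplying.
assert diagnose(2, 3, "+", 6) == ["x"]
# Asked for 2 + 3, the child answers 5: no misread detected.
assert diagnose(2, 3, "+", 5) == []
```

A pattern of answers consistent with the wrong operator points to a sign-reading problem, not a numeracy problem, which is exactly the distinction the raw test score cannot make.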

Some will regard this as a trivial concern. I spent a number of years teaching in the Foundation Programme at Flinders University, a year-long course designed for adults who had not completed secondary education but who wanted to undertake tertiary education. Every year I encountered students who had been dismissed as stupid, lazy or incompetent and, as a result, had dropped out of school at the first opportunity. Eventually they decided that the label did not apply to them and had another go. All of these students eventually graduated – one of them has completed a master's degree. I do not want NAPLAN to become another reason for some teachers to dismiss some of their charges as uneducable; teachers have a responsibility to all of their charges.

Secondly, we need to get back to the reasons why the South Australian Government abandoned the reporting of year 11 results based on where students sat on the bell-shaped curve. It was not just our Chinese students who were disadvantaged by that style of reporting. In subjects like English and Maths, where there were sufficient candidates to argue that the position on the curve was somehow meaningful, it also created problems. Teachers argued successfully that reporting examination results in this manner did not give an accurate reflection of the quality of student performance. Maths teachers argued that in some years there were only a handful of candidates who performed at a level that merited a top grade. English teachers argued that there were instances where exceptional performances were not given the credit they were due.

Is NAPLAN a waste of time and resources? I think not, but like any tool it needs to be used for the purpose it was intended. One of the problems that has bedevilled education policy makers is how to allocate scarce resources. For example, in the eighties and nineties there were a number of Commonwealth programmes designed to address disadvantage. Schools were given additional funding based on perceived areas of disadvantage: ethnicity, gender, geographic isolation, disability and Aboriginality. The problem was that some very wealthy and successful schools attracted funding – there was simply no useful, objective means to determine where funds were most needed. NAPLAN has addressed that aspect, but in the process it has created another problem – that of using the results inappropriately.

About the Author

John Töns is President of the Zero Carbon Network, a network established to promote clear thinking about the issues associated with climate change. In addition to operating the only zero carbon boarding kennels in South Australia, he is also completing a PhD at Flinders University in the area of Global Justice. John is a founding member of a new political party, Stop Population Growth Now.

Other articles by this Author

All articles by John Töns

This work is licensed under a Creative Commons License.
