The limits of NAPLAN

Starting today, students around Australia are sitting the NAPLAN test, a three-day series of examinations intended to assess students’ general levels of literacy and numeracy. With the tests comes the usual argument about their validity and the potential damage they cause.

The arguments on both sides contain a fair amount of rhetoric that sometimes goes a bit far, and when you push your argument too far on one part of an issue, the overall credibility of your position suffers. Or, worse still, those less informed actually believe your hyperbolic rhetoric, and the gap between reality and public opinion grows that little bit wider.

What follows is my attempt at a rational discussion about the issues associated with NAPLAN. Apologies in advance if you were hoping for hyperbolic rhetoric.

First of all, as the Head Teacher of a high school English faculty, I find that the tests have value for what we do. They provide an objective, standardized assessment of various aspects of our students’ literacy skills. When planning future curriculum, the results help us identify which aspects of literacy to focus on next, which students may need specialist help in the form of intensive skill development, and which students are significantly above the average of their peers and may benefit from extension work. The tests and their results are, however, a beginning and middle point of the teaching and learning process that teachers engage in.

The conflict over the tests comes from the perception that they represent the end of the teaching process: that rather than serving as a guideline for future development, the results represent a definitive indicator of the success or failure of an education process. This is exacerbated when both state and federal governments talk about using results from tests like NAPLAN as the justification for managing school funding, or even individual teachers’ bonuses and salaries.

Simply put, the tests are too limited in scope to be a fair assessment of the many and varied things that a school does, and certainly too narrow to be used to make large-scale policy decisions.

For starters, the tests focus explicitly on literacy and numeracy. There is no arguing that these are fundamental skills and that standards should be set to ensure that students are ready to enter society and the workforce with a baseline of skills needed to function effectively.

But who is responsible for teaching literacy and numeracy? English and maths teachers, right? Well, in NSW high school syllabi, literacy is identified as a cross-curricular priority, meaning that every teacher is expected to take responsibility for teaching basic literacy skills in the context of their own subject. The same is true of basic numeracy: as an English teacher I engage students in activities such as plotting graphs, calculating ratios, and determining averages, for example the average syllable length of words in a piece of writing as an indicator of the overall complexity of its language.

On the surface, if everyone is expected to be teaching literacy, that might seem like an argument in favour of literacy being an indicator of overall school success. The matter is complicated, however, by the fact that in high schools, even in the various English syllabi, basic literacy skills are not an explicit teaching outcome that teachers are required to focus on. There are some outcomes that require students to understand the appropriate use of language for purpose and audience, and others that could be interpreted as relating to functional literacy, but none that are explicit. Keep in mind at this point that under NSW law, teachers are required to plan and program effectively to the outcomes of the syllabus, and at present in NSW many of the skills and abilities that are the focus of NAPLAN tests, such as spelling, functional grammar, and vocabulary, are firmly the province of primary school syllabi. The danger of putting undue emphasis on NAPLAN results is that teachers are put in a position of conflict between their legal obligation to address the outcomes of the syllabus and the narrow focus of the NAPLAN tests.

This is where the argument about narrowing the curriculum comes in. In high school English classes, if a teacher were to focus exclusively on ensuring the functional literacy skills of their students, that additional time would ultimately come at the expense of time otherwise spent on English syllabus outcomes, such as learning to adapt concepts between different forms of expression (writing poems based on visual images, say, or adapting novels into films) or understanding the relationship between a text’s socio-historical context and its language forms and features.

So if teachers are to be paid bonuses based on NAPLAN results, who gets the bonus when a student improves? Does the English teacher get all the glory or blame? Or does the school recognize the efforts made by the PE department to focus on literacy in their exploration of the rules of team sport? Ultimately it would require someone to investigate school processes and make a subjective determination as to where, and to whom, responsibility for student achievement should be credited. But then we are no longer using NAPLAN results as an objective measure of success; we are instead using an investigatory process to make a determination.

I am not trying to argue that such an investigatory process would not be possible, although it would need to be carefully managed to ensure it does not turn into a ‘witch hunt’. What I am arguing is that no one at any level of government is talking seriously about it either. At present, government discussion about teacher bonuses largely comes down to ‘give principals money, let them decide’, usually with a vague reference to test results included in the comment. There is hardly a detailed proposal available for public debate, and certainly no public discourse about efforts to maintain fairness or equity, and that only fuels the conflict further. Ambiguity in announcements of such sweeping changes to school management and teacher pay leaves those opposed to NAPLAN to assume the worst, based on the poor experiences of teaching colleagues in other countries.

I want to take a moment here to address primary schools as well. So far it might be reasonable to read this argument and conclude that primary school teachers could effectively be judged by NAPLAN results because, after all, one teacher is with their class every hour of every day. Things are never so black and white.

NAPLAN tests are sat early in term 2 of years 3, 5, 7, and 9. That means the teacher of a class sitting the exams has only been with that class for around 14 weeks of the year. The class will have already had a full prior year of education, or in the case of year 3, three years of education, usually with different teachers to the one who takes them to the exam. If education is intended to be an ongoing process of skill development, then who deserves the credit for a student’s ability level at a particular moment in time? It could be argued that the teacher of a year 4 class has a greater claim to the improvements students show when they sit the NAPLAN test in year 5. Which is not to say that a year 5 teacher couldn’t make significant improvements in a short period of time! But to make a fair assessment of the impact of a teacher’s practices on student outcomes, you would again need to look deeper than just the breakdown of NAPLAN results. And again we come to the same problem that arises when trying to use test results not as an indicator and guideline for future planning, but as a standalone measure of success or failure.

Unless, of course, you aren’t interested in making a fair assessment of a teacher’s practice, and have another goal in mind.

As well as the complex relationship between the multiple teachers involved in a single student’s education, we also have to consider the fact that literacy and numeracy are not the only things a school is responsible for.

Consider physical fitness and health. Physical education and health are mandatory at school until the end of year 10, and sport until the end of year 11. Physical fitness is considered a priority issue for schools and their students. The ‘childhood obesity epidemic’ is a popular topic in news media, mentioned even more frequently than average literacy rates around the country, so evidently it is a high priority for our society.

So where is the national fitness testing regime? Why are there no national standards for students’ knowledge of healthy eating and the links between diet, exercise and long-term health outcomes? If a school is to be judged on its successes in core areas of student achievement, surely the students’ capacity to live a healthy life should count for something?

A commonly encountered response to this idea is that kids play sport on weekends and that parents have a responsibility for their child’s fitness and health beyond what happens in school. But if this is true of fitness and health, why is it not true of literacy and numeracy? Whichever side of the issue you fall on, there is a significant disparity in how the philosophy is applied. As it currently stands, it seems the only measure of school business the federal government is interested in is literacy and numeracy. At least our students, as young cardiac patients, will be able to read their insurance contracts and add up their mounting medical bills.

Consider, then, the effect of narrowing the curriculum if a school’s success is measured entirely in terms of literacy and numeracy. Last year, in a web-cast presentation by Diane Ravitch, author of ‘The Death and Life of the Great American School System’, I heard of examples of schools in the U.S. whose funding was so dependent on results in literacy and numeracy testing that physical education had been reduced to students being given an aerobics DVD and told to keep a log of the hours they spent following it. This may sound extreme, but it is an example of the possible long-term effects of measuring the success of an entire school by such a narrow band of criteria, especially when, in NSW, the curriculum available to students is so much broader than just literacy and numeracy.

Ironically, those responsible for NAPLAN have, in a sense, preemptively agreed with me. In an article in today’s Sydney Morning Herald, ACARA chairman Barry McGaw described NAPLAN as ‘(taking) a few hours over three days every second year from years 3 to 9’. The test, in short, is a very small thing that measures only a small section of what schools do and what they are accountable for.

This is not to say that there is no room for discussion of ways to structure teacher accountability within the school system, nor am I arguing wholesale against standardized tests such as NAPLAN. But as a measure of the success or failure of a school, a school system, or even an individual teacher, tests like NAPLAN cannot be used as sole sources of data. The test is just too limited.
