Using A Data-Driven Strategy To Raise Attainment

By Andy McHugh

Great schools use data to sustain excellent results and raise attainment further. But with so much data available, there can be a lot of “noise”, as the psychologist Daniel Kahneman puts it. So, why does data matter? Which data should we be paying attention to, and why does some data matter more than other data? And how can we tell whether that data is reliable? These questions are important for senior leaders to grapple with, but it isn’t solely their job. Middle leaders are responsible for this too, and they are often in a much better position to test that data and use it effectively to drive improvement in their own area of influence.

Let’s take the example of a department whose results are ‘average’: Progress 8 (P8) scores hover around 0.0 and are broadly in line with national and exam board results. The middle leader responsible may see those results and think, “Well, my department is nowhere near the worst in the school. What we’re doing is working, so I’ll just keep things ticking over and not change anything. I’ll hope for the best, do what we’ve always done, and our results will most likely stay the same next year. We might even do a bit better, because we have one student whose two older siblings outperformed their predicted grades. I will change some of the PowerPoints though, because some of them have videos that are out of date.”

We’ve all met that middle leader before, and with workload issues cited as a significant driver of the recruitment and retention crisis in schools, we can empathise with their view, up to a point. But they are playing a very, very risky game.

Firstly, they are acting with complacency, assuming that everything will go well without any real evidence that it will. This is a huge risk, and one that “Good” schools can be particularly prone to taking.

Secondly, they are hoping for the best rather than ensuring that it happens. Placing all of your hopes on the future performance of a student, based on how their siblings performed at an earlier time, is like betting on a horse just because its siblings raced well last year. And even if the students themselves had done well in previous assessments, past performance is no guarantee of future success.

Thirdly, the actions they are taking are unlikely to be an effective or efficient way to spend time (our most precious resource) in pursuit of raising attainment. You wouldn’t wash your car to make it go faster; you’d check whether all the working parts were running well and prioritise improving those instead. Similarly, surface-level resource development is easy, makes us feel good because we’ve ‘done a thing’, and makes everything look new and sparkly. But it doesn’t follow that results will improve as a consequence. There is probably something that runs much deeper having a bigger impact on student performance.

So, what should this middle leader do instead? It’s relatively straightforward: look at the data. It doesn’t lie. It points out inconvenient truths. And it can help to identify the areas we need to address as a priority. But how do you go about using data to raise attainment? First of all, you need a clear plan. And to create that plan, you need to answer some important questions.


Five important questions for middle leaders seeking to raise attainment

1. What does your attainment data tell you about the different assessment objectives?

Are students competent at presenting knowledge but weak at analysing it? Can they explain reasons well, yet fail to make evaluative judgements? Do they lack the skills of inference or deduction, and if so, which? Can they explain why one mathematical method is preferable to another in solving a problem? (For a sketch of how this kind of breakdown might be automated, see the example after this list.)

2. In which areas of the curriculum do students show the most depth of knowledge?

Are all curriculum topics equally well understood? Do students prefer some topics to others? Are they taught these topics by specialists (e.g. is physics taught by a biologist, and how far does this matter)? Does performance in different topics correspond to the curriculum time allocated, the time of year the topic is taught, or the pedagogical strategies used?

3. How often and how well do you check for understanding during lessons?

Do all students show you their answers, or do you just take a sample? Do you assume that because one student ‘got it’, they all did? Do some students opt out of answering, and do they get away with it? Do you check again later on? When?

4. How responsive is your teaching?

Do you simply create your resources and then teach them regardless of which class is in front of you? Do you differentiate, and if so, why and how far? Do you ask everyone the same questions, or do you tailor them? What are your back-up questions for when students can’t answer your initial ones, and do you pre-plan them? When students show how well they understand, do you track that data and use it to feed forward into future lessons?

5. How consistent is performance across staff?

Do some staff’s classes routinely attain higher grades than others’? Does this depend on the topics taught? Do some staff only perform well when teaching ‘top set’ students? Do all staff use the same curriculum materials, pedagogical strategies and homework tasks? Where are some staff going the extra mile?
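
Much of this analysis can be done in a spreadsheet, but it can also be scripted. Here is a minimal sketch in Python, assuming a mark-book export with hypothetical column names (student, teacher, topic, assessment_objective, marks, max_marks); your own management information system will export something different, so treat the file and column names as placeholders.

import pandas as pd

# A minimal sketch, not a finished tool. It assumes a mark-book export
# with hypothetical columns: student, teacher, topic,
# assessment_objective, marks, max_marks.
results = pd.read_csv("mock_results.csv")

# Convert raw marks to a percentage of the marks available, so that
# objectives and topics with different totals can be compared fairly.
results["pct"] = results["marks"] / results["max_marks"] * 100

# Question 1: how does attainment vary by assessment objective?
by_objective = results.groupby("assessment_objective")["pct"].mean().sort_values()
print(by_objective)

# Question 5: how consistent is performance across staff, per topic?
by_teacher = results.pivot_table(
    index="teacher", columns="topic", values="pct", aggfunc="mean"
)
print(by_teacher.round(1))

The point of converting to percentages first is that it stops an objective worth 4 marks being drowned out by one worth 20, so the comparison reflects skill, not weighting.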


If you can answer those five questions, then you can begin to make inferences about where your priorities lie.

If students are performing well for teachers who routinely use mini-whiteboards to check for understanding, then this could be something you roll out across the department, perhaps asking those teachers to deliver the CPD.

If your highest-performing students are outperforming everyone else on a particular assessment objective, then take samples of their work to see what they are including that other students aren’t. Those things then need to be taught explicitly, so that all students gain those extra marks.
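
A short follow-on sketch, again using the same hypothetical mark-book columns as the example above, shows one way to spot those objectives: compare the top quartile of students with everyone else and rank the gaps.

import pandas as pd

# A follow-on sketch, with the same hypothetical columns as before.
results = pd.read_csv("mock_results.csv")
results["pct"] = results["marks"] / results["max_marks"] * 100

# Each student's overall average, then split off the top quartile.
totals = results.groupby("student")["pct"].mean()
top_students = totals[totals >= totals.quantile(0.75)].index
is_top = results["student"].isin(top_students)

# Mean percentage per assessment objective: top quartile vs the rest.
top_avg = results[is_top].groupby("assessment_objective")["pct"].mean()
rest_avg = results[~is_top].groupby("assessment_objective")["pct"].mean()
gap = (top_avg - rest_avg).sort_values(ascending=False)

# The objectives at the top of this list are where your strongest
# students pull furthest ahead, and where sampling their work to see
# what they do differently is most likely to pay off.
print(gap)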

Now, over to you. Time to gather that data and make some practical decisions. Raising attainment by the end of next year is a noble long-term goal, but raising it in period 1 tomorrow is better, and more straightforward than you might think, once you have data to light the way forward.


You can read more articles by Andy McHugh here.

Author

Editor of HWRK Magazine, Andy is a teacher, Head of RE and Senior Examiner who loves nothing more than a good debate.
