How Should We Evaluate Engagement With Professional Learning?

By Nikki Sullivan

In my previous article, CPD and Protecting People’s Pie Charts (HWRK Magazine, Issue 27), I wrote about how we must ensure that our enthusiasm for staff professional development does not lead us to (1) seek false proxies for effective CPD and (2) consequently place undue burdens on staff, who are, ultimately, “humans first, professionals second” (Mary Myatt).

In this follow-up piece, I will be exploring some of the ways we might evaluate our CPD provision and its impact because, despite the challenges around this, our staff’s time is too precious, and CPD is too powerful an investment, not to engage in this ongoing review.


I must start with a caveat. One of the key messages that has come out of my reading and writing in this area is:

“Good evaluation does not need to be complex; what is necessary is good planning and paying attention to evaluation at the outset of the professional development program, not at the end.”

Guskey, quoted in Earley and Porritt

This article explores principles of evaluation which may prove helpful both retrospectively, when looking to refine our approaches, and at the start of a CPD-based implementation project.

The problems

When evaluating CPD, we face issues around correlation and causation, multiple inputs affecting singular outputs, and the intangibility of some of the impact measures we seek to evaluate.

Peps McCrea describes teaching as having a “fuzzy feedback loop”. Unlike in darts, the relationship between action (teaching) and impact (student learning) is unclear, which makes it harder to improve our practice. If we consider CPD as ‘teacher learning’, the same “fuzzy feedback loop” applies, so we are no further forward. A quiz to check for understanding might have its place as part of a CPD event, but it would be unwise to rely solely on such strategies.

Should we look to evaluate CPD in schools through student outcomes (e.g. SATs results or P8 scores), this issue is only exacerbated – the “fuzzy feedback” is multiplied. To extend the analogy: if we put in place CPD for 100 darts players, we can evaluate the impact (scores) of the refined action (throwing the dart). If we put in place CPD for 100 teachers, the causal connection is not as clear or as attributable. Even if a school has been cautious in implementing a single CPD refinement, it is rare that there is not a wider orchestra (or even cacophony) of implementations which could have affected students’ results.

We could look at staff retention as an easily quantifiable measure that evidence suggests is impacted by “the feeling of undertaking good professional development” (Coe et al.), but again, we face issues of sole attributability.

If we go from the problematic to the downright ugly, we might look to evaluate the impact of our CPD model by evaluating the quality of teachers. As has been highlighted multiple times over, this is foolish. As Dylan Wiliam states:

“Even if we combine classroom observations, measures of student progress and student surveys, the ratings of individual teachers are still not very accurate unless we use data over a number of years. In fact […] we would need to collect data on each teacher for eleven years.”

In addition, we don’t just want to know if something is having a positive impact – our evaluation needs to tell us what’s working, what’s not working, and why. Some of the above measures would yield data, some of it spurious, but may not provide insights into how to refine our approach.

The easy (easier)

Time

As shared recently in a blog post from Professor Rob Coe, “roughly 1% of an average teacher’s working time is spent in professional development (PD),” despite the evidence base around its positive, sustainable impact. Time spent on CPD is easy to measure – it may not be a measure of impact, but it can be an evaluative indicator.

Common approaches

This is not the article in which to get into the debate around teacher autonomy, but, for argument’s sake, let’s assume that there will be some things we want to be common within a school – for example, meeting and greeting students at the door, with a bell task on the board. Let’s say that some CPD time has been used to support staff in engaging with the thinking behind this practice, what it could look like in their subject or phase, and the mechanics in the classroom.

Here, we can be clear about what we want to measure; we can easily see whether it is happening and how quickly students begin work. And, although this is one very specific kind of CPD (because we are not just using CPD to train staff in a set of pre-determined or even co-constructed strategies), if we have decided that we value it, then we need to attend to it. The specificity we see here gives us a way into considering the evaluation of CPD.

Untangling the complexity

How much time is one thing. The effectiveness of the use of this time is something else altogether. As discussed in Part 1, we must always consider the “opportunity cost” (Dylan Wiliam). Consequently, evaluating our CPD offer isn’t only about evaluating its impact; it’s about evaluating the impact of each element in relation to one another. What is working better, what is working less well, and why?

What follows is as far as I have got in pulling my reading and thinking together.

1) Be specific about what we want to measure, both “macro and micro,” action and impact

“We do want small micro effects to cumulatively create a macro impact.”

Weston and Clay

As Weston and Clay discuss, sometimes we want to evaluate small changes which give us feedback about a specific element of CPD, and at other times we need to step back and consider, “How did we do overall?” The latter is in many ways the easier to answer and is a key part of a yearly CPD evaluation process. But we also need to shape this into what’s working, what’s not working, and why. Here is where we need greater specificity and, consequently, ongoing evaluation.

When considering the actions we wish to review, we need to recognise the multitude of components that constitute CPD – the forms, the mechanisms that make up these forms (Sims et al.), the content the forms of CPD serve, the logistics of this CPD in school… the list goes on. If we were to look at the “micro”, it might mean focusing our evaluation on a particular CPD mechanism or on the use of a specific time slot. Moving slightly up the scale, we might want to evaluate a single form of CPD.

When considering the impact we wish to review, although “direct CPD” (Weston and Clay) has improved student learning as its ultimate goal, we often need to go more “micro” in our evaluative measures, considering individual teachers, teams, culture, and students. Earley and Porritt discuss looking at the impact on products, processes and outcomes. Guskey (discussed in Bubb and Earley) explores five levels of impact: participants’ reactions; participants’ learning; organisation support and change; participants’ use of new knowledge and skills; and pupil learning outcomes. Ultimately, clarity is critical.

Are we looking at the impact on teacher understanding of spaced retrieval? Or the reduction in teacher workload? Or how much time staff have to collaborate? Or how quickly Year 9, Set 3 start work? As the TDA noted in their 2008 report, when evaluating impact, the closer we get to our ultimate goal of student learning, “the greater number of other variables come into play.” This is why, although it is pivotal to be clear on a specific intended impact on student learning, we must also consider the incremental impacts that will lead to this.

There will be times when it is the action we wish to evaluate – I want to know more about the impact of our introduction of peer observations. There will be times when it is an impact measure we wish to evaluate – I want to know more about the actions helping and hindering teacher motivation. This ‘either-end’ approach is an additional layer of stickiness, and it is why specificity at the outset of our evaluation is key.

2) Inform what we want to measure and any relevant ‘success criteria’ through engaging in best bets and external expertise

Because teaching has a fuzzy feedback loop, trial and error is an ineffective route to refinement; the same, therefore, can be said for CPD. These challenges are the very reason it is pivotal to engage with ‘best bets’ and ensure our CPD planning, including our evaluation measures, is evidence-informed. Whether it be Zoe and Mark Enser, the EEF or Shaun Allison (all listed in the bibliography), engaging in this reading first makes it more likely that we will choose a ‘worthwhile’ measure (whilst, of course, also improving the quality of what we implement).

This engagement in the literature around successful CPD also enables us to create greater detail, where relevant, around ‘success criteria’ or WAGOLL (what a good one looks like). For example, when evaluating a particular form of CPD, is there sufficient clarity around its ‘active ingredients’? If looking to evaluate the culture of CPD in our schools, have we engaged with the ‘School Environment and Leadership: Evidence Review’ from the EBE to help us determine what criteria within this broader measure we should focus our attention on?

3) Engage in a range of qualitative and quantitative methods, considering patterns across groups, and take a baseline

“Establishing the current practice or baseline is vital to help colleagues articulate the quality and depth of the subsequent impact on adult practice and young people’s learning.”

Porritt, 2009, quoted in Earley and Porritt

The greater the complexity, or the more “macro” what we want to measure, the more likely it is that we will need multiple sources of information to form our insights.

The likelihood is that we will be using a combination of the following:

Staff voice (deliberately at the top of the list!)
Student voice (carefully!)
Walkthroughs
Student work
Internal data
Student outcomes
Observation of, and resources from, CPD activities

Pivotal here is that we remember that we are evaluating our CPD provision, and therefore the resulting actions are for us as senior and middle leaders. In a cohesive CPD model, teachers will already know what they are working on – this review is for us to determine what we can do better, feeding into future CPD plans.

Many of these strategies involve asking staff, which takes up their time. Asking is invaluable, but we need to ensure it is manageable. At our school, we utilise ‘60-second staff voice’ using Microsoft Forms, usually doing no more than two per half-term. We also put quantitative questions first and make the qualitative questions optional. This is developing work but has already afforded us some valuable insights.

Additionally, both quantitative and qualitative methods need to be designed in such a way as to give us insights into patterns, e.g. across teacher career stages or student groups. And, for both methods, we take a baseline.

Quantitative measures can, most obviously, give us insights into growth or decline, e.g. the number of minutes lost to low-level disruption in a classroom. We should aim to be specific in our goal – not just looking at whether measures are going up or down, but by how much. If we are looking at responses to the survey statement, “Leaders make appropriate provision for my professional development,” then ‘an increase’ is not sufficiently clear. How much of an increase? This will depend heavily on our baseline measure (consider a school starting at 57% agreement versus one at 97%).
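For those who like to see the arithmetic made concrete, below is a minimal sketch (in Python, purely illustrative) of how baseline and follow-up survey responses might be compared across career stages. The file names and columns (a ‘career_stage’ column and a 0/1 ‘agree’ flag) are hypothetical assumptions for the sake of the example, not a description of our system or of any particular survey tool’s output.

```python
# Illustrative sketch: compare a baseline and a follow-up staff-voice
# survey, broken down by career stage. File and column names are
# hypothetical -- in practice these might come from a survey export.
import pandas as pd

def pct_agree(df: pd.DataFrame) -> pd.Series:
    """Percentage of staff agreeing with the statement, per career stage."""
    return df.groupby("career_stage")["agree"].mean() * 100

baseline = pd.read_csv("staff_voice_baseline.csv")    # hypothetical file
follow_up = pd.read_csv("staff_voice_follow_up.csv")  # hypothetical file

summary = pd.DataFrame({
    "baseline_%": pct_agree(baseline),
    "follow_up_%": pct_agree(follow_up),
})

# Percentage-point change matters more than 'up or down':
# a 5pp gain from 57% tells a different story to 1pp from 97%.
summary["change_pp"] = summary["follow_up_%"] - summary["baseline_%"]

# Remaining headroom makes ceiling effects visible (a school already
# at 97% has almost nothing left to gain on this measure).
summary["headroom_pp"] = 100 - summary["baseline_%"]

print(summary.round(1))
```

The point here is not the tooling but the habit: interpret any change against the baseline and against the headroom left above it, and look for patterns across groups rather than a single whole-school figure.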

Getting close to the action is important. If we want to evaluate a particular form of CPD, we should get in and see it. If done supportively and formatively, it need not feel ‘top-down’ and can be part of a school’s ‘improve not prove’ culture, in the same way that we visit lessons. Where we attend sessions, because we have co-constructed or shared our success criteria, we can see what is working well and how we might refine approaches. And, in the same way that the person observing a lesson is often the one who gets the most out of the process, we can enable other staff members involved in delivering CPD to participate in this activity too.

4) Systemise and yet be responsive

Key dates and milestones for evaluation and attending to what we value need to be scheduled in advance – schools are too busy, and roles too multifaceted, for it to be otherwise. Building a calendar ensures we start from a position of clarity and cohesion. Where appropriate, choose dates linked with data drops, allowing systems and structures to work together, considering where one activity can serve multiple purposes. Without this efficiency, we run the risk of spending too much time on evaluation and leaving insufficient time for the action it is intended to lead to. Make it so you don’t have to remember to remember.

What we want to measure and, subsequently, the appropriate evaluation methods, will determine when this evaluation should occur. When we are looking to set up our evaluation for a new refinement to our CPD model, setting a realistic timeline is key – the amount of time it might take to see an impact on the use of wait time in the classroom versus teacher motivation will be different.

Allow for flex where needed – if we get to that point in the calendar and there was meant to be some staff voice, but we know now is not the time, remember we made the calendar in the first place – the systems we build serve us, not the other way around.

5) Speak to staff

“If schools are going to transform themselves into places where teachers can thrive, then we’ll need […] the collective knowledge available…”

David Didau

In theory, this section could sit in with point 3 about quantitative and qualitative sources. But, for me, it is too important not to sit separately. Staff sit at the centre of our CPD model – the opinions and insights of the professionals in our schools are invaluable.

Once-a-year surveys or Ofsted staff surveys don’t do the job. Firstly, they are too far apart and too all-encompassing. In addition, especially in schools with a strong culture, colleagues know these surveys are high-stakes and, therefore, might not share what they think could be refined, choosing to focus solely on the positive. We need our own internal, low-stakes, formative staff voice systems. Anonymity can be used to ensure staff share their ‘whole truth’. Do we want feedback which helps us improve? Or positivity that makes us feel good but has no impact on CPD and school improvement?

Passing conversations where staff share their positive or negative feedback about CPD can be easy to dismiss because they feel immeasurable. But these conversations are indicators nevertheless. What we cannot do is take this positive or negative feedback wholesale – we must be careful not to just listen to the loudest voices. When we have invested time and effort into a project, it can be easy to be over the moon when staff come and tell us their tales of reading, collaboration, and subsequent classroom development. And yes, these are wonderful moments for which I am endlessly grateful. But we must remember the quieter voices, especially as these voices might have concerns, questions or brilliant suggestions. Part of our ongoing evaluation needs to be about seeking these voices out so that we can better invest in and protect all our staff… and their pie charts.

6) Support staff in their own evaluation

Both Jim Knight and David Didau talk about the sense and power of supporting teachers in evaluating whether they have met the goals they have determined for themselves. There can be parameters around this – for example, a personal goal might still need to link to broader school or team improvement priorities, and, depending on career stage, staff may require more or less support in determining a goal that is high-leverage. Nevertheless, for me, this is a pivotal part of CPD evaluation.

Evaluating CPD through this individual lens isn’t logistically easy, but it has the benefit of being able to give us lots of information at the ‘micro’ level. When embedded in a strong culture of professional learning, any associated documentation serves a reflective as opposed to ‘logging that I’ve done it’ purpose, again providing us with a great source of qualitative evaluation.

7) External tools

“We don’t know what we don’t know” and, therefore, engaging in evaluation without being sufficiently outward-facing can lull us into a false sense of security. As Tracy O’Brien states, “Good schools benchmark themselves against best practice elsewhere.”

Alongside looking at what other schools do in order to help us in our evaluation, there are an increasing number of external tools that schools can access.

To close…

I will finish with a final quotation from Earley and Porritt, which provides a valuable framework of questions for engaging in that evaluative process at the start of a CPD project:

  • What is impact evaluation? Why should we do it?

  • What is your current practice, your baseline? What is the evidence to show this?

  • For whom do you want professional development to make a difference?

  • By when?

  • Does it make a positive difference? How much of a difference?

  • How do we know? What is the evidence of impact?

  • How can we evaluate impact simply and practically?

Earley and Porritt

I am not writing this article from a position of thinking that we have it ‘nailed on’ when it comes to evaluating CPD in our school. In fact, it was wanting to reflect upon this thorny issue myself that led me to write it. One thing that has struck me is that if we are trying to evaluate CPD and to know what has had what impact, then this takes time – we can’t just throw ourselves in. We need to be more careful, more deliberate. We need to engage in thorough pre-evaluation, reading, baselining and so on – whilst, yes, considering the opportunity cost of such activity and striking a balance between growth and inertia. That being said, if this makes us move slightly slower, I often think this is no bad thing.

“We can never know enough to make perfect decisions.”

David Didau

And although, as leaders, we may feel all too acutely the accountability of our role, we need to be strong enough not to pass this on as unintelligent accountability through the gathering of data which does not give us valuable information and wastes valuable time.

We are building this CPD provision for our colleagues in school, so the best bet might be to start and come back to them. This is what a ‘person-centred CPD’ model is all about – not only building based on our students and our staff but reflecting alongside them too.


Bibliography

Allison, S. (2014) Perfect Teacher-Led CPD. United Kingdom: Crown House Publishing. 

Bubb, S., Earley, P. (2010) How to… evaluate impact. Available at: https://www.teachingtimes.com/professional-development-evaluate-impact/

Coe, R. (2023) Why are we holding out for more professional development time (even though school leaders say they can’t manage it)? Evidence Based Education. Available at: https://evidencebased.education/why-are-we-holding-out-for-more-professional-development-time-even-though-school-leaders-say-they-cant-manage-it/

Coe, R., et al. (2022) A model for school environment and leadership (School environment and leadership: Evidence review). Evidence Based Education. Available at: https://evidencebased.education/school-environment-and-leadership-evidence-review

Didau, D. (2020) Intelligent Accountability: Creating the Conditions for Teachers to Thrive. United Kingdom: John Catt Educational Limited.

Earley, P., Porritt, V. (2010) Effective Practices in Continuing Professional Development – What Works? Available at: https://www.teachingtimes.com/tda-effective-practices/

Earley, P., Porritt, V. (2010) Effective Practices in Continuing Professional Development – Evaluating Impact. Available at: https://www.teachingtimes.com/pdt-effective-practices-professional-development/

Sims, S., et al. (2021) Effective Professional Development: Guidance Report. EEF. Available at: https://educationendowmentfoundation.org.uk/education-evidence/guidance-reports/effective-professional-development

Enser, Z., Enser, M. (2021) The CPD Curriculum: Creating Conditions for Growth. United Kingdom: Crown House Publishing. 

McCrea, P. (2023) Expert Teaching. Spain: Independently published.

Myatt, M. (n.d.) Humans first, professionals second. Available at: https://www.marymyatt.com/blog/humans-first-professionals-second

O’Brien, T. (2022) School Self-Review – A Sensible Approach: How to know and tell the story of your school. John Catt.

Training and Development Agency for Schools (2007) Impact evaluation of CPD. Available at: ioe.ac.uk

Weston, D., Clay, B. (2018) Unleashing Great Teaching: The Secrets to the Most Effective Teacher Development. United Kingdom: Routledge.

Wiliam, D. (2018) Creating the Schools Our Children Need: Why What We’re Doing Now Won’t Help Much (And What We Can Do Instead). West Palm Beach, Florida: Learning Sciences International. 

Wiliam, D. (2021) [Twitter] 11 March. Available at: https://twitter.com/dylanwiliam/status/1370099308532531200

Author

Nikki is a Deputy Headteacher working in Bradford. With experience in both pastoral and academic senior leadership in the UK and Malaysia, Nikki has led on policy development, CPD, and building a team of Faculty Research Leads. She blogs at https://lovetotalktandl.wordpress.com/
