The Case For Adaptive Quizzing In Assessment

As teachers, we know all too well the value of good questioning when teaching new information to students. We adapt our questions well to draw the right information out from students during the “delivery” part of teaching. But how well do we adapt our questions when assessing our students after the delivery?

Kristian Still argues the case for adaptive quizzing in his latest piece for HWRK Magazine.

 

Without question, the most efficient schedule [for spaced learning] is an adaptive one, accounting for the learner’s rates of forgetting and prior knowledge.

Latimier et al., 2021: 980

At the beginning of the summer, I stumbled upon the work of Dr Svenja Heitmann. Three papers, in fact. In this series, Heitmann et al. (2018, 2021, 2022) investigated two mechanisms to ‘optimise’ practice quizzing (with note-taking as the comparison condition), starting in the laboratory and moving to the lecture hall with a field experiment involving 155 undergraduate pre-service teachers at Bielefeld University.

Dr Heitmann and her colleagues focused their attention on the adaptive (personalisation) mechanisms. Two adaptation models were investigated in the laboratory: performance-based and cognitive demand-based.

In the performance-based approach, performance on quiz questions was used as an indicator of retrieval success. In the cognitive demand-based approach, “perceived cognitive demand” when answering the quiz questions was used as an indicator of retrieval effort (Heitmann et al., 2022); in other words, how hard the questions felt to answer.

Both models led to performance benefits. Personalising the quizzing using perceived cognitive demand-based adaptations “substantially increased the quizzing effect” (Heitmann et al., 2018: 10). This mechanism was then applied in the field study.

The field study results led Heitmann et al. (2021: 603) to conclude that the benefits of practice quizzing “in authentic learning contexts are even greater when the quiz questions are adapted to learners’ state of knowledge”.

In addition to improved test performance on familiar questions, their research also provided further evidence for knowledge transfer, suggesting that practice quizzing is a suitable tool to foster meaningful learning.

As for learners’ achievement motivation (much like a character trait), the quizzing benefits were moderated by ‘hope of success’ scores but not by ‘fear of failure’ scores.

Their conclusion is clear in the title of the 2018 paper – “Testing Is More Desirable When It Is Adaptive and Still Desirable When Compared to Note-Taking.”

Hardly groundbreaking. But give me a moment — the benefits of testing led learners to achieve a higher test performance and, interestingly, “lower perceived cognitive demand during testing.”

What does that actually mean and why might it interest teachers?

The inference is that, in addition to knowing more, quizzing (retrieval practice) frees up cognitive capacity, or thinking space. As Dr Heitmann commented:

‘The post-test performance was better because their mental resources weren’t as exhausted in the learning phase.’ Dr Svenja Heitmann

The students’ ‘mental resources were less exhausted due to the adaptation of the test questions to their level of knowledge’. As a result, the students ‘profited from the freed-up capacity for the execution of beneficial learning processes.’ In simple terms, students had more resources left for processes that are beneficial for learning.

So, what can teachers take from Dr Heitmann’s research?

First, the benefits of quizzing over note-taking. Hopefully, you knew that already.

Second, the more a student knows going into an exam, the more cognitive capacity they will have to attend to the mechanics of that exam. Importantly, that is not exclusive to adaptive testing. Quizzing of any sort, ahead of exams, helps free up cognitive capacity, which can then be allocated to those exam mechanics. That can only be a good thing, right?

Third, perceived cognitive demand ratings (how difficult pupils find questions, rather than how many marks they were awarded) might be a more useful measure for personalisation, and for deciding whether a pupil should relearn or revise a topic area. If nothing else, it is a simple indicator to collect and consider alongside question outcomes.

What might this look like in practice?

End-of-term assessments provide excellent opportunities to explore the ‘perceived cognitive demand’ of the questions you set. First, very high ratings for perceived cognitive load would indicate that learners have not yet acquired the knowledge necessary to master a question. They might benefit more from quiz questions of lower complexity.

Similarly, very low cognitive load ratings would indicate that learners have already acquired the knowledge necessary to master a question and might benefit more from quiz questions of higher complexity.
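As a purely illustrative sketch, that adaptation rule could be expressed in a few lines of code. The 1–5 rating scale, the thresholds and the function name below are my own assumptions for the example, not anything taken from Heitmann et al.

```python
# Illustrative sketch of cognitive demand-based adaptation (hypothetical names and thresholds).
# A pupil rates how demanding each question felt on a 1-5 scale; very high ratings push the
# next question down a difficulty level, very low ratings push it up.

DIFFICULTIES = ["low", "medium", "high"]

def next_difficulty(current: str, demand_rating: int) -> str:
    """Return the difficulty of the next question, given the perceived demand rating (1-5)."""
    index = DIFFICULTIES.index(current)
    if demand_rating >= 4:          # felt very demanding: knowledge not yet secure
        index = max(index - 1, 0)   # step down to a less complex question
    elif demand_rating <= 2:        # felt very easy: knowledge already secure
        index = min(index + 1, len(DIFFICULTIES) - 1)  # step up in complexity
    return DIFFICULTIES[index]      # a middling rating keeps the difficulty unchanged

# Example: a pupil on medium-difficulty questions who rates a question 5 ("very hard")
# would be offered a low-difficulty question next.
print(next_difficulty("medium", 5))  # -> "low"
```

Note that the middle of the scale deliberately changes nothing: only clearly high or clearly low perceived demand moves a pupil between difficulty levels.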

Wisely, Heitmann et al. (2022) advise caution with any metacognitive judgement, due to the common biases and heuristics in learners’ self-assessment. A simple rating scale next to the question would suffice in this case.

I would also add, as my own position, that any assessment that includes metacognitive judgements (be that confidence or perceived cognitive load), together with feedback, promotes metacognitive accuracy. And metacognitive accuracy brings with it a crucial academic advantage.

Now, with these two pieces of data (the rating and the question outcome), there is plenty to discuss with your pupils: first, the perceived cognitive load ratings themselves; second, the difference between the rating and the performance.
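By way of illustration only, here is one way those two pieces of data could be lined up to spot the mismatches worth a conversation. The rating scale, the pass mark and the record layout are my own assumptions, not part of the research.

```python
# Hypothetical sketch: flag questions where the perceived-demand rating and the outcome
# disagree, as prompts for a metacognitive conversation with the pupil.

# Each record: (question id, perceived demand rating 1-5, marks awarded, marks available)
responses = [
    ("Q1", 1, 0, 2),  # rated very easy but scored 0 -> overconfident, worth discussing
    ("Q2", 5, 2, 2),  # rated very hard but full marks -> underconfident
    ("Q3", 3, 1, 2),  # rating and outcome broadly agree
]

for question, rating, marks, available in responses:
    succeeded = marks / available >= 0.5          # crude success criterion (an assumption)
    felt_easy = rating <= 2
    felt_hard = rating >= 4
    if felt_easy and not succeeded:
        print(f"{question}: felt easy but went wrong - discuss why it seemed secure")
    elif felt_hard and succeeded:
        print(f"{question}: felt hard but was answered well - discuss the hidden strength")
```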

Do we have to use software for adaptive quizzing?

There are plenty of digital platforms on the market using phrases like ‘AI’, ‘adaptive’ or ‘personalisation’. However, it does not have to be so. As Dr Heitmann implored:

Adaptive quizzing could just as easily be done with different folders containing differently difficult questions… and then the students use some kind of rating scale (maybe even smiley faces for the young students) to then choose the folder their next question would be coming from. There is still a whole lot of paper pencil schooling going on out there – and adaptive quizzing is available there too.

I asked Dr Heitmann for her professional thoughts on the benefits of having to make the perceived cognitive demand judgement itself. Are the performance gains partly a memory effect and partly a metacognitive one?

It’s not all about reflecting, I think that’s more of a nice side effect (to strengthen metacognition as you wrote). The adaptation is more focused on providing students with fitting questions when you do not have the resources to sit down with every single student to adapt the difficulty yourself according to your personal assessment that’s based on your interaction with that student… because no teacher teaching in regular schools has those resources!

Why has adaptive quizzing stayed in the shadows?

Dr Heitmann argues, “What’s been missing is informed teachers in classrooms, teaching with adaptive quizzing and teachers with a broader audience who can make adaptive quizzing better known. Teachers need to know that it’s a good idea to adapt questions.”

Our thanks to Dr Svenja Heitmann.

 

References:

Heitmann, S., Grund, A., Berthold, K., Fries, S., & Roelle, J. (2018). Testing is more desirable when it is adaptive and still desirable when compared to note-taking. Frontiers in Psychology, 9, Article 2596. https://doi.org/10.3389/fpsyg.2018.02596

Heitmann, S., Obergassel, N., Fries, S., Grund, A., Berthold, K., & Roelle, J. (2021). Adaptive practice quizzing in a university lecture: A pre-registered field experiment. Journal of Applied Research in Memory and Cognition, 10(4), 603–620. https://doi.org/10.1037/h0101865

Heitmann, S., Grund, A., Fries, S., Berthold, K., & Roelle, J. (2022). The quizzing effect depends on hope of success and can be optimized by cognitive load-based adaptation. Learning and Instruction, 77, 101526. https://doi.org/10.1016/j.learninstruc.2021.101526

Latimier, A., Peyre, H., & Ramus, F. (2021). A meta-analytic review of the benefit of spacing out retrieval practice episodes on retention. Educational Psychology Review, 33, 959–987. https://doi.org/10.1007/s10648-020-09572-8

Author

Kristian Still is Deputy Head Academic at Boundary Oak School in Fareham, and author of the soon-to-be-released Test-Enhanced Learning: A Practical Guide To Improving Academic Outcomes For All Students.
