AI in Education: Positives and Problems

By Jonathan O’Donnell

In November 2022, even Computer Science teachers well-versed in Artificial Intelligence (AI) and Machine Learning were caught by surprise. ChatGPT emerged: a publicly accessible, free Large Language Model (LLM) with a simple user interface that produced remarkably consistent, almost unbelievably good outputs. In education, no one saw it coming.

Just six months later, the 2023 Pearson School Report showed that 45% of educators expect to see increased use of AI in schools over the next decade. Now, more than a year on from ChatGPT’s debut, that percentage will no doubt be higher. And with so many generative AI tools now available, the landscape can be confusing.

It’s crucial to understand that AI is trained on a vast amount of data. While some companies promise that the data used to train their models is up to date and trustworthy, under scrutiny – even with only a basic understanding of a topic – it becomes clear that the data can be outdated, or that the AI has processed it incorrectly and generated a hallucination.

This is a key reason why, when I teach AI to students and teachers, they all tend to experience the same stages:

Amazement: The initial reaction is often one of awe. Students and teachers alike might exclaim, “This technology is incredible!” There’s a belief that AI will significantly ease educators’ workload, reducing the time spent on planning, marking, and report writing, and even transforming assessment.

Doubt: However, this enthusiasm is soon tempered by frustration. Common remarks include, “It’s not doing what I asked it to”, or “Why is it repeating itself?”. Users might also find that the AI fails to complete all assigned tasks, or that they cannot verify some of the information it provides.

Dismissal: Frustration can lead to outright rejection. Comments like “This is useless!” or “I can do it faster myself” are common, as users grapple with the limitations of the technology.

Clarity: Eventually, a balanced understanding emerges. Users recognise that while AI might be inefficient at certain tasks, it excels in others. They learn to adjust their approach, realising, for example, that while AI may not be adept at creating lesson plans, it can suggest potential activities. This stage marks an evolution in understanding, as users see AI as a tool with specific strengths and weaknesses, rather than a catch-all solution.

These stages, in fact, cycle with each new use case and tool encountered. This technology provokes strong emotions – excitement, frustration, and everything in between – which is why it’s so important to look at both the possibilities and the drawbacks of AI in education.

Digital inequalities

Training an effective AI model incurs costs of thousands or even millions of pounds, and each user interaction with the model costs the company money. Consequently, the majority of AI tools are paid services. This means schools’ budgets could limit access to the most effective tools and create inequalities between the private and state sectors. The impact could be more pronounced for students from low-income families, who may lack access to devices at home or to AI-driven revision and learning tools. Students who have more frequent access to devices at school are also at an advantage, as they can leverage AI tools and assessments more effectively. Addressing the digital divide should now, more than ever, be a high priority for everyone involved in education.

Supporting students with additional needs

While access to AI technology has the potential to widen certain gaps, it can also be employed to narrow others, particularly in providing exceptional support for learners with special educational needs and disabilities (SEND) and English as an additional language (EAL). For instance, one tool I’ve used with students arriving in the UK from Ukraine with little to no English is Microsoft PowerPoint Live, which creates live, translated transcripts. Teachers speak into a microphone during lessons, and students see a real-time translation of the teacher’s words in their own language on a device.

By providing students with the script and resources and allowing them to pose further questions to an AI about the material – simplifying meanings, providing examples, and so on – we can offer even more support. Access to a live script is also a remarkable way to assist our hearing-impaired students. Companies like Be My Eyes are using GPT-4 to assist visually impaired users: through the app, users can capture an image and the model describes its content, promoting independence for students and reducing their dependence on others to explain experiment results, chart descriptions, or textbook images.

Workload

A significant advantage of using AI in education is its capacity to save teachers’ time, aiding them in completing the multitude of tasks required of them.

When it comes to marking, there are tools for some subjects that can generate exam questions, provide marks, and offer feedback. However, these tools often come at a cost and require students to type their answers into a computer. In secondary schools, most students predominantly write in books, so we have yet to see a significant reduction in marking workload.

The game-changer we’re waiting for is the effective integration of Optical Character Recognition (OCR) with AI. This technology would allow teachers to take a picture of students’ handwritten work and convert it into digital text. While my experiences with current OCR software have been disappointing, once AI can reliably read students’ handwriting, it could revolutionise marking. Imagine an app where you select a student, take a picture of their work, and the software flags spelling and grammatical errors, generates custom feedback, marks answers, provides insights into the student’s understanding, and records everything. Furthermore, the app could analyse this data to suggest resources, activities, or retrieval questions based on common misconceptions or knowledge gaps.

Training AI to mark complex answers requires extensive effort, involving detailed instructions and numerous marked examples with justifications. While time-consuming initially, this investment can save considerable time in the long run, especially for frequently used questions.
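
To make this concrete, the setup described above – detailed instructions, a mark scheme, and marked examples with justifications – can be assembled into a single prompt. Below is a minimal sketch in Python; the question, mark scheme, and exemplar answers are invented for illustration, and the function name is hypothetical rather than any particular tool’s API.

```python
# Minimal sketch: assembling a few-shot marking prompt for an LLM.
# The question, mark scheme, and exemplars below are invented examples.

def build_marking_prompt(question, mark_scheme, exemplars, student_answer):
    """Combine instructions, the mark scheme, and marked examples
    (with justifications) into a single prompt for an LLM marker."""
    lines = [
        "You are marking a student's answer. Follow the mark scheme exactly.",
        f"Question: {question}",
        "Mark scheme:",
    ]
    lines += [f"- {point}" for point in mark_scheme]
    lines.append("Marked examples:")
    for answer, mark, justification in exemplars:
        lines.append(f"Answer: {answer}")
        lines.append(f"Mark: {mark} -- {justification}")
    lines.append(f"Now mark this answer: {student_answer}")
    lines.append("Give a mark and a one-sentence justification.")
    return "\n".join(lines)

prompt = build_marking_prompt(
    question="State one benefit of compression. [1 mark]",
    mark_scheme=["1 mark: smaller file size OR faster transmission"],
    exemplars=[
        ("Files take up less space", 1, "matches 'smaller file size'"),
        ("It makes files better", 0, "too vague to credit"),
    ],
    student_answer="The file is quicker to send over a network",
)
print(prompt)
```

The upfront cost is writing the mark scheme and exemplars once; after that, only the final student answer changes per use.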

Planning

AI can definitely support planning, yet it’s important to understand its limitations. I often ask my Computer Science trainees to list every variable that goes into a lesson plan. It might seem straightforward, with constants like the subject knowledge to be taught. However, during my visits to 30 secondary schools observing Computer Science lessons, I found few constants. Thus, asking AI to generate an hour-long lesson plan for a specific year group is a tall order. Sure, AI can produce a lesson, but it’s unlikely to be effective without extensive input on various variables such as lesson length, specific pedagogical practices, students’ prior knowledge and understanding, available resources, students’ needs (without providing personal data), common misconceptions, and required assessment for learning (AfL).
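
To illustrate how many variables that input amounts to, here is one way such context might be packaged before prompting an AI. This is a hypothetical sketch – the field names and values are illustrative, not a recommended template.

```python
# Hypothetical sketch: the kind of context a lesson-plan prompt needs
# before an LLM can produce anything usable. All values are illustrative.

lesson_context = {
    "lesson_length_minutes": 60,
    "year_group": "Year 8",
    "topic": "Binary representation of images",
    "prior_knowledge": "Can convert denary to binary up to 8 bits",
    "pedagogy": "I do / we do / you do, with paired programming",
    "resources": "One laptop per pair, mini whiteboards",
    "common_misconceptions": "Confusing colour depth with resolution",
    "afl": "Hinge question mid-lesson, exit ticket at the end",
}

def build_lesson_prompt(ctx):
    """Fold the context dictionary into a single prompt string,
    one labelled line per variable."""
    header = "Plan a lesson using ALL of the following constraints:\n"
    body = "\n".join(f"- {key.replace('_', ' ')}: {value}"
                     for key, value in ctx.items())
    return header + body

print(build_lesson_prompt(lesson_context))
```

Note that even this list omits variables a teacher holds tacitly – and, crucially, no personal student data appears anywhere in it.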

But this misses the point: AI is an assistive tool. I encourage trainees to use AI to spark ideas by seeking task suggestions, selecting one, and refining it. This approach opens a vast creative stream, but it’s important to be wary of discredited pedagogical practices, such as learning styles, when using AI in this capacity.

What next?

Before November 2022, I would have confidently provided predictions about the future of educational technology. However, now, I write with a degree of hesitation, aware that everything mentioned could be outdated in just a month’s time. There’s an undeniable sense that something significant is on the horizon. Some companies are already utilising it, and it won’t be long before we see it in schools: AI working across networks, systems, and multiple applications.

There are, however, significant concerns regarding e-safety threats that students and staff might face. The ease of access to deepfake technology, which allows for the digital replacement of faces in photos or videos, is alarming. The likelihood of cyberattacks, especially more sophisticated spear-phishing attacks, is increasing. Therefore, teaching students and staff how to use AI technology safely, legally, and ethically is imperative. More opportunities to support students’ GenAI literacy, such as Pearson’s new EPQ:AI, are most welcome.

The edtech landscape is evolving at an unprecedented pace, and while this technology brings new challenges and complexities, it also offers immense potential to transform the way we teach and learn. AI is set to play a pivotal role in our educational infrastructure. As we navigate this new era, our focus must remain on harnessing AI’s capabilities to enhance educational experiences while vigilantly safeguarding our students and staff against its potential pitfalls.

Jonathan O’Donnell is the Senior Computer Science Consultant and AI Lead at the Harris Federation and has taught IT and computing for over 10 years. 
