How to Choose Safe and Trustworthy AI Tools for the Classroom
Dr Jon Chippindall
The start of this academic year has brought more than the usual cycle of timetables and new cohorts. Schools in England are now operating under updated Keeping Children Safe in Education (KCSIE) guidance for 2025, alongside the requirements of the Online Safety Act. Both signal a shift in how digital risks are framed, with artificial intelligence (AI) now firmly on the radar.
AI (particularly generative AI) has quickly moved from a novelty to a practical tool across education. It offers tangible benefits: streamlining administrative tasks, supporting lesson planning and personalising learning.
Yet the same technology can generate inaccurate or misleading content, or create safeguarding challenges that staff may not have encountered before.
Rather than treating these developments as obstacles, school leaders have an opportunity to build stronger foundations. By examining how AI systems are designed, tested and overseen, schools can meet statutory requirements while also fostering a culture of safe, confident use.
KCSIE 2025
The 2025 update marks the first reference to AI in safeguarding guidance. Although the reference is brief, it signals that schools must revisit how filtering and monitoring work in practice, and ensure Designated Safeguarding Leads (DSLs) are alert to the new risks AI can introduce.
Until now, schools were expected to consider online safety in broad terms, with filtering and monitoring systems aimed mainly at traditional websites or communications platforms.
AI changes that picture. Students may interact with chatbots, language models or other generative tools whose outputs read like content from trusted sources. A persuasive but incorrect explanation can easily mislead them. Equally, generative AI could produce inappropriate content that slips past conventional filters.
For DSLs and senior teams, the implication is clear: filtering and monitoring policies must evolve. Staff need training to recognise risks that AI presents, and providers must be asked how their systems handle misinformation, detect attempts to bypass restrictions and flag harmful material.
The Online Safety Act and what it means for education
The Online Safety Act, meanwhile, places clear duties on platforms and services children are likely to use. Although it’s often discussed in the context of social media, it also has implications for educational technology.
If a school uses a platform incorporating generative AI, the provider is expected to take proportionate measures to protect children from harmful content and contact. Beyond filtering inappropriate material, this means embedding safeguards into the design of the system so that risks are reduced from the outset.
For systems incorporating AI, this includes mechanisms to prevent harmful outputs, protections against accidental exposure to inappropriate material, and clear reporting routes so issues can be flagged quickly.
For school leaders, the practical takeaway is to interrogate providers. Ask how they test their models, what guardrails are in place and how responsive their reporting processes are. Safety features should be integral to the tool, not bolted on afterwards.
What to check before adopting an AI tool
School leaders don’t need to be technical experts, but they do need to know what questions to ask. A few key areas make the difference between a safe, effective system and one that creates more problems than it solves.
- Designed for education: Check that the tool is genuinely built for schools. Is the content age-appropriate? Does it align with the curriculum? Does it lighten teachers’ workload rather than adding to it?
- Safety from the ground up: Look for tools where safeguards are built into the design, not added later. Ask: does the system prevent inappropriate outputs before they reach pupils? Are there clear limits on what the AI can generate?
- Data handling: Find out what information the system collects, how long it is stored and whether it is shared outside the provider. Providers should give clear, plain-English answers. If they can’t, be cautious.
- Teacher in control: AI should support, not replace, professional judgement. Can teachers review or edit outputs before pupils see them? Is there a straightforward way to report or override mistakes?
- Transparency and accountability: Trust depends on openness. Providers should be clear about how their system works, what safeguards exist and where the limits are. Avoid tools that dodge these questions.
Why teacher involvement is essential
Technology only works in education when it aligns with pedagogy. Because they understand the needs of their students, teachers are best placed to evaluate which tools will achieve the greatest impact in the classroom. That’s why their voices must be central to AI adoption.
Procurement decisions should involve classroom teachers alongside senior leaders and IT staff. Trial or pilot programmes should be shaped by teacher feedback. And once systems are in place, ongoing evaluation should include regular check-ins. If colleagues aren’t making use of new resources, leaders should engage with them to understand why, providing extra support if this is down to a gap in training.
Professional development is also critical. Without time and training, teachers may either over-rely on AI or avoid it altogether out of fear. Neither approach is healthy. Teachers need the confidence to use AI thoughtfully, understanding its strengths and its boundaries.
Moving beyond compliance to culture
The arrival of KCSIE 2025 and the Online Safety Act should be welcomed, but compliance alone isn’t enough. Beyond box-ticking, safeguarding in the age of AI is about building a culture where staff and students can engage with technology confidently and responsibly.
That culture comes from clear policies, yes, but also from the daily practices of teachers and leaders. It’s reinforced when providers design systems with safety and transparency at their core. And it thrives when everyone in the school community feels equipped to question, challenge and improve the tools they use.
So, what should school leaders do now?
Three key steps stand out as school leaders seek to adapt to the challenges and opportunities presented by AI-powered resources.
- Audit current systems: Review the AI tools already in use. Are they aligned with the evaluation questions set out above? Do they meet the requirements of KCSIE 2025 and the Online Safety Act?
- Update policies and training: Make sure safeguarding policies explicitly reference AI. Provide staff with training on both the opportunities and risks.
- Engage with providers: Push for transparency. If a provider can’t answer questions about safety, data and oversight, that’s cause for concern.
AI has real potential to enrich education, opening up learning opportunities that simply weren’t possible before. But that potential will only be realised if safeguarding is treated as central, not secondary.
As schools adapt to new legislation and a transformed technological landscape, the focus should be on creating an environment where AI supports learning without compromising safety. That means holding providers to account, involving teachers in decisions and embedding a culture of responsible use.
By doing so, we can embrace innovation while keeping children safe, ensuring technology enhances education rather than distracting from it.
Dr Jon Chippindall is Computing lead at Crumpsall Lane Primary School and CEO of Teachmate
