
AI in Education: Why Data Protection Must Come First

April 28, 2026

AI promises a lot. It can undoubtedly do amazing things. But secretly (or maybe not so secretly), it also makes us slightly nervous. Schools’ Data Protection Officer, Adam Halsey, looks at what this means for schools, and why data protection needs to sit at the centre of any decision to use AI.

When it comes to AI, we always hear about risks and opportunities. You’ll be hard-pressed to find an article or piece of analysis on AI, particularly in relation to the education sector, that doesn’t refer to both.

Whether or not your school is using generative AI at the moment, many of us are aware of its potential uses. It may take on or assist with written tasks such as reports and emails, or support feedback for students – possibly including that crucial tailored approach for individual learners and their specific needs, which may be particularly pertinent in a SEND context. It can also help to answer more complex questions than a single online search can manage. Creating lesson content, marking and assessment, and even support with CPD are other likely uses of AI within our schools.

But alongside the promise, what of the concerns teachers and parents are raising about the use of AI in schools? Can it facilitate academic misconduct? Is there a risk of students becoming over-reliant on the technology? And crucially – the focus of this article – what about data protection and privacy risk?

DfE and regulatory expectations for schools

The Department for Education, while positive about AI’s potential (particularly its capacity to assist with tailored pupil support), has said that safety should be the top priority when deciding whether to use generative AI in an education setting. It is also clear that any use of AI should be clearly specified and should deliver benefits that outweigh the risks.

Ofsted is supportive of the use of AI where it can be shown to improve the care and education of children and learners. Schools must therefore consider the effect AI has on the criteria set out in Ofsted’s existing inspection frameworks and regulations – in other words, there is no cause to deviate from the expectations already in place.

But the clearest point of all – data protection must be at the forefront of AI use in schools.

Why data protection must come first

Why is this? The easiest answer: it’s a legal requirement. The UK GDPR and the Data Protection Act 2018 comprise a legal framework rooted in risk, and key to this is the phrase “data protection by design and by default.” Given that the ICO considers the use of AI to process personal data almost always to constitute “high-risk processing,” we can be in no doubt that measures are needed to govern and enable the use of AI in education. It’s as clear a sign as any that we can’t simply adopt a new tool without having assessed the potential privacy impact.

What about the moral reason? Put simply, a focus on data protection will safeguard those whose personal data is processed using generative AI. So, as with any handling of personal data, whether or not it involves the use of AI, a data protection-centred approach has to be seen as part of our wider safeguarding obligations.

Let’s also not forget the trust that a focus on data protection can help build among stakeholders, including parents, carers, students and staff. The more our stakeholders can be assured that our approach to introducing and working with AI is rooted in privacy and data protection, the more buy-in we’ll have and the more we’ll be able to make the most of the technological developments, rather than constantly worrying about whether or not we’re doing the right thing.

Practical steps for schools

So, as idealistic as that all sounds, how do we actually go about getting data protection to where it needs to be – at the forefront of schools’ AI onboarding? Here are four crucial starting steps:

  1. Map out what will actually be happening to the personal data. Ask the company providing the tool to help you understand how the data will be handled and whether it will be used to train the Large Language Models (LLMs) that sit behind the tool.
  2. Consult with stakeholders, including staff and IT support – and, where appropriate, students, parents and carers too!
  3. Ensure transparency. Be clear with staff, students and parents about why you are proposing to use AI, what it will be used for, and what will happen with individuals’ personal data – remember transparency is a central principle of the UK GDPR.
  4. Use the information from the AI provider to identify, assess and mitigate risk to data subjects – the individuals whose personal data will be processed by the AI.

And the tool that will help us pull all the above together in one place – the DPIA!

Why DPIAs matter

DPIAs (Data Protection Impact Assessments) sound more daunting than they are, or need to be. They are a legal requirement in some cases (and almost certainly whenever AI is being used to process personal data), so the best thing schools can do is seek advice from their DPO at the outset. The time to draft a DPIA is before any decision has been finalised as to whether the school will pursue a given use of AI. This not only ensures legal compliance, but also helps to ensure strong data protection controls are in place throughout the lifetime of the school’s use of the AI-based tool. It is far more effective to put safeguards in place from the beginning than to try to introduce them once our use of AI is already underway, by which point we risk finding that the train has long since left the station.

A well-drafted (and crucially, useful) DPIA for use of an AI product will map out:

  • Whose personal data is being processed by the AI
  • Which categories of their data are being processed
  • What exactly we are looking to achieve from the initiative, and therefore the clear purposes for processing this personal data using AI
  • The process and outcome of any consultation we have undertaken with stakeholders, particularly those whose personal data will actually be processed
  • An explanation of how using AI, in this particular instance, is a proportionate and reasonable way of achieving our school’s aims
  • And most importantly, an assessment of the risks to individuals’ privacy and data rights, weighed against the benefits we believe will be derived from the processing

Understanding the risks

And what might those risks be? There could be a risk that, due to margins of error in the AI system’s performance, incorrect feedback or information is attributed to a student, potentially affecting decisions made about that pupil. More fundamentally, we return to the risk that data entered into the model by students (or about students) is actually used by the supplier to train its large language models. This is often problematic from a data protection perspective, as finding a lawful basis to justify this use of personal data can be challenging, to say the least.

How can we mitigate these risks, or be assured of the measures the AI supplier has in place to manage them? We can:

  • Put processes in place for teachers to regularly review the output of the generative AI, for example, the output of an AI-based revision feedback app we might be using.
  • Ensure that the version of the product selected does not allow student personal data to be used to train the underlying model. In practice, this often (but not always) means subscribing to a paid-for version of a tool rather than a free one. As our experience with social media has shown over time, if we are not paying for something with money, we are often paying with our data.

To maintain the independence of the role, the DPO does not draft the DPIA themselves. Schools should, however, make full use of their DPO for advice at each of the stages outlined above. This is particularly important for DPIAs relating to AI, where it is often better to keep in touch with your DPO throughout the drafting process, rather than risk submitting something that has taken a long time to prepare, but ends up requiring major surgery before any processing can begin.

Given the responsibility that schools carry and how open we must be to public scrutiny, the importance of transparency with staff, parents and students cannot be overstated when it comes to our use of AI. For the education sector, as with most others, using AI to handle personal data without individuals being aware of it is simply not an option. This may involve updating our privacy notices, but we should also make use of existing channels such as parent information apps, newsletters and e-newsletters, and any other means available to us to ensure stakeholders are aware – helping to build and maintain trust around our use of AI.

Invicta Law, in partnership with EIS, provides a dedicated DPO Service for schools, offering practical advice on data protection, DPIAs and emerging issues such as the use of AI. Find out more about the EIS DPO Service or contact us to discuss support for your school.
