
Justin Harrington sets out what UK public sector organisations need to know when it comes to generative AI and data protection.

This is the second article in Justin’s AI and Public Sector series. The first is here.

Generative AI tools—such as ChatGPT, Bard, and DALL·E—are increasingly being adopted across the public sector. These systems can create text, images, audio and code with impressive speed, offering new ways to improve service delivery and efficiency. However, the use of generative AI also raises serious questions about data protection under UK law.

This article explores how using generative AI affects data protection responsibilities under the UK GDPR and Data Protection Act 2018, and offers practical advice for public sector organisations using or considering these tools.

What is generative AI?

Generative AI refers to artificial intelligence that can generate new content based on the data it was trained on and the prompts it receives. It doesn’t just analyse data; it creates something new. Examples include:

  • Writing emails, reports or letters
  • Creating images or visualisations
  • Summarising or transforming documents
  • Assisting with code development or bug fixes
  • Generating responses in chatbots or virtual assistants

These capabilities make generative AI particularly attractive for public sector use, from drafting correspondence to automating administrative tasks. But these tools don’t operate in a vacuum: they process, produce and sometimes retain information that may be personal, sensitive or confidential.

Why data protection law applies

Under the UK GDPR and Data Protection Act 2018, any organisation that processes personal data must do so lawfully, fairly, and transparently. This includes local authorities, NHS bodies, education providers, police services and other public bodies.

If a generative AI system is used in a way that involves personal data, for example:

  • Entering identifiable information into a chatbot
  • Using AI to summarise internal HR files or to screen job applicants’ CVs
  • Asking the tool to write a letter to a named citizen

then data protection law applies. It is important to remember that processing covers virtually anything you do with personal data: holding data on a computer and inputting personal data into a third-party tool are both forms of processing. As the controller of that data, you remain responsible for what happens to it, even if it is processed by an external AI provider.

Key legal risks and considerations

1. Data protection by design

The ICO will expect you to document how you have embedded data protection by design and by default into your culture and processes, but the complexities of AI make this more difficult. Nor can you delegate these issues to data scientists or engineering teams: your senior management, including DPOs, are accountable for understanding and addressing them, since overall accountability for data protection compliance lies with the controller, i.e. your organisation. Completing a data protection impact assessment should help you identify and control these risks.

2. Lawful basis for processing

Before using AI tools, you must identify a lawful basis (as set out in Article 6 UK GDPR) for each processing activity involving personal data. For public sector bodies, this may be public task or legal obligation, but it needs to be documented and specific.

Using AI just because it’s convenient or efficient won’t satisfy the legal test.

If you are processing special category data, in addition to identifying a lawful basis, you must also meet one of the conditions set out in Article 9 UK GDPR.
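
By way of illustration, a lawful-basis record for each AI processing activity could be kept in a simple structured form. The Python sketch below is hypothetical, with illustrative field names and values rather than any prescribed ICO format:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative record of one AI processing activity. The field names and
# example values are hypothetical, not a prescribed ICO format.
@dataclass
class ProcessingRecord:
    activity: str                       # what the AI tool is used for
    personal_data: str                  # categories of personal data involved
    lawful_basis: str                   # the Article 6 basis relied on
    article_9_condition: Optional[str]  # required if special category data

record = ProcessingRecord(
    activity="Drafting letters to residents with a generative AI tool",
    personal_data="Names, addresses, correspondence history",
    lawful_basis="Article 6(1)(e) public task",
    article_9_condition=None,  # no special category data in this activity
)
print(record.lawful_basis)
```

Keeping records in this specific, per-activity form makes it easier to show that each use of AI has a documented basis, rather than a general appeal to convenience.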

3. Data minimisation and necessity

Article 5 UK GDPR requires that the amount of data used is “limited to what is necessary in relation to the purposes for which they are processed”. But AI systems require large amounts of data. How can the two be reconciled? When designing AI systems (or buying in AI services provided by a third party), consider whether all of the data is genuinely required, or whether the same purpose can be achieved with less. Feeding entire documents or datasets into an AI system without redacting identifiable information could breach this principle.
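
As a practical illustration, a lightweight redaction step can be run over text before it is sent to an external tool. The Python sketch below uses only simple regular expressions; the patterns are hypothetical and deliberately minimal, and a real deployment would need far broader coverage (names, addresses, case references) plus human review:

```python
import re

# Hypothetical, deliberately minimal patterns; a real redaction tool would
# need much broader coverage and human review.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags before the text
    is passed to an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise: contact J Smith on 020 7946 0123 or j.smith@example.org"
print(redact(prompt))
# Summarise: contact J Smith on [UK_PHONE REDACTED] or [EMAIL REDACTED]
# Note the name is untouched: regex alone will not catch everything.
```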

4. Third-party processors and transfers

Most generative AI tools are provided by third parties, many of whom may be based outside the UK. That means the data you input could be stored or processed abroad, including in countries without equivalent data protection laws.

You must ensure appropriate safeguards are in place, and avoid using tools that send data outside the UK unless you have conducted a proper risk assessment and put in place the protections the UK GDPR requires for such transfers.

5. Transparency and fairness

If you use generative AI in your services, especially in interactions with citizens, you need to be open about it. This includes updating privacy notices and being clear when AI is generating communications or making suggestions. You must be proactive here. Where you collect personal data directly from individuals, you must provide details of the purposes, retention periods and recipients of their data before you use it in an AI model or apply the model to them. Where you obtain the data from third parties, you must provide this information within one month.

Transparency is closely related to the issue of fairness. You must consider whether your use of AI could have unjustified adverse effects on people, for example because it produces biased or inaccurate outputs, and if so, avoid that use.

6. Security and confidentiality

Unfortunately, many AI systems carry heightened security risks because of their complexity. This is a function of the often larger number of third-party integrations required, and of the wider range of people (each with their own experience and practices) involved in designing AI systems. At the same time, public sector data, particularly special category data, must be handled with appropriate security.

Inputting sensitive or confidential information into a generative AI tool could pose serious risks if the provider stores that data, uses it to improve its models or, worse, uses it to generate responses for other users. This means checking (and obtaining warranties as to) the extent to which this can occur, or else prohibiting staff from entering the authority’s sensitive or confidential information into a public AI tool.
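
One simple control is a gate that refuses to submit any prompt carrying a protective marking. The Python sketch below assumes, purely for illustration, that the authority labels sensitive material with markings such as OFFICIAL-SENSITIVE; the marker list and function are hypothetical:

```python
# Hypothetical blocklist of markings that must never leave the organisation.
BLOCKED_MARKERS = {"OFFICIAL-SENSITIVE", "CONFIDENTIAL", "SPECIAL CATEGORY"}

def safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt carries any marking that policy says
    must never be entered into an external generative AI tool."""
    upper = prompt.upper()
    return not any(marker in upper for marker in BLOCKED_MARKERS)

prompt = "OFFICIAL-SENSITIVE: housing case notes for ..."
if not safe_to_submit(prompt):
    print("Blocked by AI usage policy: protective marking detected")
```

A check of this kind is no substitute for contractual warranties or staff training, but it gives a hard stop where a marking is present.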

Practical steps for public sector organisations

  • Conduct a Data Protection Impact Assessment (DPIA): If you plan to use generative AI in a way that involves personal data, a DPIA will enable you to assess risks arising from use of personal data and to address those risks using a data protection by design approach.
  • Review contracts and policies: Carry out due diligence on third-party AI providers and ensure that contracts with them include adequate data protection clauses reflecting ICO guidance.
  • Update staff guidance: Train staff on the safe and lawful use of AI, especially around entering personal or sensitive data.
  • Monitor developments: AI technology and regulation are evolving rapidly—stay informed and review your risk assessments regularly.

Conclusion

Generative AI offers real opportunities for innovation and efficiency across the UK public sector, but its use must be underpinned by strong data protection practices. Public bodies remain fully responsible for how personal data is used, regardless of the technology involved.

Justin Harrington is a partner at Geldards.
