
AI solutions: the contractual issues
What are the key contractual issues that public sector organisations should prepare for when implementing an AI solution? Justin Harrington explains.
This is the third article in Justin’s AI and Public Sector series. The first is here and the second is here.
Artificial intelligence (AI) solutions are becoming a common feature of public sector operations across England and Wales—powering chatbots, automating workflows, and enabling smarter decision-making. But whether you’re commissioning a predictive analytics tool for housing services, implementing a generative AI assistant, or rolling out AI to support case management, these technologies bring a new set of challenges to the contracting table.
AI is not just another software purchase. It often involves dynamic systems, evolving datasets, third-party integrations, and complex compliance considerations. For public sector customers, getting the contract right is critical.
In this context, many public bodies have learnt from experience that their standard terms and conditions do not work for purchasing cloud services. This is even more true for AI systems: even a standard cloud contract is likely to leave you very exposed to some of the key risks that arise from using AI.
This article sets out the key contractual issues public bodies should consider when implementing an AI solution and offers practical tips for managing risk and ensuring lawful, effective deployment.
1. Clearly define the scope and purpose of the AI solution
Defining the scope and drawing up a clear specification are fundamental to any IT contract, and even more so for AI systems. Unlike traditional software that performs fixed tasks, AI tools are designed to operate in more flexible or autonomous ways. This makes it essential for the agreement to set out in detail what the system is intended to do, what data it will use and what its outputs are expected to be. To the extent possible, the agreement should contain measurable outputs that can then be used to assess contractual compliance.
Avoid vague descriptions like “smart automation” or “data insights platform” unless they are clearly explained. A well-defined specification helps manage expectations and provides a foundation for monitoring and accountability.
2. Ownership and use of data
AI systems are only as good as the data they are fed, and managing that data is one of the biggest legal and practical challenges in any AI contract.
Key points to address include:
- Who owns the rights in the input data? Ensure you retain copyright and/or database rights to any data you supply, including personal or sensitive information. This should ideally be backed up by obligations of confidentiality in respect of the data you supply.
- What rights does the supplier have to use that data? Be cautious about granting broad rights to reuse or commercialise your data, especially where it includes confidential or public service information. Be aware that many standard AI contracts prepared on behalf of suppliers include, as a matter of course, broad rights to reuse data made available to them.
- Who owns the AI model’s outputs? If the system generates reports, letters, summaries, or decisions, your contract should clarify whether you can freely use and adapt these outputs. Note that many contracts drafted on behalf of suppliers reserve all copyright in the outputs to the supplier.
3. Data protection and security
If the AI solution will process personal data, the contract must include robust data protection provisions, in line with UK GDPR and the Data Protection Act 2018.
This means:
- Clearly identifying the controller and processor
- Including standard data processing clauses under Article 28 where the supplier is a processor
- Identifying the processing to be carried out, the lawful basis relied on and the fairness of that processing
- Complying with your transparency obligations under the legislation
- Defining how data will be stored, secured and deleted and adhering to the principles of data protection by design and data minimisation
- Ensuring data is not transferred outside the UK without safeguards
- Setting out the technical and organisational security steps that will be complied with
Security should be an issue for all data, not just personal data. Consider requiring penetration testing, encryption standards, and audit logs—especially where the AI will handle sensitive, financial, or operational information.
4. Transparency and explainability
Public sector organisations are increasingly expected to ensure that decisions made (or influenced) by AI are transparent and explainable, particularly where individuals may be affected. This is a formal requirement of the UK GDPR, and transparency and explainability also form one of the five AI principles in the UK Government’s 2023 White Paper.
Your contract should include:
- A requirement for the supplier to explain how the AI system works in plain terms (an “explainability statement”)
- Access to documentation on the algorithm’s logic and training data (where possible)
- A right to request audit trails of key decisions or outputs (a sketch of what such a record might capture follows this list)
- Provisions for complying with government transparency standards (e.g. the Algorithmic Transparency Recording Standard)
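By way of illustration, the sketch below shows the kind of structured record an audit trail of AI outputs might capture. It is a simplified, hypothetical Python example: the field names, the model version label and the case reference are all assumptions, and the actual content of any audit trail would be a matter for the contract.

```python
# Illustrative only: a hypothetical structured record for an audit
# trail of AI outputs. Field names are assumptions, not a standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    case_id: str                       # reference to the case or request
    model_version: str                 # which model/version produced the output
    input_summary: str                 # a summary or hash of the input data
    output: str                        # the recommendation or decision produced
    human_reviewer: str | None = None  # who reviewed or overrode it, if anyone
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical example: logging a hash of the inputs rather than the
# raw data supports auditability while respecting data minimisation.
record = AuditRecord(
    case_id="HB-2024-0183",
    model_version="risk-model v1.4",
    input_summary="sha256 hash of inputs",
    output="flagged for manual assessment",
)
```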
This is especially important where AI is used to support high-impact services, such as benefits, planning, or social care, where fair and accountable decision-making is essential.
5. Infringement of third-party IP rights
This has two aspects. First, a customer will want assurance that the AI system, including the third-party and open source software that typically makes up such a system, does not infringe third-party IP rights. This is relatively standard for IT contracts and is usually backed by a warranty and an indemnity. What is more contentious is where you rely on the supplier to provide you with datasets, where the question arises: does that dataset contain material that has been used in breach of any licence relating to it? Some suppliers may be reluctant to provide the normal IP assurances in respect of this data.
6. Performance and liability
From a customer’s perspective, every IT contract should set out clearly what performance is to be achieved. Equally, from the supplier’s point of view, they will always want to cap their liability. Key provisions include:
- Service levels (e.g. uptime/availability for cloud-hosted systems, accuracy thresholds, and response times to fix errors or outages, typically categorised from P1 to P4). These can be backed up by service credits if appropriate (see the illustrative sketch after this list).
- Ongoing support and maintenance to address model drift, errors, or updates
- Warranties and (appropriate) indemnities. These may differ from standard warranties. While suppliers may look to you to comply with laws in your use of an AI system hosted by them (i.e. not the other way around), you may seek assurance against loss of or damage to your data and that the AI system will not introduce any virus into your systems.
- Limitations of liability. This is always a difficult area. Most suppliers will not contemplate entering into a contract that does not cap their liability at an appropriate level. The norm these days is to cap liability by reference to annual periods, but to exclude certain categories of loss (e.g. breach of confidence) or to apply higher caps to particular types of loss (e.g. data protection).
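To make the service credit mechanism concrete, here is a minimal sketch of how a credit might be calculated against an availability target. It is illustrative only: the target, monthly charge, credit rate and cap are hypothetical figures that would be a matter for negotiation, not terms from any actual contract.

```python
# Illustrative only: a simplified service credit calculation of the
# kind that might back an availability service level. All figures
# are hypothetical.

def service_credit(measured_availability: float,
                   target: float = 99.5,        # availability target (%)
                   monthly_charge: float = 10_000.0,
                   credit_per_point: float = 0.02,  # 2% of charge per point
                   cap: float = 0.10) -> float:     # credits capped at 10%
    """Credit a percentage of the monthly charge for each full
    percentage point of availability below the target, up to a cap."""
    shortfall = max(0.0, target - measured_availability)
    credit_rate = min(cap, int(shortfall) * credit_per_point)
    return monthly_charge * credit_rate

# Example: 97.2% measured availability against a 99.5% target is a
# shortfall of two full points, i.e. a 4% credit of £400.
print(service_credit(97.2))  # 400.0
```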
Public sector bodies should also retain the right to suspend or terminate the contract if the system fails to meet agreed standards or introduces unacceptable risks.
7. Human oversight and control
No AI system should operate without meaningful human oversight, especially in the public sector. For solely automated decisions with significant effects this is a requirement of Article 22 UK GDPR, and more broadly human oversight is often cited as good practice.
Ensure your contract:
- Requires human review of significant decisions. This is a consequence of Article 22 UK GDPR which states that a person has the right “not to be subject to a decision based solely on automated processing… which produces legal effects concerning him or her or similarly significantly affects him or her”.
- Provides tools or interfaces that allow staff to interrogate or override AI recommendations (a simplified sketch of such a review gate follows this list)
- Mandates regular review points to assess the system’s behaviour and outcomes
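As a simple illustration of the second point, the sketch below shows a minimal “review gate” that prevents a significant AI recommendation from taking effect without human sign-off. It is a hypothetical Python sketch, not a description of any particular product; the names, fields and outcomes are assumptions.

```python
# Illustrative only: a minimal human-in-the-loop gate so that no
# significant decision takes effect on the AI output alone.
# Names and structure are hypothetical.

from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    outcome: str       # e.g. "approve" / "refuse"
    significant: bool  # produces legal or similarly significant effects

def apply_decision(rec: Recommendation,
                   reviewer_approval: bool | None) -> str:
    # Significant decisions must be reviewed (and may be overridden)
    # by a human before they take effect (cf. Article 22 UK GDPR).
    if rec.significant:
        if reviewer_approval is None:
            return "held for human review"
        if not reviewer_approval:
            return "overridden by reviewer"
    return f"decision applied: {rec.outcome}"

print(apply_decision(Recommendation("A-101", "refuse", True), None))
# -> held for human review
```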
It should be borne in mind that suppliers’ standard contracts for AI systems may require customers to carry out human oversight at all times in order to mitigate their risk and liability for erroneous output.
8. Responsible AI use
Finally, following the principles set out in the UK Government White Paper, many public bodies are adopting ethical principles for AI use, including fairness, inclusivity, and non-discrimination. These principles should be reflected in your contract.
You may want to include:
- Clauses preventing the use of biased or discriminatory data. This reflects the requirements of the Equality Act 2010.
- Supplier obligations to report risks or unintended consequences
- Alignment with national standards or charters on responsible AI
These clauses demonstrate your commitment to trustworthy AI and help build public confidence in how new technologies are deployed.
Conclusion
AI is not just another digital tool. It represents a fundamental shift in how decisions are made, services are delivered, and information is processed. For public sector organisations, this means thinking carefully about how contracts are structured and how risks are managed within them.
By addressing the key issues of scope, data rights, IP infringement, transparency, security, and ethics up front, you can ensure your AI implementation is not only effective but also legally sound and publicly defensible.
Justin Harrington is a partner at Geldards.