Artificial Intelligence is becoming an everyday tool for many organisations. Whether you are using it to speed up admin, provide better insights, or support decision making, AI can bring real value. But alongside these opportunities, many businesses are now wondering: How do we make sure our use of AI complies with GDPR?
It is an important question, and one we are hearing more often. AI does not behave like traditional software, and because of that, it introduces new questions around fairness, transparency and individual rights. This blog highlights the most common pitfalls and explains how you can use AI confidently and responsibly.
Choosing the Right Lawful Basis: Why AI Makes This Tricky
Selecting a lawful basis is already an essential step in any data processing activity, but AI adds a unique layer of complexity. This is because AI tools often use data in ways people do not expect. They draw inferences, create new information, or link datasets that were never originally intended for those purposes.
For example, seemingly harmless data such as shopping history, location patterns or online behaviour can allow an AI model to infer sensitive personal attributes. This means the lawful basis you chose for the original data might no longer be sufficient for the AI’s new purpose.
Organisations often run into difficulties where the processing:
• could reveal sensitive or special category data
• is unexpected or not easily anticipated by the individual
• has a significant impact on access to jobs, credit or services
If any of these apply, reassessing your lawful basis is not only sensible but necessary. The safest and most transparent approach is to match your lawful basis to what the AI actually does, not what you hoped or intended it to do.
Profiling: More Common Than You Might Think
Profiling under GDPR is not a niche concept. In fact, modern AI tools carry out profiling all the time, often without organisations realising it. Profiling simply means using automated processing to evaluate personal aspects, such as predicting behaviour, analysing patterns or assigning people to categories.
A helpful way to think about it is this: if the result of an AI process can be linked back to an identifiable person, it is profiling.
Where organisations often get caught out
Even if you describe the activity as
• analytics
• segmentation
• internal scoring
the label does not matter. If the activity evaluates something about a person, GDPR treats it as profiling.
And this matters because profiling brings additional transparency and fairness requirements. Keeping a clear internal record of every AI system and the outputs it generates can help you maintain oversight and avoid accidental non-compliance.
Automated Decision Making: Why Human Oversight Must Be Genuine
Automated decision-making (ADM), where a decision about a person is reached entirely by automated means without meaningful human involvement, has historically been one of the most restricted areas of UK data protection law. Under the old Article 22 of the UK GDPR, it was largely prohibited unless specific conditions applied.
That position has now changed. Following the Data (Use and Access) Act 2025 (DUAA), which took effect from February 2026, the general prohibition on ADM has been removed for decisions that do not involve special category data (such as health or biometric data). Organisations can now rely on a broader range of lawful bases, including legitimate interests, to carry out ADM, provided mandatory safeguards are in place.
However, this is not a free pass. The safeguards matter. Individuals must be informed that ADM is taking place and given the right to contest the decision and request human review. And where special category data is involved, the stricter rules remain. The ICO is expected to publish updated guidance on ADM in 2026, which will provide further clarity.
The practical point for small businesses remains the same: rubber-stamping an AI recommendation without genuine engagement is not meaningful human involvement. If a person ‘approves’ an AI output without understanding it or having real authority to challenge it, the decision is still effectively automated and the safeguards still apply.
Transparency: People Want Clarity, Not Complexity
Transparency is at the heart of GDPR and it becomes even more important with AI. Individuals want to know how decisions are made about them, especially when technology plays a part.
One common problem is privacy notices that are far too vague. Statements like “We use AI to improve our services” do not help anyone understand what is actually happening. If your AI model analyses sleep data, heart rate patterns or historical information to predict stress levels, that should be clearly explained.
People do not need technical detail. They simply need to know:
• what data is being used
• what the AI does with that data
• how it affects them in practice
Clear, honest wording goes a long way in building trust.
Data Minimisation: Avoiding the “Just in Case” Approach
Because AI tools tend to perform better with larger amounts of data, organisations sometimes slip into the habit of collecting everything possible “just in case it becomes useful later”. However, this approach goes directly against GDPR.
The best way to stay compliant is to decide early on what data is essential for your AI system to function. Anything outside of that should not be collected. And if the data is no longer needed, it should be deleted.
For small businesses, a useful discipline is to ask a simple question before deploying any AI tool: what personal data does this actually need, and why? If you cannot answer that clearly, it is worth pausing before you proceed.
Not only is this good practice from a compliance perspective, it also makes systems more efficient and reduces the risk of holding unnecessary personal data.
Controller and Processor Roles: AI Can Blur the Lines
Another common area of confusion is controller and processor responsibilities, and AI development can blur these boundaries. For instance, if a vendor uses customer data to train its own AI models, it may stop acting as a mere processor and become a controller in its own right for that processing.
This can change the legal responsibilities on both sides. To avoid misunderstandings, contracts should clearly set out:
• whether training on client data is permitted
• who owns the data
• who owns the outputs
• how data will move between the parties
Many SMEs use off-the-shelf AI tools and assume the vendor handles compliance. That is rarely the full picture. If the vendor is processing your customers’ personal data, you remain responsible for how it is used. Always check what your contracts say, and if they are silent on AI training, that is a gap worth investigating.
Model Security: New Risks That Need Special Attention
AI introduces security risks that do not always exist in conventional systems. Some models can expose information about the data they were trained on, even when this was never intended. This can happen through attacks such as model inversion or membership inference, which reconstruct training examples or reveal whether a particular person's data was used.
If personal data can be recovered, even partially, GDPR still applies.
To reduce these risks, organisations may want to explore:
• differential privacy
• federated learning
• stronger anonymisation techniques
• regular model testing
• clear governance around updates
If these terms are unfamiliar, the key question to put to any AI supplier is straightforward: can personal data from your training set be extracted or inferred from your model? A responsible supplier should be able to answer that clearly and explain what protections are in place.
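For readers curious about what a technique like differential privacy actually involves, here is a minimal, illustrative sketch in Python of its core idea: adding calibrated random noise to a query result so that no single person's inclusion in a dataset can be confidently inferred. The function name and figures are hypothetical, not taken from any particular library or product.

```python
import random

def dp_count(values, threshold, epsilon=1.0):
    """Illustrative differentially private count of values above a threshold.

    A count query has sensitivity 1: adding or removing one person changes
    the true answer by at most 1. Adding Laplace noise with scale 1/epsilon
    therefore gives epsilon-differential privacy. A smaller epsilon means
    more noise and stronger privacy, at the cost of accuracy.
    """
    true_count = sum(v > threshold for v in values)
    scale = 1.0 / epsilon
    # The difference of two exponential draws follows a Laplace distribution.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise
```

The point is not the arithmetic but the principle: the published figure is deliberately imprecise, so an attacker cannot tell whether any one individual's record was in the data. Production systems use carefully audited libraries rather than hand-rolled noise, but the question to put to a supplier is the same.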
Conclusion: Using AI in a Way That Builds Trust
AI is an exciting opportunity for businesses, but it works best when used responsibly. By taking time to choose the right lawful basis, being open about profiling, ensuring genuine human involvement and committing to transparency, you can build AI systems that support your organisation rather than expose it to risk.
If you are reviewing your approach to AI, exploring new tools or updating your internal policies, we are here to help.
You can reach us at enquiries@thrivelaw.co.uk or 0113 861 8101. We would be happy to support you as you take your next steps in building responsible and future ready AI practices.