AI is everywhere, and it’s moving quickly
AI is everywhere. In our personal lives it helps with everything from health and fitness to checking our homework, recommends what to watch on TV, and suggests our next book; it has become an integral part of everyday life.
That same rapid growth is now spilling into the workplace. When it comes to AI, the fear of missing out and being left behind is real. The steep rise in adoption is leading more and more businesses – even those that are typically risk-averse – to introduce AI into their operations. Often, this starts with good intentions: saving time, improving workflows, or helping people make better decisions under pressure.
However, this is where AI accountability and legal liability really matter.
You can use AI, but you can’t hand over responsibility
One of the biggest issues we see is not that people consciously believe responsibility has shifted, but that AI outputs are often trusted and acted upon without being fully questioned. When information is presented confidently or appears data-led, it can feel reassuring, especially when teams are under pressure. However, when decisions are later challenged, the issue of responsibility comes sharply into focus. If an AI tool has given incorrect advice, made an unfair recommendation, or suggested something your business cannot realistically deliver, the law does not hold the AI accountable. Instead, responsibility sits with the employer or individual using it.
This can feel uncomfortable, particularly in areas like HR, disciplinary decisions, workplace adjustments, or commercial agreements. Many organisations are trying to do the right thing but are unsure where the line sits.
Why frameworks are so important
Without a clear framework, AI use often grows informally. Someone introduces a tool to help; it seems to work well, and before long, it becomes embedded in decision-making without proper checks.
A framework does not need to be complicated. It simply helps you answer some key questions:
- Where are we using AI?
- What level of risk does this create for our business?
- What accountability checks are in place before final decisions are made?
Most importantly, it helps ensure that higher-risk decisions receive more scrutiny than low-risk, administrative tasks.
Understanding your business risk profile
Not all AI use carries the same risk. Your organisation’s risk profile will depend on factors such as:
- The size and structure of your workforce
- The nature of your sector and regulatory environment
- Whether AI influences people’s rights, pay, or treatment
- Whether AI shapes legal or commercial commitments
For example, using AI to help draft an email or suggest document structure is very different from using it to recommend disciplinary action or reasonable adjustments. That is why higher-risk decisions must always involve meaningful human judgment.
Regulators such as the UK Information Commissioner’s Office (ICO) are clear that where AI affects people’s rights, human oversight is essential (ICO, Guidance on AI and Data Protection).
Turning principles into practical action
This is where many organisations get stuck. Knowing the risk is one thing; managing it in practice is another. Currently, many businesses do not know which AI tools are being used across their organisation, let alone how much reliance is being placed on them; establishing this needs to be the starting point.
That’s why we’ve created a one-page AI Risk Checklist, designed to help you:
- Identify where AI is being used.
- Assess the risk level of different tools.
- Build in appropriate human oversight.
- Ensure fairness, transparency, and accountability.
Confidence comes from clarity
AI needs to be embraced if businesses are to avoid falling behind. However, this needs to be done in a proportionate way, with a clear understanding that different tools and different use cases carry varying levels of risk. By taking the time to understand where AI is being used and what it is influencing, and by putting a simple framework in place, businesses can benefit from AI while avoiding unnecessary risk to both the organisation and the people within it.
To check if your current AI use poses a risk, download our one-page AI Risk Checklist.
If you want expert guidance on building an effective AI framework tailored to your organisation, contact Rebecca today at Rebecca.shah@thrivelaw.co.uk for a straightforward conversation about next steps.
If you would like support assessing your AI risk profile or building a practical governance framework, get in touch by emailing enquiries@thrivelaw.co.uk. We can help you move forward with confidence, not fear.