
Of all the conversations we have with recruitment agency leaders about AI, the governance conversation is the one that's most often avoided - and the one that matters most.
It's easy to understand why. Governance isn't exciting. It doesn't appear in the marketing material for AI platforms. It doesn't make for a compelling conference keynote. And when you're trying to assess whether a new tool will save your consultants two hours a day, thinking about data processing agreements feels like an obstacle rather than an opportunity.
But here's the reality: recruitment agencies that deploy AI without a governance framework are accumulating risk. Legal risk. Regulatory risk. Reputational risk. And in a sector built on trust - with both candidates and clients - reputational risk is commercial risk.
This article sets out what governance means in practice for a UK recruitment agency using AI, what you're obligated to get right, and how to approach it without turning it into a paralysing compliance project.
AI governance is the set of policies, frameworks, and practices that control how artificial intelligence is used inside your organisation. It covers what AI tools are being used, by whom, for what purpose, on what data, with what human oversight, and with what accountability.
In a recruitment context, this is not abstract. It's very specific.
Your consultants are using AI to process personal data - candidate CVs, contact information, employment histories, assessment outputs. Some AI tools are influencing which candidates get seen and which don't. Some are making recommendations that shape hiring decisions. All of this is happening in a regulatory environment that has clear expectations.
Getting governance right means you can use AI confidently and at scale. Getting it wrong means you're one data subject access request or ICO enquiry away from a serious problem.
UK GDPR applies whenever you process personal data. When you introduce AI into your recruitment workflow, you introduce new forms of processing - and new obligations.
Lawful basis for processing. Every use of AI on candidate data requires a clear lawful basis. For most recruitment agencies, this is either legitimate interests or the performance of a contract. But legitimate interests requires a balancing test: your interests in using AI must be weighed against the rights and freedoms of the data subjects, and must not be overridden by them. Document that test - a legitimate interests assessment - and keep it on file.
Transparency. Candidates have a right to know how their data is being used. If your AI tools are screening, scoring, or filtering candidates, say so - either in your privacy notice or at the point of data collection. Transparency is not optional.
Automated decision-making. Article 22 of UK GDPR provides specific protections where individuals are subject to decisions based solely on automated processing that produce legal or similarly significant effects. In recruitment, shortlisting and rejection decisions can reach that threshold. If your AI is making or heavily influencing these decisions without meaningful human review, you may have an Article 22 obligation - including the requirement to offer human intervention, allow candidates to contest decisions, and explain the logic involved. A minimal sketch of a human-review gate follows this list.
Data minimisation. AI tools should only process the personal data necessary for the specific task. Feeding your entire candidate database into a tool without a clear purpose and data minimisation assessment is unlikely to be compliant. A sketch of field-level minimisation also follows this list.
Third-party processors. When you use an AI tool that processes candidate data, the tool provider is typically a data processor acting on your instructions. You must have a Data Processing Agreement in place. You should understand where the data goes, how it's stored, and whether it's used to train the provider's models.
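To make the Article 22 point concrete, here's a minimal Python sketch of a human-review gate. Everything in it - the `CandidateScore` shape, the threshold, the review queue - is a hypothetical illustration of the principle that an AI score should route candidates to a human rather than reject them outright, not a template for any particular tool.

```python
from dataclasses import dataclass

# Hypothetical sketch: route every AI screening score through a human
# decision so no candidate is rejected solely by automated processing.
# `CandidateScore`, the threshold, and the queue are illustrations,
# not any particular tool's API.

@dataclass
class CandidateScore:
    candidate_id: str
    score: float     # AI tool's output, e.g. 0.0-1.0
    rationale: str   # tool's stated reasons, retained for explainability

review_queue: list[CandidateScore] = []

def handle_ai_score(result: CandidateScore, threshold: float = 0.5) -> str:
    """Never reject on the AI score alone.

    Low scores are queued for a consultant to confirm or overturn,
    keeping a meaningful human in the loop in Article 22 terms.
    """
    if result.score < threshold:
        review_queue.append(result)  # a human must make the final call
        return "pending_human_review"
    return "advance_to_consultant"   # positive outcomes still get reviewed
```

The design point is that the automated score never produces a rejection on its own; it only changes who looks at the candidate next, and the recorded rationale gives you something to show a candidate who asks how the decision was reached.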
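To make the data minimisation point equally concrete, here's a sketch of the same discipline in code: strip a candidate record down to the fields the task actually needs before anything leaves your systems. The field names and the `send_to_screening_tool` call are hypothetical.

```python
# Hypothetical sketch: keep only the fields the screening task needs
# before any data leaves your systems. The field names and the
# `send_to_screening_tool` call are illustrations, not a real API.

REQUIRED_FOR_SCREENING = {"skills", "employment_history", "qualifications"}

def minimise(candidate_record: dict) -> dict:
    """Drop everything the tool doesn't need for its stated purpose,
    such as contact details or date of birth."""
    return {field: value for field, value in candidate_record.items()
            if field in REQUIRED_FOR_SCREENING}

# payload = minimise(candidate_record)
# send_to_screening_tool(payload)  # hypothetical third-party processor call
```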
The Recruitment & Employment Confederation (REC) has published guidance on AI use in recruitment, and it's worth reading carefully if you haven't already.
The REC's position is clear: AI can improve efficiency and reduce some forms of bias, but it also introduces new risks — particularly around discriminatory outcomes — that agencies have a professional and legal obligation to manage.
Key principles from the REC's guidance include maintaining human accountability for hiring decisions, ensuring candidates are treated fairly and without discrimination, and regularly auditing AI tools for bias and accuracy. The REC explicitly cautions against using AI in ways that could systematically disadvantage protected groups — even unintentionally.
For REC members, these obligations form part of adherence to the Code of Professional Practice. And even for agencies that aren't REC members, the guidance reflects both regulatory requirements and sector expectations.
Algorithmic bias deserves specific attention because it's the risk most agencies aren't taking seriously enough.
AI recruitment tools learn from historical data. If that data reflects past bias - certain universities over-represented in successful placements, certain demographic groups under-represented in shortlists - the AI will replicate those patterns. In some cases, it will amplify them.
This isn't a theoretical concern. It's well-documented. Amazon famously scrapped an AI hiring tool when it discovered the model was systematically downgrading CVs from women. The tool had learned from ten years of historical hiring data that reflected existing gender imbalances.
For recruitment agencies, the risk is both legal and commercial. A discriminatory screening process - even an unintentional one - exposes the agency to employment tribunal claims, regulatory action, and loss of client trust.
The safeguard isn't avoiding AI. It's maintaining human oversight, auditing AI outputs regularly for demographic patterns, and ensuring that AI assists rather than determines shortlisting decisions.
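What does "auditing AI outputs for demographic patterns" look like in practice? Here's a minimal Python sketch that compares shortlisting rates across groups and flags large gaps for human investigation. The data shape is hypothetical, and the 0.8 floor is the "four-fifths" heuristic borrowed from US practice, not a UK legal threshold - a flag is a prompt to investigate, not a finding of discrimination.

```python
from collections import Counter

# Hypothetical sketch of a periodic audit on AI shortlisting outputs.
# `outcomes` is a list like [{"group": "A", "shortlisted": True}, ...].
# The 0.8 floor is the "four-fifths" heuristic from US practice, used
# here purely as an illustrative trigger for investigation.

def selection_rates(outcomes: list[dict]) -> dict:
    """Shortlisting rate per demographic group."""
    seen = Counter(o["group"] for o in outcomes)
    picked = Counter(o["group"] for o in outcomes if o["shortlisted"])
    return {group: picked[group] / seen[group] for group in seen}

def flag_disparities(rates: dict, floor: float = 0.8) -> list:
    """Flag groups whose rate falls below `floor` times the best
    group's rate - a prompt for human investigation, not a verdict."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < floor]
```

Run against a quarter's worth of screening outcomes, a check like this turns "we audit for bias" from a policy sentence into something you can evidence.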
A governance framework doesn't have to be a 50-page compliance document. For most recruitment agencies, it starts with four things.
An AI usage policy. A written policy that sets out what AI tools the agency uses, for what purposes, with what human oversight requirements, and what data can and cannot be processed by each tool. This should be accessible to all staff and reviewed regularly as new tools are adopted.
A data processing assessment. For each AI tool that handles candidate data, document the lawful basis, the data flows, the third-party processor relationship, and the risks. This doesn't need to be complex - but it does need to exist. A sketch of a combined register entry follows this list.
Candidate transparency. Update your privacy notices to reflect AI use in your processes. Make sure candidates know what's happening with their data.
An audit process. Commit to reviewing AI outputs periodically for accuracy and demographic patterns. This can be light-touch for lower-risk applications, but it needs to be systematic.
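To show how the first two items can live in one artefact, here's a sketch of a single entry in an AI tool register - a hypothetical structure combining the usage-policy and processing-assessment fields described above, not a regulatory template.

```python
from dataclasses import dataclass

# Hypothetical sketch of one entry in an AI tool register, combining
# the usage-policy and processing-assessment fields described above.
# Field names are illustrative, not a regulatory template.

@dataclass
class AIToolRecord:
    tool_name: str                # e.g. "CV screening assistant"
    purpose: str                  # the specific task it is used for
    data_categories: str          # what candidate data it touches
    lawful_basis: str             # e.g. "legitimate interests (LIA on file)"
    dpa_in_place: bool            # Data Processing Agreement signed?
    trains_provider_models: bool  # is your data used for model training?
    human_oversight: str          # who reviews outputs, and when
    last_audit: str               # date of the most recent output audit
```

One record per tool, reviewed whenever a new tool is adopted, gives you most of the first two items - and a ready answer if a client or the ICO ever asks what you're running and why.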
The agencies most confident in their AI use are the ones with governance in place. They can deploy tools more quickly, at greater scale, with less anxiety - because they know what they're doing and why.
Governance isn't a barrier to AI adoption. It's the foundation that makes meaningful adoption possible.
If you're deploying AI tools in your recruitment process and you haven't built the governance framework to support it, that's a risk that compounds every week you leave it unaddressed.
FishTank works with recruitment agencies to design AI governance frameworks that are practical, proportionate, and built into your implementation from the start - not retrofitted after the fact.
[Talk to FishTank about AI governance for your agency →]
FishTank is an AI transformation consultancy for UK SMEs. We help recruitment agencies implement AI responsibly - with the governance frameworks that protect your business and your candidates.
This article is for informational purposes. It is not legal advice. For specific legal guidance on UK GDPR compliance, consult a qualified data protection practitioner.