AI in the workplace

The use of AI in the workplace is on the rise. Whether you or your employees are using ChatGPT or Microsoft Copilot to plan the working day, create content or streamline workplace functions, AI is revolutionising how businesses operate, particularly in the recruitment process.

It is no secret that AI offers numerous benefits for workforces, such as efficiency and consistency, particularly when it comes to sourcing candidates, screening candidate competencies, conducting initial interviews and predicting candidate outcomes. But should businesses be compromising trust and privacy for efficiency? The use of AI in the recruitment process raises important legal considerations, both from a UK data protection perspective and a UK employment perspective.

This article summarises some of the potential risks of using AI and the key legal considerations. It also outlines best practices for using AI in the recruitment process.

Discrimination and bias

The Equality Act 2010 prohibits discrimination based on protected characteristics such as age, sex, race, and disability. Employers must ensure that AI tools used in recruitment do not result in discriminatory outcomes. By carrying out regular audits of the tools being used and implementing bias mitigation strategies, businesses can reduce the risk of facing discrimination claims. Employers should also take care not to rely uncritically on AI outputs.

Transparency and accountability

Under the UK GDPR, employers must identify a lawful basis for processing personal data, including candidates' personal data. Employers should only collect data that is necessary for the recruitment process and ensure that it is accurate and up to date. AI systems should be designed to minimise data collection and avoid unnecessary processing.

Article 22 UK GDPR also grants individuals the right not to be subject to decisions based solely on automated processing, including recruitment decisions. Employers must ensure that AI systems are transparent, secure and that candidates are informed about how their data is used, for example by way of a Privacy Information Notice. Candidates should also have the right to obtain human intervention, express their point of view and contest the decision.

Employers should implement appropriate security measures to protect candidates' personal data from unauthorised access, loss or damage. This includes ensuring that any AI systems used are secure and tested for vulnerabilities.

Ethical use of AI

Beyond demonstrating legal compliance, considering the ethical risks of using AI in the workplace is equally important. Employers should adopt AI assurance mechanisms to evaluate the performance of AI systems and manage risks. This includes ensuring that AI tools are used responsibly and do not lead to digital exclusion or unfair treatment of candidates.

Best practice tips

What can employers do to demonstrate compliance with UK data protection law?

  • Conduct Data Protection Impact Assessments: before implementing AI in recruitment processes, employers should conduct a Data Protection Impact Assessment (DPIA) to identify and mitigate risks to candidates’ privacy and data protection rights.
  • Data Mapping: undertake a data discovery exercise and map what candidate personal data is being processed. This exercise should include identifying the employer’s lawful basis for processing under Article 6 UK GDPR.
  • Monitoring of AI systems: check for bias and fairness. Regular monitoring can help identify and address any issues that may arise.
  • Transparency: be clear with candidates on how AI is used in the recruitment process, and inform them of their rights regarding automated decision-making. Keeping the relevant privacy notices up to date can help demonstrate transparency.
  • Training: train HR and recruitment staff on the ethical and legal implications of using AI in recruitment. Ensure that they understand how to collect and handle candidate personal data and how to oversee the use of AI systems effectively.
  • Review the contractual arrangements with third-party software providers: where third parties process personal data on behalf of employers, there should be a clear contract in place, and each party should be clear on who is responsible for providing privacy information to candidates. The contract should also incorporate the relevant data processing terms.

Conclusion

AI is already transforming workforces, but it must be implemented carefully to ensure compliance with UK employment law, as well as UK data protection law. It is often easy to forget about the data protection implications when implementing new systems, such as new HR software programmes to streamline employee management tasks. Employers should adopt best practices to effectively harness the benefits of AI.

Are you using or planning to use AI? Are you confident you have adopted best practices? Are your policies up to date and robust?

For assistance on any data protection issues your business may have, you can contact Maria directly at mspencer@prettys.co.uk