How quickly is AI reshaping the HR and employment landscape?

AI is no longer a distant promise; it is presenting HR Teams with new opportunities and new challenges, particularly when it comes to recruitment and the employee lifecycle. AI is a great tool for attracting, managing and retaining the workforce: it transforms mundane tasks, frees up time and increases productivity. But how many of us are considering data protection and AI governance when integrating new AI software and technologies into the workplace?

AI in the employee lifecycle

There are many AI tools on the market that automate the way personal data is managed throughout the employee lifecycle. AI tools are being used in the hiring process, from CV screening to matching the right candidates and scheduling interviews. They are also being used as part of the employee management process, from performance management through to salary reviews, to notetaking in disciplinary and grievance exercises. Yes, it is clear that AI tools can be beneficial, but how much do we actually know about the potential use of employee data to train AI models?

It is easy to get carried away with the pace at which AI can automate tasks that once took us hours, but with this comes an increased demand for employers to be transparent about what they are doing and why. Employers also need to think carefully about the potential domino effect this has on other workplace processes, for example Data Subject Access Requests (DSARs). If AI tools are being used to record or transcribe meetings, they are likely to capture personal data that may fall within the scope of a request. More often than not, this causes headaches when it comes to the disclosure process.

What challenges are HR Teams facing?

Compliance with UK data protection law

Unlike the EU, which has adopted the comprehensive EU AI Act, the UK has opted to set out principles to support existing regulators, such as the Information Commissioner’s Office (ICO). Currently, there is no AI-specific legislation in the UK.

It is therefore crucial that HR Teams take practical steps to understand their company’s compliance obligations, particularly under UK data protection law. Compliance with the UK GDPR is now more important than ever, particularly as HR Teams process large amounts of personal data, probably more so than any other team within a business. Amongst other things, HR Teams must ensure that AI systems do not (a) produce biased results, (b) use or produce inaccurate data, or (c) engage in surveillance or monitoring of employees in the workplace, particularly where doing so would be in direct violation of the UK GDPR.

Transparency (or lack thereof)

HR Teams need to make sure that they are being clear with data subjects about the process(es) behind AI results. Employers are also encouraged not to rely solely on AI outputs; we are finding that results are not always correct, and unchecked errors can lead to inaccurate records and decisions. Many AI systems operate as “black boxes”, which essentially means that decisions are made on the basis of complex algorithms that can be difficult to explain and justify. The danger here is that AI tools can make decisions in the recruitment or employee management process that lead to potentially discriminatory outcomes. In this scenario, who is liable for the discrimination? In short, liability could fall on the employer, depending on the nature of the harm caused.

What should HR Teams be doing to demonstrate compliance?

  1. Conduct a Data Protection Impact Assessment (DPIA)

    If you are planning to use AI in the workplace, for example in the recruitment process or for performance management, completing a DPIA is a legal requirement. A DPIA helps you identify the risks involved, justify the use of personal data and consider whether there are less intrusive alternatives.

    The ICO now expects you to evidence what alternatives have been assessed and why these are not being pursued.

  2. Update data protection policies and documents

    Most AI tools process personal data in new ways, and these new processing activities must be reflected in your privacy notices. You must clearly inform data subjects of, amongst other things, the following:

    1. What data is being collected.
    2. How it will be used by AI systems.
    3. Who it will be shared with.
    4. How long it will be retained for.
    5. Whether the individual will be subject to any automated decisions.

  3. Ensure you are processing data lawfully

    The UK GDPR requires you to identify a lawful basis for processing personal data. This must be clearly documented in your written Record of Processing Activities. You must be able to explain what data you are processing and why. Remember, health-related data and criminal conviction data require extra protection.

  4. The power of human oversight

    The ICO has warned against automated decisions that affect individuals without meaningful human review. This is particularly important in the recruitment process, as well as for tasks involving performance evaluation and disciplinary actions.

AI Governance

We have not been quiet about the fact that HR Teams should be implementing a Minimum Viable Governance Model (MVG). You may be reading this and thinking “what is an MVG?”. We are not exploring the ins and outs of MVGs in this article, but you can read more about what this looks like in practice here.

Remember: AI governance is not the sole responsibility of IT and legal teams. AI is becoming increasingly embedded in the work of HR Teams and, while senior HR leaders may not want to hear this… it is up to you to ensure that AI tools in the workplace are used ethically, transparently and in line with UK data protection law.

What now?

Our advice to HR Teams? Start small, but start now.

AI governance is not about perfection, but HR Teams should be able to demonstrate that they understand the potential biases and discriminatory outputs that can arise from using AI tools in the workplace, and how to mitigate those risks.

In short, we shouldn’t wait for legislation to catch up. Instead, HR Teams should act now to ensure that AI is used in a compliant and ethical manner while maximising its benefits.

If you need help on any of the matters raised in this article, please contact Maria Spencer.