

HR is at the forefront of a great deal of AI development. This is inevitable, and the clue is very much in the name: currently, humans exercise intelligence at work. In many cases, that intelligence is going to be replaced by AI systems, which is why understanding AI governance in HR is crucial.
This does not mean that employees will be replaced. Inevitably, AI will cause restructurings. Jobs will be lost, and jobs created. For the next few years, AI is most likely to be used to augment and support human expertise and judgment rather than fully replace it. Longer term? Who knows, but as John Maynard Keynes said: “In the long run, we are all dead”. So we need to focus on the here and now, and over the next two or so years organisations are going to have to grapple very seriously with how AI is used in their businesses. AI usage will not be limited to using ChatGPT for first-draft policy writing or tidying up email language; increasingly, AI will embed itself deeply in the fabric of workplace operations, and that includes HR.
The Purpose of This Article Series
This is the first of a series of articles that I will publish over the next few months, giving guidance to HR professionals in particular on what widespread adoption of AI systems is likely to mean for them, and starting to provide a framework for responsible AI procurement and implementation. And I have a guarantee for you: I will not be using AI to structure, draft or prepare these articles for me. This is not because I am a Luddite – I am genuinely very excited about the prospect of AI solutions – but because we need to think as humans. An AI-first approach to everything risks stifling the creative thinking we need to bring to some complex and nuanced issues, and to the solutions that we need to find for them.
Credentials
What are my own credentials for offering to be your guide through this deeply challenging exercise? I am not a scientist or mathematician. I do not know where to start with computer programming. I am, however, an employment lawyer with 30 years’ experience in advising on immediate and strategic HR changes, and during my time as an employment lawyer, many aspects of my job, and of the role of human resources, have changed beyond recognition.
Also, six years ago I became a Certified Information Privacy Professional (CIPP/E) with the International Association of Privacy Professionals (IAPP), the world’s leading organisation for privacy professionals. I gained this qualification because of my work advising clients on data protection, and in particular the implementation of the EU GDPR (and subsequently the UK Data Protection Act). I now work with all manner of organisations on data protection implementation and strategy, covering both HR-specific data protection issues and wider privacy and data protection issues within organisations. The deep knowledge that I have of both of these legal areas puts me in a position to provide detailed, robust and pragmatic advice as organisations begin to navigate their own AI journey.
Starting with ‘Why?’
So, where do we start? I think that we start with ‘why?’. Why can we not simply implement HR solutions which may or may not contain an element of AI? Why can’t we leave employees to experiment with Copilot or other Gen AI products? The answer, for me, lies in the power and sophistication of the tools that will be at employees’ fingertips, and the hugely significant and consequential effect that AI systems can have on employees (and therefore on organisations).
Importance of Control
Therefore, if the question is ‘why?’, my view is that the answer is ‘because you want control.’ Control is everything here: it will enable you to ensure that AI is aligned with your business strategy; it will enable you to manage risk in accordance with your risk management framework and appetite; and it will ensure that you comply with the law. Unless you know the risks and put in place a framework for managing them, AI will control you and your organisation; you will not be able to control it.
So we want control. We want control over decisions, and we want control over risk. We can wrest that control away from the suppliers of AI systems primarily through good governance.
The Main Plea: Establish AI Governance
This is my main plea in this article: put in place AI governance. Don’t wait until you have an AI project to implement. Do it now. Do it before it is urgent. Do it whilst you have time to educate, and to think about what you want from AI. Do it while you have the time to understand the legal obligations, and to identify and assess the risks to the business. Do it because you want to be in control of your business. Do it before it’s too late and before AI has been allowed to grow through your organisation and its systems like Japanese Knotweed.
Overview of AI Governance
Establishing a Framework: An AI governance committee is essential. The committee defines the organisation’s overall approach to AI systems, both ethical and strategic; it ensures that decisions about AI procurement and usage are consistent with that approach; it identifies, assesses and manages risks; and it ensures that legal and regulatory demands are understood and met. The committee needs to be multi-disciplinary, including representatives from HR, IT, legal and compliance. Over time it is likely to become an increasingly important part of keeping an organisation safe, and senior leaders will need to listen to it.
Developing Policies: As a minimum, organisations are going to need to develop and maintain policies to cover safe AI usage, data privacy, AI procurement, bias mitigation and decision-making processes. However, it is likely that many other policies will be affected by the implementation of AI systems within the organisation.
Implementing Risk Management Structures: Identifying and managing risks is a key part of any organisation’s activities. AI risks will need to be added to the risk register and then actively managed. These risks include the risk of regulatory breaches; litigation (copyright and employment disputes in particular); privacy and data protection violations (myriad risks arise here, particularly in the UK, where data protection legislation has largely absorbed AI compliance in the absence of other legislation); and reputational damage. Understanding and managing these risks will be key.
Understanding Regulatory Compliance and Legal Considerations: This is an extremely fast-moving and complex subject. Whilst the UK has not specifically enacted AI safety legislation, several other aspects of the UK’s legal requirements will be triggered by AI, often in unintended ways. In addition, the EU’s AI Act, which is now in force, and which will be implemented across the EU over the next few years, will be hugely influential on organisations in the UK, particularly if they have customers, suppliers or operations in an EU member state. As with GDPR, many other jurisdictions are adopting similar-looking laws. However, China and the US are also highly influential and are taking different approaches to the regulation of AI systems.
In addition, I expect that increasingly we will see industry regulators producing guidelines and rules relating to the effective use of AI. These will often be about encouraging the safe use of AI within sectors or professions, as well as prohibiting practices or uses which are not consistent with fundamental rules or practices.
Embedding Values and Ethics: This will be a particularly important area for AI systems which impact HR management. Human managers instinctively take into account the moral dimension and understand that principles of fairness, equality, privacy, and general decency apply at work. AI has no inherent morality, and we need to be vigilant to ensure that it works in a way that reflects the values and culture of an organisation. In addition, employment law often expresses the values that society requires organisations to apply – equality, privacy, reasonableness, etc. We need to ensure that AI systems reflect these values.
Ensuring Safe Data Management: Data privacy, and how large language models in particular acquire and process personal data, is already an issue of huge importance in AI. Currently, this is primarily an issue for the developers and trainers of these models. However, this will change. As we begin to embed AI systems into our workplaces, how we use and process this data will become key. Data governance, access control, information security, and impact assessments will take on an ever-greater degree of importance and sophistication. In addition, tools and processes will be needed to detect and minimise bias, and to ensure that decisions taken by AI systems are explainable and transparent.
Closing Thoughts
So, there is plenty to do. What I have set out above is only a brief overview of concepts and processes that require much deeper understanding. None of this is simple, and we are all going to be feeling our way for some time to come. This article, however, is intended to be a starting point.
What will follow next is more in-depth guidance on AI, the legal issues that it raises, and how to manage them, particularly by using a governance framework.
You can contact Matthew Cole for assistance on any of these issues by e-mailing Matthew at mcole@prettys.co.uk