ChatGPT and AI in the Workplace: Are they transforming HR and employment practices?

It has everyone talking…but what exactly is ChatGPT? 

ChatGPT is a form of generative Artificial Intelligence (AI) which draws on information from the internet to engage in natural language conversations with users. It uses algorithms trained on the data sets fed into it to understand and respond to the questions posed. ChatGPT also learns from users’ conversations with it and uses this information to improve its responses.

Whilst this kind of technology is most commonly encountered as a customer service tool, manifesting as a pop-up chat box on websites to assist with customer queries, in reality it can be used to generate almost any type of content – from short responses to simple queries, to routine letters, presentations and reports, to analysis of large volumes of documents.

With great power comes great responsibility…

As is often the case, one person's dream can be another person's nightmare. AI offers a cost-effective way to improve business efficiency, but it can also create additional risks. When the so-called 'godfather of AI', Geoffrey Hinton, resigns from his position, saying that he regrets his work and warning of the dangers of developments in the field[1], and when even Elon Musk calls for AI training to be halted for six months while the risks are assessed[2], we know it’s time to sit up and listen!

Whilst those comments were made in the context of an overarching concern for the existential risk that AI super-intelligence might pose for humanity, there are more mundane, immediate risks that ChatGPT, in particular, poses for businesses.


Only as good as the weakest link…risks and issues with AI and ChatGPT

ChatGPT and other AI models can be invaluable workplace tools and can help employers streamline their processes. For instance, they are increasingly used to perform HR and employee management functions, including assisting with recruitment and hiring processes, monitoring employees, conducting background checks and automating repetitive tasks.

Bias and discrimination

However, these tools have limitations, and there is a significant risk that the absence of human oversight and intervention could inadvertently lead to bias and discrimination. The lack of transparency in the algorithms used means that, unless employers know who or what is making decisions based on the data sets being used, there is potential for conscious or unconscious bias within the decision-making process. AI can, for example, make high-risk decisions about employees, such as when to terminate their employment. Therefore, it is important that employees understand how this technology makes decisions about them, and that employers recognise that improperly trained AI can produce biased, and potentially discriminatory, decisions.

Out-of-date or factually incorrect information

It is also important to bear in mind that AI technology is only as good as the information fed into it. ChatGPT draws much of its information from the internet or learns from what it has been told. However, it does not account for the fact that not everything on the internet is true, impartial or independent. Likewise, the fast-paced nature of the information age means that the data sets relied upon might not be up to date or accurate. Clever as it is, ChatGPT takes the information fed into it at face value, so independent fact-checking will be necessary. There have been numerous examples of it generating output based on incorrect data, including fictitious legal precedents which have then been cited by students in their essays.

Corner-cutting, information security and copyright infringement

Generative AI systems and ChatGPT can be used to great advantage in the workplace, streamlining processes and increasing productivity.  

However, unless these tools are used responsibly, with careful management and oversight, they can also pose significant risks for employers.

Employees, for instance, may use it without their employers being aware. ChatGPT relies on information being given to it – either from the internet or from other sources (such as the employee themselves). There is a risk that the user may input commercially sensitive and/or confidential information into the chat box. Whilst ChatGPT's developer says that this information will not be stored and retained in its systems, the tool still uses the information to learn and may re-use this data in future outputs.

This is problematic from a data protection and security point of view because data subjects will not necessarily be aware that their data is being used in this way. Therefore, it is essential that privacy policies are updated to reflect that personal data may be processed using AI tools. Developing an AI/ChatGPT policy for employees, which makes them aware of the regulations and the implications of using this type of technology, will help define when it can be used in the workplace. It might be that use is banned altogether, especially whilst we all wait to understand more about it, or its use may be subject to strict limitations; for example, that no personal data or commercially sensitive information may be inputted into it. It may also be necessary to provide training in the use of generative AI to mitigate any risks.

The use of generative AI may also breach the UK GDPR, as much of the processing of personal data may fall outside the scope of the original lawful basis. Hence, it is important that policies are updated to ensure they cover the lawful basis relied upon when using ChatGPT to process personal data in this way. The location of ChatGPT's servers has not been openly disclosed, although available information suggests they are based in the US. Therefore, it is important to be mindful of where inputted data may be transferred, and whether a restricted transfer under the UK GDPR is taking place.

It is also important to bear in mind that, depending on the question posed to the chat box, the output generated may be lifted heavily from other documents or articles found on the internet. This means businesses may unwittingly be using information in which someone else has a proprietary interest, and they may ultimately find themselves at the sharp end of a copyright or intellectual property infringement claim.

Keep calm and carry on…steps businesses can take to manage and mitigate the risks

ChatGPT and other AI tools are developing quickly, and it is clear that they are here to stay. This means that the risks outlined above will need to be managed and mitigated. Businesses should start thinking now about how they want to approach these tools, how and why they might want to use them, and what safeguards they will put in place to ensure responsible usage. It may be necessary to carry out an assessment before any processing takes place; for example, when introducing a new technology such as an AI tool into a business, it is important to conduct a Data Protection Impact Assessment (DPIA) to help identify and mitigate the risks of implementation. Furthermore, if relying on legitimate interests as the lawful basis for processing, it will be necessary to complete a Legitimate Interests Assessment to evaluate the impact on the individual of processing their data in this way and to ensure it is lawful.

In particular, we advise businesses to have a policy in place which: 

  • explains what ChatGPT and generative AI are and whether or not they can be used by staff in their day-to-day work, and by whom;

  • sets out the circumstances in which it may or may not be used;

  • provides guidance about the sorts of information that can be obtained from it or inputted into it;

  • requires independent fact-checking; and

  • confirms the sanctions for any breach and cross-refers to any disciplinary policy. 

We also advise employers to update confidentiality provisions in employment and service contracts to prohibit employees explicitly from uploading confidential and commercially sensitive information or trade secrets to ChatGPT, to educate staff on the risks and benefits of ChatGPT (and on any policy implemented as above), and to provide regular training – including data protection training.

Finally, with the likelihood that more mundane, run-of-the-mill tasks will be performed by ChatGPT (or similar tools) in the future, employers may need to review their performance processes and adjust KPIs as necessary to take this into account.

If you require further information, assistance with any of the issues raised above or would like any help with your policy, please contact Emma Loveday-Hill at elovedayhill@prettys.co.uk 
 

[1] https://www.bbc.co.uk/news/world-us-canada-65452940

[2] https://www.bbc.co.uk/news/technology-65110030

Experts: Emma Loveday-Hill, Partner; Sheilah Cummins, Senior Associate