Agentic AI will be coming to many workplaces over the next year or two: autonomous systems capable of independent decision-making, multi-step task execution, and complex problem-solving without continuous human intervention. This technology has genuine transformative potential for workplace efficiency and innovation, but it will create unprecedented legal, regulatory, and compliance challenges that extend far beyond traditional AI governance frameworks, and will severely test the limits of legal regulation in the UK and worldwide.
What is agentic AI?
Conventional AI systems respond to user prompts or follow predetermined workflows. In contrast, agentic AI systems have three distinctive characteristics.
First, they demonstrate autonomous decision-making. They can be assigned high-level objectives and independently determine the optimal methods to achieve those goals without step-by-step human direction. This autonomy extends beyond simple task automation to include strategic planning and adaptive problem-solving capabilities.
Second, agentic AI systems do not simply process information in isolation. They can actively engage with external software tools, databases, and digital platforms, enabling them to perform complex workflows spanning multiple systems and applications.
Third, these systems can function indefinitely without human supervision. They can operate around the clock and at scale.
Current workplace applications and market adoption
Market research indicates that 82% of HR leaders plan to implement agentic AI capabilities within the next 12 months, with projections suggesting that half of all HR activities will be AI-automated by 2030.
Agentic AI is coming to recruitment and talent acquisition, undertaking tasks from job posting and candidate sourcing to initial screening and interview coordination. Such systems can process hundreds of applications simultaneously and conduct preliminary assessments with minimal human intervention.
Performance management and employee evaluation are also likely to be revolutionised by agentic AI. Systems can continuously monitor employee productivity, analyse performance metrics, and generate recommendations for career development, performance improvement or disciplinary action.
Agentic AI systems can automatically scan for regulatory changes across multiple jurisdictions, update workforce cost models, and flag potential compliance violations before they escalate into legal issues.
The regulatory landscape
The examples given above demonstrate the power and range of agentic AI, and also the profound effect that its decisions can have on candidates’ and employees’ livelihoods. That effect on individuals’ livelihood and wellbeing is precisely why most jurisdictions regulate the employment relationship and provide protection to employees. Agentic AI is therefore going to engage these rights directly, and employers will have to ensure that their systems adequately reflect those rights and obligations, while also taking into account the increasingly complex regulatory landscape that applies to AI. Navigating this landscape is a complex but vital exercise for organisations, and is likely to remain the focus of regulatory compliance for the foreseeable future.
The EU AI Act
UK-based employers may quite legitimately ask why the EU AI Act is relevant to them, given that the UK has left the EU. But there are a number of reasons why the Act may apply to UK-based organisations seeking to implement AI, or why its principles and structure will otherwise be influential in building compliance and oversight systems.
The Act establishes the world’s most comprehensive regulatory framework for artificial intelligence and has particular significance for the use of agentic systems in the workplace. It creates a risk-based approach to AI regulation, and in this context AI systems used in employment and worker management are explicitly classified as high-risk.
Many of the requirements for managing high-risk AI systems in the EU AI Act will provide robust and defensible safeguards for UK employers, particularly where those UK employers are already familiar with UK GDPR obligations.
The UK regulatory approach
The United Kingdom has adopted a distinct ‘pro-innovation’ approach to AI regulation, relying primarily on existing legal frameworks rather than comprehensive AI-specific legislation.
This approach is having the effect of creating a complex regulatory environment where multiple overlapping laws govern different aspects of AI deployment in workplaces.
The UK GDPR and Data Protection Act 2018 form the cornerstone of UK AI regulation, establishing fundamental principles for how agentic AI systems must handle employee personal data. Article 22 of the UK GDPR provides employees with the right not to be subject to solely automated decision-making that produces legal effects or significantly affects them. Agentic AI systems in the workplace can undoubtedly have a significant effect on individuals.
The Equality Act 2010 provides additional protection against discriminatory AI applications, prohibiting both direct and indirect discrimination based on protected characteristics. Employers deploying agentic AI systems face potential liability for discriminatory outcomes even when discrimination was not intended, creating a requirement for proactive bias auditing and continuous monitoring.
The Employment Rights Act 1996 will also affect AI deployment through provisions governing fair dismissal procedures and employee rights. Agentic AI has the ability to take profound decisions about an employee’s future. It is not feasible for the courts or tribunals to find that agentic AI should have no involvement in decisions about employees, but we can expect close scrutiny of the safeguards that an employer has put in place to ensure fairness and that an employee is aware of how and why decisions affecting their future have been taken.
Whilst the UK government has no appetite to legislate on AI in the workplace, legislative developments are occurring in other jurisdictions. For example, in California, AB 2930 (which ultimately failed) was a bill that proposed comprehensive algorithmic discrimination prohibitions. New York City’s Local Law 144 has fared better, establishing mandatory bias audit requirements for automated employment decision tools. The picture is mixed, but the global trend is undoubtedly towards more stringent regulation of workplace AI systems, particularly those with autonomous decision-making capabilities. That trend is slow, however, and we can expect existing employment rights to have to adapt to the new landscape, particularly if adoption of agentic AI systems is rapid.
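Bias audits of the kind Local Law 144 contemplates centre on comparing selection rates across demographic groups and publishing impact ratios. The sketch below illustrates that arithmetic in Python; the data, group labels, and the four-fifths flagging threshold (which comes from US EEOC guidance rather than the statute itself) are purely illustrative assumptions, not a compliance tool.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Impact ratio: each group's selection rate divided by the highest rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes: (demographic category, passed initial screen)
outcomes = [("A", True)] * 40 + [("A", False)] * 60 + \
           [("B", True)] * 25 + [("B", False)] * 75

rates = selection_rates(outcomes)    # A: 0.40, B: 0.25
ratios = impact_ratios(rates)        # A: 1.00, B: 0.625
# Four-fifths rule of thumb: ratios below 0.8 warrant closer investigation
flagged = {g for g, r in ratios.items() if r < 0.8}
```

A real audit would of course use far richer categories and an independent auditor; the point is only that the core metric is simple enough for employers to monitor continuously in-house.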
Data protection challenges
Agentic AI systems present unprecedented challenges for data protection compliance. Unlike traditional AI tools, which process specific datasets for defined purposes, agentic systems are designed to continuously collect, analyse, and cross-reference vast amounts of personal data from multiple sources and then to use that data to make independent decisions. Agentic AI systems may autonomously decide to collect additional data types or combine datasets in novel ways as they adapt to achieve their assigned objectives. These characteristics create significant challenges for traditional data protection compliance and governance models, based upon the principles set out in the GDPR.
Workplace monitoring applications of agentic AI create particular risks when systems continuously analyse employee behaviour patterns, potentially inferring sensitive information about health conditions, personal relationships, or political affiliations. The inferential capabilities of advanced AI systems mean that even seemingly innocuous data inputs can reveal protected characteristics, creating obligations under both data protection and equality legislation.
Agentic AI systems often operate through ‘black box’ algorithms that make it difficult or impossible to provide meaningful explanations of their decision-making processes, which is a fundamental requirement for GDPR compliance.
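One practical mitigation is to favour systems whose outputs decompose into per-feature contributions, so that a meaningful explanation can be given to the affected individual. The sketch below shows the idea with a simple additive score; the feature names, weights, and threshold are invented for illustration and do not represent any real screening tool.

```python
# Hypothetical additive scoring model: because the score is a sum of
# per-feature contributions, each decision can be explained by showing
# exactly how much each input contributed to the outcome.
WEIGHTS = {"years_experience": 2.0, "skills_match": 5.0, "assessment_score": 3.0}
THRESHOLD = 10.0

def score_with_explanation(candidate):
    """Return (decision, per-feature contributions) for one candidate."""
    contributions = {f: WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

passed, why = score_with_explanation(
    {"years_experience": 3, "skills_match": 0.8, "assessment_score": 0.5}
)
# total = 6.0 + 4.0 + 1.5 = 11.5, so the candidate passes, and `why`
# holds the breakdown that can be disclosed to the candidate
```

Fully agentic systems will rarely be this transparent end to end, but the same principle, that every significant decision carries a human-readable account of its inputs, can be imposed at the point where the agent's conclusion is recorded.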
Discrimination and algorithmic bias risks
We have known for some time that training AI systems needs to be handled extremely carefully to avoid bias becoming baked into the system. Agentic AI adds a further dimension to this problem. Because such systems do not always produce predictable outputs, and can develop new decision-making patterns as they learn, they can create discriminatory outcomes that were not present during initial system design or testing.
Transparency and procedural fairness
Employment law requires transparent and fair processes, and this creates significant challenges for agentic AI deployment, given the importance of being able to demonstrate transparent, fair, and reasonable decision-making in employment matters. The black box nature of many agentic AI systems directly conflicts with these requirements.
Human oversight requirements and liability frameworks
Human oversight is required by the GDPR and by the EU AI Act. This aspect of data protection and AI compliance is likely to take on greater importance as agentic AI systems become more commonplace. Human oversight is one of the EU AI Act’s most stringent requirements: AI systems must be designed to enable effective oversight by natural persons during their operational period, requiring organisations to implement appropriate human-machine interface tools that facilitate meaningful supervision.
Human oversight must be conducted by individuals with adequate AI literacy, training, and authority to understand system operations and intervene when necessary. This requirement creates significant organisational obligations: employers must not only designate qualified supervisors but also provide ongoing training to ensure their oversight capabilities remain effective as AI systems evolve.
Meaningful oversight extends beyond mere monitoring to include the capability to prevent or minimise risks to health, safety, and fundamental rights. The European Data Protection Supervisor has emphasised that effective oversight requires active involvement that improves decision quality rather than serving as a mere procedural formality. Real-time intervention capabilities are also essential: supervisors must be positioned to review AI decisions before they take effect.
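The requirement that supervisors review significant decisions before they take effect can be sketched as a simple approval gate: the agent submits decisions, and anything flagged as significant is held pending human sign-off. Everything here (the class names, the significance flag, the example actions) is an illustrative assumption about one possible design, not a prescribed architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject: str
    action: str
    significant: bool  # would this produce legal or similarly significant effects?

@dataclass
class ReviewGate:
    """Holds significant AI decisions until a supervisor approves them."""
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def submit(self, decision: Decision):
        if decision.significant:
            self.pending.append(decision)   # held for human review
        else:
            self.executed.append(decision)  # low-impact: proceeds automatically

    def approve(self, decision: Decision, supervisor: str):
        self.pending.remove(decision)
        self.executed.append(decision)      # takes effect only after sign-off

gate = ReviewGate()
gate.submit(Decision("employee-17", "schedule training", significant=False))
d = Decision("employee-42", "recommend dismissal", significant=True)
gate.submit(d)
# d now sits in gate.pending until a supervisor calls gate.approve(d, ...)
```

The hard part in practice is not the gate itself but classifying decisions correctly and resourcing reviewers so that "held for review" does not quietly become "rubber-stamped", which is where the automation bias discussed below becomes relevant.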
When AI agents operate continuously across multiple functions and time zones, maintaining adequate human supervision is going to become resource-intensive and may require significant organisational restructuring.
In addition, research indicates that human supervisors tend to over-rely on algorithmic recommendations even when provided with oversight authority. This ‘automation bias’ is acknowledged within the EU AI Act, which requires deployers to implement measures that help supervisors remain aware of automation bias and maintain critical evaluation capabilities.
Implementing a governance framework
Effective internal governance is going to be key to implementing safe and effective agentic AI solutions. Many organisations are still at the stage of developing AI governance models without even factoring in new challenges presented by agentic AI, but traditional approaches to AI governance frameworks that are designed for static systems may already be out of date. AI governance programmes will need to be adapted to address the unique challenges posed by agentic AI systems, including their autonomous operation, learning capabilities, and potential for unpredictable behaviour.
The UK government’s AI Insights report on agentic AI acknowledges the unique governance challenges posed by systems with greater autonomy and recommends proactive regulatory adaptation rather than reactive responses to incidents.
International coordination efforts are increasing as regulators recognise that agentic AI systems often operate across borders and may require harmonised approaches to effective oversight. The EU AI Act’s extraterritorial application creates precedents for cross-border AI regulation, but significant jurisdictional differences remain in implementation approaches and enforcement mechanisms.
Strategic recommendations and best practices
Organisations considering agentic AI deployment will need to implement comprehensive risk assessment frameworks that evaluate legal, ethical, and operational implications across all relevant jurisdictions. This assessment should precede any deployment decisions and be updated regularly as systems evolve and regulatory frameworks develop.
Human oversight functions must be designed and implemented before agentic AI deployment, ensuring that qualified personnel have the training, authority, and technical capabilities necessary to provide meaningful supervision. This infrastructure should include clear escalation procedures, intervention protocols, and continuous monitoring capabilities that can scale with system deployment.
Transparency and explainability need to be prioritised in system selection and design, favouring AI solutions that provide meaningful insights into their decision-making processes over more opaque alternatives. Organisations should also develop communication frameworks for explaining AI-driven decisions to affected employees and stakeholders.
Stakeholder engagement including employee consultation, trade union involvement, and regulator liaison will need to commence early in the planning process and continue throughout deployment and operation. This engagement helps identify potential concerns, ensures compliance with consultation requirements, and builds organisational support for AI implementation.
Conclusion
Deploying agentic AI in workplace settings represents both an unprecedented opportunity for organisational efficiency and a fundamental challenge to existing legal and regulatory frameworks. As these autonomous systems become increasingly sophisticated and prevalent, organisations must navigate a complex landscape of privacy, employment, and compliance obligations while preparing for continued regulatory change.
If you need help on any of the matters raised in this article, please contact Matthew Cole.