AI use is soaring, but employment law hasn’t caught up. Proactive steps like policy updates, clear oversight, and contract reviews can better protect your business.
Artificial Intelligence isn’t coming, it’s already here, and it’s starting to reshape how many organisations work, decide, and manage. OpenAI, the company behind ChatGPT, recently announced growth of one million business users between February and June of this year and confirmed it currently supports 500 million weekly users. For a technology that only entered the mainstream in late 2022, that is exponential growth, and employment law, which was not designed with these developments in mind, is struggling to keep pace.
Earlier this year, BBC News polled 4,500 individuals across 30 different employment sectors and, whilst 56% felt excited by AI, 63% also felt overwhelmed by the speed of its development. This fear was seemingly heard: the Trades Union Congress (TUC) has assembled a plan with practical recommendations that it would like to see implemented, to protect workers and give them a voice in how AI is introduced to the workplace. Its proposals include:
- introducing new legislation mandating AI impact assessments, human oversight, and enhanced worker data rights;
- extending regulatory powers to protect workers, including reforms to the regulator responsible for enforcing data protection rules (i.e., the ICO); and
- requiring large companies to have worker representation on boards and report on AI’s workforce impact.
The Government has its own plans for consultation about AI in the workplace, so more will certainly come on this topic in the not-so-distant future. But right now? We are in some form of legal limbo. AI is being used by some businesses across all manner of areas of the employment journey, from recruitment to training, and in some cases even to make ‘recommendations’ in key employment processes such as redundancy exercises. The legal issues that can flow from this are equally wide-reaching – discrimination, unfair dismissal and data protection complaints are three of the many areas that could be questioned if AI tools are used unwisely. Without legislation or case law to guide us, we do not yet know the stance that Tribunals will take, so caution is likely to be wise.
So, what can you do to mitigate your risk? Be prepared. Change is coming, if it hasn’t already arrived, and to minimise your risk of discrimination claims or data headaches, you should consider taking measures such as:
- introducing an Artificial Intelligence Policy tailored and appropriate to your business, ensuring your employees are left in no doubt about what they can and can’t use AI for and, if use of any kind is permitted, what information can and cannot be uploaded;
- taking a cautious approach when using AI to make key decisions or recommendations relating to employees. Current AI tools are generally considered to lack common sense, meaning that they are less able to make more nuanced decisions (at least for now);
- updating your Data Protection Policy, privacy notices and any other affected terms and conditions; and
- updating employment contracts, disciplinary policies and/or job descriptions to reflect implementation of AI, particularly if yours is a business where AI use is likely to be prohibited.
If you need guidance or support navigating these changes, or you would be interested in a contract and handbook review, don’t hesitate to contact Jake King at [email protected] for assistance.

