Use of AI is spreading all the time, but what does it mean for SMEs grappling with how to use it? We take a look at some of the legal issues to consider…
The use of AI systems in day-to-day business life is increasing all the time, with systems such as ChatGPT and Microsoft 365 Copilot being used by staff across a wide spectrum of industries to help them with their work.
The EU’s AI Act came into force on 1 August 2024, although most of its provisions don’t apply until August 2026. It will prohibit certain practices with a view to improving the safety and transparency of AI systems, and will require accountability from organisations supplying and using them. It will apply not only to EU-based organisations but also to those outside the EU which supply AI systems into the EU, or which use AI output in the EU. The UK hasn’t (as yet) announced plans to take a similar path, instead favouring a more piecemeal approach – in February 2024 the government asked various regulators for an update on their approach to AI with a view to producing a regulatory framework which is “effective, proportionate and pro-innovation”.
While these developments may seem some way off for many UK businesses, there are a number of practical issues which can be considered now to reduce the legal risks around the use of AI and maximise the benefits the technology can bring.
Input and usage of data
The question profile of a user of an AI system (the user’s identity and the sorts of things they are asking) may, in a business context, involve relatively little personal data relating to the user – their login details and the fact that they are asking business-related questions – so this may be regarded as relatively low risk.
However, the risk will increase if the user inputs large quantities of personal data, for example if they ask the AI to sort staff or client lists or to help sift job applications. The terms of use of the AI system will most likely make the user responsible for ensuring they have the right to input personal data, have provided privacy notices to those whose personal data will be input into the AI, and so on.
Where AI is used to make decisions about people, there is a risk of it developing prejudices and biases which could lead to discrimination claims being made, for example in the context of hiring people or deciding whether to offer them credit or insurance.
The subject matter of the questions may disclose confidential business information, trade secrets, etc, which could give a competitor an advantage if accessed – for example, if the information input into the AI is then used by the AI to generate answers to other users’ questions and/or the AI suffers a security breach. To combat this, some companies (e.g. in the defence sector) allow staff to use only private AI systems. That said, public AI systems tend to take data privacy very seriously, not least because their business reputation would be at serious risk if they failed to keep data secure.
Some public AI systems enable the user to opt out of their input content being used by the system, but there may be concerns as to the reliability of the opt-out, and opting out may reduce the usefulness of the output, particularly if large numbers of users do so.
Data ownership
Currently there are no standard legal rules as to who owns data after it is put into an AI system.
The EU’s AI Act will introduce transparency requirements which are likely to have implications for intellectual property (IP) ownership – information will need to be provided to users by the providers and deployers of AI. Developers will need to explain what data they use to train the AI model, where it comes from and whether it was created by humans or generated by AI. This is likely to shine a spotlight on whether copyright material has been used and whether the developer has obtained the necessary licences to use it. In tandem, rights holders are looking to prevent tech companies from using copyrighted works to train AI systems.
The UK seems to be taking a similar approach to the EU. The UK IPO has indicated that it has no plans for a voluntary code of practice for copyright and AI. Instead, the last government said it would work with AI developers and copyright holders to explore mechanisms for greater transparency.
Whether AI-generated works are themselves capable of being protected by copyright, and who would own that copyright, are unresolved issues, with no government organisations or regulators taking a lead so far.
Despite the absence of specific regulation to date, tech companies’ terms of use seek to apply current IP principles to the use of their AI systems. For example, OpenAI’s terms of use:
- Prohibit the use of their AI services in a way that infringes, misappropriates or violates anyone’s rights.
- Require the user to warrant they have all rights needed to input information.
- Provide that, as between OpenAI and the user, the user owns the IP in the input and the output.
- Assign the IP in the output to the user – but only to the extent it is not the same as other users’ output. This recognises that the output provided to one user may be similar to output provided to others but creates uncertainty for users as to who owns what.
However, an AI system cannot ensure that no infringing material is input into it, and if this happens it is quite possible that output infringing a third party’s IP rights could be generated without the user realising.
All of the above points to the importance of users taking care with the output they obtain from an AI system, for example by treating it as a first draft only.
Reliability of output
There is also the question of the reliability of answers generated by AI. Where AI is used as a workplace tool, responsibility for the use of the output still rests with the user and, ultimately, their employer. While there are different views as to the quality of AI output, it is generally accepted as a starting point rather than a final product, and it is best to independently check the “facts” provided in any output.
On the flip side, as AI becomes better than humans at performing certain tasks, could someone eventually be negligent (i.e. found to have failed to discharge their duty of care) if they failed to use an AI tool to perform a task where it would have been reasonable to do so?
Practical precautions – strategy, policy & training
AI systems are here to stay, and younger generations in particular are likely to be open to taking advantage of them. It is therefore important that business owners and senior management inform themselves about AI and its potential impact on their business (whether good or bad). Increased regulation of AI is on the way, but in any event, businesses should consider adopting an AI strategy.
If a business intends to allow its workforce to use GenAI for business purposes, it is best practice to put in place a workplace policy to set out the rules around its use. This can sit alongside other workforce policies already in place that cover related content, e.g. IT and communications policy, BYOD policy, data protection policy, diversity/equity/inclusion policy.
A clear workforce policy as to what staff should and shouldn’t do when it comes to the use of AI, supported by training to help ensure staff are aware of, and understand, its requirements, should help mitigate many of the risks associated with using AI. It should also provide the employer with a platform to take disciplinary action if necessary.
Insider threats
Internal AI systems can make it easier for staff to get access to information they should not have access to. This risk can be mitigated by ensuring that appropriate access controls are set. However, this can be quite complicated to achieve in practice, so seeking advice from technical specialists in this area may be necessary.
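By way of illustration for readers with a technical team, the minimal sketch below shows the underlying principle in code. It is hypothetical (the names Document, User and retrieve_for_user are invented for this example): an internal AI assistant should only be allowed to draw on documents that the person asking the question is already entitled to see.

```python
# Illustrative sketch only: document-level access control applied
# *before* content reaches an internal AI assistant. All names here
# are hypothetical, not any particular product's API.

from dataclasses import dataclass


@dataclass(frozen=True)
class Document:
    title: str
    content: str
    allowed_groups: frozenset  # groups permitted to see this document


@dataclass(frozen=True)
class User:
    username: str
    groups: frozenset  # groups the user belongs to


def retrieve_for_user(user: User, documents: list) -> list:
    """Return only the documents this user is entitled to see.

    In a real deployment this filter would sit between the search or
    retrieval layer and the AI model, so the model never receives text
    the requesting user could not access directly.
    """
    return [d for d in documents if user.groups & d.allowed_groups]


if __name__ == "__main__":
    docs = [
        Document("Staff handbook", "...", frozenset({"all-staff"})),
        Document("Board minutes", "...", frozenset({"directors"})),
    ]
    analyst = User("jsmith", frozenset({"all-staff"}))
    # Only the staff handbook is eligible to be quoted by the assistant.
    print([d.title for d in retrieve_for_user(analyst, docs)])
```

Getting the group memberships and document permissions right across a whole organisation is where the practical complexity lies, which is why specialist advice is often needed.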
Buying AI services
If a business decides to buy AI services, the issues to consider during due diligence and contract negotiation are likely to include:
- How the tool is trained and information about the inputs and outputs
- Treatment of IP rights
- Confidentiality
- Data privacy
- Compliance with laws
- Liabilities
- Reporting on risk issues (such as security incidents, discrimination, hallucinations, IP infringement, supplier having a “human in the loop” – integrating human oversight and feedback).
If you have any questions as a result of this article, please contact: [email protected]