Using AI for automatic decisions: risk, opportunity and the ICO

As organisations increasingly turn to new technologies to ease pressures on time, resources and expenditure, it comes as no surprise that some are using AI large language models (LLMs) to streamline business processes. But what happens when organisations use AI to make automated decisions?

Organisations are increasingly using AI to streamline internal processes, particularly ‘form-based’ processes. These processes can involve automated decision-making and/or profiling.

Automated decision-making can be carried out using simple, linear, rules-based technology. For example, an examination board may use an automated system to mark multiple-choice examination papers and determine a pass or fail mark without human intervention.
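To illustrate, a solely automated, linear process of this kind might look something like the following minimal Python sketch (the answer key, pass mark and function name are invented for illustration):

    # Minimal sketch of a solely automated marking system: the
    # pass/fail outcome is produced with no human involvement.
    # The answer key and pass mark below are invented examples.

    ANSWER_KEY = {1: "B", 2: "D", 3: "A", 4: "C"}
    PASS_MARK = 0.5

    def mark_paper(responses):
        """Return 'pass' or 'fail' based purely on the automated score."""
        correct = sum(1 for q, a in responses.items() if ANSWER_KEY.get(q) == a)
        return "pass" if correct / len(ANSWER_KEY) >= PASS_MARK else "fail"

    print(mark_paper({1: "B", 2: "D", 3: "C", 4: "C"}))  # -> pass (3/4 correct)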

With advances in AI, this concept can be taken further to profile individuals, i.e. using large data sets to identify links between different behaviours and attributes, determine trends, and then automatically evaluate individuals against those trends. For example, systems using generative AI can assess whether a particular medical treatment is more or less likely to work for a particular patient, or assess candidates’ responses to recruitment questions to determine who is best suited for the role.
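As a toy illustration of profiling, the sketch below derives crude ‘trend’ weights from a historical data set and then scores a new individual against them, all without human input (the attribute names, data and scoring rule are entirely invented; real profiling systems are far more sophisticated):

    # Toy profiling sketch: derive per-attribute "trend" weights from
    # historical outcomes, then automatically score a new individual.
    # Attribute names, data and the scoring rule are all invented.

    historical = [
        ({"experience_yrs": 5, "test_score": 80}, 1),  # 1 = suited to role
        ({"experience_yrs": 1, "test_score": 40}, 0),  # 0 = not suited
        ({"experience_yrs": 4, "test_score": 75}, 1),
        ({"experience_yrs": 2, "test_score": 50}, 0),
    ]

    def trend_weights(data):
        """Weight each attribute by the gap between its average in the
        positive and negative outcome groups (a crude stand-in for
        model training)."""
        pos = [attrs for attrs, outcome in data if outcome == 1]
        neg = [attrs for attrs, outcome in data if outcome == 0]
        mean = lambda rows, k: sum(r[k] for r in rows) / len(rows)
        return {k: mean(pos, k) - mean(neg, k) for k in data[0][0]}

    def profile_score(individual, weights):
        """Score an individual against the learned trends, no human input."""
        return sum(weights[k] * individual[k] for k in weights)

    weights = trend_weights(historical)
    print(profile_score({"experience_yrs": 3, "test_score": 65}, weights))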

Prohibitions

Automated decision making (including profiling) is generally prohibited by the UK GDPR if:

  • it is ‘solely’ automated; and
  • it will have a legal or similarly significant effect on individuals (no distinction is drawn between the simpler ‘linear’ technology and the more complex profiling described above).

Solely automated:

A ‘solely’ automated decision is one where there is no human influence on the outcome. Accordingly, if a person interprets the result of an AI-automated process before the decision is made, this is likely to count as human influence on the outcome. For example, if an employee’s attendance is registered on an internal system that is set up to automatically issue a formal warning to an employee who has ‘clocked in’ late a defined number of times, this is likely to be a solely automated decision. If, however, the information is flagged to an HR manager, who decides whether to issue a formal warning on the basis of that information, the decision is unlikely to be considered solely automated.
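The attendance example might be sketched as follows (the threshold and function names are hypothetical); the difference between the two functions is where the decision is actually made:

    # Hypothetical sketch contrasting a solely automated decision with
    # a human-in-the-loop process. LATE_LIMIT is an invented threshold.

    LATE_LIMIT = 3

    def solely_automated(late_count):
        """The system itself decides: likely 'solely automated'."""
        return "issue formal warning" if late_count > LATE_LIMIT else "no action"

    def human_in_the_loop(late_count):
        """The system only flags the employee; an HR manager makes the
        actual decision, so this is unlikely to be 'solely automated'."""
        if late_count > LATE_LIMIT:
            return "flag to HR manager for review"  # a human decides from here
        return "no action"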

Conversely, if a person merely inputs data which the AI system then interprets, the decision may still be considered solely automated. For human involvement to take a decision outside the prohibition, it must be meaningful rather than a token gesture: simply rubber-stamping the system’s output will not be enough.

Legal or similarly significant effect:

An example of an automated decision having a legal effect on individuals would be a system that determines whether an individual is entitled to a government benefit scheme (and, if so, how much they should receive). What is meant by ‘similarly significant’ is murkier. Examples from the ICO of what this may include are:

  • An online loan application that uses algorithms and automated credit searches to provide an immediate yes/no answer (see the sketch below).
  • An organisation deciding to interview certain people based entirely on the results achieved in an online aptitude test.

Conversely, it would not include recommending new television programmes based on an individual’s previous viewing habits.
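To make the loan example above concrete, an immediate, similarly significant yes/no decision might be produced by something like this hypothetical sketch (the credit-search stand-in and approval threshold are invented):

    # Hypothetical online loan decision: an immediate yes/no produced
    # entirely by an algorithm and an automated credit search, with no
    # human review. The threshold and scoring stand-in are invented.

    APPROVAL_THRESHOLD = 600

    def automated_credit_search(applicant_id):
        """Placeholder for a third-party credit reference check."""
        return hash(applicant_id) % 1000  # dummy score for the sketch

    def decide_loan(applicant_id):
        score = automated_credit_search(applicant_id)
        return "approved" if score >= APPROVAL_THRESHOLD else "declined"

    print(decide_loan("applicant-123"))  # immediate decision, no human input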

Exemptions

If you are carrying out solely automated decision-making (including profiling), the UK GDPR provides three possible exemptions, where the decision is:

  1. authorised by law (for example, to detect and prevent crime);
  2. based on the individual’s explicit consent; or
  3. necessary for a contract.

(There are further rules where special category personal data is involved, which are not discussed in this article.)

  1. ‘Authorised by law’ does not mean that you have to identify a law which explicitly states that solely automated decision-making is authorised for a particular purpose. Rather, if you have a statutory or common law power to do something, and automated decision-making/profiling is the most appropriate way to achieve that purpose, you may rely on this exemption. You may need to show, however, that it is reasonable to do so in all the circumstances.
  2. The usual UK GDPR rules on obtaining consent apply to this type of consent too: it must be a freely given, specific, informed and unambiguous, affirmative indication of the individual’s wishes. For consent to be “explicit”, the ICO ideally wants it confirmed in a written statement, by filling in an electronic form or by sending an email. Although ‘opt-in’ tick boxes have not been expressly identified, the ICO seems generally to accept these as explicit consent, provided that the individual is specifically informed that the decision will be entirely automated before the box is ticked.
  3. The ICO’s guidance on what is “necessary” (for contractual purposes) states that this does not have to mean “essential”, but that the processing should be a targeted and reasonable way of meeting contractual obligations, undertaken in the least privacy-intrusive way. Organisations should therefore consider whether the decision is appropriate in the circumstances of the particular contract in question.

Note that the contract is envisaged to be between the data controller and the data subject. However, the exemption can extend to cover the actions of a data processor if it can be shown that those actions are necessary for the contract to be fulfilled. For example, if a bank relies on an automatically generated credit score produced by a third party to decide whether or not to grant a loan, the third party is likely to be covered by extension.

In any event, the overarching agreement should be drafted carefully so that appropriate protections (such as indemnities) are included, particularly if the arrangements will lead to data subjects ending up on the processor’s website or platform.

If relying on either consent or contractual necessity, organisations must provide information in appropriate privacy policies/notices explaining to individuals, in easy-to-understand terms, how automated decisions are made, and informing them of their right to challenge an automated decision and request human intervention.

As always, a data protection impact assessment (DPIA) should be carried out before the technology is put into use.

If you have any questions arising from this article or about any other legal aspect of using automated technologies/AI in your organisation, please do get in touch with our Head of Commercial, IP & Technology at [email protected]