The CMA has published the outcome of a consultation it held concerning the potential harms to competition and consumers caused by algorithms. Across the Channel, the European Commission has produced its own draft regulation to deal with the threat AI poses to consumer rights and data.
In April 2021, the European Commission (EC) published a draft regulation establishing a legal framework for Artificial Intelligence (AI). The regulation sets out rules on the use of AI systems in the EU market, with specific requirements for high-risk AI systems and obligations for the operators of such systems, together with rules on market monitoring and surveillance of AI systems. The regulation would apply to providers of AI systems in the EU, users of AI systems in the EU, and providers and users of AI systems located in a third country where the output produced by the system is used in the EU.
The proposed regulation takes a risk-based approach, with four different levels of risk, ranging from “minimal” to “unacceptable”:
- Unacceptable risk AI systems: Include AI systems that exploit vulnerabilities due to age or physical or mental disability, and the use of real-time remote biometric identification systems in public spaces for law enforcement. These would be prohibited under the regulation.
- High-risk AI systems: Such as those intended to evaluate the creditworthiness of natural persons and those used for “real-time” and “post” remote biometric identification of natural persons. These would be subject to mandatory requirements before they could be placed on the EU market.
- Limited risk AI systems: Include systems that pose a clear risk of manipulation (such as chatbots). These would be subject to transparency requirements.
- Minimal risk AI systems: These can be developed and used under the existing legislation without any additional legal obligations. However, the EC will encourage the adoption of voluntary codes of conduct by industry associations and other representative organisations.
In January 2021, the UK’s Competition and Markets Authority (CMA) published a research paper on the harms to competition and consumers caused by algorithms. The paper covered the potential harms that may arise from the misuse of algorithmic systems in the market, including the personalisation of prices, the exclusion of competitors, and the facilitation of collusion. The CMA’s research was predominantly focused on the risks to market competition and consumers’ financial wellbeing.
The CMA then held a consultation on this research paper and has recently published the responses to the consultation. The responses revealed concerns beyond the purely financial. Many of those consulted raised concerns over the increasing asymmetry of information and power between companies and consumers, and the implications this has for consumer privacy. It is feared that the accelerating development of algorithms through the use of AI could serve to exacerbate these power and privacy issues.
Some respondents commented that algorithms are already being used as a way of concealing certain information from consumers – whether that be information needed to make rational purchasing decisions, or information withheld for more sinister corporate-image or political purposes.
In the context of the market, it was suggested that advanced algorithms and AI systems may, without their developers intending it, determine that the most efficient outcome is to tacitly collude with other such systems. Because the AI acts independently, this could result in unprecedented levels of tacit collusion in which the companies themselves are not even aware that they are colluding.
Against the backdrop of data privacy's rapidly growing significance over the past five years and more, both the CMA and the EC seem to recognise the insidious threats posed by AI and algorithms to civil liberties and the integrity of the free market alike. The CMA noted comments from the consultation suggesting that a regulatory framework should be introduced requiring companies to publish statistics on their algorithms. Alternatively, it was proposed that there could be an open register of AI systems and algorithms, to encourage transparency and provide a clear benchmark for the level of self-policing firms should engage in. Whether the UK Government ultimately decides to follow the EC’s lead, with legislation categorising and forbidding certain forms of AI, remains to be seen.