AI-driven technologies are transforming the way we work and live. This article discusses some data protection aspects of the European Union’s AI Act.
What is the EU AI Act and which businesses does it apply to?
The EU AI Act, which recently came into force, aims to regulate the development, deployment, and use of AI systems in a way that protects fundamental rights and ensures safety and accountability.
Both EU and non-EU organisations which either supply AI systems to EU customers or have products or services that involve AI models used in the EU must comply with the EU AI Act provisions, most of which take effect in August 2026. The EU AI Act contains specific rules where ‘high-risk AI systems’ are involved, and this article sets out some points to consider for organisations dealing with them.
What are high-risk AI systems?
High-risk AI systems are those which pose significant risks to fundamental rights, safety and security. They typically include biometric identification (such as facial recognition), recruitment tools (such as AI-driven recruitment processes), credit scoring, and AI used in healthcare and critical infrastructure. What these examples have in common is that they often involve processing personal data and directly impact individuals’ rights and freedoms.
Organisations supplying or deploying such systems will need to consider how the EU AI Act will affect their use of personal data when they either deploy an AI system or supply one in the EU. The EU AI Act and the GDPR work together to safeguard individuals’ data, but the EU AI Act introduces additional layers of responsibility for organisations supplying or deploying high-risk AI systems.
What are the key data takeaways for high-risk AI systems in the EU AI Act?
Here are some key areas where the EU AI Act discusses data protection:
- Data Governance and Quality
High-risk AI systems which use techniques involving the training of models must be developed on the basis of training, validation and testing data sets. These data sets should:
- be subject to appropriate data governance and management practices, covering relevant design choices; data collection; relevant data preparation processing operations; the formulation of relevant assumptions; assessments of the availability, quantity and suitability of the data sets; examination for possible biases; and the identification of any possible data gaps and how they will be addressed.
- be relevant, representative, free of errors and complete.
- take into account the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
- appropriately safeguard the fundamental rights and freedoms of natural persons where it is necessary for the purposes of ensuring bias monitoring, detection and correction.
Where a high-risk AI system does not use techniques involving the training of models, proper data governance and management practices should still apply.
This is critical in sectors like recruitment, where biased data could lead to unfair decisions.
- Risk Management and Mitigation
Organisations should implement a risk management system to regularly assess the risks posed by the AI system throughout its lifecycle. The EU AI Act requires certain deployers of AI systems to conduct a fundamental rights impact assessment (FRIA), which is similar to a data protection impact assessment (DPIA) under the GDPR. One important aspect to note about FRIAs is that data controllers will need to consult with third parties, just as they do when conducting a DPIA.
- Transparency
Organisations should provide clear information about how the AI system was trained, how it processes data, and how decisions are made. If AI systems use personal data, users and consumers should be given the information they need to be able to understand how their data is being processed and what role the AI system plays in decision-making. This aligns with the GDPR’s emphasis on transparency and the right of individuals to know how their data is used.
- Human Oversight
A degree of human oversight will be needed to ensure that decisions made by the high-risk AI system can be reviewed or overridden if necessary. This is particularly important when AI is used for critical automated decisions, such as hiring or financial services, which can significantly impact individuals’ lives (automated decision-making is also addressed in the GDPR). Ensuring human intervention when required helps to mitigate risks associated with errors or bias.
The EU AI Act imposes strict obligations on high-risk AI systems to safeguard data protection and ensure accountability. Organisations caught by the EU AI Act should align their AI operations with both the EU AI Act and the GDPR, focusing on data quality, transparency, risk management, and human oversight. Looking forward, compliance with both regimes will be essential, not only to avoid penalties but also to maintain trust as the development and use of AI continues.
If you have any questions as a result of this article, please contact: [email protected].