The EU’s Proposed AI Regulation: Balancing Protection and Innovation
On 21 April 2021, the European Commission published its long-awaited Proposal for a Regulation on Artificial Intelligence (AI) (Proposed AI Regulation), following the European strategy on AI unveiled in 2018, the Ethics Guidelines for Trustworthy AI of 2019 and the White Paper on AI of 2020.
The Proposed AI Regulation introduces a comprehensive set of rules for the governance of AI technologies in the EU, including in particular: (i) rules for the development, placing on the market and use of AI systems; (ii) the prohibition of certain AI practices; (iii) specific requirements for high-risk AI systems; (iv) transparency rules for AI systems that interact with natural persons, emotion recognition systems, biometric categorization systems, and AI systems used to generate or manipulate image, audio or video content; and (v) rules on market monitoring and surveillance.
1. Scope of application
The Proposed AI Regulation will govern any use of AI technologies within the EU and will also affect AI systems located outside the EU whose output is used in the EU. Like the GDPR, the rules are likely to become a standard that AI developers based outside the EU will have to respect. The rules will not apply to AI systems developed or used exclusively for military purposes, although it remains unclear whether the regulation will also cover dual-use devices and technologies.
2. Definition of AI and AI systems
The Proposed AI Regulation recognizes AI as a fast-evolving family of technologies that brings various economic and societal benefits in almost all spheres of life (e.g. improved prediction, optimized resource allocation, personalized service delivery), while at the same time posing serious risks for individuals and society. It defines an AI system as software that is developed using various techniques, such as machine learning, knowledge-based or statistical approaches, and that can generate outputs such as content, predictions, recommendations or decisions.
3. Risk-based approach
The Commission decided to adopt a risk-based approach to regulating AI. AI systems are therefore classified into three categories: (i) unacceptable risk, (ii) high risk and (iii) low or minimal risk.
3.a. AI systems presenting an unacceptable level of risk
The Proposed AI Regulation prohibits placing on the market, putting into service or using AI systems that (i) deploy subliminal techniques beyond a person’s consciousness to distort a person’s behavior in a manner that causes or is likely to cause physical or psychological harm; (ii) exploit vulnerabilities of specific vulnerable groups (e.g. children or persons with mental disabilities) to distort their behavior in the same harmful manner; (iii) serve for the evaluation or classification of the trustworthiness of natural persons, including social scoring leading to discriminatory treatment that is unrelated to the context in which the data was originally generated, or that is unjustified or disproportionate to the type or gravity of the social behavior; or (iv) perform “real-time” remote biometric identification (e.g. facial recognition) in publicly accessible spaces for law enforcement purposes, except in certain cases and under strict conditions. Applying these prohibitions requires an assessment of the harm or discriminatory treatment produced by the technology, and the language used leaves room for interpretation. The Proposed AI Regulation therefore does not exclude any technology per se, but first requires a thorough analysis of the effects of any AI system.
3.b. High-risk AI systems: types and requirements to be fulfilled
The Proposed AI Regulation distinguishes between (i) AI systems that are safety components of products, or are themselves products, already subject to a third-party conformity assessment before being placed on the market or put into service (such as machinery, toys and medical devices), and (ii) AI systems that are neither components of products nor products themselves (stand-alone AI systems) but that create a high risk to the health and safety or fundamental rights of natural persons. The level of risk of stand-alone systems is assessed based on their intended purpose, the severity of the possible harm and the probability of its occurrence.
Stand-alone high-risk AI systems undergo internal control, i.e. the provider itself performs the relevant conformity assessment, after which the systems must be registered in an EU database managed by the Commission. The Proposed AI Regulation provides a list of areas where high-risk stand-alone systems are used, including: critical infrastructure (e.g. transport, supply of water or electricity); systems that may determine the educational and professional course of someone’s life (e.g. scoring of exams); employment, workers management and access to self-employment (e.g. recruitment and selection of persons, task allocation, monitoring or evaluation of persons in work-related contractual relationships); essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan); law enforcement uses that may interfere with people’s fundamental rights and lead to, e.g., surveillance, arrest or deprivation of a natural person’s liberty; migration, asylum and border control management (e.g. verification of the authenticity of travel documents, assistance in examining applications for asylum); and administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
High-risk AI systems must meet various requirements in order to comply with the Proposed AI Regulation. Among others, providers have to establish and maintain an adequate risk assessment and management system; ensure the high quality of the training, validation and testing datasets, which must be relevant, representative, free of errors and complete; draw up and keep up to date technical documentation; ensure automatic logging of the AI system’s activities and traceability of its functioning throughout its lifecycle; provide users with all information necessary to understand the AI system and use it appropriately; ensure human oversight measures to prevent or minimize risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used; and design the system in a way that achieves an appropriate level of accuracy, robustness and cybersecurity.
Providers of high-risk AI systems and manufacturers of otherwise regulated products (toys, etc.) integrating AI systems must ensure that their systems comply with the above requirements and fulfill all the other obligations specified in the Proposed AI Regulation, including the obligation to affix the CE marking to indicate conformity with the Proposed AI Regulation. Besides the obligations imposed on providers, the Proposed AI Regulation also envisages certain obligations for importers, distributors and, in particular, users of high-risk AI systems.
3.c. Low- or minimal-risk AI systems
The Proposed AI Regulation allows the free use of AI systems presenting low or minimal risk. However, providers of these AI systems are encouraged to voluntarily apply the requirements otherwise mandatory for high-risk AI systems, by drawing up and implementing codes of conduct.
The proposed scheme, in particular the rather heavy obligations applying to high-risk AI systems, might discourage the fast deployment of those technologies in the EU, at least for providers such as SMEs that lack the resources to properly address the regulatory requirements. Much will, however, depend on the assessment of the possible harms and adverse effects on fundamental rights generated by AI tools. The difficulty of operationalizing this assessment might affect the use of those technologies in the EU.
4. Transparency rules for certain AI systems
Providers of AI systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (image, audio or video) must ensure that the natural persons involved are informed that they are interacting with, or exposed to, an AI system. In the third case, it must also be disclosed that the content has been artificially generated or manipulated. Certain exceptions are envisaged for each of these three cases.
5. Modification of AI systems
Whenever an AI system is modified in a way that may affect its intended purpose or its compliance with the Proposed AI Regulation, a new conformity assessment must be carried out. Providers of AI systems that continue to ‘learn’ after being placed on the market or put into service have to ensure that changes to the system and its performance do not constitute a substantial modification.
6. Measures in support of innovation
In order to ensure an innovation-friendly framework, the Proposed AI Regulation encourages national competent authorities to set up AI regulatory sandboxes, i.e. controlled environments that facilitate the development, testing and validation of innovative AI systems for a limited time, pursuant to a specific plan, before they are placed on the market or put into service. The option of using those sandboxes to test new systems might somewhat counterbalance the negative effect of the compliance regime on the speed of deployment of innovative systems.
7. Governance
At Union level, the Proposed AI Regulation establishes a European Artificial Intelligence Board, composed of representatives of the Member States and the Commission. The European Artificial Intelligence Board should facilitate the implementation of the proposed rules and the development of standards for AI.
At national level, Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the regulation.
8. Non-compliance with the Proposed AI Regulation
It is up to the Member States to lay down rules on penalties, including administrative fines, and to take all measures necessary to ensure that they are properly and effectively implemented; the Commission must be notified of the adoption of such rules and measures. For certain infringements, the Proposed AI Regulation itself sets maximum administrative fines of up to 6% of a company’s total worldwide annual turnover for the most serious violations, such as breaches of the prohibited AI practices, with lower ceilings of 4% and 2% of worldwide turnover for other infringements.
9. Next steps
The draft legislation will now have to go through the EU legislative process, which might take about two years before it is finally adopted. Most provisions will then apply two years after the regulation enters into force, meaning the new obligations for AI providers and users could start to apply during the first half of 2025. The proposed text is likely to be amended by the European Parliament and the Council of the EU, and there is still room for improving the regulatory scheme; various stakeholders will have the opportunity to raise comments and objections during the process. Time will tell whether the Proposed AI Regulation strikes the right balance, and whether European operators will be able to innovate and remain competitive within that regulatory framework.
Do not hesitate to contact us should you require further information or assistance on the issues discussed in this note, or on any other data protection-related matter.