The Artificial Intelligence Act is coming – here are 6 key takeaways

11 Jan 2024

After intense negotiations, the Council, Parliament and Commission of the EU have finally reached an agreement on the Artificial Intelligence Act (“AI Act”). The draft of the AI Act, which was introduced by the Commission in April 2021 as part of the EU digital strategy, will now be finalized and then formally approved by the Council and Parliament to establish the new regulatory framework for AI in the EU. The AI Act is anticipated to enter fully into effect in early 2026, with the exact date depending on the publication of the approved version in the Official Journal.

This article summarizes the must-know points about the AI Act and its impact on businesses – whether they act as deployers or providers of AI systems.
1. WHAT CONSTITUTES AN AI SYSTEM?

The exact definition of an AI system is still unclear; however, the Commission has confirmed that the AI Act will adopt a technology-neutral and uniform approach, with the main criterion for the definition likely being the generative aspects of AI.

2. RISK-BASED APPROACH TO REGULATING AI

The AI Act introduces many obligations for the developers (“providers”) of AI solutions, which will apply based on the level of risk attributed to a given AI system. There will be three main categories of AI systems:

Prohibited AI Systems

AI systems that entail unacceptable risk will be banned, as they are considered a clear threat to individuals and their fundamental rights. The following techniques will likely fall within the scope of prohibited AI systems:

  • Cognitive behavioural manipulation of individuals or specific vulnerable groups
  • Untargeted scraping of facial images from the internet or CCTV systems
  • Emotion recognition at the workplace
  • Biometric categorisation of individuals for the purpose of deriving sensitive information, such as sexual orientation or political opinions, or
  • Real-time remote biometric identification systems in publicly accessible spaces, with the exception of systems deployed by law enforcement authorities under certain conditions.
High-Risk AI Systems

AI systems that may potentially cause harm to the safety or fundamental rights of individuals, the environment or democracy will be considered high-risk.

Providers of high-risk AI systems will be subject to several obligations, including (i) conformity assessment before placing the AI system on the market and throughout its lifecycle, (ii) registration and notification obligations, (iii) transparency obligations, (iv) obligations to draw up and implement various internal processes, policies and technical documentation in areas such as data governance, quality monitoring and testing, and (v) implementation of requirements on the allocation of responsibility within the providers’ supply chain.

Limited Risk AI Systems

AI systems that fall within the category of limited risk will likely only have to meet minimal transparency requirements. In particular, individuals should be made aware that they are interacting with AI so that they can make an informed decision on whether they want to continue using the AI system and its outputs.

3. SPECIAL RULES FOR GENERAL-PURPOSE AI AND FOUNDATION MODELS

General-purpose AI (“GPAI”) and foundation models, i.e., generative AI systems such as ChatGPT provided by OpenAI, Bard provided by Google, or the models provided by Mistral AI, will have to comply with a specific set of obligations. Providers will have to disclose that content was generated by AI, prevent the system from generating illegal content, and publish summaries of the copyrighted data used for training.

GPAIs and foundation models that may pose systemic risk across the value chain will be considered high-impact foundation models, which will be subject to a more stringent regime that includes additional obligations such as notifying serious incidents to the supervisory authorities.

4. OBLIGATIONS FOR USERS OF AI SYSTEMS

While the majority of the regulatory burden lies with the providers of AI systems, the AI Act will impose certain obligations on the users of AI systems (“deployers”) as well. A key aspect is the obligation to perform a fundamental rights impact assessment before putting the AI system to use. Other obligations will likely include information obligations (i.e., the inclusion of information notices where relevant) and registration obligations in the case of high-risk AI systems.

Deployers that intend to allow the AI system to process personal data will also need to bear in mind the obligations arising from the GDPR. In particular, they will need to update the relevant privacy policies and conduct a data protection impact assessment (“DPIA”).

5. NEW SUPERVISORY AUTHORITIES

At the EU level, the AI Act will introduce a new governance and supervisory body referred to as the AI Office, which will have direct enforcement powers. The AI Office will be advised by a scientific panel of independent experts.

Further, the AI Board, comprised of representatives of the Member States, will continue to act as a coordination platform and will be advised by an advisory forum comprised of representatives of industry and academia.

At the national level, individual Member States will be required to appoint supervisory authorities tasked with the enforcement of the AI Act. It is anticipated that national data protection supervisory authorities will be designated as competent authorities, as recommended, for example, by the EDPS.

6. PENALTIES

The AI Act imposes hefty fines for non-compliance, including for the misclassification of a high-risk AI system. Depending on the category of the breach, an infringement could result in a maximum fine of (a) EUR 7.5 million or 1.5% of global turnover, or (b) EUR 35 million or 7% of global turnover. When deciding on the amount of the fine, the authorities shall also take into account the nature, gravity and duration of the infringement, whether other authorities have already imposed fines for the same infringement, and the size of the company.
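To illustrate the arithmetic, the following is a minimal sketch in Python of how the fine ceilings could be computed. It assumes, by analogy with the GDPR model, that the higher of the fixed amount and the turnover-based amount applies as the ceiling; the final text of the AI Act may provide otherwise, for instance for SMEs and start-ups.

```python
# Illustrative sketch only: assumes the ceiling is the HIGHER of the fixed
# amount and the turnover-based amount, mirroring the GDPR model. The final
# AI Act text may treat certain companies (e.g., SMEs) differently.

def max_fine(global_turnover_eur: float, severe_breach: bool) -> float:
    """Return the theoretical fine ceiling in EUR.

    severe_breach=True  -> category (b): EUR 35M or 7% of global turnover
    severe_breach=False -> category (a): EUR 7.5M or 1.5% of global turnover
    """
    fixed, pct = (35_000_000, 0.07) if severe_breach else (7_500_000, 0.015)
    return max(fixed, pct * global_turnover_eur)

# Example: a company with EUR 1 billion in global turnover.
print(max_fine(1_000_000_000, severe_breach=True))   # 70000000.0 (7% > EUR 35M)
print(max_fine(1_000_000_000, severe_breach=False))  # 15000000.0 (1.5% > EUR 7.5M)
```

As the example shows, for large companies the turnover-based percentage, rather than the fixed amount, would set the effective ceiling.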

NEXT STEPS

As a first step, companies should map the areas where AI has already been introduced or is likely to be introduced soon. Deployers, and even more so providers, of AI systems should start assessing the level of risk associated with each AI system, taking into account the criteria established in the AI Act and its annexes.

Many businesses that only use AI-powered tools, without any further integration or customization, will qualify as “deployers” under the AI Act and should already be working on rolling out policies for the internal use of AI systems. While the effective date of the AI Act remains uncertain at this point, the expedited adoption of internal AI policies is advisable to ensure that AI systems, irrespective of their type, are used in a transparent and responsible manner, with the company retaining ultimate control over the use of each AI system and over proprietary company information.

Privacy aspects should be a primary focus, especially where the AI system makes decisions without human intervention and/or gains access to special categories of personal data. The introduction of AI systems processing personal data will likely require new or updated data protection impact assessments.

Reach out to us if you have any questions relating to the development or use of AI.