Get ready for the AI Act

8 July 2024

On 21 May 2024, the Council of the European Union officially adopted the Artificial Intelligence Act ("AI Act"), concluding a three-year legislative process. The AI Act is an EU regulation and is considered the world's first comprehensive law on AI.

Entry into force and timing

The AI Act is expected to be published in the Official Journal of the EU on 12 July 2024 and will enter into force 20 days thereafter. Most provisions of the AI Act will become applicable in the EU after an implementation period of two years, with some exceptions:

  • Unacceptable-risk AI systems will be prohibited six months after entry into force;
  • Obligations for general-purpose AI models will become applicable one year after entry into force; and
  • Obligations for certain high-risk AI systems will become applicable three years after entry into force.
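By way of illustration only, the staggered timeline can be computed from the expected publication date. A minimal Python sketch, assuming publication on 12 July 2024 and using the milestones above (the third-party python-dateutil package provides the month arithmetic):

    from datetime import date
    from dateutil.relativedelta import relativedelta  # pip install python-dateutil

    # Assumed publication in the Official Journal of the EU (see above).
    publication = date(2024, 7, 12)
    # The AI Act enters into force on the twentieth day after publication.
    entry_into_force = publication + relativedelta(days=20)  # 1 August 2024

    milestones = {
        "Prohibitions on unacceptable-risk AI systems": relativedelta(months=6),
        "Obligations for general-purpose AI models": relativedelta(months=12),
        "Most remaining provisions": relativedelta(months=24),
        "Obligations for certain high-risk AI systems": relativedelta(months=36),
    }

    for description, offset in milestones.items():
        print(f"{entry_into_force + offset:%d %B %Y}: {description}")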

Scope

The AI Act defines an AI system as: a "machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

This definition is broad. It distinguishes AI systems from conventional software applications, which lack autonomy and the ability to adapt to the input they receive.

Explicitly out of scope of the AI Act are AI systems and models developed and deployed for the sole purpose of, inter alia, scientific research and development, or for commercial research, development and prototyping prior to the product being placed on the market. Also out of scope is the use of an AI system by individuals for a purely personal, non-professional activity.

Territorially, the AI Act applies to AI systems that are placed on the market or put into service in the EU, to users of AI systems located within the EU, and to providers and deployers of AI systems established outside the EU where the output produced by the AI system is used in the EU.

The AI Act regulates AI systems themselves, as well as the providers, deployers (users), importers and distributors of AI systems, in both the public and private sectors. In this news update, we focus on the obligations for providers and deployers. Deployers are the parties using an AI system under their own authority.

Risk classification of AI systems

The AI Act takes a risk-based approach, dividing AI systems into four categories:

  1. Unacceptable risk: systems that pose an unacceptable risk, such as AI systems that manipulate cognitive behaviour, social scoring AI systems, AI systems used to infer the emotions of a natural person in the workplace or in education, and real-time biometric identification AI systems, with some exceptions for law enforcement and national security.
  2. High risk: AI systems are considered high risk if they meet two cumulative conditions: (i) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by certain EU legislation listed in Annex I to the AI Act, such as the Toy Safety Directive (2009/48/EC) or the Medical Device Regulation (EU 2017/745); and (ii) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment pursuant to that Annex I legislation. Examples are AI systems used as safety components of medical devices and AI systems that constitute industrial machinery. Additionally, the AI Act designates certain specific systems as high risk, such as AI systems used for remote biometric identification, critical infrastructure, education, employment, credit scoring, law enforcement, migration and the democratic process, unless such systems do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Both routes to a high-risk classification are sketched schematically after this list.
  3. Limited risk/transparency risk: this includes systems that interact directly with natural persons, such as chatbots; systems that generate audio, image, video or text content; emotion recognition systems; and systems that generate deepfakes.
  4. Minimal risk: all other AI systems.
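Purely as a schematic illustration of the structure of this classification, and emphatically not as legal advice, the two routes to a high-risk classification described under point 2 can be sketched in Python as follows (all parameter names are our own shorthand, not terms from the AI Act):

    # Schematic sketch of the two routes to a "high risk" classification
    # under the AI Act. Parameter names are illustrative shorthand only.
    def is_high_risk(
        safety_component_or_product_under_annex_i: bool,
        requires_third_party_conformity_assessment: bool,
        listed_in_annex_iii: bool,
        poses_significant_risk: bool,
    ) -> bool:
        # Route 1: both Annex I conditions must be met cumulatively.
        annex_i_route = (safety_component_or_product_under_annex_i
                         and requires_third_party_conformity_assessment)
        # Route 2: an Annex III listing, subject to the carve-out for systems
        # that do not pose a significant risk of harm.
        annex_iii_route = listed_in_annex_iii and poses_significant_risk
        return annex_i_route or annex_iii_route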


Consequences of risk classification

The consequences of the applicable risk category are as follows:

  • Unacceptable-risk AI systems are prohibited.
  • High-risk AI systems are subject to the majority of the obligations, as set out in more detail below.
  • Limited-risk AI systems are only subject to transparency requirements. For example, people interacting with chatbots must be informed by the providers of such systems that they are interacting with an AI system, and deployers of AI systems that generate deepfakes must disclose that the content has been artificially generated.
  • Minimal-risk AI systems are not subject to the AI Act.

Obligations for providers and deployers of high-risk AI systems

The majority of the obligations included in the AI Act are directed at providers of high-risk AI systems. Such providers must, among other things:

  • Establish, implement, document, and maintain a risk management system and a quality management system.
  • Use training, validation and testing data that meet certain quality criteria. This means, for example, that such data must be sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose.
  • Draw up technical documentation and user instructions.
  • Ensure that the AI system allows for automatic logging of events over the lifetime of the system. Providers should also keep such logs.
  • Design and develop the AI system in such a way that it can be effectively overseen by natural persons and that it achieves an appropriate level of accuracy, robustness and cybersecurity.
  • Ensure that the AI system undergoes the relevant conformity assessment, that a declaration of conformity is drawn up and that a CE marking is affixed to the high-risk AI system.

The AI Act additionally sets out that deployers of high-risk AI systems must:

  • Take appropriate technical and organisational measures to ensure that they use a high-risk AI system in accordance with its instructions for use.
  • Assign human oversight to persons with the necessary competence, training and authority.
  • Monitor the operation of the high-risk AI system and keep the logs generated by the system.
  • Where required, perform a fundamental rights impact assessment prior to deploying the system.

General-purpose AI models

The AI Act also regulates general-purpose AI models, such as the models underlying OpenAI's ChatGPT, Microsoft's Copilot and Google's Gemini. AI models are not AI systems in and of themselves, but AI systems can be built on top of AI models. The AI Act requires providers of such models to, among other things, create and maintain technical documentation describing the model, put in place a policy to comply with EU copyright law and draw up a detailed summary of the data used to train the model.

The Commission has the authority to classify general-purpose AI models as posing systemic risk, which subjects such models to stricter rules.

Governance and enforcement

The AI Act establishes an extensive regulatory framework for enforcement. Within the European Commission, an AI Office will be set up to enforce parts of the AI Act. Additionally, a European Artificial Intelligence Board will be established, composed of one representative per member state, which, similar to the European Data Protection Board, will issue, among other things, advisory opinions on the application of the AI Act.

On a national level, each member state will have to appoint a notifying authority and a market surveillance authority to enforce certain provisions of the AI Act concerning the compliance of high-risk AI systems. In the Netherlands, the Dutch Data Protection Authority is expected to be appointed as market surveillance authority.

Non-compliance with the AI Act can result in fines of up to 7% of the company's global annual turnover for the preceding financial year or a fixed amount of up to EUR 35 million, whichever is higher. Start-ups and small and medium-sized enterprises will be subject to proportionate administrative fines.
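As a worked illustration of how this cap operates (a simple arithmetic sketch; the function is our own, not part of the AI Act):

    def maximum_fine_eur(global_annual_turnover_eur: float) -> float:
        """Upper bound of the fine for the most serious infringements:
        7% of global annual turnover or EUR 35 million, whichever is higher."""
        return max(0.07 * global_annual_turnover_eur, 35_000_000)

    # A company with EUR 1 billion in turnover faces a cap of EUR 70 million;
    # below EUR 500 million in turnover, the EUR 35 million floor applies.
    print(maximum_fine_eur(1_000_000_000))  # 70000000.0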

Please do not hesitate to contact us if you would like more information on the AI Act's implications for your organisation.

Written by:
Thomas de Weerd
