Frequently Asked Questions (FAQ)

The EU AI Act

Philipp Rosenauer

Partner Legal, PwC Switzerland

Fatih Sahin

Director, AI & Data Leader Tax & Legal Services, PwC Switzerland

The EU Artificial Intelligence Act is in the final stage of negotiations, with a consolidated legal text expected in February/March 2024. To give a first understanding of the new rules, we have summarized the most important aspects below.

Why do we need to regulate the use of Artificial Intelligence?

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence (AI). It aims to address the risks and opportunities of AI for health, safety, fundamental rights, democracy, rule of law and the environment in the EU. It also seeks to foster innovation, growth and competitiveness in the EU's internal market for AI.

AI is a rapidly developing technology that can bring significant benefits to society and the economy, but it also poses new challenges and risks that need to be addressed to avoid undesirable outcomes. For example, some AI systems may be opaque, biased, inaccurate or harmful to users or third parties. Therefore, the EU has decided to regulate the use of AI in a human-centric and proportionate manner, based on its values and principles.

To whom does the AI Act apply?

It will apply to both public and private actors inside and outside the EU, as long as the AI system is placed on the EU market or its use affects people located in the EU.

It can concern both providers (e.g. the developer of a CV-screening tool) and deployers of high-risk AI systems (e.g. a bank buying this screening tool). Importers of AI systems will also have to ensure that the foreign provider has carried out the appropriate conformity assessment procedure and that the system bears a European Conformity (CE) marking and is accompanied by the required documentation and instructions for use.

In addition, certain obligations are foreseen for providers of general-purpose AI models, including large generative AI models.

Providers of free and open-source models are exempted from most of these obligations. This exemption does not cover obligations for providers of general-purpose AI models with systemic risks.

Obligations also do not apply to research, development and prototyping activities preceding release on the market. Furthermore, the regulation does not apply to AI systems used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.

What are the risk categories?

The Commission proposes a risk-based approach, with four levels of risk for AI systems as well as an identification of risks specific to general-purpose models:

  • Minimal risk: AI systems that fall into none of the other categories can be developed and used subject to existing legislation without any additional legal obligations. The vast majority of AI systems currently used or likely to be used in the EU fall into this category.
  • High-risk: A limited number of AI systems defined in the proposal, which may potentially create an adverse impact on people's safety or their fundamental rights, are considered high-risk. The list of high-risk AI systems is annexed to the Act and can be reviewed to align with the evolution of AI use cases. It also includes safety components of products covered by sectorial EU legislation; these will always be considered high-risk when subject to third-party conformity assessment under that sectorial legislation.
  • Unacceptable risk: A very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights and will therefore be banned:
    • Social scoring for public and private purposes;
    • Exploitation of vulnerabilities of persons and the use of subliminal techniques;
    • Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions (see below);
    • Biometric categorisation of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation. Filtering of datasets based on biometric data in the area of law enforcement will still be possible;
    • Individual predictive policing;
    • Emotion recognition in the workplace and education institutions unless for medical or safety reasons (e.g. monitoring the tiredness levels of a pilot);
    • Untargeted scraping of the Internet or CCTV for facial images to build up or expand databases.
  • Specific transparency risk: Specific transparency requirements are imposed for certain AI systems, for example where there is a clear risk of manipulation (e.g. via the use of chatbots). Users should be aware that they are interacting with a machine. 

In addition, the AI Act considers systemic risks which could arise from general-purpose AI models, including large generative AI models. These can be used for a variety of tasks, and are becoming the basis for many AI systems in the EU. Some of these models could carry systemic risks if they are highly capable or widely used. For example, powerful models could cause serious accidents or be misused for far-reaching cyberattacks. Many individuals could be affected if a model propagates harmful biases across many applications.  

How do I know whether an AI system is high-risk?

The AI Act provides a clear definition of high-risk AI systems as well as a methodology to identify them within the legal framework. The high-risk AI systems are either listed in Annex III of the proposal, which contains a number of use cases in specific sectors or areas of application, or fall under the scope of Annex II, which contains a list of existing EU harmonisation legislation that covers certain products or services which rely on AI.

The AI Act also empowers the Commission to amend or update these annexes by delegated acts, taking into account the advice of the European Artificial Intelligence Board and the scientific panel of independent experts, as well as the feedback from stakeholders and the public. 
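To make this two-pronged test concrete, the following minimal sketch expresses the classification logic in Python. The annex entries and use-case names below are hypothetical placeholders, not the legal lists; an actual determination requires legal analysis of the Act and its annexes.

```python
# Illustrative sketch of the AI Act's high-risk test. The annex entries
# below are hypothetical placeholders, not the legal lists in the Act.

# Simplified stand-ins for Annex III use cases and Annex II legislation.
ANNEX_III_USE_CASES = {
    "cv_screening",       # employment: filtering job applications
    "credit_scoring",     # access to essential private services
    "exam_proctoring",    # education: monitoring of cheating
}
ANNEX_II_PRODUCT_AREAS = {
    "medical_device",     # products covered by sectorial EU harmonisation law
    "machinery",
}

def is_high_risk(use_case: str,
                 product_area: str | None = None,
                 third_party_assessment: bool = False) -> bool:
    """Two-pronged test: listed in Annex III, or a safety component of a
    product under Annex II legislation that requires third-party
    conformity assessment."""
    if use_case in ANNEX_III_USE_CASES:
        return True
    if product_area in ANNEX_II_PRODUCT_AREAS and third_party_assessment:
        return True
    return False

print(is_high_risk("cv_screening"))                        # True
print(is_high_risk("chatbot"))                             # False
print(is_high_risk("fault_detection", "machinery", True))  # True
```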

What are the obligations for providers of high-risk AI systems?

Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This allows them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness). This assessment has to be repeated if the system or its purpose is substantially modified.

Providers of high-risk AI systems will also have to implement quality and risk management systems to ensure their compliance with the new requirements as well as minimise risks for users and affected persons, even after a product is placed on the market.

High-risk AI systems that are deployed by public authorities or entities acting on their behalf will have to be registered in a public EU database.

What are some examples of high-risk use cases as defined in Annex III?

  • Certain critical infrastructures, for instance in the areas of road traffic and the supply of water, gas, heating and electricity;
  • Education and vocational training, e.g. to evaluate learning outcomes, steer the learning process and monitor cheating;
  • Employment, workers management and access to self-employment, e.g. to place targeted job advertisements, analyse and filter job applications, and to evaluate candidates;
  • Access to essential private and public services and benefits (e.g. healthcare), creditworthiness evaluation of natural persons, and risk assessment and pricing in relation to life and health insurance;
  • Certain systems used in the fields of law enforcement, border control and the administration of justice and democratic processes;
  • Evaluation and classification of emergency calls;
  • Biometric identification, categorisation and emotion recognition systems (outside the prohibited categories).

Recommender systems of very large online platforms are not included, as they are already covered by other legislation (DMA/DSA).

How are general-purpose AI models being regulated?

General-purpose AI models, including large generative AI models, can be used for a variety of tasks. Individual models may be integrated into a large number of AI systems.

It is important that a provider wishing to build upon a general-purpose AI model has all the necessary information to make sure its system is safe and compliant with the AI Act. Therefore, the AI Act obliges providers of such models to disclose certain information to downstream system providers. Such transparency enables a better understanding of these models.

Model providers also need to have policies in place to ensure that they respect copyright law when training their models. In addition, some of these models could pose systemic risks because they are highly capable or widely used.

Is the AI Act future-proof?

The AI Act can be amended by delegated and implementing acts, for example to add criteria for classifying general-purpose AI (GPAI) models as presenting systemic risks (delegated acts) and to amend the modalities for establishing regulatory sandboxes and elements of the real-world testing plan (implementing acts).

What is a fundamental rights impact assessment? Who has to conduct such an assessment and when?

The use of a high-risk AI system may have an impact on fundamental rights. Therefore, deployers that are bodies governed by public law or private operators providing public services, as well as operators providing high-risk systems, shall perform an assessment of the impact on fundamental rights and notify the respective national authorities of the results.

The assessment shall consist of:

  • a description of the deployer's processes in which the high-risk AI system will be used;
  • the period of time and frequency in which the high-risk AI system is intended to be used;
  • the categories of natural persons and groups likely to be affected by its use in the specific context;
  • the specific risks of harm likely to impact the affected categories of persons or groups of persons;
  • a description of the implementation of human oversight measures; and
  • the measures to be taken in the event of the risks materialising.

If the deployer has already met this obligation through the data protection impact assessment, the fundamental rights impact assessment shall be conducted in conjunction with it.
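For teams preparing such an assessment, the required elements can be captured in a simple record structure. A minimal sketch follows; the field names are illustrative shorthand for the list above, not legal terms of art, and the example values are invented.

```python
from dataclasses import dataclass

# Hypothetical checklist structure mirroring the assessment elements
# listed above; field names are illustrative, not legal terms of art.
@dataclass
class FundamentalRightsImpactAssessment:
    deployer_processes: str           # processes using the high-risk system
    period_and_frequency: str         # intended period and frequency of use
    affected_groups: list[str]        # categories of persons likely affected
    risks_of_harm: list[str]          # specific risks to those groups
    human_oversight_measures: str     # oversight implementation
    mitigation_measures: str          # steps if the risks materialise
    dpia_reference: str | None = None  # link to an existing DPIA, if any

# Invented example for a credit-scoring deployment.
fria = FundamentalRightsImpactAssessment(
    deployer_processes="Credit decisions in retail lending",
    period_and_frequency="Continuous, ~500 applications per day",
    affected_groups=["loan applicants"],
    risks_of_harm=["discriminatory refusals", "opaque decisions"],
    human_oversight_measures="Analyst review of all automated refusals",
    mitigation_measures="Suspend model, revert to manual underwriting",
)
```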

When will the AI Act be fully applicable?

Following its adoption by the European Parliament and the Council, the AI Act shall enter into force on the twentieth day following its publication in the Official Journal. It will become fully applicable 24 months after entry into force, with a staggered approach as follows (a short sketch after the list illustrates the date arithmetic):

  • 6 months: member states shall phase out prohibited systems;
  • 12 months: obligations for general-purpose AI governance become applicable;
  • 24 months: all rules of the AI Act become applicable, including obligations for high-risk systems defined in Annex III (list of high-risk use cases);
  • 36 months: obligations for high-risk systems defined in Annex II (list of EU harmonisation legislation) apply.
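The milestone arithmetic is simple month-offset calculation. A minimal sketch, assuming a purely hypothetical entry-into-force date (the actual date depends on publication in the Official Journal):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped to 28
    so the result is valid in every month; close enough to illustrate)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

milestones = {
    6:  "prohibitions on unacceptable-risk systems apply",
    12: "general-purpose AI governance obligations apply",
    24: "all rules apply, incl. Annex III high-risk obligations",
    36: "Annex II high-risk obligations apply",
}
for months, rule in milestones.items():
    print(f"{add_months(entry_into_force, months)}: {rule}")
```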

How will the AI Act be enforced?

Each member state should designate one or more competent national authorities to supervise the application and implementation of the AI Act as well as carry out market surveillance activities.

To increase efficiency and to establish an official point of contact for the public and other counterparts, each member state should designate one national supervisory authority, which will also represent the country in the European Artificial Intelligence Board.

Additional technical expertise will be provided by an advisory forum representing a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia.

In addition, the Commission will establish a new European AI Office within the Commission, which will supervise general-purpose AI models, cooperate with the European Artificial Intelligence Board and be supported by a scientific panel of independent experts.

What are the tasks of the European AI Office?

The AI Office's mission is to develop expertise and capabilities in the field of artificial intelligence within the European Union and to contribute to the implementation of EU legislation on artificial intelligence in a centralised structure.

In particular, the AI Office shall enforce and supervise the new rules for general-purpose AI models. This includes drawing up codes of practice to detail the rules, classifying models with systemic risks, and monitoring the effective implementation of and compliance with the Regulation. The latter is facilitated by its powers to request documentation, conduct model evaluations, investigate in response to alerts and require providers to take corrective action.

What are the penalties for infringement?

When AI systems that do not respect the requirements of the Regulation are placed on the market or put into use, member states will have to lay down effective, proportionate and dissuasive penalties, including administrative fines, and communicate them to the Commission.

The Regulation sets out thresholds that must be taken into account (a short sketch after the list shows how the two caps combine):

  • Up to €35m or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements on prohibited practices or non-compliance related to requirements on data;
  • Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the Regulation, including infringement of the rules on general-purpose AI models;
  • Up to €7.5m or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and competent national authorities in reply to a request;
  • For each category of infringement, the threshold would be the lower of the two amounts for SMEs and the higher of the two for other companies.
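Based on the thresholds above, the applicable ceiling is a simple two-way comparison: SMEs get the lower of the fixed amount and the turnover percentage, other companies the higher. A minimal sketch, with invented turnover figures:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float,
                 pct_of_turnover: float, is_sme: bool) -> float:
    """Maximum administrative fine per the AI Act's thresholds: the lower
    of the two amounts for SMEs, the higher for other companies."""
    pct_cap = turnover_eur * pct_of_turnover
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Prohibited-practice infringement: up to €35m or 7% of worldwide turnover.
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07, is_sme=False))  # 140000000.0
print(fine_ceiling(10_000_000, 35_000_000, 0.07, is_sme=True))      # 700000.0
```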

In order to harmonise national rules and practices in setting administrative fines, the Commission will draw up guidelines with advice from the Board.

Since EU institutions, agencies and bodies should lead by example, they will also be subject to the rules and to possible penalties; the European Data Protection Supervisor will have the power to impose fines on them.
 

Contact us

Philipp Rosenauer

Partner Legal, Zurich, PwC Switzerland

+41 58 792 18 56


Matthias Leybold

Partner Cloud & Digital, PwC Switzerland

+41 58 792 13 96


Yan Borboën

Partner, Leader Digital Assurance and Cybersecurity & Privacy, PwC Switzerland

+41 58 792 84 59


Fatih Sahin

Director, AI & Data Leader Tax & Legal Services, PwC Switzerland

+41 58 792 48 28


Sebastian Ahrens

Director Risk & Regulatory and AI Center of Excellence Leader, PwC Switzerland

+41 58 792 16 28
