Artificial Intelligence Act

Deal on comprehensive rules for trustworthy AI

Philipp Rosenauer
Partner Legal, PwC Switzerland

Fatih Sahin
Director, TLS Artificial Intelligence & Data Lead, PwC Switzerland

On Friday, 8 December 2023, Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act. This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected against high-risk AI, while simultaneously boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.

  • Safeguards agreed on general-purpose artificial intelligence
  • Limitations on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to submit complaints and receive meaningful explanations
  • Fines ranging from 7.5 million euros or 1.5% of turnover up to 35 million euros or 7% of global turnover

Banned applications

Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs; sexual orientation; race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent people’s free will;
  • AI used to exploit the vulnerabilities of people (due to age, disability, social or economic situation).


Law enforcement exemptions

Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search for a person convicted or suspected of having committed a serious crime.

“Real-time” RBI would comply with strict conditions and its use would be limited in time and location, for the purposes of:

  • conducting targeted searches for victims (abduction, trafficking, sexual exploitation),
  • preventing a specific and present terrorist threat, or
  • localising or identifying a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).

Obligations for high-risk systems

For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law), clear obligations were agreed. MEPs managed to include a mandatory fundamental rights impact assessment, among other requirements, which also apply to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk. Citizens will have the right to submit complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that affect their rights.

Guardrails for general artificial intelligence systems

To account for the wide range of tasks that AI systems can accomplish and the quick expansion of those systems’ capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.

For high-impact GPAI models with systemic risk, Parliament negotiators managed to secure more stringent obligations. If these models meet certain criteria, they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

Measures to support innovation and SMEs

MEPs wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain. To this end, the agreement promotes so-called regulatory sandboxes and real-world testing, established by national authorities, to develop and train innovative AI before its placement on the market.

Sanctions and entry into force

Non-compliance with the rules can lead to fines ranging from 7.5 million euros or 1.5% of turnover up to 35 million euros or 7% of global turnover, depending on the infringement and the size of the company.

Next steps

The agreed text will now have to be formally adopted by both Parliament and the Council to become EU law. Parliament’s Internal Market and Civil Liberties committees will vote on the agreement in a forthcoming meeting.

In our upcoming webinar on 20 February, we will help you to understand the AI Act in detail and the impact it has on Swiss companies.

Contact us

Philipp Rosenauer

Partner Legal, PwC Switzerland

Tel: +41 58 792 18 56

Matthias Leybold

Partner, Cloud & Digital, PwC Switzerland

Tel: +41 58 792 13 96

Yan Borboën

Partner, Leader Digital Assurance and Cybersecurity & Privacy, PwC Switzerland

Tel: +41 58 792 84 59

Fatih Sahin

Director, AI & Data Leader Tax & Legal Services, PwC Switzerland

Tel: +41 58 792 48 28

Sebastian Ahrens

AI Center of Excellence Leader, PwC Switzerland

Tel: +41 58 792 16 28