Data Protection and Artificial Intelligence

How to assess your AI systems from a legal and regulatory perspective

Philipp Rosenauer
Partner Legal, PwC Switzerland

Fatih Sahin
Director, TLS Artificial Intelligence & Data Lead, PwC Switzerland

Artificial intelligence (AI) is a rapidly evolving field that offers many opportunities and challenges across sectors and applications. AI systems can process large amounts of data, learn from patterns and feedback, and perform complex tasks that would otherwise require human intervention or expertise. However, AI systems also raise significant questions and risks regarding data protection, privacy, ethics and human rights. How can we ensure that AI systems are designed, developed and deployed in a way that respects the rights and interests of individuals and society? How can we prevent or mitigate the potential harms and biases that may result from the use of AI systems? How can we foster trust and accountability in the use of AI systems?

This article aims to provide guidance and best practices for data controllers who wish to use AI systems in a lawful, fair and transparent manner. It is based on the recommendations and questions proposed by the French data protection authority (CNIL) and covers six main aspects of the AI system lifecycle:

  • Basic questions to start with
  • Use of training data
  • Developing the AI algorithm
  • Use of AI systems
  • Security considerations
  • Data subject rights

For each aspect, we provide a summary of the key points and a list of questions that data controllers should ask themselves and document when analysing their AI systems.

Artificial intelligence is a complex and fast-moving topic. Do you have any questions? We are here to help you.

Talk to our experts!

Basic questions to start with

Before deciding to use an AI system, data controllers should clearly define the purpose and scope of the processing, and assess whether the use of AI is necessary and proportionate to achieve the intended objective. Data controllers should also identify the potential impacts and risks of the processing on the fundamental rights and freedoms of individuals, especially if the processing involves personal data, targets vulnerable groups or has legal, financial or physical consequences for the data subjects. Data controllers should ask themselves the following questions:

  • What is the specific objective of the processing that requires the use of an AI system?
  • Is the use of an AI system justified by a significant advantage over other existing systems or methods?
  • Does the processing involve personal data, either collected directly from individuals or obtained from other sources?
  • Who are the individuals affected by the processing, either as users or as subjects of automated decisions?
  • What are the potential consequences of the processing for the individuals' rights and interests, such as their privacy, dignity, autonomy or non-discrimination?
  • How can the risks and harms of the processing be prevented or mitigated?
  • Who is responsible for the design, development, deployment and monitoring of the AI system and what are their roles and obligations?

Use of training data

Training data is the data used to train and validate an AI system, usually through a supervised or unsupervised learning process. The quality and quantity of the training data are crucial for the performance and reliability of the AI system, as well as for the respect of data protection principles. Data controllers should ensure that the training data is lawfully obtained, relevant, accurate, representative and unbiased. Data controllers should ask themselves the following questions:

  • What is the source and origin of the training data and how was it collected or obtained?
  • What is the legal basis for the processing of the training data, and how is it documented and communicated to the data subjects?
  • How is the compliance of the training data processing monitored and evaluated, for example through a data protection impact assessment (DPIA) or a risk analysis?
  • How is the data anonymised or pseudonymised, and what are the measures taken to prevent or limit the risks of re-identification or inference?
  • How is the data minimised, and what are the criteria used to select the variables and values that are necessary and relevant for the learning task?
  • How is the data quality ensured, and what are the methods used to check and correct the data for errors, inconsistencies or outliers?
  • How is the data bias identified and corrected, and what are the tools used to measure and mitigate the bias in the data or the proxies for sensitive or protected characteristics?
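Two of the checks above, pseudonymisation of direct identifiers and a first look at group representation in the training set, can be illustrated in code. The following is a minimal sketch, not CNIL-prescribed tooling; the function names and field names are our own illustrative choices, and salted hashing is pseudonymisation (reversible in principle by whoever holds the salt), not anonymisation.

```python
import hashlib

def pseudonymise(record, secret_salt, id_fields=("name", "email")):
    """Replace direct identifiers with salted hashes: records stay linkable
    within the dataset, but identifiers are no longer stored in clear text.
    Note: this is pseudonymisation, not anonymisation -- the data remains
    personal data under data protection law."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hashlib.sha256(
                (secret_salt + str(out[field])).encode()
            ).hexdigest()
            out[field] = digest[:16]
    return out

def representation_ratio(records, attribute, group):
    """Share of records belonging to one group of a sensitive attribute --
    a crude first check for sampling bias in training data."""
    values = [r[attribute] for r in records if attribute in r]
    return values.count(group) / len(values) if values else 0.0
```

A strong imbalance flagged by `representation_ratio` does not prove unlawful bias by itself, but it is the kind of documented finding a DPIA should record and follow up on.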

Developing the AI algorithm

An algorithm is a set of rules or instructions that defines how an AI system processes the input data and produces the output data. The choice and implementation of the algorithm depend on the type and complexity of the learning task, the available data and the expected performance and fairness of the AI system. Data controllers should ensure that the algorithm is robust, reliable and transparent, and that it can be tested and explained. Data controllers should ask themselves the following questions:

  • What is the type and logic of the algorithm used, and how does it fit the purpose and scope of the processing?
  • Why was this algorithm chosen, and what are the advantages and disadvantages of using it compared to other algorithms or methods?
  • How is the algorithm validated and verified, and what are the sources and criteria used to assess its quality and accuracy?
  • How is the algorithm documented and communicated, and what are the means and formats used to explain its functioning and limitations to the stakeholders and the data subjects?
  • What are the tools and frameworks used to develop and implement the algorithm, and how are they selected and evaluated for their reliability and security?
  • How is the algorithm trained and tested, and what are the strategies and metrics used to measure and optimise its performance and fairness?
  • How is the algorithm updated and maintained, and what are the mechanisms and protocols used to monitor and control its behaviour and output over time?
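Measuring and optimising "performance and fairness", as the last two questions require, presupposes concrete metrics. The sketch below shows one performance metric (accuracy) next to one fairness metric (the demographic parity gap). These are illustrative choices on our part: which fairness metric is appropriate depends on the use case and the legal analysis, and several common metrics are mutually incompatible.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates across groups.
    A gap of 0 means every group receives positive predictions at the
    same rate; larger gaps warrant investigation and documentation."""
    rates = []
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)
```

Tracking such metrics per release, and recording the thresholds that were deemed acceptable and why, is one practical way to document the validation questions above.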

Use of AI systems

Production is the phase where the AI system is deployed and used in a real-world environment, either as a standalone system or as part of a larger system or process. The use of the AI system in production may have direct or indirect effects on individuals and society, and may also be subject to changes or challenges in the environment or the data. Data controllers should ensure that the AI system is supervised, transparent and adaptable, and that it respects the rights and interests of individuals and society. The following questions may be relevant here:

  • How is the human oversight ensured, and what are the roles and responsibilities of the human operators who interact with or monitor the AI system?
  • How is the transparency ensured, and what are the information and communication channels used to inform and explain the AI system to the users and the data subjects?
  • How is the quality ensured, and what are the measures and indicators used to assess and maintain the quality of the input and output data of the AI system?
  • How is the adaptability ensured, and what are the methods and procedures used to update and adjust the AI system to the changes or feedback in the environment or the data?
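The "quality" and "adaptability" questions above are often operationalised through drift monitoring: comparing the distribution of live inputs against the training-time baseline. As one illustration (not the only method, and the 0.2 threshold is a rule of thumb, not a legal standard), here is a minimal Population Stability Index (PSI) computation:

```python
import math

def psi(baseline, current, cut_points):
    """Population Stability Index between the distribution of an input
    feature at training time (baseline) and in production (current),
    using the given bin boundaries. Values above ~0.2 are commonly
    treated as a trigger for investigating drift."""
    def shares(values):
        counts = [0] * (len(cut_points) + 1)
        for v in values:
            counts[sum(v > c for c in cut_points)] += 1
        return [max(n / len(values), 1e-6) for n in counts]  # avoid log(0)
    return sum((c - b) * math.log(c / b)
               for b, c in zip(shares(baseline), shares(current)))
```

An alert from such a monitor would then feed the human-oversight and update procedures described in the questions above.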

AI systems raise questions and risks regarding data protection, privacy, ethics and human rights. How can we ensure that AI systems are designed, developed and deployed in a way that respects the rights and interests of individuals and society? How can we prevent the potential harms and biases that may result from the use of AI systems?

Philipp Rosenauer, Partner and Head of Data Privacy, PwC Switzerland

Security considerations

Security is a key aspect of the AI system lifecycle, as it affects the integrity, availability and confidentiality of the data and the system. AI systems may be vulnerable to various types of attacks or flaws that may compromise their functioning or output, or may harm individuals or society. Data controllers should ensure that the AI system is protected, resilient and accountable, and that it follows security standards and best practices. The following aspects should be considered:

  • How is the risk analysis conducted, and what are the methods and tools used to identify and assess the potential threats and vulnerabilities of the AI system?
  • How is the protection implemented, and what are the measures and techniques used to prevent or mitigate the attacks or flaws that may affect the AI system or the data?
  • How is the resilience ensured, and what are the mechanisms and plans used to recover or restore the AI system or the data in case of an incident or a breach?
  • How is the accountability ensured, and what are the records and logs used to track and document the activities and events of the AI system or the data?
  • How is the security monitored and audited, and what are the processes and procedures used to review and evaluate the security of the AI system or the data?
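The accountability question above asks for records and logs that track the system's activities. One widely used pattern, sketched here under our own naming (this is an illustration, not a mandated design), is a hash-chained audit log, where each entry's hash covers its content plus the previous entry's hash, so later tampering with any entry breaks the chain:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(log, event, prev_hash=""):
    """Append a tamper-evident audit entry to `log` (a list).
    Each entry's hash is computed over its timestamp, its event payload
    and the previous entry's hash, so modifying or removing an earlier
    entry invalidates every hash that follows it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry["hash"]
```

In practice such logs would be written to append-only storage and periodically verified; the chain structure only makes tampering detectable, not impossible.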

Data subject rights

Rights are the legal and ethical guarantees that individuals have in relation to the processing of their personal data or the automated decisions that affect them. AI systems may pose challenges or limitations to the exercise of these rights due to their complexity, opacity or unpredictability. Data controllers should ensure that the AI system respects and facilitates the exercise of these rights, and that it provides the necessary information and means for individuals to access, rectify or erase their data, and to object to or challenge decisions made by the AI system. Data controllers should ask themselves the following questions:

  • How are the individuals informed, and what is the content and format of the information provided to the individuals about the AI system or the data?
  • How are the rights exercised, and what are the channels and methods available for the individuals to request or exercise their rights in relation to the AI system or the data?
  • How are the requests handled, and what are the criteria and procedures used to respond or comply with the requests of the individuals in relation to the AI system or the data?
  • How are the decisions explained, and what are the elements and formats used to provide meaningful and understandable explanations of the decisions made by the AI system?
  • How are the decisions challenged, and what are the mechanisms and remedies available for the individuals to contest or appeal the decisions made by the AI system?
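What a "meaningful and understandable explanation" looks like depends heavily on the model class. For a simple linear scoring model, per-feature contributions (weight times value) can be computed and ranked directly, as in this minimal sketch; the feature names are hypothetical, and more complex models require dedicated explainability techniques rather than this shortcut.

```python
def explain_linear_decision(weights, features):
    """For a linear scoring model, each feature's contribution to the
    score is weight * value; ranking contributions by absolute size
    gives a simple basis for explaining an individual decision."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
```

The ranked output can then be translated into the plain-language statements that data subjects are entitled to receive.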

At PwC, we support our clients in their AI journey. Besides the legal and regulatory requirements described above, there are further aspects that a company needs to consider when rolling out its AI solutions. For this, we have developed PwC's Responsible AI Toolkit. This is a suite of customisable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner – from strategy through to execution.

Please get in contact with our experts if you are interested in learning more about it.



Do you have any questions?

Contact us

Philipp Rosenauer

Partner Legal, PwC Switzerland

Tel: +41 58 792 18 56

Fatih Sahin

Director, AI & Data Leader Tax & Legal Services, PwC Switzerland

Tel: +41 58 792 48 28

Matthias Leybold

Partner Cloud & Digital, PwC Switzerland

Tel: +41 58 792 13 96

Yan Borboën

Partner, Leader Digital Assurance and Cybersecurity & Privacy, PwC Switzerland

Tel: +41 58 792 84 59