Artificial intelligence (AI) is a rapidly evolving field that offers many opportunities and challenges across sectors and applications. AI systems can process large amounts of data, learn from patterns and feedback, and perform complex tasks that would otherwise require human intervention or expertise. However, AI systems also raise significant questions and risks regarding data protection, privacy, ethics and human rights. How can we ensure that AI systems are designed, developed and deployed in a way that respects the rights and interests of individuals and society? How can we prevent or mitigate the potential harms and biases that may result from the use of AI systems? How can we foster trust and accountability in the use of AI systems?
This article provides guidance and best practices for data controllers who wish to use AI systems in a lawful, fair and transparent manner. It is based on the recommendations and questions proposed by the French data protection authority (CNIL). CNIL's white paper covers six main aspects of the AI system lifecycle:

- Purpose and scope of the processing
- Training data
- Algorithm
- Production
- Security
- Rights of data subjects
For each aspect, we provide a summary of the key points and a list of questions that data controllers should ask themselves and document when analysing their AI systems.
Before deciding to use an AI system, data controllers should clearly define the purpose and scope of the processing, and assess whether the use of AI is necessary and proportionate to achieve the intended objective. Data controllers should also identify the potential impacts and risks of the processing on the fundamental rights and freedoms of individuals, especially if the processing involves personal data, targets vulnerable groups or has legal, financial or physical consequences for the data subjects. Data controllers should ask themselves the following questions:
Training data is the data used to train and validate an AI system, usually through a supervised or unsupervised learning process. The quality and quantity of the training data are crucial for the performance and reliability of the AI system, as well as for the respect of data protection principles. Data controllers should ensure that the training data is lawfully obtained, relevant, accurate, representative and unbiased. Data controllers should ask themselves the following questions:
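As a concrete illustration of one such question, the check below sketches how a controller might measure whether a training set is representative of a reference population. It is a minimal, hypothetical example: the attribute name, reference distribution and threshold are assumptions, not part of the CNIL guidance.

```python
from collections import Counter

def representation_gaps(records, attribute, reference, threshold=0.05):
    """Compare each group's share in the training data against a reference
    distribution and return the groups whose gap exceeds the threshold."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        gaps[group] = round(observed - expected, 3)
    return {g: d for g, d in gaps.items() if abs(d) > threshold}

# Hypothetical training sample: 80% of records fall in one age band,
# while the reference population is evenly split between the two bands.
sample = [{"age_band": "18-40"}] * 80 + [{"age_band": "41-65"}] * 20
flagged = representation_gaps(sample, "age_band", {"18-40": 0.5, "41-65": 0.5})
print(flagged)  # {'18-40': 0.3, '41-65': -0.3}
```

A controller would document such gaps and either rebalance the data or justify why the imbalance is acceptable for the stated purpose.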
An algorithm is a set of rules or instructions that defines how an AI system processes the input data and produces the output data. The choice and implementation of the algorithm depend on the type and complexity of the learning task, the available data and the expected performance and fairness of the AI system. Data controllers should ensure that the algorithm is robust, reliable and transparent, and that it can be tested and explained. Data controllers should ask themselves the following questions:
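One way to make the robustness requirement concrete is a simple stability test: perturb each input slightly and verify the output does not swing disproportionately. The sketch below uses a toy linear scoring function as a stand-in for a deployed model; the tolerance and epsilon values are illustrative assumptions, not prescribed by the CNIL.

```python
def linear_score(weights, features):
    """Toy scoring function standing in for a deployed model."""
    return sum(w * x for w, x in zip(weights, features))

def robustness_check(model, features, epsilon=0.01, tolerance=0.1):
    """Perturb each input feature by a small epsilon and verify the model
    output never moves by more than the tolerated amount -- a minimal
    pre-deployment stability test a controller might document."""
    baseline = model(features)
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += epsilon
        if abs(model(perturbed) - baseline) > tolerance:
            return False
    return True

weights = [0.4, -0.2, 0.1]
model = lambda x: linear_score(weights, x)
print(robustness_check(model, [1.0, 2.0, 3.0]))  # True for this toy model
```

Documenting the outcome of such tests supports the transparency and explainability expectations mentioned above.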
Production is the phase where the AI system is deployed and used in its real environment, either as a standalone system or as part of a larger system or process. The use of the AI system in production may have direct or indirect effects on individuals and society, and may be subject to changes or challenges in the environment or the data. Data controllers should ensure that the AI system is supervised, transparent and adaptable, and that it respects the rights and interests of individuals and society. The following questions may be relevant here:
Security is a key aspect of the AI system lifecycle, as it affects the integrity, availability and confidentiality of both the data and the system. AI systems may be vulnerable to various types of attacks or flaws that can compromise their functioning or output, or harm individuals or society. Data controllers should ensure that the AI system is protected, resilient and accountable, and that it follows security standards and best practices. The following aspects should be considered:
Rights are the legal and ethical guarantees that individuals have in relation to the processing of their personal data or the automated decisions that affect them. AI systems may pose challenges or limitations to the exercise of these rights due to their complexity, opacity or unpredictability. Data controllers should ensure that the AI system respects and facilitates the exercise of these rights, and that it provides individuals with the necessary information and means to access, rectify or erase their data, and to object to or challenge the AI system's decisions. Data controllers should ask themselves the following questions:
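To show what "providing the means" can look like in practice, the sketch below handles an erasure request against a simple record store and produces an audit entry. It is a minimal illustration under assumed field names (`subject_id`), not a complete compliance workflow.

```python
def handle_erasure_request(records, subject_id):
    """Remove all records linked to a data subject and return an audit
    entry -- a minimal sketch of honouring an erasure request.
    Field names are illustrative assumptions."""
    kept = [r for r in records if r["subject_id"] != subject_id]
    audit = {"subject_id": subject_id, "records_erased": len(records) - len(kept)}
    return kept, audit

store = [{"subject_id": "u1", "score": 0.8},
         {"subject_id": "u2", "score": 0.3},
         {"subject_id": "u1", "score": 0.5}]
store, audit = handle_erasure_request(store, "u1")
print(audit)  # {'subject_id': 'u1', 'records_erased': 2}
```

In a real system the audit trail itself would need to be retained and protected, and erasure may also require retraining or adjusting models derived from the data.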
At PwC, we support our clients in their AI journey. Besides the legal regulatory requirements described above, there are even more aspects that a company needs to consider when rolling out their AI solutions. For this, we have developed PwC’s Responsible AI Toolkit. This is a suite of customisable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner – from strategy through to execution.
Please get in contact with our experts if you are interested in learning more about it.
Partner Legal, PwC Switzerland
Tel: +41 58 792 18 56
Director, AI & Data Leader Tax & Legal Services, PwC Switzerland
Tel: +41 58 792 48 28