What private equity and venture capital firms should consider when investing in AI companies

Philipp Rosenauer
Partner Legal, PwC Switzerland

Fatih Sahin
Director, TLS Artificial Intelligence & Data Lead, PwC Switzerland

Artificial intelligence (AI) is transforming various industries and sectors, from healthcare and finance to education and entertainment. According to PwC’s Global Artificial Intelligence Study, AI could contribute up to $15.7 trillion to the global economy by 2030. As a result, many investors are eager to tap into the potential of AI and support innovative start-ups and scale-ups that are developing and deploying AI solutions.

However, investing in AI companies is not without challenges and risks, especially from a legal regulatory perspective. AI is a complex and dynamic field that raises various ethical, social and legal issues, such as data protection, privacy, security, bias, accountability, transparency and human rights. These issues are attracting increasing attention and scrutiny from regulators, policymakers, civil society and the public, who are demanding more responsible and trustworthy AI.

Therefore, investors need to be aware of the existing and emerging AI regulations that might affect their portfolio companies, as well as the potential liabilities and penalties that might arise from non-compliance or misconduct. Moreover, investors need to conduct a thorough and comprehensive legal regulatory risk management and due diligence process before and after making their investments, to ensure that they are not exposed to unforeseen or unacceptable risks.

What are the legal regulatory risks when investing in AI companies?

The legal regulatory risks when investing in AI companies can vary depending on the type, scope and location of the AI activities, as well as the industry and sector involved. However, some of the common and significant risks include:

  • Data protection and privacy: AI often relies on large and diverse datasets to train, test and improve its algorithms and models. However, these datasets might contain personal or sensitive information that is subject to data protection and privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union or the Swiss Data Protection Act. These laws impose various obligations and restrictions on the collection, processing, sharing and transfer of personal data, and grant various rights and remedies to data subjects. For example, under the GDPR, data subjects have the right to access, rectify, erase, restrict, port and object to the processing of their personal data, as well as the right not to be subject to automated decision-making, including profiling, that has legal or similarly significant effects on them. Moreover, data controllers and processors have to comply with the principles of lawfulness, fairness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and implement appropriate technical and organisational measures to ensure data security and protection. Failure to comply with these laws can result in hefty fines, lawsuits, reputational damage and a loss of trust and customers.
  • Security and cybersecurity: AI systems and applications can also pose security and cybersecurity risks, such as unauthorised access, manipulation, theft or destruction of data, algorithms or models, as well as malicious attacks, hacking or sabotage of the AI systems or infrastructure. These risks can compromise the confidentiality, integrity and availability of the AI systems and applications, as well as cause harm or damage to the users, customers or third parties. Therefore, AI companies have to ensure that they have robust and resilient security and cybersecurity policies, practices and safeguards in place, as well as comply with any relevant security and cybersecurity laws and standards.
  • Bias and discrimination: AI systems and applications can also generate or amplify bias and discrimination, either intentionally or unintentionally. Bias and discrimination can affect the accuracy, fairness and quality of AI outputs or outcomes, as well as the rights, interests and welfare of users, customers or third parties, especially those who belong to vulnerable or marginalised groups. For example, AI systems and applications can produce biased or discriminatory results or decisions in areas such as hiring, lending, insurance, education, healthcare or law enforcement. Therefore, as part of their due diligence, investors should check whether AI companies have mechanisms in place to prevent, detect, mitigate and correct any bias or discrimination in their AI systems and applications.
  • Accountability and transparency: AI systems and applications can also raise issues of accountability and transparency, especially when they involve complex, opaque or autonomous algorithms or models. These issues can affect the trustworthiness, reliability and legitimacy of the AI systems and applications, as well as the rights, interests and welfare of the users, customers or third parties, especially when they are affected by the AI results or decisions. Investors should check whether AI companies have clear and consistent policies, procedures and practices to ensure the accountability and transparency of their AI systems.
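One simple, widely used heuristic behind the bias checks mentioned above is the "four-fifths rule": comparing selection rates across demographic groups and flagging a ratio below 0.8 for review. The sketch below is purely illustrative, using hypothetical data and group labels; real bias audits involve many more metrics and legal judgement.

```python
# Minimal illustration (hypothetical data): the "four-fifths rule" heuristic
# sometimes used to flag potential disparate impact in automated decisions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 are often treated as a red flag (four-fifths rule)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring decisions produced by an AI screening tool
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(rates))  # 0.25 / 0.75 ≈ 0.33 → flag for review
```

A due diligence review would ask whether a portfolio company runs checks of this kind (and more rigorous ones) systematically, and what its process is when a metric crosses a threshold.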


How to mitigate the legal regulatory risks when investing in AI companies?

To mitigate the legal regulatory risks when investing in AI companies, investors need to adopt a proactive and holistic approach to legal regulatory risk management and due diligence, both before and after making their investments. Some of the steps and strategies that investors can take include:

  • Conduct a comprehensive legal regulatory risk assessment and due diligence of the AI companies, their AI systems and applications, and their data sources and practices.
  • Review the legal regulatory policies, procedures and practices of the AI companies, such as their data protection and privacy policies, security and cybersecurity policies, bias and discrimination policies.
  • Review and verify the legal regulatory contracts and agreements of the AI companies, such as their data processing and sharing agreements, security and cybersecurity agreements.
  • Review and verify the legal regulatory incidents and breaches of the AI companies, such as their data protection and privacy incidents, security and cybersecurity breaches, bias and discrimination incidents.
  • Negotiate and include legal regulatory clauses and conditions in the investment agreements and contracts, such as legal regulatory warranties, representations, indemnities, covenants and remedies, to ensure that the AI companies are legally and contractually bound to comply with the relevant laws and regulations.
  • Monitor and update the legal regulatory compliance and performance of the AI companies, as well as the legal regulatory developments and changes in the industry and sector, to ensure that the AI companies are aware of and adhere to the latest and applicable laws and regulations.


Conclusion

Investing in AI companies can offer significant opportunities and benefits for investors. However, it also entails significant challenges and risks, especially from a legal regulatory perspective. Therefore, investors need to manage these risks diligently by conducting a comprehensive legal regulatory risk management and due diligence process, both before and after making their investments. By doing so, investors can reduce their exposure to legal regulatory liabilities and penalties. Our experts are happy to support you in your investment due diligence process.

Any questions?

Artificial Intelligence is a complex and hot topic. Do you have any questions? We are here to help you.

Contact us

Philipp Rosenauer

Partner Legal, PwC Switzerland

Tel: +41 58 792 18 56

Matthias Leybold

Partner Cloud & Digital, PwC Switzerland

Tel: +41 58 792 13 96

Yan Borboën

Partner, Leader Digital Assurance and Cybersecurity & Privacy, PwC Switzerland

Tel: +41 58 792 84 59

Fatih Sahin

Director, AI & Data Leader Tax & Legal Services, PwC Switzerland

Tel: +41 58 792 48 28

Sebastian Ahrens

Director Risk & Regulatory and AI Center of Excellence Leader, PwC Switzerland

Tel: +41 58 792 16 28