By Morgan Badoud
Director Digital Assurance & Trust at PwC Switzerland
Without assurance, AI systems pose risks that can undermine trust and reliability. Investing in AI assurance today not only mitigates current risks but also prepares businesses for future regulatory demands.
As AI becomes an integral part of business and public life, assurance frameworks are critical to mitigate risks. Companies must build transparency, trust, and governance to ensure the responsible use of AI at all levels.
Artificial intelligence (AI) has become a cornerstone of business operations, integrated into countless tools and systems used daily by organisations, their employees, and the general public. While organisations and employees are consciously adopting AI, the general public often interacts with AI without realising it – but both organisations and the public are exposed to potential risks that they may not fully understand or control. Companies need to ensure transparency, build trust, and implement robust governance frameworks to mitigate these risks. This underscores the critical importance of AI assurance, which provides the structures and frameworks necessary for the responsible and safe use of AI at all levels of interaction.
“Don’t be afraid to use AI – but use it responsibly.”
Morgan Badoud
Director Digital Assurance & Trust at PwC Switzerland

Globally, AI regulation has struggled to keep pace with technological advances. The European Union has made notable progress with its AI Act, introduced in 2024, which uses a risk-based framework to categorise AI systems as unacceptable, high, limited, or minimal/no risk. The Act requires organisations to create an inventory of their AI systems, assess risks, and ensure compliance through measures such as conformity assessments and robust documentation for high-risk systems. Public sector organisations face additional guidance on the responsible use of AI.
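To make the inventory requirement concrete, the sketch below shows one way a simple internal AI register might be structured. It is a minimal, hypothetical illustration: the four risk tiers follow the Act's categories, but the record fields, the `AISystem` type, and the `open_actions` helper are assumptions for this example, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # The four categories of the EU AI Act's risk-based framework.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal/no risk"

@dataclass
class AISystem:
    # Hypothetical inventory record; the fields are illustrative, not mandated.
    name: str
    owner: str
    purpose: str
    tier: RiskTier
    conformity_assessed: bool = False
    documentation_ref: str = ""

def open_actions(inventory: list[AISystem]) -> list[str]:
    """Flag systems that still need attention under a simple reading of the Act."""
    actions = []
    for s in inventory:
        if s.tier is RiskTier.UNACCEPTABLE:
            actions.append(f"{s.name}: prohibited practice, discontinue")
        elif s.tier is RiskTier.HIGH and not (s.conformity_assessed and s.documentation_ref):
            actions.append(f"{s.name}: high-risk, conformity assessment and documentation required")
    return actions

# Example: a recruitment-screening model would typically fall into the high-risk tier.
register = [AISystem("cv-screener", "HR", "candidate pre-selection", RiskTier.HIGH)]
print(open_actions(register))
```

Even a register this simple forces the two questions the Act keeps returning to: which tier does each system fall into, and what evidence exists for the high-risk ones.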
In Switzerland, the absence of AI-specific regulation creates additional complexity for companies. While not legally bound by the EU AI law, Swiss companies serving EU customers must comply with its requirements, similar to the precedent set by the General Data Protection Regulation (GDPR), where EU standards became the de facto norm for Swiss companies with cross-border operations. Domestically, Switzerland is relying on existing legislation on data protection, liability, and consumer rights. FINMA’s December 2024 guidelines for AI governance in the banking sector are a step towards targeted regulation, but broader regulatory measures are still lacking, leaving companies to navigate a fragmented landscape.
“AI assurance is not just a safety net – it's the foundation for building trust, accountability, and innovation in a rapidly evolving digital landscape.”
Morgan Badoud
Director Digital Assurance & Trust at PwC Switzerland

An increasing number of Swiss companies are proactively developing AI policies and appointing roles such as a Chief AI Officer to address governance. However, implementing control frameworks is particularly challenging for smaller businesses, which often lack the resources or expertise needed to manage AI governance effectively. Despite the absence of regulation, investing in AI controls now is essential to manage risks and anticipate future requirements.
As AI has become an integral part of critical processes such as financial reporting and auditing, building trust in these systems is key – and AI assurance helps organisations define control frameworks and implement responsible practices. Frameworks such as the NIST AI Risk Management Framework (AI RMF) and international standards like ISO/IEC 42001 provide guidance on developing robust governance structures that focus on fairness, accountability, transparency, and data protection.
A key challenge in AI assurance is data management. High-quality, well-managed data is the foundation of reliable AI. Organisations need to understand where their data comes from, how it is processed, and what risks it poses. This includes ensuring that sensitive information is protected, especially when using third-party tools or cloud-based solutions. Without this foundation, even the most sophisticated AI system is vulnerable to producing biased or unreliable outputs.
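As a concrete illustration of this point, the sketch below checks two of the basics the paragraph describes: recording where a dataset comes from, and screening records for obviously sensitive fields before they are passed to a third-party tool. It is a deliberately minimal sketch under stated assumptions – the provenance fields and the list of sensitive keys are hypothetical, and a real deployment would rely on proper data-cataloguing and PII-detection tooling.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetProvenance:
    # Hypothetical provenance record; the fields are illustrative only.
    source: str                                   # where the data originates
    collected_on: date                            # when it was collected
    processing_steps: list[str] = field(default_factory=list)
    contains_personal_data: bool = False

# Naive screen for obviously sensitive keys; a sketch of the idea,
# not a substitute for dedicated PII-detection tooling.
SENSITIVE_KEYS = {"name", "email", "iban", "phone", "date_of_birth"}

def safe_for_third_party(record: dict, provenance: DatasetProvenance) -> bool:
    """Return True only if neither the provenance nor the record flags personal data."""
    if provenance.contains_personal_data:
        return False
    return not (SENSITIVE_KEYS & {k.lower() for k in record})

prov = DatasetProvenance("internal CRM export", date(2024, 11, 1),
                         ["deduplicated", "pseudonymised"])
print(safe_for_third_party({"customer_id": 42, "segment": "SME"}, prov))  # True
print(safe_for_third_party({"email": "a@b.ch", "segment": "SME"}, prov))  # False
```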
“Reliable AI starts with high-quality data and strong governance – without these, even the most advanced systems can fail to deliver trustworthy results.”
Morgan Badoud
Director Digital Assurance & Trust at PwC Switzerland

For businesses implementing AI, there are two common use cases: developing in-house AI solutions or integrating third-party tools. Both approaches require careful oversight. Third-party assurance, for example, involves assessing external AI providers to validate their controls and systems, giving companies confidence in the tools they use. Readiness assessments, another cornerstone of AI assurance, help organisations identify gaps in their current AI practices, define action plans, and prioritise upskilling and training to build internal expertise.
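As a rough illustration of how a readiness assessment might surface gaps, the sketch below scores a handful of practice areas against a target maturity level and lists the shortfalls, largest first. The practice areas, the 0–5 scale, and the target level are all assumptions chosen for the example, not an established assessment methodology.

```python
# Hypothetical maturity scores (0 = absent, 5 = fully embedded).
# The practice areas and target level are illustrative assumptions.
current_practices = {
    "AI inventory & risk classification": 2,
    "Data governance & lineage": 3,
    "Model validation & monitoring": 1,
    "Third-party / vendor assurance": 2,
    "Staff upskilling & training": 1,
}

TARGET_LEVEL = 3  # assumed minimum maturity for each area

def readiness_gaps(scores: dict[str, int], target: int) -> list[tuple[str, int]]:
    """Return practice areas below target, sorted by size of gap."""
    gaps = [(area, target - score) for area, score in scores.items() if score < target]
    return sorted(gaps, key=lambda g: g[1], reverse=True)

for area, gap in readiness_gaps(current_practices, TARGET_LEVEL):
    print(f"{area}: {gap} level(s) below target")
```

The output of such an exercise feeds directly into the action plans and training priorities mentioned above.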
From a personal perspective, curiosity and continuous learning are at the heart of effective AI assurance. The field is evolving so rapidly that relying on outdated approaches is no longer viable. Organisations must constantly challenge AI outputs, verify their accuracy, and maintain professional judgement. Blind trust in AI is a risk in itself; human oversight remains essential to ensure ethical and accurate use. For practitioners, asking the right questions and looking beyond immediate tasks are critical to navigating this dynamic landscape.
Internally, we recognise that we are all working for the same company and on the same topic. Sharing knowledge, fostering constant exchange, and working together have always been central to our approach. With the rapid pace of change and increasing complexity in the AI landscape, this collaborative mindset is more important than ever.
Externally, the urgency to act is a significant challenge. Many clients don’t yet see the importance of implementing control frameworks, often questioning their necessity in the absence of regulation. However, making clients aware of the risks and the need for robust frameworks is essential. As trusted advisors, we guide them through these challenges and ensure they are prepared for future regulatory and operational requirements.
AI has enormous potential to streamline processes and unlock innovation, but it also poses risks that cannot be ignored. The responsible use of AI requires a balance between harnessing its capabilities and maintaining robust controls. Although Swiss-specific regulations are not yet in place, they are expected to take shape soon, and companies that act now to establish AI assurance frameworks will be better positioned to meet future challenges. Ultimately, AI isn’t just about automation – it’s about building trust and enabling a future where innovation, accountability, and responsibility go hand in hand.