AI at the tipping point: From ambition to accountability

  • Insight
  • 6 minute read
  • 06/10/25

Artificial intelligence (AI) is transforming how organisations operate, compete, and create value. Yet as adoption accelerates, responsible governance often lags. PwC’s 2026 Digital Trust Insights survey reveals that for many Swiss organisations, the main barrier to using AI effectively is not technology – it’s governance. Forty-three percent of Swiss firms cite unclear risk appetite as the main hurdle to using AI in cyber defence. The next two barriers are skills gaps and limited knowledge of how to apply AI, each affecting about one third of organisations.

Fabian Plattner

Senior Manager, Digital Assurance & Trust, PwC Switzerland

Angelo Mathis

Senior Manager, Digital Assurance & Trust, PwC Switzerland

From our AI assurance & trust project practice with clients across industries, we see a similar pattern: AI is widely used, but often without the structured oversight needed to manage emerging risks and turn trust into a competitive advantage. Responsible AI is not just about compliance – it’s about ensuring that innovation, governance, and ethics move forward together.

Take the next step toward trusted AI. Learn how to embed ethics, governance, and resilience into your AI strategy – and turn responsibility into a competitive edge.

43%

 of Swiss organisations cite unclear risk appetite as the main hurdle to using AI in cyber defence.

Only 1 in 10

have implemented responsible AI practices across their entire organisation.

38%

plan to expand responsible AI measures within the next 12 months – but 20% have no plans at all.

56%

say technology modernisation is now the main driver of cyber and data investments.

One third

have implemented data controls across the full lifecycle of their AI systems.

Strategy and governance are lagging behind AI adoption

Most organisations today are using AI tools in one form or another – from spam filters and chatbots to productivity boosters and language models. Many AI initiatives emerge from individual business units, often without a clear link to enterprise-wide risk management or strategic objectives. In many cases, organisations have solid data foundations but lack a defined governance model for AI development and deployment. 

Strong intentions alone are insufficient without clear processes and accountability. When AI systems fail or cause unforeseen risks, the absence of defined responsibilities and escalation paths leaves organisations vulnerable to operational, reputational, and regulatory damage. Tesla’s Full Self-Driving feature illustrates this risk: without a clear accountability structure or audit mechanism, critical safety issues went undetected before rollout – showing how even advanced technologies can erode trust when oversight is missing.

PwC’s Digital Trust Insights 2026 shows the approach that companies leading in digital trust are taking: they treat AI as a strategic capability rather than a collection of tools. They align technology decisions with their broader transformation agendas, define clear accountability for outcomes, and make the chief data, risk, and compliance functions active partners in AI design. This integrated approach reduces fragmentation, ensures oversight, and builds a culture where innovation and accountability reinforce each other.

Ethics and trust by design

Ethical principles in AI are increasingly recognised as a critical success factor. Yet many organisations still address ethics retrospectively – for example, when a model is already in production and issues such as bias, privacy, or explainability arise. PwC’s work with clients shows that integrating ethical considerations from the start delivers both compliance and confidence. 

Embedding ethical design principles into model development – such as fairness, transparency, alignment with organisational values, and human oversight – helps prevent unintended outcomes and strengthen stakeholder trust. According to PwC’s Digital Trust Insights 2026, only about one in ten Swiss firms has implemented responsible AI practices organisation-wide, and another third have done so in parts of their business. While 38 percent plan to expand these measures in the next 12 months, one in five report no plans for responsible AI practices at all. The challenge now is to move from ethics by reaction to ethics by design.

A recent example illustrates the point: Microsoft’s Windows Recall feature was designed to enhance productivity but quickly drew criticism over privacy concerns – highlighting the consequences of treating ethics as an afterthought. The key is to embed ethical thinking throughout the lifecycle, ensuring that responsibility and trust are built in – not added later.

Risk management often ends at deployment

Technology alone will not secure responsible AI. AI models are not static – they evolve with use, adapt to new data, and can drift over time. This makes post-deployment risk management essential. In PwC’s client engagements, we see the same challenge: strong technical expertise often coexists with inconsistent risk management and weak post-deployment monitoring. 

To manage AI risk effectively, organisations need to map where AI is used, define clear ownership for model oversight, and establish mechanisms for continuous review. Post-deployment monitoring – tracking model drift, bias, and performance over time – is becoming a regulatory expectation, not an option. Those that embed responsibility by design now will be better prepared for upcoming AI governance frameworks, including the EU AI Act. 

Swiss organisations are aware of the need to modernise: 56 percent say technology modernisation is now the main driver of cyber and data investments, according to the Digital Trust Insights survey. The challenge lies in ensuring that this spending translates into measurable resilience through risk quantification and strong governance.

Data quality and transparency: building trust in the foundation

AI’s reliability depends on the quality and transparency of its data. Although many organisations have a solid data foundation, data governance often remains fragmented across systems, teams, and third-party providers. PwC’s advisory experience shows that leading organisations treat data quality as a continuous process – aligning sourcing, validation, and access rights with clear accountability and documentation.

The Digital Trust Insights 2026 further emphasises that just one third of Swiss firms have implemented data controls across the full lifecycle, with many limiting them to selected domains. To close this gap, companies must integrate AI-related data into enterprise governance frameworks, ensuring visibility, traceability, and compliance from development to deployment.

How mature is your company? Responsible AI Survey

Participate in our survey to gain a deeper understanding of your current status, compare yourself to your peers and shape the future of responsible AI.

Start Responsible AI Survey

From responsible AI to resilient growth

Responsible AI is more than risk management – it’s a strategic enabler of resilience and innovation. Organisations that invest in clear governance, robust data foundations, and ethical design can unlock AI’s full potential while maintaining trust. The next frontier lies in execution: defining risk appetite, embedding controls, and ensuring accountability at every level. 

The path forward is clear: organisations should treat AI governance as an enterprise-wide priority, embedding ethics, transparency, and compliance from the design phase. They need to strengthen data protection and monitoring across the entire lifecycle, define clear accountability, and measure results continuously.

Start by identifying where AI is already being used in your business and where it can solve real problems. From there:

Identify business pain points and AI opportunities

Define a practical use case and assess regulatory risk

Choose the right tools and trusted partners

Pilot, train, and monitor – with strong controls in place

Measure results, scale responsibly, and stay compliant

AI is moving fast. The good news? You don’t need to have everything perfect today. But you do need to start now – by laying the groundwork for responsible AI. In an era where trust defines resilience, acting responsibly with AI is no longer optional – it’s how organisations build sustainable success and long-term advantage.

Build the right foundation today – and turn responsible AI into tomorrow’s growth engine. Acting deliberately now turns trust into a lasting competitive advantage and makes responsible AI the driver of your organisation’s growth and resilience.

Get in touch

Contact us to learn more about how responsible AI can become a catalyst for resilience and business success.

Yan Borboën

Partner, Leader Digital Assurance & Trust, PwC Switzerland

+41 58 792 84 59

Email

Mark Meuldijk

Director AI Assurance & Trust, PwC Switzerland

+41 58 792 44 00

Email

Morgan Badoud

Director, Digital Assurance & Trust, Geneva, PwC Switzerland

+41 58 792 90 80

Email

Angelo Mathis

Senior Manager, Digital Assurance & Trust, PwC Switzerland

+41 79 795 01 11

Email

Fabian Plattner

Senior Manager, Digital Assurance & Trust, PwC Switzerland

+41 79 878 01 27

Email