Why AI pilots stall, and what to do about it

Written by Achieve Breakthrough | Mar 5, 2026

Across the pharmaceutical sector, AI activity continues to accelerate. Tools are being piloted, data pipelines are being built, and teams are being upskilled. In many organisations, there is genuine momentum.

But many organisations find that AI transformation efforts stay contained within individual functions. Others find that initially promising projects are built on incomplete data foundations, making them difficult to scale. Either way, the pilots generate interesting outputs without generating the learning that moves an organisation forward.

In our whitepaper, Beyond the Algorithm, we describe three types of organisations navigating AI right now. Those observing cautiously from the sidelines. Those tentatively testing the water with isolated projects. And those who have committed fully and started learning at speed.

The middle group is the most common position. And it’s a precarious one, because the technology life cycle keeps compressing, and the organisations already learning are building institutional knowledge that compounds over time. So for leaders who know they need to move but aren’t sure where to start, here are three practical entry points we cover in the whitepaper.

Launch a “dare” project with clear guardrails

Some leaders are drawing a useful distinction between "care" and "dare". In core business functions, where mistakes carry compliance or safety risk, precision and control remain essential ("care"). But in AI-enabled projects and innovation efforts, a different standard applies ("dare"): experimentation needs to be encouraged, including the real possibility of failure.

A "dare" project gives a team explicit permission to learn fast. The goal is understanding what works, what doesn't, and what couldn't have been known until they started. Clear boundaries around data privacy, regulatory compliance, and the appropriate use of sensitive information are what make this possible.

The important question, then, is not whether a project might fail, but whether leaders have created an environment where learning from that failure is genuinely valued. As Bryn Roberts, SVP & Global Head of Data, Analytics & Research at Roche, puts it: "It's only by failing that we're actually going to see what works and what doesn't."

Start a cross-functional conversation about data

One of the most consistent barriers to scaling AI is the way data is held. Functions accumulate data over years, and human tendencies towards protectionism mean it rarely flows freely across the organisation. The result is that AI tools get deployed in isolation, generating local insights that can't connect to the broader picture.

A practical first step is to convene a cross-functional conversation about data ownership and what it's costing the organisation. Where are the silos? What decisions are being made in isolation that would benefit from shared context? These conversations can be uncomfortable, because they surface the ego and territorial thinking that often go unaddressed. But they are exactly the conversations that separate organisations learning to use AI collectively from those generating friction and stalled prototypes.

Build psychological safety deliberately

Perhaps the most important condition for successful AI adoption is psychological safety. Without it, concerns aren’t raised and tools sit underused, because people don’t feel confident enough to experiment or are uncertain about what the technology means for their role.

Psychological safety is built through consistent leadership behaviour, because people take their cues from what they observe rather than what they're told. When a leader asks a question without already knowing the answer, when a concern is raised and treated as useful information, when a project falls short and the response focuses on what was learned, it signals to the rest of the organisation that honest engagement is genuinely safe.

A concrete starting point is to create a structured space, such as a standing team conversation or a cross-functional session, explicitly designed to surface concerns about AI. What are people actually worried about? Which concerns reflect genuine risks? Which reflect misunderstandings that clearer communication could address? The message that AI augments rather than replaces human capability needs to land, not just be said. And it only lands when people feel safe enough to push back on it.

Why these three entry points

The three entry points above might seem like distinct initiatives, but they're addressing the same underlying problem. AI's potential in any given function is limited by what surrounds it: whether data can flow across boundaries, whether people feel able to engage honestly, and whether leadership has created genuine permission to learn. A "dare" project without psychological safety produces cautious experiments dressed up as bold ones. Cross-functional data conversations without trust produce defensive negotiations. None of this works in isolation.

This is why the organisations generating real value from AI tend to look different from those still waiting for results. The technology is often comparable. What differs is the environment they've built around it.

The choice is still yours to make

The organisations that have already committed are building knowledge that becomes harder to catch up with over time. The three entry points above aren't a complete transformation roadmap, but hopefully they give you a starting point to stop waiting and start generating the kind of learning that scales.

As Christian Diehl, Chief Data & Digital Officer at Novartis Biomedical Research, puts it in our whitepaper: "The real proof in the pudding with AI will come when we open our minds and ask: 'Let's fundamentally challenge how we're operating today. Is there a way to completely re-engineer our business processes?'"

That question is worth asking now. If you want support building the culture, leadership, and human capabilities to thrive in an AI-powered world, download the whitepaper or get in touch.