Pharmaceutical leaders are investing heavily in AI. New platforms, new models, new capabilities. Yet despite similar levels of technological access, outcomes vary wildly between organisations.
Some see meaningful gains in speed, productivity, and insight. Others remain stuck in pilots and prototypes. So what explains the difference?
In our whitepaper Beyond the Algorithm: How pharmaceutical leaders can navigate cultural transformation in the age of AI, we introduce an equation to explain why AI succeeds in some organisations and stalls in others:

AI impact = (Technology × Leadership × Action) / Reactive tendencies
This equation reflects something we see repeatedly in our work across the pharmaceutical sector: technology alone is never enough. In fact, we estimate that technology accounts for only around 20% of success. The rest is determined by leadership behaviour, human mindset, and the environment people are working in.
Let’s break this equation down.
The numerator: What actually drives AI impact
There’s no question that the technology matters. Without access to capable tools, reliable data, and appropriate infrastructure, you won’t get anywhere. But this is where many organisations over-invest their attention. They assume that if they get the technology right, the rest will follow.
In reality, most pharmaceutical companies now have access to comparable AI capabilities. What differentiates outcomes is less the technology itself than how people engage with it.
In other words, technology enables possibility, but it doesn’t guarantee impact.
Leadership is where momentum is either unlocked or suffocated.
Leaders play a decisive role in shaping how AI is perceived and used. Do they model curiosity, or caution? Do they invite experimentation, or default to control? Do they see AI as something to be governed tightly, or as a capability to be explored responsibly?
In the whitepaper, we argue that leaders need to shift from supervision to enablement. From having answers to asking better questions. From minimising risk at all costs to creating conditions where smart experimentation can happen within clear guardrails.
When leaders engage openly with AI themselves, talk honestly about uncertainty, and visibly support learning-by-doing, it sends a powerful signal. It tells the organisation that AI is not something to fear or defer, but something to engage with.
Insight without action doesn’t move anything. Many organisations understand AI’s potential intellectually, but remain passive in practice. They linger on the steps of the pool, waiting for certainty before committing. But AI doesn’t reward perfect plans. It rewards movement.
The organisations seeing real impact are those that take deliberate action, even when things feel incomplete. They run experiments, learn quickly, adapt, and iterate. They accept that not everything will work first time, and that learning is the point.
Action, even if it doesn’t yield results first time around, builds experience and familiarity. It also builds organisational confidence far faster than analysis ever could.
The denominator: What erodes impact
Even with strong technology, leadership, and intent, impact can still be dramatically reduced by what sits in the denominator.
These are the default human responses that surface under pressure. Overcontrol. Micromanagement. Excessive governance. Fear-driven decision making. Retreating to familiar ways of working when uncertainty rises.
In highly regulated environments like pharma, these tendencies are understandable. But when left unchecked, they can strangle AI initiatives before they mature.
Instead of accelerating progress, leaders unintentionally slow it down. Approval cycles lengthen. Experimentation narrows. Teams wait to be told what’s safe, rather than exploring what’s possible.
This is where context and psychological safety matter most. If people fear making mistakes, they won’t experiment. If they fear job erosion, they won’t engage fully. If silos and protectionism dominate, collaboration breaks down.
An unhelpful environment typically shows up as hesitation, compliance without commitment, and innovation that fails to scale.
That’s why we argue that leaders must actively shape the environment their people operate in: one where concerns can be voiced, questions can be asked, and learning happens openly. Without this, even the best AI strategy will underperform.
Why this equation matters
This equation reframes AI transformation as a human challenge first, and a technical one second. It explains why organisations with similar tools see radically different outcomes. And it highlights where leaders should focus their attention if they want to increase impact.
If technology is only 20% of the story, where is the other 80% of your effort going?
Are you actively shaping leadership behaviours, encouraging action, and creating an environment where people feel safe to experiment and learn? Or are reactive tendencies and cultural friction limiting what you’re achieving with AI investment?
Our whitepaper, Beyond the Algorithm: How pharmaceutical leaders can navigate cultural transformation in the age of AI, explores this equation in more depth and offers practical guidance for leaders who want to unlock real, scalable impact from AI.
The organisations that win won’t just be the ones with the most advanced tools; they’ll be the ones that get the human equation right too. To start, download the whitepaper here.