Whitepaper Blog

Uncovering the hidden context holding you back: The secret to unlocking AI’s full potential

Written by Achieve Breakthrough | Dec 11, 2025 11:52:21 AM

Pharmaceutical leaders are rightly investing time, energy, and resources into understanding AI’s potential. They’re exploring new tools, piloting use cases, and thinking hard about how AI can reshape their operations.

But there’s another important dynamic at play that has nothing to do with algorithms or data pipelines: the invisible context your organisation operates in. It could be the single biggest factor limiting your AI progress.

This article is the second in our series exploring the human side of AI transformation. We’re not talking about the technology itself; we’re interested in the leadership behaviours, cultural conditions, and organisational mindsets that determine whether AI becomes a breakthrough force or a stalled ambition.

To understand this, we want to borrow a simple but powerful analogy: the fish tank.

 

The fish tank: Why context matters more than you think

A fish lives its entire life in water, yet never notices it. The water shapes everything (even its very survival). Yet it’s completely invisible to the fish itself.

Organisations are no different. They operate inside their own equivalent of a tank, full of unspoken assumptions, ingrained habits, and long-held beliefs about what’s safe, possible, risky, or allowed. Leaders swim in this water every day. Just like the fish, they rarely see it.

This unseen organisational context determines whether AI thrives, or whether transformation programmes suffocate before they have the chance to deliver real value. In our work with pharmaceutical organisations, we see two contextual forces show up regularly:

 

  1. Risk aversion: A deeply embedded caution, often compounded by siloed data ownership. (A legacy of a sector where risk management has historically been non-negotiable.)
  2. Fear of job erosion: A powerful concern that AI will encroach on roles, status, or professional identity.

 

These contexts operate beneath the surface. No matter how advanced the technology, they can quietly strangle experimentation, slow down collaboration, and cause promising AI initiatives to stall.

This is why leaders must learn to jump out of the tank: see the context for what it is, name it, and then consciously reshape it.

 

Why the most advanced tools fail in the wrong context

A common pitfall in AI deployment is the assumption that investing in the right models and hiring the right talent will be enough for results to follow. But technology alone can’t overpower an ingrained working culture and its attitudes.

If the unspoken context says things like “We can’t take the risk”, “AI might threaten my role”, or “We’ve always done it like this”, then even the most promising AI investments are likely to stall. Teams resist, leaders retreat to familiar behaviours, experimentation stays cautious and limited, and people revert to their old patterns and ways of working the moment pressure arises.

That’s why we argue in our new whitepaper that AI implementation isn’t just a technology challenge; it’s a deeply human and cultural one. Breakthroughs come not when people understand AI, but when they understand themselves and the environment they’re operating in.

 

So how do you “jump out of the tank”?

Recognising the invisible forces shaping your organisation is just the first step. The real challenge is helping your teams see them, and then creating the conditions where people can work differently.

Our new whitepaper emphasises that breakthroughs happen when leaders actively shape context, not just enforce new rules or deploy technology. Key actions include:

 

  • Communicating the purpose of AI clearly: Emphasise that AI is designed to amplify human capability, helping people focus on higher-value work and deliver faster, more meaningful outcomes.
  • Fostering psychological safety: Create spaces where people feel confident to raise questions, challenge assumptions, and explore new ways of working without fear of judgment.
  • Encouraging experimentation within guardrails: It’s important to balance freedom to explore with clear boundaries to ensure compliance, privacy, and ethical standards are maintained.
  • Modelling curiosity and openness: Leaders who actively engage with AI, experiment themselves, and share lessons learned signal to the organisation that it’s safe (and expected) to learn by doing.
  • Connecting work to purpose: People are more willing to embrace new ways of working when they understand how their efforts contribute to the organisation’s broader goals and mission.

 

In other words, jumping out of the tank is a continuous process of seeing and reshaping context so that people can operate at their best. It’s about creating an environment where collaboration and experimentation are embedded in the way the organisation works, not just in isolated projects.

 

Why this matters

Leaders who make context visible unlock the human potential behind AI initiatives. When people understand the “why” and feel safe to experiment, teams move faster, make better decisions, and break down silos naturally. This also goes a long way towards easing the anxiety around AI and what it means for teams. Organisational learning accelerates too, helping the whole business adapt to change more effectively.

Put simply: the organisations that succeed with AI aren’t just the ones with the best technology; they’re the ones where leaders create the conditions for people to thrive.

 

Final thought: Seeing the water you swim in

Although the technology is (of course) essential, AI transformation is also about understanding the environment in which people work and creating conditions that allow them to flourish.

Are you willing to jump out of the tank? To make the invisible visible and build the culture that will allow AI to reach its full potential?

Our whitepaper, Beyond the algorithm: How pharmaceutical leaders can navigate cultural transformation in the age of AI, explores practical approaches for leaders to shift context, foster psychological safety, and create an environment where AI and human expertise can amplify one another. Download the whitepaper today.