Communication and psychological safety: The human side of AI transformation

Written by Achieve Breakthrough | Feb 13, 2026 10:25:10 AM

The pharmaceutical industry has been investing heavily in AI for some time. Tools are being piloted, data pipelines are being built, and teams are being upskilled. In areas like drug discovery and early R&D, AI is already delivering remarkable results by accelerating target identification, predictive modelling, and molecule generation.

Yet in other areas, such as regulatory workflows and commercial activities, the transformation is less seamless. Despite the technology being in place, adoption can be slower and more fragmented, and the results less convincing. Even with the best technology and highly skilled teams, something crucial is often missing: the human conditions that maximise the potential of AI transformation.

This article explores why communication, psychological safety, and a culture that supports experimentation are just as critical as the tools themselves.


Beyond technology: The human conditions for AI success

It’s tempting to assume that if you have the right tools, processes, and talent, results will follow. But we consistently see the opposite. AI’s potential can only be fully realised when organisations cultivate the right environment for people to engage with it. 

As Samuel Mantle, CEO at Lingaro, says in our whitepaper: “It's the human side of things that will become even more in demand. Our ability to communicate, our ability to influence, our ability to negotiate. They are going to be even more powerful human skills in the future.”

So it’s not about replacing human skill with technology. On the contrary, AI frees teams from frustrating, repetitive tasks, enabling them to focus on higher-value work like strategy, decision-making, and innovation. 


Psychological safety: The foundation for a successful AI transformation

A culture of psychological safety allows teams to explore AI tools, test new approaches, and learn from failure. 

Bryn Roberts, SVP & Global Head of Data, Analytics & Research at Roche, explains how this plays out in practice: “Beyond our deeper scientific and technical use-cases, we encourage 'everyday AI' through playful exploration by everybody. Once people appreciate what's possible, a little training and a safe environment enables them to 'go and play' in their daily work.”

But to realise the potential of the ‘play’ Bryn is talking about, people need to feel secure enough to ask questions, challenge assumptions, and experiment without fear of reprisal. That safety is what enables learning, adoption, and ultimately innovation.

Without it, teams hesitate, pilots remain pilots, and promising ideas fail to scale. And when individuals are unsure how AI will affect their role, or fear getting it wrong, they avoid experimentation altogether. Even the most advanced tools deliver only partial results if the environment doesn’t actively support learning, collaboration, and curiosity.


Why context matters

The importance of psychological safety becomes even clearer when we look at how AI transformation unfolds across different parts of the organisation. 

For example, drug discovery may benefit from rich datasets and well-defined outcomes, making AI’s impact more immediately visible. Other areas, such as commercial engagement or cross-functional decision-making, depend heavily on collaboration, judgment, and context.

Even the most advanced AI models will struggle if the organisational culture is rigid, risk-averse, or protective of existing silos. 

As Christian Diehl, Chief Data & Digital Officer at Novartis Biomedical Research, explains: “Hiring someone solely based on their experience with recent inventions and technologies is not enough… We need to bring in more curious minds. People with a mindset that says, ‘I don’t know it yet, but I will figure it out.’”

Where outcomes are less defined and collaboration is critical, communication and psychological safety become even more decisive performance enablers. They’re not just “soft” considerations.


Practical steps for building the right conditions

If human conditions shape AI outcomes, leaders must act deliberately to create them. Practical steps for developing the psychological safety required include:

  • Framing AI as augmentation, not replacement: Make clear that AI frees teams for higher-value work rather than replacing them.

  • Encouraging questions and experimentation: Create spaces where team members feel confident to ask questions, challenge assumptions, and explore new approaches.

  • Defining guardrails and boundaries: Provide clear guidance on compliance, privacy, and ethical considerations so experimentation happens safely.

  • Connecting work to purpose: Help individuals see how AI-enabled activities contribute to organisational goals and patient outcomes, fostering engagement and motivation.


Taken together, these actions create the human conditions for successful AI adoption.

Ultimately, the organisations that succeed with AI are not just those with the best models; they’re those where people feel safe, empowered, and ready to explore what’s possible. If you’d like to learn more about the human side of AI transformation, download the whitepaper.