
A conversation with Ollie Whiting, CEO of La Fosse

Feb 17 2026 by Management-Issues

In an era of rapid technological transformation, artificial intelligence presents both unprecedented opportunities and complex challenges for businesses. A survey commissioned by La Fosse, a UK tech and transformation specialist, highlights a critical disconnect at the heart of AI adoption: the very leaders meant to guide technological integration may be inadvertently exposing their organisations to significant risks.

The research, which surveyed over 2,000 UK tech workers (mainly senior management), reveals a counterintuitive trend: it is not junior staff or immature technology that creates the biggest AI-related vulnerabilities, but senior leadership.

We ask Ollie Whiting, CEO of La Fosse, to unpack some of the key findings from the report as we discuss the complex dynamics of AI adoption in business.

Q: What was the most surprising finding from your survey?

Ollie Whiting: What surprised me most was the sheer scale of the gap between confidence and consequence. Senior leaders are making the most significant AI-driven decisions, yet they are also the group most likely to acknowledge those decisions were based on inaccurate data and have already led to serious business issues. That tells us this is not a future risk. It is already embedded in how many organisations operate today.

Q: Your research indicates that 93% of C-level leaders have made decisions based on AI outputs from inaccurate data. How is this happening at the most senior levels of organisations?

Ollie Whiting: Speaking from experience, AI is often used as a shortcut, usually with good intentions. The issue is that implicit trust is frequently placed in the output. Time is the most precious commodity leaders have, and most are balancing competing priorities while trying to optimise AI tools, manage investor and shareholder expectations, and move faster in their decision-making.

What often gets missed is the skill required to use AI well. Many leaders have not had training in prompt engineering, do not know how to properly interrogate outputs, or are unaware that different platforms are drawing from very different data sources. As a result, flawed outputs can look highly credible and end up driving decisions far more quickly than they should.

Q: The confidence gap is striking - 70% of C-suite executives rate themselves as “very confident” in AI, but only 27% of intermediate staff trust their AI expertise. What’s driving this disconnect?

Ollie Whiting: There are two things happening at once. First, senior leaders tend to engage with AI at a strategic level through pilots, dashboards, and high-level success stories, which naturally builds confidence. Teams closer to day-to-day delivery experience the operational reality. They see where AI outputs break down, where processes do not hold up, and where decisions do not always align with what is happening on the ground.

The second factor is pace. AI is being embedded into organisations incredibly quickly, often faster than enterprise communication strategies can keep up. In that environment, intermediate staff may assume a lack of capability at the top simply because they have not been given visibility into the skills, judgement, or safeguards being applied by senior leaders.

In some organisations, the gap is very real. In others, it may actually be narrower than the data suggests, but has widened because leaders have not had time to slow down and clearly communicate how AI decisions are being made and what competence exists at C-suite level. Small improvements in transparency, communication, and shared understanding of capability can go a long way toward closing that trust gap.

Q: Your research shows senior leaders are uploading confidential data into AI tools at nearly double the rate of entry-level staff. Why do you think those with the most responsibility are taking the greatest risks?

Ollie Whiting: It largely comes down to access, autonomy, and competing priorities. Senior leaders face fewer technical barriers, less day-to-day oversight, and greater discretion in how tools are used. They are also working with more sensitive information and higher-stakes decisions.

It is important to say that this is not happening in a vacuum. Cybersecurity has been a top-five priority for CEOs for several consecutive years, and most C-suite leaders are acutely aware of the risks. What we are seeing is leaders trying to balance that focus on security with the pressure to move quickly and unlock value from AI.

When speed becomes the priority, safeguards can start to feel like friction rather than protection. Without clear guardrails, consistent training, and shared standards for responsible AI use, even well-intentioned behaviour can expose organisations to unnecessary risk.

Q: 78% of C-suite executives use AI for work they’re not trained to do. Is this simply senior leaders trying to keep pace with technological change, or is there something more systemic at play?

Ollie Whiting: There is something more systemic at play. If you look back at previous major technological shifts, from the Industrial Revolution to the introduction of the desktop computer and the rise of the internet, productivity gains did not come from top-down expertise. They came from democratisation.

Executives were not expected to operate machinery on the factory floor, design workflows on individual desktops, or build the best e-commerce platforms themselves. The real value was unlocked by upskilling the workforce, embedding the right processes, and enabling people across the organisation to use new technology effectively.

AI is no different. Many organisations are placing disproportionate expectations on senior leaders to personally direct and operate AI tools, rather than focusing on building capability at every level. When responsibility for AI decisions sits at the top but skills and understanding are not broadly distributed, risk becomes systemic. The organisations that will see sustainable productivity gains are those that treat AI as a shared capability, not an executive-only one.

Q: You describe this as a “seniority blind spot.” What makes senior leaders particularly vulnerable to AI-related risks compared to other levels of the organisation?

Ollie Whiting: Senior leaders operate with speed, authority, and relatively limited challenge. They are less likely to have someone closely scrutinising how AI is being used, questioning outputs, or validating assumptions in real time. Decisions made at this level also carry far greater weight, so any error travels quickly through the organisation.

There is also a perception issue. Leaders are often expected to project confidence and progress around AI, which can reduce the space for open challenge or admission of uncertainty. Combined with rapid adoption and limited guardrails, this creates a blind spot where risks are not always visible until they have already scaled.

Q: 40% of C-suite executives report serious business impacts from AI errors. What kinds of consequences are organisations actually experiencing?

Ollie Whiting: First and foremost, organisations are seeing a waste of resources. That can be financial, time-based, or both, and those are two of the most precious commodities any organisation has. AI-driven decisions based on flawed data often lead to misdirected investment, unnecessary rework, or time spent correcting course after the fact.

The second-order impact is trust. When teams see resources being wasted or decisions repeatedly reversed, confidence in leadership judgement and in AI itself starts to erode. That loss of trust has a direct knock-on effect on culture, which is just as critical as financial performance or time efficiency.

In some cases, these issues extend into flawed strategic decisions around investment, workforce planning, or commercial direction, all driven by AI outputs that were incomplete or incorrect. The damage is not always immediate, but it compounds over time.

Q: Despite the risks, 80% of C-suite executives say a dedicated AI specialist is needed at board level. What question did you ask specifically, and how should this be interpreted given the role of sub-committees and NEDs?

Ollie Whiting: We asked tech employees at all levels whether they believe their organisation needs a dedicated AI specialist at C-suite level, and 80% of C-suite respondents said yes. That is a clear signal from leaders who are feeling the weight of accountability for AI-driven decisions.

That said, I do not fully agree with the conclusion. Speaking candidly as a CEO, I think there is a risk in assuming that appointing a single AI specialist at board level is the answer. If we look back at the major technological shifts I mentioned earlier, productivity gains did not come from centralised control. They came from democratisation.

Executives were not expected to operate machinery on factory floors or design workflows on individual desktops. The real gains came from upskilling people across the organisation and embedding the right processes and ways of working. AI should follow the same pattern. Concentrating capability in one board-level role risks absolving other leaders and teams of responsibility to develop a baseline level of AI literacy.

That is not to say specialist input is not valuable. Advisory support, whether through an AI advisor or an AI advisory board, can be hugely beneficial in helping organisations set direction, establish guardrails, and build confidence early on. At La Fosse, we would be the first to admit we are still learning. Taking a slightly more considered pace, supported by the right advisors, is how we believe organisations can set themselves up for sustainable, organisation-wide productivity gains rather than short-term wins.

Q: Half of tech workers expect AI to lead to job losses within three years. The discussion often focuses on threat - what about opportunity, and why is that being lost?

Ollie Whiting: Every major technological shift in human society has triggered fear and protectionism. People immediately ask what it means for them, for their families, and for their children, and concern about job losses rises to the top.

The future of society does feel uncertain, but I am confident that if AI ends up in the right hands and is broadly accessible rather than tightly controlled, the gains will outweigh the losses. The real opportunity comes from putting powerful technology into the hands of people who want to do meaningful, productive work.

If that happens, the productivity gains we could see from AI have the potential to exceed those of previous technological revolutions. That is what makes this moment so exciting. Done well, AI can unlock greater innovation, greater creativity, and ultimately, more jobs and opportunity for society as a whole.
