Fahed Bizzari is Managing Partner of Bellamy Alden AI Consulting. He is a leading international speaker and expert on AI with over 20 years of experience in business and technology, helping companies navigate the complexities of the AI landscape.
In this conversation, Fahed draws on his journey from early AI experimentation to a focused practice built around what he calls "AI empowerment as a management discipline." He talks candidly about why so many organisations get the sequence of AI adoption wrong, shares real-world examples of how AI has shifted business outcomes, and explains why now is the time for a broader discussion about the kind of AI-enabled society we want to build and how to prepare people for that future.
1. Looking back, was there a defining moment or decision that shifted your focus toward working in AI?
It was not a single moment. It was a sequence that happened fast enough to feel like one.
At the end of 2022, I was on sabbatical in London when ChatGPT was released. I had already been working with GPT directly, so I assumed I understood what this was. My eighteen-year-old daughter changed that assumption when she asked me a question I could not dismiss: "Now that ChatGPT exists, what's the point of continuing university?"
That forced me to take a proper look. And because the sabbatical gave me uninterrupted time, I kept going deeper.
Over the following weeks, I stopped seeing AI as a faster way to do existing work and began to see it as something that changed how I thought, reasoned and made decisions.
The defining decision came a few months later, when I chose to focus on organisations rather than individuals.
I had been running public workshops and the response was strong. But I kept seeing the same pattern: even when people developed some fluency with AI, they returned to workplaces that had no structures in place to make that fluency count. Leadership was unclear. Governance was absent. Culture was hesitant. Individual capability without organisational readiness just produced isolated pockets of good work that never scaled.
That is what led me to start thinking about AI empowerment as a management discipline rather than just AI fluency.
2. What's an experience you've had while working with AI that fundamentally changed the way you think about technology or decision-making?
Early in 2023, I used AI to build a complete business plan for a non-profit project I had been putting off for years. I had forgotten my laptop charger on a train, so all I had was my phone. I opened ChatGPT and said, "You and I are going to work this out together. I'll ask you questions and you'll answer them. You'll ask me questions and I'll answer them. Then we'll turn it into a plan."
Over five conversations across a few days, the plan came together. What would have taken me a month took less than a week. But the real shift was not speed. It was clarity.
The plan was the clearest piece of strategic thinking I had ever produced. So clear that I chose to shelve the project entirely. Once everything was laid out, I could see that the commitment required was far greater than I was willing to make. Without AI, I would have pushed ahead enthusiastically and paid the price later.
That experience changed how I think about AI and decision-making in two ways.
First, AI did not replace my judgment. It extended my capacity to think. The reasoning was mine. AI helped me surface assumptions I had been glossing over and test them against questions I had not thought to ask.
Second, the best outcome was a decision not to proceed. The value was not in producing something. It was in preventing a costly mistake.
Most organisations still measure AI success by what it produces: faster reports, more content, higher throughput. That experience taught me to look for where AI improves the quality of the decision itself, including the decision to stop.
3. When you reflect on your journey as an AI leader, what internal mindset or habit do you believe contributed most to your long-term success?
The habit that has mattered most is an instinct to make sense of chaos before acting on it.
AI arriving in the world has been deeply confusing for everyone. People have been overwhelmed by possibilities, inconsistent results and conflicting advice. What I noticed early on was that when AI worked well, it did so for reasons that were understandable and repeatable. And when it failed, those reasons were equally identifiable.
That instinct, the move from "this is interesting" to "this is usable", is what allowed me to build frameworks rather than chase tactics.
Over time, it led me to see that the real challenge is not fluency with AI. It is how leadership, governance, judgment and culture interact with AI inside an organisation. Individual capability without those structures produces isolated success that never compounds.
There is a practical dimension to this as well. I learn AI by interacting with it constantly. I am always asking "what happens if I try this?" and "why did that change?" Capabilities shift frequently and quietly, often before they are formally announced. The only way to stay current is to keep exploring without waiting for clear signposts.
But making sense of chaos is not just about staying ahead of the technology. It is about translating what you observe into something other people can use.
The organisations I work with do not need more information about AI. They need coherence. They need someone who can look at the confusion they are experiencing and say: here is the structure, here is where you stand within it and here is a way forward that does not require you to understand everything before you begin.
4. Can you walk us through a situation where AI shifted the outcome of a business decision or strategy in a meaningful way? What changed as a result?
In one organisation, contract reviews below a certain value threshold were handled by the commercial team rather than legal. The business had created a checklist of issues to consider before approval.
On paper, it worked. In practice, outcomes varied widely. Different people interpreted the checklist differently. Some were overly cautious. Others missed significant risks. The list existed, but the application of judgment was subjective and inconsistent.
A member of their commercial team built an AI-supported review process that applies the same criteria every time. Contracts are assessed against the agreed checklist, relevant risks are highlighted and the reasoning is made explicit. The AI then proposes new clauses in the same writing style as the contract being reviewed.
The commercial team still makes the decision. But they are no longer starting from a blank page or relying purely on personal instinct.
What changed was not speed, although that improved. What changed was consistency and defensibility. Decisions became easier to review after the fact. Leadership gained visibility over risk patterns they had previously been unable to see. And the legal bottleneck that would have been required to achieve that consistency through traditional means was avoided entirely.
AI did not replace the judgment of the commercial team. It made their judgment more reliable by removing the variability that had accumulated over years of informal interpretation.
That is the pattern I see most often when AI genuinely shifts a business outcome. It does not introduce a new capability so much as it makes an existing one trustworthy at scale.
5. What's a belief about AI that you often hear from executives or business owners that you strongly disagree with and why?
The belief I encounter most often and disagree with most strongly is that AI adoption should begin with governance and policy. I understand why leaders reach for this. AI carries real risk. Putting guardrails in place before anyone starts feels responsible. Policies are written, approval processes are defined and restrictions are circulated.
This looks like leadership. But in practice, it produces paralysis.
When people have no lived experience of working with AI, rules feel abstract. They do not yet know what good use looks like, where the real risks sit or when judgment matters most. Policies written in the absence of experience tend to be either overly restrictive or quietly ignored. Experimentation moves underground. Leadership loses visibility rather than gaining control.
A more effective sequence is supervised activation. People work with AI on real tasks that matter to them, with experienced practitioners providing guidance and oversight. They quickly learn what AI can do, where it fails and why human judgment remains essential.
Only then do you move into formal governance.
By that point, policies are grounded in experience rather than fear. People understand what the boundaries are protecting against, so compliance becomes natural rather than enforced.
The deeper issue is one of sequencing. Governance is essential. I am not arguing against it. But governance that arrives before experience is governance without context. It cannot distinguish between real risks and imagined ones. It cannot tell you which behaviours to reinforce and which to redirect.
6. If a company could only focus on one AI-driven improvement today, where do you think they'd see the fastest or most impactful results?
Start with the cognitive burden your people already face in their day-to-day work.
Every organisation has tasks that are mentally heavy, repetitive in their thinking demands and time-consuming relative to their value. Contract reviews, research synthesis, first drafts of proposals, compliance checks, data interpretation. Work that requires concentration and judgment but follows patterns that can be made more consistent with AI support.
The fastest results come when you pick one recurring piece of work that feels heavier than it should, make it more stable and reliable with AI, keep the human firmly in charge and tie the outcome to something the business already measures.
Most organisations overcomplicate their first move. They commission strategy documents, build AI roadmaps, evaluate platforms and wait for clarity before doing anything. Meanwhile, the people doing the work already know where the burden sits. They just have not been asked.
The approach I use starts by helping people identify the documents they create and handle in their roles, the tangible outputs of their cognitive work. From there, AI helps surface where the burden is highest and where the opportunity for improvement is most concrete. In one organisation, a group of 30 people identified over 500 usable AI opportunities in a few hours using this method.
Leadership finally had something concrete to evaluate, prioritise and invest in. The principle is straightforward. AI does not reward readiness. It rewards engagement. Start with a real problem, solve it well and let that success pull the next one forward.
7. When organisations face complex challenges, what patterns do you see where AI consistently adds clarity or leverage? Can you share a few examples?
Three patterns recur across nearly every organisation I work with.
The first is where expertise is concentrated in too few people. In one global services company, the marketing team depended heavily on a small number of engineers and technical specialists to validate every piece of content. Campaigns stalled because the experts were stretched across too many priorities. AI did not replace that expertise. It made it accessible. Technical knowledge was structured into AI-assisted workflows, enabling the marketing team to produce accurate content without constant specialist input.
The second is where judgment is applied subjectively across the same process. Whenever a process depends on individual interpretation of shared criteria, whether contract reviews, quality checks or risk assessments, AI can bring consistency without removing the human decision. The result is not automation. It is reliability.
The third is where institutional knowledge is locked inside one person's head. A senior leader in one organisation had spent years developing an intuitive process for evaluating commercial opportunities, built through pattern recognition and experience. He worked with AI to codify that process into a structured workflow that guides the same questions and checks he had been running mentally. He still reviews and decides. But the organisation is no longer dependent on one person's memory. Judgment has become more consistent, teachable and transferable.
In each case, AI did not introduce something new. It made something that already existed, whether expertise, judgment or institutional knowledge, more durable and more widely available.
8. For companies just starting out with AI, what's the smartest first step that delivers value without overengineering the solution?
Stop trying to understand AI in the abstract and start working with it on something real.
I see organisations delay for months because they believe they need a strategy before they can act. They commission readiness assessments, build governance frameworks and evaluate tools, all before anyone has used AI on actual work. By the time they are ready to begin, the people who were initially enthusiastic have moved on to other priorities.
The more effective approach is supervised activation. Take a small group of people from different functions and have them work with AI on tasks that genuinely matter to them, with experienced guidance to keep things safe and productive.
Do not ask them to imagine what AI could do. Ask them to bring the work they are already doing and explore how AI can improve it.
When I facilitate this, I start by having each person identify the documents they create and handle, the tangible outputs of their cognitive effort. From there, AI itself helps surface possibilities. People discover opportunities they would never have imagined in a brainstorming session, because the AI is reasoning about their actual work rather than abstract scenarios.
The results are immediate and concrete. Instead of vague ideas, you get specific, evaluable opportunities with a clear connection to business outcomes. Leadership can see what is worth pursuing, what needs governance and what can be deprioritised.
This is not a shortcut. It is a better sequence. Experience first, then governance. Evidence first, then strategy.
9. In your experience, what usually sits beneath the resistance to AI adoption and how can leaders move past that hesitation?
The first thing I would say is that the caution most leaders feel is not irrational. AI has already produced very public failures. Lawyers have been sanctioned for submitting work they did not properly verify. Public-sector reports have been withdrawn after scrutiny. Organisations have been embarrassed by outputs that should never have left the building.
Anyone paying attention would be cautious.
The issue is not hesitation itself. It is what hesitation turns into. Whether leaders hesitate or not, AI use is already happening inside their organisations. Employees are experimenting quietly, informally, sometimes well and sometimes poorly. When leadership stays distant, it does not slow adoption. It makes adoption invisible. And invisible adoption means invisible risk.
Beneath the resistance, I usually find three things.
The first is a rational fear of public failure. Leaders are accountable for the outputs their organisations produce and they are not yet confident they can explain or defend AI-assisted work.
The second is a sequencing instinct, a belief that rules must come before experimentation.
The third, which is less often admitted, is a personal discomfort with not understanding the technology well enough to lead credibly. This is particularly acute among senior leaders who have built their careers on domain expertise and now feel that something they cannot fully explain is reshaping the work underneath them.
The way past this is not education in the traditional sense. It is supervised experience. When leaders see AI applied to real work inside their own organisation, with proper guidance, clear accountability and visible outcomes, abstract fear is replaced by informed judgment.
10. When you imagine the business world a decade from now, what role do you believe AI will play that most people are still underestimating today?
The role most people are underestimating is AI as a permanent institutional capability. Something as embedded in how organisations operate as financial management, quality assurance or health and safety.
Today, most organisations treat AI as a tool, a project or an efficiency lever. It sits in a technology category. Someone owns the tools, someone else owns the training and someone else writes the policy. The organisational response is fragmented because the framing is fragmented.
A decade from now, the organisations that have gained the most from AI will not be those that deployed the most advanced models or automated the most processes. They will be the ones who built the leadership, governance, capability, culture and systems required to make AI-assisted work reliable, visible and scalable. That is not a technology achievement. It is a management achievement.
What I see being underestimated is the permanence of this. AI is not a wave that organisations ride and then return to normal. It changes how people think, how decisions are made, how knowledge is transferred and how accountability is exercised. These are structural changes to how organisations function. And structural changes require permanent management disciplines, not project teams that disband once the rollout is complete.
I believe this will become as recognisable and as necessary as any established management function. The only question is whether it arrives deliberately or gets assembled reactively, after the consequences of not having it become too visible to ignore.
11. What's a question about AI that you wish more interviewers, or business leaders, would ask, but rarely do?
The question I wish people would ask is: "Who is leading the public conversation about what an AI-driven future actually looks like and who is being left out?"
Right now, the public discourse on AI is dominated by two extremes. At one end, technology leaders and futurists speak in terms that are either breathlessly optimistic or darkly apocalyptic. At the other end, the general public absorbs fragments through social media and YouTube content that tends to be polarised and rarely neutral.
In between, where real decisions are being made, where organisations are struggling to adapt, where people's working lives are changing, the conversation is remarkably thin.
This concerns me because AI is not just a business or technology challenge. It is a societal one. How people work, how decisions are made, how knowledge is shared, how judgment is exercised. These are being reshaped now. And the conversation about what that means, practically and humanly, is not happening at the level it needs to.
Government leadership on this has been limited. Regulation is emerging, which is welcome, but regulation is a floor, not a direction. The broader question, what kind of AI-enabled society do we want to build and how do we prepare people for it, remains largely unanswered. It is not clear who should lead that conversation. But it is clear that it needs to happen now, not after the consequences become irreversible.
I am committed to contributing to this through my book, the discipline I am building and conversations like this one. The gap between what AI makes possible and what people are equipped to do is not just an organisational problem. It is a public one. And it deserves a public conversation that is grounded, inclusive and honest.