An interview with Kamales Lardi

Jun 24 2025 by Nicola Hunt

An interview with digital transformation expert Kamales Lardi, author of Artificial Intelligence For Business.

Kamales Lardi is listed amongst the top 10 global thought leaders in digital transformation and the top 50 women in tech influencers. Her work champions the human side of technology, emphasising the critical balance between people, processes and tools in transformation. Today we talk to her about her new book, Artificial Intelligence For Business, which explores the transformational potential of AI.

Kamales Lardi is a bold and strategic thinker in digital and business transformation. She's listed amongst the top 10 global influencers and thought leaders in digital transformation and top 50 women in tech influencers. Since founding Lardi and Partners Consulting in 2012, Kamales has advised multinational companies across Europe, Asia and Africa.

Her pioneering work champions the human side of technology, emphasising the critical balance between people, processes and tools in transformation. She's also the author of the best-selling book, The Human Side of Digital Business Transformation, an essential guide for business leaders navigating next-generation organisational transformation. But today we're here to talk about her latest book, Artificial Intelligence for Business.

Kamales, welcome. You describe writing Artificial Intelligence for Business as an incredible journey, the culmination of years of experience, collaboration and learning. What was your key motivator for writing the book?

Kamales Lardi: Well, over the last couple of years, we've seen this massive uptake in artificial intelligence development and adoption, as well as in what the technology can do across multiple areas of business and of our lives. It's not a new field; artificial intelligence as an area of research and development has been around since the 1950s. But it's only in the last four or five years that we've really seen this massive adoption, because the technology has become a lot more accessible to the average person.

It's also advanced to the point where its ease of use and capabilities are so significant that it can easily be adopted and used across different industries. And so what I found is that the hype around artificial intelligence has also significantly increased, right? Almost on a daily basis, we have announcements and news and conversations happening around how AI is going to change our lives and disrupt business and take over our jobs.

And this hype is something that can be quite detrimental for a business leader. The amount of information is overwhelming, and the speed of development so significant, that companies are struggling to see how, where and when to apply these technologies and how best to utilise them for their own business value. So I wanted to write this book as a guide or a playbook for business leaders and companies, to really understand the basics of artificial intelligence and to separate the conversation about what AI could be, or is going to be, from what it is today and how it can be applied today. So taking it down to a practical, pragmatic approach and helping business leaders apply it in their own business environment.

And one key aspect of that is addressing what we mean by artificial intelligence, so differentiating it from human intelligence. There's a lot of conversation about AI taking over human cognitive capabilities.

And so I wanted to start the book by clearly differentiating how AI is not truly human intelligence, and then going into the industry disruptors, the business applications, the how-to of AI, and also addressing some key topics around ethics, accountability and governance, as well as the future of work and the human aspect of applying AI in the work environment.

You call it hype. Others might say that we're in the middle of an existential crisis over AI, with opinions running from one end of the spectrum to the other and everything in between. Whether it's hype or whether this is a true crisis, how long is this going to last?

Well, let's first define what existential crisis refers to, right? In my view, it is truly a complete disruption of the way we interact, engage, communicate and, of course, work. We're seeing AI or AI-based technologies coming into play in almost every aspect of our personal and business lives.

And this is not something new. Across various industrial revolutions, we've always seen technology as a key disruptor that shifts humanity towards the next level of maturity. The big difference this time with artificial intelligence is the pace of development that we're seeing.

The pace of development is so rapid, and AI capabilities on their own are so disruptive, that AI can be applied across various industries and business functions, but also in convergence with other technologies like blockchain, virtual or augmented reality, the Internet of Things and all the other key technology disruptors. These combinations with AI are also potentially going to be transformative across different areas of business. So considering this, I think it's truly an existential crisis in the sense that we are not going to operate the same way that we did in past decades.

Things are going to work very differently from here on. And this is, in my view, a very lasting change. AI is probably developing faster than we can upskill and adapt.

And so we are forced into this sort of continuous state of uncertainty, particularly in our work environment. We as human beings need to change the way we think about these technologies and how we embrace or adapt to them. And we need to build a certain level of resilience and a mindset shift, to ensure that we are able to use them to the benefit of humanity and not get overwhelmed or taken over by those technologies.

Yes, indeed. And in the book, you go through a number of sectors to show the transformational potential of AI. If you had to pick one sector that will be transformed more than others, what sector would this be and why?

At this point, I would say pretty much every sector that we've come across has been touched by AI in some form or to some depth. So I think the bigger question is around which industries are adopting faster. And we're seeing some industries accelerate in their adoption.

Healthcare, for example, technology, media and telecommunications, and manufacturing are some of the rapidly adopting industries that are really embracing AI capabilities. And we have other industries that are probably slower in adoption. Financial services, energy, and consumer goods and retail are slower to adopt, probably because they're more highly regulated, and because there's a trust gap in adopting technologies like this.

I think the bigger question to ask is not whether sectors are being transformed, but how they are being transformed. And this is where we come back to the topic of hype. We're seeing organisations across various sectors jumping with both feet into this AI pool, wanting to adopt and embrace the next shiny new thing.

And the focus is so much on the technology being implemented. In conversations with business leaders, I always emphasise that there should be a value-driven approach: first understanding what your business challenges are, what the needs of your business and the market are, and how these technology solutions could then deliver towards those needs.

So this is where exploring AI or AI solutions could be leveraged: really understanding, from a business perspective, what you're trying to solve and how AI could help solve it. We can't really jump into AI implementations directly either.

With generative AI, there are potentially off-the-shelf tools and platforms that could be quickly adopted. But in general, organisations need to build a solid foundation for AI-based technologies to really survive and thrive. These are elements like having the right data strategy in place, having upgraded or optimised processes and workflows, ensuring that governance frameworks and compliance capabilities are put in place, and making sure the workforce is upskilled, reskilled and newskilled.

All of these elements take time to implement, but once in place they will truly allow companies to leverage AI to its full potential. We also need to ensure that there's responsible and safe use of AI within the business. So questions need to be asked: is this the right tool to be used?

How are we going to govern the tools? How are we going to put safeguards in place to ensure that employees within the organisation, as well as customers externally, gain the benefit of the technology without it being misused, or open to potential misuse, in any way? So taking a very human-centric approach, where AI-based technologies are integrated into the workforce well instead of replacing human capacity, and ensuring that there is sufficient human oversight, is so critical.

Another critical consideration in preparing for the future workplace will be to ensure employees feel empowered rather than threatened by the application of AI based technologies. Can you give us any examples of the types of things that employers are doing to reassure people?

This is something we work on quite a lot with companies in my organisation. As you know, there is significant hype in the news around companies taking an AI-first approach, replacing people with AI, or even limiting hiring to areas where AI cannot be utilised. And this has created a significant amount of uncertainty and fear across the global workforce.

These sorts of triggers in the news can actually create an amygdala response: a fear response in people that results in fight, flight or freeze, right? In the workplace, this looks like resisting change, keeping their heads down and not participating in new implementations, or even potentially leaving the company.

And the result is that companies are seeing a drop in productivity and efficiency, and also a loss of knowledge, tacit knowledge particularly. Companies can get ahead of this by creating more transparent communication that starts with helping people within the organisation understand why AI is being adopted, what the purpose is and how it could be used, how it could potentially affect their roles and the tasks they do on a daily basis, and what's expected of the people within the organisation with these new implementations.

There's been a lot of focus, I think, within companies across sectors on upskilling, so a significant amount of investment is going into upskilling and reskilling people. But on the other hand, the training doesn't sufficiently focus on the human aspects, right? It mainly focusses on the technical skills.

The human aspect is really about ensuring that people are involved in the design and implementation of the AI technology, making sure that people feel valued in that process, and ensuring that the technology meets the needs of the people and augments their capabilities rather than replacing them. Another key aspect that I always try to emphasise in the work that we do is creating a sense of psychological safety within organisations. Companies that do this really stand out in terms of long-term sustainable success.

It's really about creating an environment where people feel comfortable asking questions, raising concerns and challenging issues without fear of consequences. And a critical part of AI adoption is having this sort of human oversight: teaching people to question the technology, question the outputs, and spot irregularities or inconsistencies in those outputs, rather than blindly trusting AI as a technology solution.

As intelligent as it is, there needs to be a certain critical thinking around its adoption as well.

Psychological safety for AI adoption. Do you see the potential for a more proactive role for employers when it comes to internal career paths to ensure that people are fully supported in their learning journeys and careers despite the many challenges that will likely emerge with AI over time?

So I'm a big believer in leading by example. And I feel leadership teams play a very critical role here within organisations. They drive the direction of AI adoption.

They set the standards of behaviour and accountability, and for the safe, responsible and even ethical use of these technologies. What we're seeing on the ground is that employees are adopting AI three times faster than leadership teams expect. And there's a significant knowledge gap in leadership teams around understanding AI and its applications.

So there's a shift happening very rapidly in the workforce, and a gap that's growing in terms of leadership capabilities around AI. Hence the need for upskilling, so that leadership teams gain an understanding of how AI will impact their business, where they should be heading, how aggressively they should be adopting these technologies, how AI should be applied within the business environment, and what the future direction is. This has to be clarified, developed and refined within the leadership teams first, and then communicated to the rest of the organisation.

So lead the workforce towards responsible and safe use of AI.

If conventional strategies to close skills and capabilities gaps are falling short compared to the pace and demand of the business and technological landscape, what can businesses do to address this issue?

One of the things that's so critical at this point is for organisations to understand where they stand in terms of skills and capabilities. So start by conducting a skills gap analysis, right? Capture the existing capabilities and skill levels within the organisation to assess how new technologies like artificial intelligence could be applied within the company, where the gaps lie and how those gaps could be filled.

And this goes in the direction of understanding not just the skills needed for AI application, but skills and capabilities overall, because you also need a good understanding of where AI is going to replace or take over certain tasks. So this analysis of current skill levels needs to be done as a starting point. And then you can start filling the gaps by upskilling, reskilling or newskilling people within the organisation.

The way we conduct it, this happens at three levels. First of all, there's organisation-wide upskilling, which looks at what kind of knowledge or capabilities are required all across the organisation. This could be a basic understanding of how AI technologies are going to transform your business or certain business functions.

The second level is workforce digital upskilling. This is the basic level of AI capability that's required across the organisation, and could cover things like data analytics, or working with data and outputs.

How do you interact with the output of these AI technologies, and how do you understand the data that's being produced? It also covers governance capabilities, ethical capabilities and so on. And then you have needs-based augmentation, which goes into specific functional areas and specific roles, and asks what upskilling is required in those specific areas.

Someone working in marketing, for example, might need very specific training and upskilling for their role profile in order to be able to use a new generative AI technology, and so on. So that really deep-dives into specific areas of the business. Now, it's important to establish an environment of continuous learning within the organisation, considering the fact that AI as a technology is developing at such a rapid pace.

We're seeing new technologies or new platforms and tools coming out on a weekly basis. So every single time you get used to using a specific platform, you have another company coming out with the next updated version. And a good example of this is just looking at image generation across the board.

Between where we were half a year ago and where we are today, there's been such a significant acceleration in what the platforms can do that it's impossible to do a one-time upskilling. So this continuous learning path is so important, encouraging people to be curious, to explore and experiment, which can sometimes run counter to a corporate environment, right? This exploration and experimentation element is not often prioritised.

So companies can help this process by creating a sort of sandbox environment, a safe sandbox where employees are encouraged to explore and experiment with tools within the boundaries of the company's governance and compliance requirements. This allows people to safely explore tools and keep building skills. Another element that's so critical is for companies to set up clear policies and guidelines that communicate the boundaries of AI use, right?

What can be done? What should be avoided? What is important for responsible and safe use of AI in business?

And if I can just clarify here, there's a difference between ethical and safe use, right? This is so important for organisations to understand and communicate. Ethical use is really about whether you should be using these tools at all.

Are these the right things to be doing with AI? Safe use, which is what many companies are addressing within their policies, is about whether it is safe to use within their business environment. Are we using it correctly within that business space?

Is it compliant with the rules and regulations? So it's important to address both of these elements in a distinct way, and to ensure that people are able to utilise these tools and continuously learn without crossing boundaries that they don't understand.

Staying in the ethical space, you say that ethical considerations in AI should go beyond technical design to encompass broader socio-economic implications, including fostering inclusive economic growth strategies. What exactly are inclusive economic growth strategies? And why do you think they matter?

Well, as I said earlier, AI has become so embedded into almost every sector and every part of our daily lives, right? And we need to ensure that the economic growth benefits are realised for all members of society. And this includes marginalised segments and vulnerable populations, right?

So we're seeing this sort of digital divide or digital gap growing, where certain regions and certain sectors or segments of the global community are rapidly gaining access to transformative technologies: AI solutions that are accelerating the way we work, our capacity for financial growth and financial independence, and our sophistication in terms of access to new ways of thinking and new ways of doing things. And then we're seeing certain regions still lagging behind in terms of access.

And this concerns me particularly because, having been in this space for almost three decades now, the typical approach I've seen in the tech industry is grow now, distribute later. Where fairness and equitability are not explicitly and intentionally included within the development and go-to-market approach, they are sometimes overlooked, right? There's a prioritisation of profit.

And we're seeing a lot of this happen now with the big tech companies as well. There's this AI race happening, a race to the end, where the big companies are really competing with each other to get the best tool out, to be first in the market, to be the tool that most organisations or individuals use. And it's being incorporated into almost everything they offer at this time.

And so there is, I think, a certain blindness towards equitable and fair use. We need to ensure that policies around fair use, data privacy and security, participation and access, sustainable practices and, of course, skill development are incorporated into AI development, implementation and adoption across the board. This is a field that is potentially very lucrative, and many organisations are not prioritising this sort of ethical access.

And so companies need to stop and assess for themselves the ethical and safe use of these technologies.

You mention a scenario in Kai-Fu Lee's book, AI 2041, where he describes the emergence of occupational restoration companies funded by both government and employers to help retrain and reallocate people into vocational roles and new job opportunities. Based on where we are now in the AI journey, might something like this actually become a reality?

I believe so, Nicola. There's been such significant momentum building behind upskilling and reskilling initiatives. Companies are literally spending millions to make sure that their workforce is upskilled; there's a transformation of job roles happening, and there are demands for new skills and capabilities.

And, you know, we're seeing AI being leveraged to automate tasks, optimise workflows and enhance productivity. So the way industries and sectors operate is shifting dramatically, and the demands for how people work and what they do are also shifting dramatically. At the same time, we're seeing this across regions: almost every developed nation has national skills frameworks that have been updated with AI capabilities.

So learning and development environments for the workplace, for occupational rehabilitation, are already underway. And we're seeing this transition, with organisations as well as national frameworks helping people move into new forms of employment. All of these shifts have happened literally within the last two to three years, right? The rapid development has come within these short timeframes.

So it's not surprising to envision that these sorts of formalised occupational restoration companies could be set up as public-private partnerships that create policies and really focus on helping the workforce shift and adapt to new AI-first environments. It's a little bit scary to think about those possibilities, though: if it's not approached in a human-centric way, it could be quite detrimental. And I think Kai-Fu Lee in his book describes a little bit of the negatives of that environment, but he also ends the scenario in an optimistic way.

So I think it's really important that institutions, organisations and governments focused on this sort of workforce or occupational restoration capability prioritise a human-centric approach.

Yes, because otherwise it could perhaps become a bit dystopian, which is one of the things behind people calling it a crisis. To conclude, from a regulatory perspective, should we be hopeful that the EU's AI Act will prove to be effective when it comes to compliance and ethics, or do we need to put other frameworks in place to deal with the challenge?

Let me start by prefacing this by saying that the EU AI Act represents a really comprehensive attempt to regulate the AI industry. And this is so necessary, particularly considering that some nations and regions are starting to walk back regulatory requirements for the AI industry. So having something as robust as the EU AI Act in the region, something that would potentially prioritise the end users of these technologies, is so crucial.

And it comes from a risk-based oversight approach. So it gives you very robust requirements around governance, human oversight and risk assessments. And it takes a phased implementation approach.

From banning certain types of use or certain practices, to making sure that high-risk systems are covered or not allowed to be used, it takes a phased approach that's designed to give companies time to adapt as well. So it's not pushing organisations into an environment where they're not able to apply AI within the region.

But I also believe that the Act has some notable challenges, and I've been quite vocal in sharing these over the last year. The risk-based approach is actually quite difficult to apply within this very fast-paced AI development industry, right? You're always playing a catch-up game with the technology.

And in certain respects, I think it may inadvertently force a consolidation of the market, because smaller players and startups may find it very difficult within these regulatory requirements to set up the internal governance structures needed, which can be quite expensive and quite effort- and skills-intensive. Smaller companies may simply not have the capacity to set this up and fulfil the regulatory requirements set out by the EU AI Act. And so this might result in a consolidation of the market, where you see bigger players buying out or taking over smaller players and their innovative technologies.

And we as a region might actually end up creating monopolies in the market, where the bigger players just own big portions of the sector. This creates challenges not only in terms of crippling innovation; it also creates dependencies that we don't want to have in the market. And so this is my view.

I think there are big challenges as big tech companies become overly embedded in the AI field. We're seeing them compete very aggressively with one another, and we're losing the necessary balance that's needed within AI. There are also challenges around loss of accountability for the ethical use of AI, both in those organisations and in the market in general.

So I feel that it's a good starting point, but there still need to be additional frameworks in place, and additional capabilities, to address the dynamic nature of the market and the potential challenges or limitations that the AI Act could create.

What Matters

Nicola Hunt

In What Matters, Nicola Hunt, co-founder and executive editor of Management-Issues.com, invites a special guest to join her to discuss a topical business issue and explore why it matters right now.