Artificial Intelligence (AI) is already transforming the way we work and live, from recommending products to diagnosing diseases. As AI becomes more sophisticated, its impact on how we live and work will only grow. However, with great power comes great responsibility. How do we make sure we are working with AI fairly and inclusively? How do we reap its benefits without falling victim to the harmful biases and discrimination it can perpetuate?
Human biases are well documented, presenting themselves in a myriad of ways, and when left unchecked they can be extremely harmful. When considering selection and assessment in recruitment, the aim is to minimise the presence of bias, ensuring candidates compete on a level playing field. At first glance, AI appears to be a powerful tool to introduce greater objectivity into the measurement of talent. But can AI be trusted to be an impartial judge?
Utilising standardised and transparent processes to reduce bias is a good start, yet introducing AI into the recruitment process can tip the balance between process and people, with negative outcomes as a result. So, here are some arguments for and against the use of AI in recruitment.
Arguments for AI
On the positive side, the introduction of automated processes is heralded as an effective way to limit human involvement in the assessment and recruitment process, potentially reducing subjective bias and promoting equality amongst candidates.
AI can also be leveraged to support employees in practical ways. It has the potential to automate routine tasks, freeing up capacity for creativity and more complex assignments. Here are just a few examples of where AI can aid the recruitment process (a sketch of the first is shown after the list):
- Generate job descriptions.
- Suggest interview questions tailored to specific roles.
- Automate initial screening with a chatbot-like interface.
- Act as a virtual assistant for quick HR-related answers.
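To make the first item above concrete, here is a minimal sketch of drafting a job description with a large language model. It assumes the OpenAI Python SDK, an API key in the environment, and a placeholder role brief and model name; any generated draft would still need a recruiter's review before use.

```python
# Sketch: drafting a job description with an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder role brief for illustration only
role_brief = {
    "title": "Customer Support Specialist",
    "location": "Remote (UK)",
    "must_haves": ["2+ years in customer-facing roles", "clear written communication"],
}

prompt = (
    f"Draft an inclusive, bias-aware job description for a {role_brief['title']} "
    f"based in {role_brief['location']}. Required experience: "
    f"{', '.join(role_brief['must_haves'])}. Use gender-neutral language."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model your organisation has approved
    messages=[
        {"role": "system", "content": "You are an HR assistant who writes job descriptions."},
        {"role": "user", "content": prompt},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a starting point only; a human recruiter should review and edit it
```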
Arguments against AI
On the negative side, AI can display unconscious bias. Early studies show that generative AI tools, such as ChatGPT, can replicate or amplify existing biases and inequalities through their content, data and algorithms. As a result, there is concern that overuse of AI might generate unfair outcomes or recommendations that affect people’s access to work or promotion. This bias is not intentional; rather, the algorithms reflect the content and data they were trained on, much of which could be unconsciously biased.
Textio, a tech company focused on language guidance, conducted research on bias in ChatGPT. The researchers examined feedback written for different job roles, utilising gender-neutral prompts such as ‘write feedback for a bubbly receptionist’. They observed that, for roles where people commonly hold preconceived notions about gender, those notions were perpetuated in the pronouns ChatGPT chose. For example, regardless of what trait a kindergarten teacher was paired with, ChatGPT wrote feedback exclusively with she / her pronouns. The opposite was found for feedback about a construction worker.
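This kind of experiment is easy to approximate informally. The sketch below is not Textio’s actual methodology, just an illustration of the idea: it sends the same gender-neutral prompt for several roles and counts the gendered pronouns in each reply, again assuming the OpenAI Python SDK and a placeholder model name.

```python
# Sketch: a rough pronoun audit in the spirit of the Textio experiment.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
import re
from openai import OpenAI

client = OpenAI()

ROLES = ["kindergarten teacher", "construction worker", "receptionist", "engineer"]
SHE = re.compile(r"\b(she|her|hers)\b", re.IGNORECASE)
HE = re.compile(r"\b(he|him|his)\b", re.IGNORECASE)

for role in ROLES:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"Write feedback for a bubbly {role}."}],
    ).choices[0].message.content

    # Count gendered pronouns; a real audit would repeat this many times per role
    # and across different traits to see whether any skew is systematic.
    print(f"{role}: she/her = {len(SHE.findall(reply))}, he/him = {len(HE.findall(reply))}")
```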
Amazon famously scrapped an early AI model used for applicant resume screening after it was found to be biased against women. The model was trained to assess applicants by observing historical patterns in resumes submitted to the company over a ten-year period. The majority of applicants during that time were male, so the model taught itself that male applications were more successful and therefore more attractive. This is a classic example of the risk of blindly trusting AI with critical decision-making without rigorously analysing the data and assumptions these models are built upon.
And herein lies the opportunity for managers. AI is transforming the world of work and creating new challenges and opportunities for managers, yet it is not infallible. To stay relevant and effective in this changing landscape, managers need to develop and update their skills. So, what are the skills that are being sought the most and why?
1. Digital and data literacy: With AI advancements comes data, and lots of it. Managers will need the skills to understand and interpret data, and to communicate and contextualise data-driven insights for colleagues and decision-makers.
2. Critical thinking: AI doesn’t have all of the answers and can sometimes assertively provide the wrong ones. Consequently, managers need to critically evaluate what is presented, using logic, evidence and reasoning. Information that sounds plausible is not necessarily factual and could contain bias.
3. Emotional intelligence (EI): EI allows individuals to better understand and manage their own and others’ emotions. It is a critical component of communication, relationship management and leadership. Whilst AI can interact with humans using natural language, it cannot yet connect emotionally. This is one area where machines cannot compete, highlighting the importance of managers developing their EI.
4. Flexibility: The pace of technological change continues to accelerate, and managers require both cognitive flexibility and the capability to adapt to changing expectations and environments. They also need to foster a culture of flexibility and innovation among their teams, encouraging them to experiment with new ideas and solutions and helping individuals advance in their careers.
Find a balance
AI can be a useful tool for finding the right people, but it may inherit the biases of the content and data it is trained on. As a result, it can undermine diversity and fairness in recruitment if it completely replaces human judgement and values. Finding the right balance is therefore key.
The onus is on organisations to use AI ethically and wisely, as a productive co-pilot, capable of boosting efficiency on low-stakes tasks, while freeing up time to focus on complex decision-making. This will help managers to use fair and inclusive practices when dealing with people-related decisions, creating better teams as a result.