Dr Claire Bullen-Foster is a Consultant Clinical Psychologist, trauma expert and the CEO of Eleos Group, an organisation that helps high-pressure industries implement psychological wellbeing support for their staff and workforces. We talked to her about 'quiet cracking' and the use of AI in mental health support.
Q: What is 'quiet cracking'?
A: 'Quiet cracking' refers to employees who stay in their roles and still meet expectations externally, but internally feel increasingly disengaged and emotionally drained. Colleagues or employees might still be delivering results, but they may seem quieter or more disconnected than usual, making it harder for employers to spot that something is wrong.
Whilst 'quiet cracking' is a new term appearing online and on social media, the concept isn't actually anything new. The symptoms that 'quiet cracking' describes are essentially what's more commonly recognised as stress and burnout.
Q: Why has 'quiet cracking' become a trend recently?
A: 'Quiet cracking' is essentially another term for stress and burnout that has become popular on social media and online. Social media trends often shine a light on under-discussed topics and help people to understand things in different ways. However, it's important that we recognise that stress and burnout are not new things.
From a clinical perspective, it's really important that we don't start to suggest to people that they might have something new to diagnose or identify with, but rather that we support people to understand the signs and symptoms that this social media trend is describing.
A lot of the narrative around 'quiet cracking' talks about feelings of low morale, feeling stuck or even hopeless or helpless about a situation – and these are all feelings and scenarios that can happen to every one of us, not just to people who identify with the new social media term. 'Quiet cracking' feels like it is describing the slow and gradual build-up and impact that we know as stress and burnout in general.
Q: There have been reports of people using AI for mental health support at work to help with symptoms of 'quiet cracking'. Is this safe?
A: AI can go some way to open up the conversation (so to speak) about mental health for a person who is struggling. Anything that provides a space for people to seek help is positive. For some – possibly young people or those who are neurodivergent – text-based conversations, online platforms and AI may feel like a more familiar way of expressing themselves, and could feel like their safest first step.
What we need to ensure is that AI knows which lane it is in and when it needs to direct a person to the right and appropriate human and clinical support. Ensuring that AI knows the boundaries of its role within these conversations is the smartest way we can use this platform to better people's mental health.
The potential danger and controversy come with where the use of AI ends. How far can or should AI go, and is it capable of managing risk and safety? Risk management is fundamental when we think about how we use AI to improve people's mental health.
Compared with a human clinician, AI will be unable to fully assess the risk and safety of an individual who is suffering with their mental health. There are so many nuances to consider when assessing risk, such as the tone, rate and volume of a person's speech, which can be incredibly significant. AI cannot interpret these things as a human clinician can, and that's before we start to think about body language, or the interpersonal processes that can only take place within a human dynamic.
Q: What do the recent reports of 'AI Psychosis' tell us about the human-AI relationship, and how can this impact mental health?
A: There have been recent reports in the media of individuals describing experiences of 'AI Psychosis'. It is important to point out that this is not a clinical diagnosis; rather, it is being used to describe cases where interactions with AI chatbots have led people to believe that delusional beliefs are true, or that their false perception of reality is correct. This happens because AI is trained to replicate empathy – empathy is a clear and vital human response, but the key here is that AI can only replicate it. What AI doesn't do is apply the critical thinking or context that we as humans provide at the same time as offering empathy, all of which allows a person to fully evaluate the reality of their situation within a safe and supportive relationship.
We need better education and regulation of the use of AI when it comes to mental health, and improved awareness of how AI works amongst the public. Whilst AI bots can replicate human interaction, emotion and empathy, they can't replace it, and it's crucial that AI is trained to signpost people who really need support to the places where they can find it.
Q: Could AI mental health support undo progress made in reducing stigma around mental health?
A: What's most worrying is that AI therapy pushes mental health back into a secret, hidden space and potentially unravels the decades of work that have been done to try to shift people to having honest and open conversations about mental health. These open conversations are crucial to reducing the stigma that continues to be attached to mental health. If conversations about mental health are held with AI chatbots alone, there is a real risk that people won't feel safe to speak up, speak out and ask for mental health support outside of their AI conversation.
Using AI for the first steps in conversations about mental health is positive, but anything beyond this feels like it risks keeping mental health locked away from public view and public acceptance. So, for me, there has to be a really important ethical debate about where AI sits, and where the boundaries lie for its use with mental health. If not, the quiet cracking will continue quietly, and to what end, if it's only the AI chatbot that knows we're not ok?