AI and Mental Health: Helpful Tool or Hidden Risk?
- Mathilde Vigier Rathor
- Oct 13
- 5 min read

You've all done it. At some point, you've used search engines like Google to help you diagnose your own medical issue rather than go to the doctor. How helpful was it? I don't know about you, but I'd often find myself with a terminal condition had I listened to it.
My work would be so much 'easier' if I relied on all the AI tools at my disposal, but would the information be correct? Would it be the best for my clients? Would it even be ethical?
AI is here to stay but do we understand how to use it?
Can it be used for our physical and mental health, or is it another hurdle disguised as a seemingly safe, confidential way to receive the answers we seek in order to feel better?
Don’t get me wrong: I am a user of AI tools. I even have the privilege of supporting people through an online platform that uses AI technology to ensure we can provide as many tools as possible alongside our 1:1 support. In this case, however, the AI is provided with information guided by the experts. I use it to keep my business going, again guided by me. I have clients who have used AI to get ideas for actions they can take to support their needs.
It certainly seems to give great ideas and information. And hey, it doesn't come with judgement or opinion. It can feel easier to use this to get what you need without feeling vulnerable.
However, does it give us what is best for us? Can it pick up the little nuances of our personal stories and specific emotions? Can it really ever know us? If not, can we really get what we need?
In the article ‘How People Are Really Using Gen AI in 2025’, Marc Zao-Sanders noted that the reasons for choosing to use AI have shifted in just a year. He wrote about this in 2024, when he found that ChatGPT was typically used for productivity or technical support. Within just a few months of 2025, the topics people searched for had shifted.
The goalposts have shifted from using AI for technical support to using it for emotional support. Tristan Harris speaks about AI developers creating what are now known as AI companions.
The goal? Not to help users but to engage users. It’s a magic trick, an illusion we are sucked into. It will do whatever it needs to keep you engaged. The AI bot seems so clear and confident, and can appear to give us the answers we want. A false sense of security? Intention is important here. There is a difference between creating something to help others versus creating something to engage us more.
The top three uses of AI today, in 2025 (according to the article mentioned above), are:
· Therapy/companionship
· How to organise my life
· Finding purpose
Huge needs and so specific to each individual.
Although therapy and companionship were already on the rise in 2024, they are now the number one reason. I wonder: is this because more mental health support is needed but there are not enough resources out there? Does it feel safer to ask the AI bot than to speak to someone in person? Or is it something else?
There are a number of cases in which AI’s responses have been called into question with regard to its therapeutic support, most notably the case of Adam Raine. As the Guardian reported, Adam initially used ChatGPT to support him with his homework. Within months, his searches on AI shifted to wanting to understand more about his feelings, such as happiness, loneliness, boredom, etc.
During this time, the AI moved from helping him understand his feelings to encouraging him to speak only to it. Sadly, it eventually encouraged him to fear asking for human support, even though that was what he wanted, and became more of a suicide coach. Adam ended his life at just 16 years old. Safeguarding was non-existent. Unfortunately, this is not the only occasion where something like this has happened.
I wonder how well we understand the limitations and intentions of these AI tools, and the lack of safety measures in place.
I wonder how much we think about what it is we’re looking for from these AI tools, and what they might be allowing us to avoid?
When I asked ChatGPT about the differences between AI support and human support, it became clear that some things are helpful but some are troublesome. For example, it said that it can provide:
· Self-help strategies based on large language models and data patterns (not therapeutic training)
· ‘Emotional validation’
· ‘Simulated empathy’
· ‘Can feel supportive but lacks true attunement, intuition and embodied presence’
· ‘Can mimic personalisation’
· ‘Limited depth and nuance in understanding complex human dynamics’
· ‘Trained in large-scale datasets, not psychological therapy or clinical experience’
· ‘Lacks supervision and accountability or ethical duty of care’
· ‘Not equipped to hold trauma or navigate crisis’
· ‘May reinforce overreliance or emotional avoidance’
· ‘Regulates without regulated confidentiality or therapeutic responsibility’
Just look at the language it uses.
So what are the differences in provision between using AI and a human mental health professional? It's not all bad, but it is very specific.
AI Therapeutic Support
· Can provide information and data (however, check the sources)
· Provides helpful reflective prompts
· Tools such as breathing exercises
· Habit tracking

Human Wellbeing Professionals
· Personalised support rooted in specialised training, experience and empathy
· Ethical practice
· An environment that builds trust and safety
· Human connection
· Picks up on nuanced non-verbal cues like body language
· Knows what their clients need thanks to training and non-verbal and physical cues, and so provides personalised support
· Maintains professional boundaries
· Strict confidentiality
· A higher quality and depth of care
I don’t claim to have all the answers, but I am deeply curious about how we can learn to use AI as a tool, because it does have some value. It has allowed us to make significant advances in many areas, including the medical field. However, I don't believe it can or should be used as the solution, especially when it comes to our mental health.
Tools to Help You Reflect and Find Balance
AI can be a helpful tool — but it can’t replace the human understanding that comes through empathy, training, presence, and real connection. Here are a few ways to support you when looking at tech and human support.
Check in with your needs
Check your needs and intention. Ask yourself:
Am I looking for information, understanding or help?
Am I using AI because speaking to someone is too scary or vulnerable? What do YOU need to be able to seek human support?
Am I using AI because I need help but don’t know how?
What am I seeing most on my social media feed — and how might that be influencing how I feel?
What would it take for me to feel safe and supported by another human being?
What do I find hardest about asking for help?
Keep a human in the loop
Combine any kind of AI support with human support. Check the information you have searched for and make sure it is right for you.
Set digital boundaries
Limit how often you turn to AI for emotional reassurance.
Take time offline to reconnect with your body, nature, and relationships.
Check what sources the AI is using for the information it provides.
Regardless of how effective technology gets, we are, in the end, social beings who need human connection and support in order to thrive.
If you are looking to get support, fill out a contact form below to find out more about how therapeutic coaching can help you.
I'd love to hear your views in the comments below.



