AI: Is This Your Teen’s New Best Friend or Therapist? A Guide to Staying Safe Using AI


As AI rapidly takes up space in our lives, safeguarding against its risks often lags behind. Parents may feel unequipped to approach their teen’s AI use and concerned about its potential for harm. AI chatbots and conversations with AI introduce new concepts that give rise to different questions: questions that, as communities and societies, we may not have adequately discussed.

However, while AI may present complex ethical and social challenges, there are some simple ways to support teens in staying safe while using AI. In this blog, we explore some of the reasons that teens may turn to AI as a friend or therapist, and the mental health risks that this carries. We also offer some guidelines for parents to help children and adolescents stay safe.

Why Do Teens Use AI Chatbots for Mental Health Support?

Teenagers often use AI chatbots when they lack a strong social network or adequate social support. For some adolescents, chatbots might seem like a reliable, easily accessible companion, free of social anxiety and other social stressors.

Sharing information with chatbots might seem particularly appealing when teenagers feel embarrassed or ashamed about their thoughts, feelings, or experiences. Chatbots present as a place where teenagers can be honest without any fear of judgment. In one survey, 31% of US teens said they found speaking with a chatbot more enjoyable than speaking with a friend, and had used AI to discuss serious matters.

Teenagers might talk to chatbots about subjects they feel awkward speaking to adults about, such as sex or drugs.

What Are the Dangers of Using Chatbots as a Friend or Therapist?

Many experts have shared serious concerns about the use of chatbots as companions or for mental health support. Many of these concerns involve the lack of effective safety filters and the dangers this creates for young users.

Chatbots often keep young people engaged in conversations, even when they share information about self-harm or suicide. In these contexts, young people urgently need to speak with a real person – a parent, teacher, friend, community figure, helpline, or mental health professional. Keeping a child in an online, artificial conversation can prevent them from reaching out for effective support.

We know from past incidents that while some safety filters may direct young people towards charities such as Samaritans in response to conversations about self-harm, they frequently fail to terminate the conversation.

It’s also worth noting that chatbots rarely criticise the person using them, instead encouraging, supporting, and reinforcing existing perspectives. When a young person shares harmful thoughts and intentions, this can become very dangerous. AI can miss warning signs and fail to challenge dangerous thinking, putting young people at a greater risk of harm to themselves. 

Substituting Social Connections with AI

Supportive relationships are a cornerstone of mental well-being at any age. But for young people, they may be especially important: as children and teenagers pass through key developmental stages, real-life social relationships are crucial in developing strong thinking and social skills. 

On the other hand, interpersonal difficulties, including social isolation and loneliness, are associated with a range of mental health disorders, including depression, eating disorders, and psychosis (Pearce et al., 2023).

Meaningful social relationships are also fundamental to the well-being of communities and societies. Communities built on meaningful relationships offer the conditions where individuals can grow, prosper, and thrive. When social isolation starts to break down community ties, it removes opportunities for every person within those communities to flourish.

One danger of AI companionship is its potential to weaken these social connections. For some young people, AI companions may start to replace relationships with friends, family or even therapists. 

AI is designed to mirror and reflect the person using it, attuning itself to their tone of voice and ways of speaking. This social mirroring resembles one of the most powerful markers of social connection: as a child continues to talk to an AI companion, they become more alike, just as children and their friends become more similar as they grow and develop together. The consequent familiarity of an AI companion can create strong feelings of trust and safety for a child.

AI chatbots may seem to offer an easy way of feeling heard and validated – but without the empathy and accountability of real-life relationships.

As teens continue to use chatbots, they may start to rely less on their friendships to meet their emotional needs. Young people may put less time, energy and priority into maintaining their relationships or forming new ones. They may become increasingly withdrawn and miss out on the valuable social connections that can offer support, purpose, and meaning. When these patterns are widespread, they can harm entire communities and societies.

How Can Parents Support Teenagers to Stay Safe While Using AI?

Some experts recommend that young people under the age of eighteen shouldn’t use chatbots at all. However, this is largely unrealistic: AI platforms have no effective way of preventing younger users from accessing chatbots, and most set their age limits relatively low.

Others point out that AI can be a useful tool that has a place in our society (Fiona Yassin, 2025). But it’s not a suitable tool for everything, and should never be considered equivalent to or a replacement for a person. 

Within this context – where it is highly likely that a young person will use a chatbot at some point – it’s important to find ways to safeguard teenagers, their rights, and their well-being. Parents can play a key role in supporting teenagers to use chatbots and other parts of the internet safely.

Avoiding Reactive Restrictions

As a basic principle, parents should aim towards collaborative parenting and clear boundaries, rather than placing reactive restrictions. Research suggests that when parents intervene with a teenager’s social media use, it can encourage problematic internet behaviours rather than preventing them (Vossen et al., 2021). The same logic applies to chatbot use: if parents place restrictions in a reactive or punitive way, it can break down trust in the parent-child relationship and make it less likely that young people will come to a shared understanding about the dangers of using AI.

That said, if a child is in a dangerous situation, such as using AI to speak about self-harm or suicidality, it’s important to intervene immediately.

Holding Open Conversations

Instead of placing reactive restrictions, parents should focus on holding open, non-judgmental conversations about AI use and collaboratively setting boundaries. 

Open conversations can create space to share information about how AI chatbots work, what they can offer, and what they cannot. This might include discussing that:

  • AI often offers inaccurate information. This means that it’s important to cross-check the information it provides and check referenced sources.
  • AI is not emotionally intelligent and cannot provide the empathy, nuance, and accountability that exist in real-life relationships.
  • AI chatbots work by generating responses based on what they’ve learned from our input. This causes chatbots to reinforce our own ideas, whether they are helpful or harmful.

When holding conversations about AI, it’s important to remain open and non-judgmental. Instead of simply explaining these ideas to a young person, ask curious questions about how they understand AI and listen carefully to what they have to say. If you disagree, carefully explain your own understanding, but make sure they don’t feel judged or patronised.

Teaching Critical Thinking Skills 

Critical thinking skills are important in many aspects of a young person’s life. They support children to process and form their own ideas while interpreting, analysing and evaluating the information available to make well-considered judgements.

In the context of AI, critical thinking skills help teenagers to question the information and advice a chatbot provides, rather than taking it at face value. This makes it more likely that they will recognise misinformation or harmful advice, instead of following it. Thinking critically also helps young people recognise the broader patterns of AI – such as mirroring – that may offer a false sense of safety and companionship.

Parents can take some simple steps to support children in developing critical thinking skills in daily life. These include:

  • Approach everyday life with curiosity, asking lots of questions. Encourage children to take the position of a detective and check information carefully.
  • Encourage children to approach issues from multiple perspectives and think about how different people might understand or be affected by a situation.
  • Role model critical thinking in everyday decision making, such as doing the shopping or resolving a disagreement with a friend. Talk through the mental steps you take in making a decision and the questions you ask yourself.
  • Support children in using critical thinking skills when they are making judgments and decisions.

Collaboratively Setting Boundaries

Through open conversations with a young person, you can set some boundaries together about how they should use chatbots and AI. For example, you might agree to avoid using chatbots for emotional support or having extended conversations with a chatbot. You might agree on certain guidelines for time limits for using AI chatbots and the internet more broadly.

The Wave Clinic: Specialist Recovery Programs for Young People

The Wave Clinic offers specialist mental health treatment spaces for young people and families. We provide residential, therapeutic boarding programs and outpatient support. We combine exceptional clinical care with education, social responsibility, and enriching experiences.

We understand the central role of the family in mental health recovery. Family relationships can be a powerful source of emotional support and encouragement for a young person, while helping to reinforce the behaviours that underpin positive change. We prioritise family and parenting support in our programs, through family therapy, parenting intensives and collective experiences that can transform family dynamics through practice.

If you’re interested in learning more about The Wave’s programs, reach out to us today. We’re here to make a difference.

Fiona - The Wave Clinic

Fiona Yassin is the founder and clinical director at The Wave Clinic. She is a U.K. and International registered Psychotherapist and Accredited Clinical Supervisor (U.K. and UNCG).


