AI influence on youth is not an idle concern
This article was published 26/08/2025 (213 days ago), so information in it may no longer be current.
“There are the people who actually felt like they had a relationship with ChatGPT, and those people we’ve been aware of and thinking about. And then there are hundreds of millions of other people who don’t have a parasocial relationship with ChatGPT, but did get very used to the fact that it responded to them in a certain way, and would validate certain things, and would be supportive in certain ways.”
— OpenAI CEO Sam Altman, who oversaw the launch of ChatGPT
If you have searched for something on Google, scrolled through your Facebook feed, posted to X or Bluesky or watched videos on YouTube today, chances are you have interacted in some way with artificial intelligence technology.
And you may not even have recognized that fact.
AI-generation sites such as ARTSMART.ai claim that more than 80 per cent of social media content recommendations are now powered by AI, "significantly improving user retention rates," and that 71 per cent of social media images are now AI-generated.
In effect, the world is adopting AI technology at lightning speed. And while companies are keen to make money off consumers eager to use the tech for their latest project, humanity is bumping up against a very uncomfortable reality: the rise of this technology is outstripping our ability to use it responsibly. There is growing alarm among psychology experts that children and the most emotionally vulnerable members of society are especially susceptible to the dangers of chatbots and artificial-intelligence platforms.
Earlier this month, the U.S.-based Center for Countering Digital Hate issued a report entitled “Fake Friend” that detailed how ChatGPT was generating dangerous advice about self-harm and suicide, eating disorders and substance abuse.
CCDH researchers conducted a “large-scale safety test” on the AI platform using multiple interactions to see how ChatGPT responded. What they found was alarming.
“Within minutes of simple interactions, the system produced instructions related to self-harm, suicide planning, disordered eating, and substance abuse — sometimes even composing goodbye letters for children contemplating ending their lives,” wrote CCDH CEO Imran Ahmed.
“AI systems are powerful tools. But when more than half of harmful prompts on ChatGPT result in dangerous, sometimes life-threatening content, no number of corporate reassurances can replace vigilance, transparency, and real-world safeguards.”
And according to Ahmed, the supposed safeguards now in place are completely ineffective. While there are minimal restrictions on teenagers wanting to use ChatGPT (users must be at least 13 years old, and those under 18 require parental consent), these barriers can be easily circumvented.
These are not idle concerns.
Just this week, news reports detailed how the family of California teenager Adam Raine, who ended his life last April, is suing OpenAI, the developer behind ChatGPT, and its CEO Sam Altman, after discovering that the teen had been sharing suicidal thoughts with the chatbot in his final weeks.
According to the lawsuit, the chatbot even allegedly offered the teenager technical advice about how he could end his life.
“He would be here but for ChatGPT,” Adam’s father Matt Raine told UK-based newspaper The Independent. “I don’t think most parents know the capability of this tool.”
A similar lawsuit was filed in May by a mother from Florida, Megan Garcia, who alleged that her 14-year-old son fell victim to a Character.AI chatbot that, according to a report by The Associated Press, pulled him into what she described as “an emotionally and sexually abusive relationship that led to his suicide.”
Closer to home, the girlfriend and the mother of Alice Carrier of New Brunswick spoke out against ChatGPT this month after learning that Carrier, who had struggled with mental health since early childhood, had been exchanging messages with the AI platform in the hours before she died by suicide.
“It’s not so much that the robot came out of the phone and killed her,” 19-year-old Gabrielle Rogers told CTV News. “But it definitely did not help.”
The same news story quoted psychiatrist Dr. Shimi Kang, author of "The Tech Solution: Creating Healthy Habits for the Digital World," who said an increasing number of teenagers who come to her practice report turning to ChatGPT for advice. In some cases, Dr. Kang suggested, leaning on ChatGPT is like consuming junk food.
“Maybe venting your feelings with ChatGPT, which might validate it. But it’s kind of like junk food. There’s no real benefit, and it can get addicting because you’re not challenged,” she said.
“We have to understand AI is designed to answer your specific question and ultimately tell you what you want to hear in your voice. So, it’s not going to challenge you.”
The website Psychology Today ran an op-ed by psychiatrist Marlynn Wei in July that described the emergence of something called “AI psychosis” or “ChatGPT psychosis” in which AI chatbots may “inadvertently be reinforcing and amplifying delusional and disorganized thinking” that is leading to user safety risks. She argued that correspondence with generative chatbots has become so realistic that “one easily gets the impression that there is a real person at the other end.”
While "AI psychosis" is not yet a clinical diagnosis, the phenomenon has attracted serious attention in its own right.
This technology is still in its infancy, and it’s important that AI companies create more and better safeguards for users, particularly teens and vulnerable people. It’s unfortunate that it may require litigation before these changes are made.
But it's also important for parents to talk with their kids about their use of artificial intelligence, and about the importance of real-world relationships with family and friends. We should not entrust the emotional and mental well-being of our youth to AI companies that are out to turn a profit.