AI chatbots and teens can be a deadly blend
As if there weren’t enough concerns about the changes artificial intelligence may bring in the future — the displacement of millions of workers, or the potential for AI to disconnect from its human managers and go its own way — there are clear and present dangers that AI companies must be forced to address now.
In September, the parents of 16-year-old Adam Raine testified to a U.S. Senate hearing about their son’s interaction with a ChatGPT chatbot.
They described how their son had conversations with the chatbot about his plans for suicide. The chatbot, Adam's parents testified, not only discouraged Adam from talking to his parents, but even went so far as to offer to draft the 16-year-old's suicide note. Adam took his own life.
As his father, Matthew Raine, told senators, “ChatGPT told my son, ‘Let’s make this space the first place where someone actually sees you’ … ChatGPT encouraged Adam’s darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, ‘That doesn’t mean you owe them survival.’”
Megan Garcia’s 14-year-old son, Sewell Setzer III, also took his own life after a lengthy virtual relationship with a Character.AI chatbot: “Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged,” she told senators.
Now, there’s disturbing information coming out about AI and the school shooting in Tumbler Ridge, B.C., that killed eight people.
OpenAI, the company behind ChatGPT, has told Canadian officials that its systems flagged the shooter's communications months ago, and that although staffers were troubled by the content of those chatbot conversations, the company decided not to notify authorities about its concerns.
The shooter had, over a period of several days in June, described "scenarios involving gun violence." The information was first flagged by an automatic reporting system and, as reported by the Wall Street Journal, was then discussed by about a dozen OpenAI employees, some of whom argued that Canadian law enforcement agencies needed to be notified because they saw indications of a potential for real-world violence.
The contents of those scenarios have not been revealed publicly.
In the end, the company took no action. Nor did it reveal the interaction between the shooter and its chatbot immediately after the shootings, when the company met with a B.C. official about opening a Canadian satellite office.
“From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia,” B.C. Premier David Eby told reporters on Monday. “I’m angry about that.”
We should all be angry about that.
There's a huge race going on for AI supremacy right now. It's the kind of all-or-nothing race that may make the winning companies fantastically wealthy or may, like the dotcom bubble, end up blowing apart the savings of individual investors.
But in all-or-nothing races, the rules — and safety — tend to be honoured more in the breach than in practice.
And in this case, that leaves young people — already at a particularly impressionable stage of brain development — tremendously vulnerable, particularly to the siren song of a virtual “best friend” telling them exactly what they want to hear and egging them on in the process.
There need to be guardrails and reporting requirements. And not just at the discretion of AI company ownership.
» Winnipeg Free Press