Why forcing AI firms to report online threats may not be simple

A cybersecurity law expert says Canada could introduce laws requiring artificial intelligence companies to notify police of online threats, but the process would not be a simple one, since reporting every suspicion is "just not workable."

Emily Laidlaw, a Canada Research Chair in cybersecurity law at the University of Calgary, said every AI company sets its own policy on when to inform police about what happens online. She said Canada considered introducing laws in the past but did not follow through.

The issue is under scrutiny again in the wake of the mass killings in Tumbler Ridge, B.C., by a shooter who was banned by OpenAI from its ChatGPT platform at least seven months ago.

Residents hug as they place flowers at a memorial for the victims of a mass shooting in Tumbler Ridge, B.C., on Thursday, Feb. 12, 2026. THE CANADIAN PRESS/Christinne Muschi

OpenAI did not inform police about the problematic behaviour of Jesse Van Rootselaar until after the Feb. 10 killings. The firm has been called to Ottawa to meet on Tuesday evening with federal Artificial Intelligence Minister Evan Solomon, Public Safety Minister Gary Anandasangaree and Culture Minister Marc Miller to explain its safety procedures and how it makes decisions.

The company banned Van Rootselaar’s account in June but said the activities didn’t meet the threshold for informing law enforcement at the time because they didn’t identify credible or imminent planning.

The Wall Street Journal reported Friday that Van Rootselaar’s account was banned over troubling posts, including some that described scenarios of gun violence.

Speaking to reporters before the Liberal cabinet meeting on Tuesday, Solomon said he does not know details of the shooter’s posts and that he is not seeking that information from the company.

“We are not talking about any details of the case. It’s a criminal investigation,” he said on his way into a cabinet meeting on Parliament Hill.

He added that he wants to understand how the company’s safety protocols and technology work. He would not say whether the federal government intended to regulate AI chatbots like ChatGPT.

“Our response is, all options are on the table when it comes to understanding what we can do about AI chatbots,” Solomon said. 

B.C. Premier David Eby said Tuesday that the federal government needs to create a reporting threshold for when AI companies must report to law enforcement.

“There needs to be a clear and transparent threshold that protects the companies. They are protected in terms of any privacy concerns. They’re just following the law. But most importantly, it protects Canadians,” said Eby, who added that he had asked for his own meeting with OpenAI representatives.

Asked if he thought the families of the Tumbler Ridge victims had grounds for a class-action lawsuit, Eby said there is an open question about what’s possible, but the families aren’t thinking about that right now.

He said in the wake of the tragedy he spoke with a father who walked him through the last moments of his child’s life.

“I want (the company) to know that. I want them to hear that from me. I want them to meet with the families. I want them to look in the eyes of these families and tell them why they made the call they did. And ultimately I want British Columbians to know what they knew,” he said.

Asked how the government can determine if things need to change without knowing details of the case, federal Justice Minister Sean Fraser said law enforcement is gathering that information and “there may be an opportunity to review what specifically took place” in the Tumbler Ridge case. 

“That’s the kind of systemic information that we need to understand: what conversations are taking place that law enforcement is currently blind to, that would be very informative, that would help us prevent tragedies in the future,” he said. 

Laidlaw said any legislation would have to be drafted narrowly so that it protects the privacy of online users but requires police be informed if there is a deep concern about threats to safety.

She said it appears OpenAI was concerned about Van Rootselaar, and while there may have been no indication the threat was imminent, it might have been concerned about a general threat.

“And what we want to see is, if you have real concerns that there is a threat to the safety and security of people, even if it’s not imminent, that’s something law enforcement should investigate,” she said.

“So how do we write that as an appropriate law that doesn’t just open up the floodgates that any possible suspicion is required to be sent to the police? Because that’s just not workable.”

In 2021, when the federal government was considering online safety legislation, some suggested introducing a requirement to report to law enforcement, but after significant pushback it was not acted on, Laidlaw said.

“You can’t have every possible suspicion for any type of behaviour reported to law enforcement. They don’t have the capacity to receive all of that and it also means you start capturing a whole bunch of behaviour that isn’t necessarily problematic,” she said.

The Liberal government confirmed last month that it was working on new legislation to address online harms.

In 2024, the government introduced rules that would have required social media companies to explain how they plan to reduce the risks their platforms pose to users, and would have imposed on them a duty to protect children. Those rules never became law before the 2025 election was called.

“I think there is the need to have legislation to make sure that platforms are behaving responsibly, but what that looks like is still to be determined,” Miller, whose department is leading the development of online harms legislation, said Tuesday.

Laidlaw said a reporting provision should be on the table.

“But what I don’t want is this to be viewed as just something so easy that was overlooked, because in fact it’s really hard to write this appropriately without all kinds of knock-on effects,” she said.

—With files from Sarah Ritchie in Ottawa

This report by The Canadian Press was first published Feb. 24, 2026. 
