Real or fake? Researchers to develop tool that would help courts spot AI evidence
TORONTO – As artificial intelligence makes it easier for anyone to doctor – or even fabricate – videos, photos and other types of evidence, a group of researchers in Canada is aiming to help the courts sort through what’s real or fake.
The team, which includes technologists and legal scholars at universities in Ontario and British Columbia, plans to spend the next two years creating a free, open-source and easy-to-use tool that courts, people navigating the justice system and others can use to help detect AI-generated content.
Courts are ill-prepared for the rise in AI-generated content, and the current roster of commercial detection tools is opaque and largely unreliable, often producing false positives and showing bias against non-native English speakers, said Maura Grossman, one of the project’s co-directors.
Meanwhile, hiring experts to authenticate evidence is costly and can slow down the court process, the team said.
In court, the stakes are particularly high, said Grossman, a computer science professor at the University of Waterloo who also teaches at York University’s Osgoode Hall law school.
“If I’m going to convict you because of something being fake or real, I have to have a high degree of confidence in the tool’s accuracy, validity and reliability,” she said in an interview this week.
“It can’t just be a black and white decision. It has to tell me how confident it is in its decision, and it has to be transparent enough to tell me why it’s made the decision it’s made.”
The project is one of two inaugural research initiatives funded through the Canadian Institute for Advanced Research’s Canadian AI Safety Institute program, launched by the federal government last year as part of its broader AI safety strategy. Each project is set to receive $700,000 over two years.
Though the research team is based in Canada, its advisory board – which includes judges and organizations supporting self-represented litigants – is spread across North America, and the goal is to build a tool that can be used throughout the continent, Grossman said.
One of the biggest challenges will be keeping up with the rapid and ongoing evolution of generative AI, said Yuntian Deng, an assistant professor at the University of Waterloo and a visiting professor with the tech giant NVIDIA, whose role in the project will focus on the technical aspects.
“This is not (the kind of project where)… we just kind of do it once and then you can just sit back and relax and never kind of touch on it anymore,” Deng said.
“Given that the technology of AI generations is moving so fast, I think developing the techniques to counter that will also be a continuous process. So it’s basically like hitting a moving target, and the better AI is at generating fake text, images and videos … the more work we need to put into detecting them.”
While the team can develop a “minimum viable product” in two years, that tool won’t be a permanent solution, said Grossman, adding that she’s already considering the need to seek alternate funding once the grant runs its course.
People who bring AI-generated videos into family court aren’t using sophisticated technology to make them; they’re using their phones and whatever free program they can find online, she said.
“So our thinking was, even if we don’t build the perfect tool, but we build a tool that can detect that kind of stuff, that will put the courts ahead of the game for much of the stuff that they have to deal with,” she said.
“It may not deal with the most sophisticated criminal mischief, but it would be helpful for your average family court case or for a criminal case, perhaps.”
This report by The Canadian Press was first published Dec. 5, 2025.