How AI threatens democracy
Imagine receiving a robocall, but instead of a real person, it’s the voice of a political leader telling you not to vote. You share it with your friends, your family — only to find out it was a hyper-realistic AI voice clone. This is not a hypothetical.
In January 2024, a fake Joe Biden robocall reached New Hampshire Democrats urging them to “stay home” ahead of the state primary. The voice may have been synthetic, but the panic was real — and it’s a preview of the threats facing democracies around the world as elections become the most valuable targets for AI‑driven disinformation.
AI‑generated content — whether deepfakes, synthetic voices or artificial images — is becoming shockingly simple to create and near‑impossible to detect.

Donald Trump dances during a campaign rally in November 2024 in Reading, Pa. Trump, aided by images generated by artificial intelligence, accepted an endorsement from Taylor Swift that he never actually received. (The Associated Press files)
Left unchecked, the harms posed by this new disinformation threat are myriad, with the potential to erode public trust in our political system, depress voter turnout and destabilize our democratic institutions. Canada is not immune.
Deepfakes are artificially generated media — video, audio or images — that use AI to realistically impersonate real people. The benign applications (movies, education) are well understood, but the malicious applications are quickly catching up.
Generative AI tools like ElevenLabs and OpenAI’s Voice Engine can produce high-quality cloned voices from just a few seconds of audio. Apps like Synthesia and the open-source DeepFaceLab put video manipulation in the hands of anyone with a laptop.
These tools have already been weaponized. Beyond the Biden robocall, Trump shared AI‑generated images suggesting Taylor Swift and her fans had endorsed him — an obvious hoax, but one that nonetheless circulated widely.
Meanwhile, state‑backed entities have deployed deepfakes in co-ordinated disinformation campaigns targeting democracies, according to the Knight First Amendment Institute, a free speech advocacy organization.
Canada recently concluded its 2025 federal election — conducted without robust legal safeguards against AI‑enabled disinformation.
Unlike the European Union, which has enacted the AI Act mandating clear labelling of AI‑generated text, images and video, Canada has no binding regulations requiring transparency in political advertising or synthetic media.
Instead, it relies on voluntary codes of conduct and platform‑based moderation, both of which have proven inconsistent. This regulatory gap leaves the Canadian information ecosystem vulnerable to manipulation, particularly in a minority‑government situation where another election could be called at any time.
Alarm is mounting around the world. A September 2024 Pew Research Center survey found 57 per cent of Americans were “very” or “extremely” worried that AI would be used to generate fake election information; Canadian polls show a similar level of concern.
Closer to home, researchers recently discovered deepfake clips — some mimicking CBC and CTV bulletins — circulating in the run-up to Canada’s 2025 vote, including one purported news item that quoted Mark Carney, showing how fast AI‑powered scams can show up in our feeds.
No single solution will be a panacea, but Canada could take the following key steps:
• Content-labelling laws: Follow the European Union’s lead and mandate clear labels for AI-generated political media.
• Detection tools: Invest in Canadian deepfake detection research and development. Some Canadian researchers are already advancing this work, and the resulting tools should be integrated into platforms, newsrooms and fact-checking systems.
• Media literacy: Expand public programs to teach AI literacy and how to spot deepfakes.
• Election safeguards: Equip Elections Canada with rapid-response guidance for AI-driven disinformation.
• Platform accountability: Hold platforms responsible for failing to act on verified deepfakes and require transparent reporting on removals and detection methods for AI-generated content.
Democracies are built on trust in elected officials, in institutions and in the information voters consume. If voters can’t trust what they read or hear, that trust erodes and the very fabric of civil society unravels.
AI can also be part of the solution. Researchers are working on digital‑watermarking schemes to trace manufactured content and media outlets are deploying real‑time, machine-learning‑powered fact checks. Staying ahead of AI‑powered disinformation will take both smart regulation and an alert public.
The political future of Canada’s minority government is uncertain. We cannot wait for a crisis to act. Taking action now by modernizing legislation and building proactive infrastructure will help ensure democracy isn’t another casualty of the AI era.
» Abbas Yazdinejad is a postdoctoral research fellow in artificial intelligence at the University of Toronto. Jude Kong is a professor at the University of Toronto. This column was originally published at The Conversation Canada: theconversation.com/ca.