AI-aided sexual violence shows need for safeguards
The new image and video editing feature for xAI’s chatbot, Grok, has generated thousands of non-consensual, sexually explicit images of women and minors since xAI announced the feature on Christmas Eve, promoting it as a way to add Santa Claus to photos.
The growing ease of perpetrating sexual violence with novel technologies reflects the urgent need for tech companies and policymakers to prioritize AI safety and regulation.
I am a PhD candidate in public health. My research focuses on the intersection of gender-based violence and health, and I have previously worked on teams that leverage AI as a tool to support survivors of violence. The potential and actual harms of AI on such a wide scale require new regulations to protect the health of whole populations.
‘NUDIFYING’ APPS
Sexually explicit “deepfakes” have been a subject of public debate for some time. In 2018, Reddit threads drew public attention to machine learning tools being used to face-swap celebrities such as Taylor Swift onto pornographic material.
Other AI-powered “nudifying” programs could be found in niche corners of the internet. Now, this technology is at anyone’s fingertips.
Grok can be accessed either through its website and app or on the social media platform, X. Some users have noted that when prompted to create pornographic images, Grok says it’s programmed not to do this, but such apparent guardrails are being easily bypassed.
xAI’s owner, Elon Musk, released a statement via X that the company takes action against illegal content on X by removing it, “permanently suspending accounts, and working with local governments and law enforcement as necessary.”
However, it’s unclear how or when these policies will be implemented.
THIS IS NOTHING NEW
Technologies have long been used as a medium for sexual violence. Technology-facilitated sexual violence encompasses a range of behaviours in which digital technologies enable both virtual and face-to-face sexual harms. Women, sexual minorities and minors are most often victimized.
One form of this violence that has received significant attention is “revenge porn,” the non-consensual distribution of a person’s intimate images and videos on the internet.
Victims have reported lifelong mental health consequences, damaged relationships and social isolation.
Some social media websites have policies forbidding the distribution of non-consensual intimate content and have implemented mechanisms for reporting and removing such content.
Search engines like Google and Bing will also review requests to remove links from search results that violate their personal content policies. Canada has criminalized “revenge porn” under the Criminal Code; the offence is punishable by up to five years in prison.
Similar to revenge porn, victims of deepfakes have reported mental distress, including feelings of helplessness, humiliation and embarrassment, while some have even been extorted for money.
Creators of sexually explicit deepfakes have also targeted prominent female journalists and politicians as a method of cyberbullying and censorship.
NOW WHAT?
This latest Grok controversy reflects a predictable major lapse in AI safeguards. Prominent AI safety experts and child safety organizations warned xAI months ago that the feature was “a nudification tool waiting to be weaponized.”
On Jan. 9, xAI responded by moving the image-editing feature behind a subscription for X users (though it can still be accessed for free on the Grok app) and stopped Grok from automatically posting generated images in the comments.
However, X users are still generating sexualized images with the Grok tab and manually posting them onto the platform.
Some countries have taken action to block access to Grok.
LOOKING TO THE FUTURE
This isn’t the first time, nor will it be the last, that a tech company demonstrates such a major lapse in judgment over its product’s potential for user-perpetrated sexual violence.
Canada needs action that includes:
1. Criminalize the creation and distribution of non-consensual sexually explicit deepfakes.
Legal scholars have advocated for the criminalization of creating and distributing non-consensual sexually explicit deepfakes, similar to existing “revenge porn” laws.
2. Regulate AI companies and hold them accountable.
Canada has yet to pass any legislation to regulate AI, with the proposed Artificial Intelligence and Data Act and Online Harms Act dying when Parliament was prorogued in January 2025.
Canada’s AI minister referenced these gaps in his response to the Grok controversy, but the response lacks a concrete timeline and a sense of urgency.
As AI progresses, major regulatory action is needed to prevent further sexual violence. Tech companies need to put their AI products through thorough safety checks, even if it means slowing down development.
The controversy also raises questions about who should be held responsible for harms caused by an AI system’s outputs.
Three American senators have called on Apple and Google to remove Grok from their app stores for its clear policy violations, citing recent examples of both companies promptly removing apps from their stores.
3. Expand the scope of sexual violence social services to support those affected by non-consensual sexually explicit deepfakes.
As the perpetration of sexual violence via AI technologies becomes more prevalent, sexual violence organizations can expand their scope to support those affected by non-consensual sexually explicit deepfakes.
They can do so by leveraging existing services, including mental health care and legal supports.
4. Dismantle the underlying rape culture that perpetuates these forms of violence.
The root of sexual violence is the dominance of rape culture, which is fostered in online environments where sexualized abuse and harassment are tolerated or encouraged.
Dismantling rape culture requires holding perpetrators accountable and speaking out against attitudes that normalize such violence.
» Kyara Liu is a PhD candidate in public health at the University of Toronto. This column was originally published at The Conversation Canada: theconversation.com/ca