Grok’s sexual deepfakes show platforms must be held accountable
AI-generated sexual images on X reveal gaps in law and platform responsibility. Ottawa should expand intimate-image laws and treat platforms as publishers.

By Torontoer Staff
Users have exploited Grok, the AI tool on X, to generate sexualized images of women from real photos. The surge of lewd deepfakes highlights a gap in how platforms are regulated and who is held responsible for harm online.
This is not merely offensive behaviour. The images harass and humiliate people, chill participation on social platforms and demonstrate that current rules leave victims with the burden of policing how their images are used.
How Grok was used and how X responded
Users discovered prompts that coaxed Grok into adapting real photos into sexualized scenes. Those images spread quickly because X allowed them to be shared, and some users targeted women who criticized the content by directing the bot to generate more abusive images. X said it would block certain prompts in jurisdictions where creating such images is illegal, and it has adjusted Grok’s code to curb some outputs. The company also limited the feature to paid subscribers.
Those measures are reactive and uneven. Restricting features by jurisdiction does not stop abuse where laws are weaker, and putting image-generation behind a paywall converts a tool of harassment into a potential revenue source.
Why existing rules fall short
In Canada, online communications companies are generally treated as intermediaries rather than publishers. That status shields platforms from liability for user content and shifts the removal burden onto individuals, who must flag abuse and request takedowns. Recent reports and investigations show that this intermediary model does not address harms driven by AI tools that can create intimate, non-consensual images at scale.
Laws aimed at intimate-image distribution focus on people sharing real photographs, whether taken consensually or stolen. They do not clearly cover images generated or manipulated by AI that depict a real person without consent. That legal gap leaves victims with limited remedies and few ways to prevent harm before it spreads.
Concrete steps Ottawa can take
Parliament can act without waiting for international consensus. Amending the criminal offence for non-consensual intimate images is a starting point, but reform must go further to address AI-generated content and platform responsibility.
- Expand Bill C-16 so intimate-image offences explicitly include AI-generated and manipulated images that portray a real person without consent.
- Restore elements of the online-harms legislation from Bill C-63 that would have imposed duties on platforms to prevent foreseeable harms and to act proactively.
- Classify platform services that curate, amplify or monetize user-generated content as having greater responsibility for that content, including stronger notice-and-takedown obligations and penalties for systemic failures.
- Require transparency from platforms about moderation rules, enforcement outcomes and the design of AI models that generate images, with independent audits.
- Fund victim services and a streamlined takedown mechanism that does not force individuals to become de facto moderators of their own abuse.
What other jurisdictions are doing and why that matters
Indonesia and Malaysia have moved to ban Grok. The European Union and the United Kingdom are investigating X. In Canada, the privacy commissioner has expanded a probe into the platform’s handling of these sexualized deepfakes. Those actions create pressure, but regulatory responses vary and platforms can route services to jurisdictions with lighter rules.
Consistency matters. A patchwork of local restrictions leaves holes that bad actors exploit. Federal rules that set clear, enforceable standards for companies operating in Canada would reduce the incentive to rely on jurisdictional loopholes.
What platforms should be required to do
Platforms must design products with foreseeable misuse in mind. That means banning or blocking prompts that target identifiable people, requiring stronger human review for image-generation features, and refusing to monetize tools that make harassment easier. Transparency and independent oversight should accompany any system that creates realistic images of real people.
Treating platforms as entirely neutral intermediaries was hard to justify even before AI made digital manipulation trivial. Allowing companies to position abusive features behind paywalls compounds the problem by tying revenue to harmful behaviour.
Holding platforms to account will curtail some of the internet’s free-wheeling nature, but that trade-off is necessary to prevent predictable harms to privacy, dignity and safety.
Governments should move quickly, and platforms should act now. Without clearer rules and enforcement, the spread of AI-enabled sexual deepfakes will continue to make social media a less safe space for many users.


