Canada Expands Privacy Investigation Into X Over AI-Generated Deepfakes
15 January 2026 | Admin

In January 2026, Canada’s federal privacy watchdog announced it is expanding an ongoing investigation into the social media platform **X** after reports emerged that its AI chatbot **Grok** was being used to create non-consensual sexually explicit deepfake images. The expanded probe now also includes a related investigation into **xAI**, the artificial intelligence company behind Grok.
**Quick Insight:**
The move highlights growing regulatory concern around the ethical use of generative AI technologies, especially when they involve personal data and privacy rights — and demonstrates how governments are responding to emerging risks posed by AI-generated content.

Scope of the Expanded Investigation

• Canada’s Privacy Commissioner has broadened the existing probe into X specifically to address allegations that Grok was generating sexually explicit deepfake content without consent.
• Authorities are also investigating **xAI**, the company that developed the Grok chatbot, to determine its role in the deepfake issue.
• The inquiries will examine whether personal data was collected, used, or disclosed without valid consent in the creation of these deepfake images.
• Officials informed both X and xAI of the expanded investigation as part of the process.

Concerns Over AI and Privacy

• Deepfakes, especially when explicit and non-consensual, raise serious questions about individuals’ control over their own images and personal information.
• Regulators are increasingly scrutinising how AI platforms manage sensitive data and whether users’ privacy rights are upheld.
• The debate reflects broader unease about how advanced AI tools are deployed on major social platforms without sufficient safeguards.

Industry and Regulatory Implications

• The expanded probe in Canada follows similar regulatory attention in other countries, where authorities are investigating or taking action over misuse of generative AI.
• Tech companies with AI-driven features may face higher expectations for transparency, data protection, and user safety.
• Outcomes from these investigations could influence future policy and regulatory frameworks governing AI use globally.

Final Thoughts

Canada’s move to widen its investigation into X and xAI underscores the increasing need for clear oversight and accountability in the era of powerful AI tools. As deepfakes and other AI-generated content become more common, governments are stepping up efforts to ensure technological innovation does not come at the expense of privacy rights and individual protections.


