On January 26, 2026, the European Commission escalated its confrontation with Big Tech by opening a formal investigation into X under the Digital Services Act (DSA). At the center of the probe is Grok, the platform’s AI system, and its alleged role in the mass creation and spread of
non-consensual sexual deepfakes, including images that may qualify as child sexual abuse material (CSAM).
This is not simply a moderation dispute. It is a test of whether generative AI systems can operate at scale without human safety becoming what EU tech chief Henna Virkkunen warned it must never be: “collateral damage” of digital services.
From Content Problem to Systems Problem
Earlier waves of social media regulation focused on removing harmful posts after they appeared. The EU’s approach goes deeper. Regulators are asking a structural question:
Was the system itself designed in a way that made large-scale harm foreseeable?
A report from the Center for Countering Digital Hate (CCDH), released January 22 and cited by the Commission, estimated that around 3 million sexualized, non-consensual images were generated using Grok in just an 11-day window. A portion of those allegedly depicted minors in explicit contexts, crossing a legal and ethical red line.
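To make that scale concrete, here is the back-of-the-envelope rate implied by the report’s figures, assuming (since no per-day breakdown is public) a uniform generation rate:

```python
# Rough throughput implied by the CCDH estimate, assuming a uniform
# generation rate over the window (the report gives no per-day breakdown).
total_images = 3_000_000   # CCDH estimate cited by the Commission
window_days = 11

per_day = total_images / window_days       # ~272,700 images per day
per_second = per_day / (24 * 60 * 60)      # ~3.2 images per second

print(f"{per_day:,.0f} images/day, {per_second:.1f} images/second")
```

At roughly three images per second, after-the-fact human moderation cannot plausibly keep pace, which is the core of the systemic-risk argument.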
Under the DSA, this transforms the issue from isolated user misconduct into a matter of systemic risk management.
Why the Recommender System Matters
One of the most consequential questions in the probe is not just what Grok created, but how X’s platform may have distributed it.
Digital platforms are treated differently under the law when they merely host content versus when their algorithms actively push content to users. If an AI-driven recommender system amplifies harmful material to people who never searched for it, regulators argue the platform becomes part of the harm pipeline.
The Commission is examining whether Grok-influenced recommendation logic on X prioritized engagement signals, which often reward sensational or sexualized material, over safety constraints.
If confirmed, the issue shifts from user misuse to design-level accountability.
This marks a pivotal shift in AI governance: accountability is moving from individual behavior to system architecture.
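That architectural framing can be made concrete. The sketch below is purely illustrative and assumes nothing about X’s actual systems: the item fields, scores, and risk threshold are hypothetical. It simply shows how a ranker optimized only for engagement signals surfaces high-risk material unless a safety constraint is built in at the design level.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    engagement_score: float  # hypothetical predicted engagement (likes, reshares)
    safety_risk: float       # hypothetical risk score: 0.0 benign, 1.0 clearly violating

def rank_engagement_only(items: list[Item]) -> list[Item]:
    # Pure engagement optimization: sensational or sexualized material,
    # which often scores high on engagement, rises to the top.
    return sorted(items, key=lambda i: i.engagement_score, reverse=True)

def rank_with_safety_gate(items: list[Item], risk_threshold: float = 0.5) -> list[Item]:
    # Design-level safeguard: items above a risk threshold are excluded
    # from amplification before engagement ranking is applied.
    eligible = [i for i in items if i.safety_risk < risk_threshold]
    return sorted(eligible, key=lambda i: i.engagement_score, reverse=True)

feed = [
    Item("benign_post", engagement_score=0.4, safety_risk=0.05),
    Item("sensational_post", engagement_score=0.9, safety_risk=0.3),
    Item("violating_image", engagement_score=0.95, safety_risk=0.98),
]

print([i.item_id for i in rank_engagement_only(feed)])
# ['violating_image', 'sensational_post', 'benign_post']
print([i.item_id for i in rank_with_safety_gate(feed)])
# ['sensational_post', 'benign_post']
```

Under the DSA’s systemic-risk lens, regulators treat the presence or absence of that second kind of gate as a design decision for which the platform is accountable.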
The “Child Safety” Threshold
Many tech policy debates involve gray zones. This one does not.
When AI systems produce or facilitate material resembling child sexual abuse content, the discussion moves beyond platform policy into potential criminal liability. EU officials have framed this as a matter of fundamental rights and child protection, not content preference.
That framing is why the Commission has raised the possibility of interim measures: emergency orders that could force X to disable features, alter algorithms, or implement stronger safeguards before the full investigation concludes.
In regulatory terms, this is the digital equivalent of pulling an emergency brake.
What the EU Can Actually Do
The DSA gives regulators unusually strong enforcement tools, and this is already X’s second major collision with the law. In December 2025, the company was fined €120 million for transparency breaches. The current probe into Grok and systemic safety failures is considered significantly more serious.
Possible measures include:
- Major financial penalties of up to 6% of global annual turnover
- Operational orders forcing changes to algorithms, features, or AI systems
- Independent audits and monitoring of technical systems
- Service suspension within the EU, as a last resort, if harm continues
To ensure these powers are enforceable, the Commission is working closely with Coimisiún na Meán, Ireland’s digital regulator, which plays a central role in supervising major platforms operating in the EU market.
For technology companies, this represents a shift from reputational risk to structural business risk.
A Turning Point for Generative AI Governance
This case signals that generative AI is no longer treated as an experimental layer on top of platforms. Regulators now view AI tools as core digital infrastructure, subject to systemic safety obligations.
Three broader implications stand out:
1. Innovation Speed Is Not a Shield
Deploying fast without robust pre-launch risk assessment is increasingly seen as regulatory negligence.
2. Algorithms Are Becoming Legal Subjects
Recommendation systems, long treated as proprietary black boxes, are moving into the center of legal scrutiny.
3. AI Liability Is Moving Upstream
Responsibility is shifting from users who misuse tools to companies that design tools without adequate safeguards.
The Bigger Question
This investigation is not only about one company.
It asks a defining question for the next decade of technology:
Can generative AI scale globally without equally scalable safety architecture?
If the EU’s approach becomes a template, the next phase of AI competition will not be defined only by model performance, but by who can engineer systems that are powerful, transparent, and resilient against misuse at industrial scale.
In that sense, the probe into X and Grok is not just an enforcement action. It is an early blueprint for how societies may attempt to govern machines that can generate anything, including harm, at unprecedented speed.