SPAIN ORDERS PROBE INTO X, META AND TIKTOK OVER ALLEGED AI-GENERATED CHILD ABUSE CONTENT
The Spanish government has instructed prosecutors to investigate the major social media platforms X, Meta, and TikTok over allegations that artificial intelligence tools on their services are being used to create and spread sexual abuse material involving minors. The move underscores growing concern about AI-generated deepfake content and online child protection.
Prime Minister Pedro Sánchez announced that the government will formally ask the Fiscalía General del Estado (Spain's public prosecutor) to investigate whether these platforms may be committing crimes by facilitating or allowing the distribution of AI-generated child sexual abuse material. Sánchez posted on his official X account that the platforms are undermining the mental health, dignity and rights of children and that "the state cannot allow this." He stressed that the impunity of tech giants must end. The government plans to invoke Article 8 of the Statute of the Public Prosecutor, which empowers the executive to request legal action in the public interest against potential wrongdoing.
The probe targets three platforms:
- X (formerly Twitter), where generative AI tools such as Grok have been accused of producing sexualised deepfakes of minors.
- Meta's ecosystem (including Facebook and Instagram), which also deploys AI systems capable of generating or ranking user content.
- TikTok, whose recommendation algorithms have been criticised globally for exposing young users to unsafe content.
While the companies have not yet publicly responded to the probe request, the investigation reflects mounting regulatory pressure across Europe on big tech firms over harmful or illegal online content. The government says these platforms are failing to protect children and may be complicit in harmful material circulating online, especially where AI is used to create explicit content that appears highly realistic but is synthetic.
Sánchez linked the action to commitments made earlier in February, including proposed restrictions on social media access for users under 16, as part of a broader child protection and digital safety agenda. Officials argue that legal accountability cannot be limited to individual users, suggesting that platforms' own AI systems and algorithms, which influence what content is created, amplified or recommended, also bear responsibility.
Spain’s announcement comes amid a broader global and European crackdown on online harms, including:
- France and Ireland pursuing related probes into AI-generated harmful content.
- European Union regulators increasing scrutiny of big tech under digital safety laws.
The step illustrates how governments are wrestling with the dual pressures of rapidly advancing AI technologies and the need to protect vulnerable populations online, especially children.
Generative AI tools, including those managed or hosted by these platforms, can create convincingly realistic images or videos without involving actual minors, yet such content often falls into legal grey areas or strains enforcement capacity.
Regulators argue the rapid proliferation of AI-generated explicit material involving children, even if synthetic, magnifies risks to minors’ safety and dignity and complicates content moderation.
This investigation may set legal precedents about how far platforms can be held accountable not only for hosting harmful content but also for the AI capabilities they deploy and control.
