Elon Musk’s social media platform X is under investigation by the European Union over concerns that it may be allowing the spread of illegal content, following public backlash over manipulated sexualised images generated by its Grok artificial intelligence chatbot.
The European Commission said on Monday it will examine whether X adequately protected users by properly assessing and addressing risks linked to Grok’s features. The probe follows a separate investigation launched two weeks earlier by the UK media regulator Ofcom, after concerns that Grok was producing sexually explicit deepfake images. Indonesia, the Philippines and Malaysia had also temporarily blocked access to the chatbot.
Earlier this month, the Commission described the circulation of AI-generated images of undressed women and children on X as unlawful and deeply disturbing, echoing widespread international condemnation. EU technology chief Henna Virkkunen said non-consensual sexual deepfakes involving women and children represent a serious and unacceptable form of abuse.
X pointed to a statement issued on 14 January saying its owner, xAI, had restricted image-editing features for Grok users and limited the ability to generate images of people in revealing clothing in jurisdictions where such content is illegal. The company did not specify the countries involved. The Philippines and Malaysia later restored access to the chatbot after xAI introduced additional safety measures.
The Commission’s action falls under the EU’s Digital Services Act, which requires large technology companies to take stronger steps to curb illegal and harmful online content. Breaches of the law can result in fines of up to 6% of a company’s global annual revenue.
A senior Commission official said that while the recent safeguards were welcome, they do not fully address the broader risks, adding that X may not have carried out a proper risk assessment when introducing Grok’s features in Europe. Virkkunen said the investigation will determine whether X complied with its legal obligations or failed to adequately protect the rights of European users, including women and children.
European lawmaker Regina Doherty said the case highlights broader shortcomings in how AI regulation is enforced, stressing that legislation must be adaptable and capable of responding quickly when serious harm occurs.
Separately, EU regulators have expanded an investigation launched in December 2023 into whether X has sufficiently managed systemic risks tied to its recommendation systems, including the platform’s recent shift to a Grok-based model. Regulators warned that X could face interim measures if meaningful improvements are not made. The company was fined €150 million in December for breaching transparency rules under the Digital Services Act.