
An Garda Síochána is currently conducting 200 active investigations into child sexual abuse material that has been generated using Grok, the artificial intelligence tool integrated into Elon Musk’s social media platform X, Gardaí have confirmed. The scale of the investigations was revealed during a Joint Oireachtas Committee on Arts, Media, Communications, Culture and Sport hearing this week.
Detective Chief Superintendent Barry Walsh, head of the Garda National Cyber Crime Bureau, told lawmakers that all 200 cases relate to content distributed on X in which Grok generated sexually explicit images, including material suspected of depicting child sexual abuse or content indicative of such abuse.
Grok is an AI chatbot built into the X platform, which recently rolled out an “edit image” function allowing users to alter uploaded photos. Within days of the feature’s launch, Gardaí said, users began prompting the AI to digitally “undress” people in images — a practice that rapidly led to the creation and distribution of highly disturbing and illegal content, including deepfake-style imagery of minors.
The investigations come amid a sharp year-on-year rise in referrals of child sexual abuse material to An Garda Síochána. Walsh told the committee that in 2025 there were around 25,500 referrals requiring assessment by Irish police, almost double the roughly 13,300 recorded in 2024. Some referrals relate to traditional child sexual abuse imagery, while others involve material generated or manipulated using AI tools such as Grok.
Under existing Irish law, AI-generated sexualised images are treated as illegal if shared without consent, including material involving children. The provisions of Coco’s Law — the Harassment, Harmful Communications and Related Offences Act 2020 — apply to the sharing of non-consensual intimate images, and Gardaí have clarified that the law covers AI-created content in the same way as other non-consensual imagery.
The controversy has prompted strong reactions from politicians and ministers. Minister of State for Artificial Intelligence Niamh Smyth has said that if X does not comply with Irish law in dealing with AI-generated sexual imagery, it should face significant consequences, including a potential ban in Ireland. She emphasised that creating or distributing such content is a criminal offence under Irish and European law.
Taoiseach Micheál Martin has also taken the issue seriously, convening meetings with the Attorney General to examine how the State should respond. Both Mr Martin and Tánaiste Simon Harris have stressed that non-consensual and abusive content is illegal and should be treated as such, urging victims and witnesses to report suspected offences to Gardaí.
The controversy extends beyond Ireland. In the United Kingdom, authorities are moving to criminalise the creation of non-consensual sexualised images, including deepfakes generated by AI tools, under new legislation due to come into force in early February 2026. Regulators there, including Ofcom, have opened formal investigations into X’s compliance with safety laws after allegations that the platform allowed such material to persist.
Ahead of legislative meetings next week, Irish ministers are expected to discuss whether further regulatory action and enforcement mechanisms are needed to tackle the misuse of generative AI tools and protect children and other vulnerable groups from harm.