
Recent revelations surrounding the misuse of X’s AI tool, Grok, have reignited debate about whether the UK should go as far as banning the platform. Users exploited Grok’s image-generation feature to create non-consensual sexualised images of real people, including celebrities and politicians. Reporting has also confirmed the generation of sexualised AI images involving minors, which is unequivocally illegal under UK law.
This is not a marginal issue of online offence. It raises serious questions about consent, safeguarding, platform responsibility, and how emerging AI technologies are regulated when harm is foreseeable and scalable.
What actually happened
Investigations by international media, including Reuters, confirmed that Grok was used to generate explicit images of identifiable individuals using simple prompts. Following public and political backlash, X restricted aspects of Grok’s image functionality.
Under UK law, the creation or possession of sexual images of children, including AI-generated images, is illegal under the Protection of Children Act 1978 and subsequent amendments. Non-consensual sexual imagery of adults is also a criminal offence. These legal standards apply regardless of whether content is created by a human or an AI system.
The concern here is not the isolated misuse of a tool by a handful of users, but its deployment at scale without adequate safeguards, where harmful use was predictable.
Regulatory response
In response to these events, the Prime Minister directed Ofcom to assess whether X is complying with the Online Safety Act. This legislation was designed specifically to address systemic online harms, including those arising from platform design and emerging technologies.
The Act places statutory duties of care on platforms, particularly in relation to:
- Preventing illegal content
- Protecting children
- Assessing and mitigating foreseeable risks arising from product features and algorithms
Ofcom has confirmed that platforms cannot rely solely on blaming users where harm is enabled or amplified by design choices.
Does this justify banning X?
A nationwide ban on a major communications platform would be an extraordinary step and, at present, would struggle to meet the test of proportionality in a liberal democracy. The UK’s regulatory framework is deliberately structured to prioritise compliance and enforcement before prohibition.
Under the Online Safety Act, Ofcom has significant powers, including:
- Fines of up to 10 per cent of global turnover
- Mandatory changes to platform design and moderation systems
- Feature restrictions
- Platform blocking only as a last resort
These measures are intended to ensure corporate accountability, rather than collective punishment of users.
X’s defence falls short
X has argued that responsibility lies with users who input illegal prompts. However, UK regulatory guidance makes clear that this defence is insufficient where platforms deploy tools that make harm foreseeable, repeatable, and difficult to contain.
The fact that Grok’s image features were restricted only after public and political pressure strongly suggests that the risks were known, or should reasonably have been anticipated. Under the Online Safety Act, failure to act on such risks is itself a regulatory concern.
Feminist digital ethics research reinforces this position. A recent systematic review of AI-generated sexual imagery concludes that non-consensual deepfake content constitutes a form of digital sexual violence, disproportionately affecting women and girls. The authors argue that platform governance models relying on “user responsibility” consistently fail to prevent harm, and that platform-level regulation and enforceable duties are essential where misuse is foreseeable (Ma’arif et al., 2025).
Free speech is not a shield for negligence
Freedom of expression is a cornerstone of democratic society, but it does not extend to the mass production of non-consensual sexual imagery, nor does it absolve corporations of safeguarding responsibilities. Framing this issue as “censorship versus free speech” is a category error. The real distinction is regulation versus abdication.
The political contradiction
It is also worth noting that Nigel Farage publicly campaigned against the Online Safety Act, framing it as censorship and state overreach. He objected in particular to platform duties of care, child-safety obligations, and Ofcom’s enforcement powers.
Yet these are precisely the legal tools now being relied upon to address AI-enabled sexual abuse on X. You cannot oppose regulation, weaken enforcement mechanisms, and then demand action when predictable harms materialise. That contradiction matters.
The real test
If X complies with UK law, implements robust safeguards, and meaningfully reduces harm, a ban will remain unnecessary. If it does not, the law already provides clear escalation routes.
The question is not whether Britain should silence platforms, but whether platforms operating in Britain should be allowed to ignore the law with impunity.
Conclusion
Hard regulation is justified. Enforcement is overdue. The protection of children and the principle of consent must take precedence over tech exceptionalism. A blanket ban should remain a last resort, but accountability cannot be optional.
If platforms wish to operate in the UK, they must meet UK standards. That is not authoritarianism; it is governance.
Resources
BBC News (2024) X restricts Grok AI image tool after abuse concerns. Available at: https://www.bbc.co.uk/news (Accessed: 8 January 2026).
Criminal Justice and Courts Act 2015. (UK). Available at: https://www.legislation.gov.uk/ukpga/2015/2/contents (Accessed: 8 January 2026).
Ma’arif, A., Rahman, F., Hidayat, R. and Putri, S. (2025) ‘Social, legal, and ethical implications of AI-generated deepfake pornography on digital platforms: A systematic literature review’, Social Sciences & Humanities Open, 10, 101882. https://doi.org/10.1016/j.ssaho.2025.101882
Ofcom (2024) Online Safety Act: Draft codes of practice on illegal content and child safety. London: Ofcom. Available at: https://www.ofcom.org.uk/online-safety (Accessed: 8 January 2026).
Ofcom (2024) Risk assessment duties under the Online Safety Act. London: Ofcom. Available at: https://www.ofcom.org.uk/online-safety (Accessed: 8 January 2026).
Online Safety Act 2023. (UK). Available at: https://www.legislation.gov.uk/ukpga/2023/50/contents (Accessed: 8 January 2026).
Protection of Children Act 1978. (UK). Available at: https://www.legislation.gov.uk/ukpga/1978/37 (Accessed: 8 January 2026).
Reuters (2024) X’s Grok AI used to generate explicit images, prompting backlash. Available at: https://www.reuters.com (Accessed: 8 January 2026).
UK Government (2023) Online Safety Act: Explanatory notes. London: HM Government. Available at: https://www.gov.uk/government/publications/online-safety-act-explanatory-notes (Accessed: 8 January 2026).