(Updated: 27.12.2025)

What the evidence demonstrates
A substantial body of research shows that women experience disproportionate levels of gendered harassment on social media platforms. Studies focusing specifically on Twitter, now rebranded as X, demonstrate that women, particularly those in public, political, or professional roles, are more likely than men to be subjected to misogynistic abuse, threats, and hostile engagement (Kumar et al., 2021; Boukemia et al., 2025). This pattern cannot be adequately explained as the behaviour of a small number of individuals. Instead, it reflects structural features of platform design and governance.
Debates concerning women’s safety, free speech, and online harm have intensified since Elon Musk acquired Twitter. Musk has publicly positioned himself as a defender of women, particularly through commentary on UK grooming gangs. However, journalism, policy analysis, and independent research raise questions about how women’s safety is being framed, which forms of harm receive attention, and how platform dynamics under Musk influence gendered abuse online.
This article sets out what can be established as fact, what is supported by peer-reviewed research, and where cautious analysis is required.
Engagement and emotional amplification
Social media platforms are designed to maximise user engagement. Research consistently demonstrates that content which provokes strong emotional responses, particularly anger and outrage, spreads more widely than neutral or evidence-based material (Brady et al., 2021). Engagement-ranked systems tend to prioritise posts that generate replies, quote posts, and extended interaction.
Empirical research examining emotional dynamics on Twitter shows that anger is reinforced through social feedback mechanisms, increasing its visibility and reach within networks (Brady et al., 2021). As a result, polarising and hostile content frequently outperforms calm discussion in terms of circulation.
This dynamic is not unique to misogyny. However, misogynistic content reliably triggers high engagement and therefore benefits from these structural incentives.
Platform incentives and revenue
Higher engagement extends the time users spend on platforms, which in turn increases advertising inventory and subscription value. While X does not publish content-level revenue data, the relationship between engagement and monetisation is well established in analyses of social media business models (Zuboff, 2019; Vaidhyanathan, 2018).
The commercial logic of engagement has been widely described as encouraging material designed to provoke reaction rather than inform. This phenomenon has become known as rage bait. In 2025, rage bait was named Oxford University Press's Word of the Year, reflecting its centrality to contemporary digital culture (Associated Press, 2025).
Within this context, polarising content is not anomalous. It is structurally rewarded.
Governance and moderation on X
Concerns about amplification cannot be separated from governance changes following Musk’s acquisition of Twitter. Policy analysis by the Humboldt Institute for Internet and Society documents multiple changes to X’s content governance framework, including the removal of several misinformation policy categories and revisions to hate and harassment policies (HIIG, 2024).
At the same time, credible reporting confirms significant reductions in trust and safety staffing. Australia's eSafety Commissioner reported that X cut approximately thirty percent of trust and safety staff after Musk's takeover, raising concerns about the platform's capacity to manage harmful content at scale (Associated Press, 2024).
These changes do not demonstrate deliberate promotion of misogyny. They do, however, alter the risk environment in which harmful narratives circulate and persist.
Women’s safety as political rhetoric
In January 2025, commentary published in The Conversation argued that Musk's framing of protecting women relied on selective narratives aligned with far-right political discourse rather than a consistent focus on violence against women and girls (Pearson, 2025). The article noted that the majority of sexual violence against women in the United Kingdom occurs within domestic and familial contexts, a reality largely absent from Musk's public interventions.
This critique does not dispute the seriousness of organised sexual exploitation. Instead, it questions the political selectivity of invoking women's safety primarily through narratives that racialise perpetrators while diverting attention from broader and well-evidenced patterns of male violence against women.
Targeted harm and platform reach
The impact of such rhetoric is not theoretical. In January 2025, UK safeguarding minister Jess Phillips stated publicly that posts made about her by Musk caused her to fear for her personal safety, due to the scale of his following and the hostility that followed his comments (The Guardian, 2025).
Research on online harm consistently shows that women in public life are disproportionately targeted by misogynistic harassment and that high visibility magnifies exposure to abuse and threat (Kumar et al., 2021; Boukemia et al., 2025).
Islamophobia, grooming gangs, and amplification
Independent empirical research provides further context. Reports published by the Center for the Study of Organised Hate in 2025 analysed a large dataset of posts on X relating to grooming gangs in the United Kingdom. The researchers found that highly racialised and Islamophobic narratives dominated engagement and visibility, and that Musk’s own posts accounted for a significant share of attention within the dataset examined (CSOH, 2025a; CSOH, 2025b).
Journalistic investigation by Wired similarly reported that Musk repeatedly posted misleading or exaggerated claims about grooming gangs during this period, contributing to the amplification of misinformation and racially charged narratives (Wired, 2025).
These findings do not require assumptions of malicious intent. They demonstrate that when high-profile users engage with inflammatory framing, platform dynamics can rapidly magnify harm.
Historical context
Backlash against women’s rights is not new. Historical analysis shows that periods of social and legal progress for women are frequently accompanied by resistance and hostility (Faludi, 1991). What distinguishes the current moment is scale. Digital platforms enable hostility to be amplified, replicated, and monetised at unprecedented speed.
Conclusion
The available evidence supports a structural explanation for the prevalence of misogyny on X. Engagement driven systems reward emotionally charged content. Reduced moderation capacity increases the persistence of harmful material. Economic incentives align more closely with outrage than with accuracy.
This does not require assumptions about intent. It reflects the interaction of platform design, governance choices, and commercial incentives.
Misogyny on X is therefore not best understood as a cultural accident. It is an outcome of how contemporary social media systems operate. Women’s safety cannot be meaningfully addressed through selective rhetoric or scapegoating narratives. It requires evidence based approaches, consistency, and accountability within platforms whose structures currently reward polarisation and harm.
References
Associated Press (2024) X Corp has cut large numbers of trust and safety staff, regulators say. Available at: https://apnews.com (Accessed: 27 December 2025).
Associated Press (2025) Rage bait named Oxford word of the year as outrage drives online engagement. Available at: https://apnews.com (Accessed: 27 December 2025).
Boukemia, J. et al. (2025) ‘Misogyny, politics, and social media: determinants of hostile engagement against women parliamentarians on Twitter’, Legislative Studies Quarterly, 50(1), pp. 1–25.
Brady, W.J. et al. (2021) ‘How social learning amplifies moral outrage expression in online social networks’, Science Advances, 7(33).
Center for the Study of Organised Hate (2025a) Elon Musk, X, and the amplification of Islamophobia in the UK. Available at: https://www.csohate.org (Accessed: 27 December 2025).
Center for the Study of Organised Hate (2025b) Racialised grooming gangs: how Musk and X amplified Islamophobia and racism in the UK. Available at: https://www.csohate.org (Accessed: 27 December 2025).
Faludi, S. (1991) Backlash: The Undeclared War Against Women. London: Chatto and Windus.
HIIG, Humboldt Institute for Internet and Society (2024) Policy changes of X under Musk. Available at: https://www.hiig.de (Accessed: 27 December 2025).
Kumar, P. et al. (2021) ‘Mapping violence against women of influence on Twitter’, Journal of Computational Social Science, 4(2), pp. 1–20.
Pearson, E. (2025) ‘Elon Musk and the phoney far-right narrative of protecting women’, The Conversation. Available at: https://theconversation.com (Accessed: 27 December 2025).
Vaidhyanathan, S. (2018) Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. Oxford: Oxford University Press.
Wired (2025) Elon Musk is posting nonstop falsehoods about grooming gangs. Available at: https://www.wired.com (Accessed: 27 December 2025).
Zuboff, S. (2019) The Age of Surveillance Capitalism. London: Profile Books.