How AI-generated images fuel anti-Muslim, anti-migrant sentiment online

Academic research has warned that the continued normalisation of anti-Muslim stereotypes and racist conspiracy theories through AI-generated material on social media has pushed some people towards targeting hotels with protests in recent weeks, according to a report on the research in The Times.

The London School of Economics study analysed 622 posts from prominent far-right sources and found that visualisations of racist conspiracies gained around 30 per cent more amplification than other content.

The academic submission was published online as evidence to a House of Commons Home Affairs Committee inquiry into new forms of extremism, and it also drew attention to forms of racialised dehumanisation and algorithmic bias.

The submission contrasted GenAI-generated images that depicted Muslim men as dirty and carrying weapons with those of ‘heroic’ white men and ‘vulnerable’ white women threatened by racialised migrants, imagery the researchers said was informed by algorithmic bias.

The researchers attributed the mobilisation of far-right violence after the murders in Southport to a mixture of extremist content, fake news and conspiracy theories circulating online.

Earlier this year, Tell MAMA closely monitored the social media accounts of far-right agitators based abroad (including a possible Russian connection) who first sought to exploit the far-right violence and disorder, repurposing footage of the unrest and AI-generated materials to incite violence against Muslims, migrant communities, and the police, while using paid-for X accounts to add a veneer of legitimacy. One such AI-generated image we captured and used for this story showed a racialised depiction of a Muslim man carrying a knife and a religious text. SightEngine detected with 99 per cent certainty that GenAI had generated it.

It was, however, on Telegram that the extremism flourished: terrorist documents were uploaded, and those who operated the channel encouraged individuals within the UK to vandalise mosques, with or without financial incentives. Sky News reported in late January 2025 that seven buildings in London, including mosques, community centres and a primary school, were defaced with anti-Muslim graffiti and calls for the forced removal of Muslims, triggering subsequent hate crime investigations. Tell MAMA worked closely with various police forces and counter-terrorism units to pass on our evidence about the far-right group.

One recommendation called for Ofcom to develop a specialised unit to monitor AI extremism and combat any disinformation it spreads. Other submissions, however, including from The Alan Turing Institute, highlighted how AI could also help counter extremism. Recent research has also raised concerns that automated detection tools on social media are inconsistent in classifying hate speech.

Tell MAMA’s report, ‘The New Norm of anti-Muslim Hate,’ published earlier this year, detailed the harms of AI-generated materials on social media platforms like X (formerly Twitter) and Facebook. The report warned that “examples flagged with Tell MAMA make full use of the technology to push racialised, stigmatising, and criminalising tropes about Muslims – externalising Muslim men and women as distinct cultural, demographic threats to mythologised, monocultural themes of national identity.” Moreover, verified cases included imagery deploying racist, anti-Black tropes about the rape of white women to target Black Muslim communities, with similar examples depicting South Asian Muslim men targeting a white woman in Union Jack clothing.

Regarding Facebook, a member of the public alerted us in December 2024 to a self-styled “comedy” page, which had re-uploaded a disturbing AI-generated video (originally uploaded to TikTok) of refugees having their boat stolen by machinery and left to drown, captioned “Splish splash” with hashtags such as “#comedy”. The platform, however, did not remove it, and as of writing, it had gained 1.8 million views.

A screenshot of an AI-generated video depicting the murder of refugees on a boat, which gained 129k views on Facebook from a self-styled “comedy” page. Credit: Facebook.

A follow-up investigation into that same page months later revealed equally disturbing AI-generated videos, including one of an assault rifle firing upon and indiscriminately murdering refugees in a dinghy, with a caption about border “deterrence.” It received around 129k views, around a thousand thumbs up and 110 heart emoji responses.

Among cases flagged with Tell MAMA in the first half of 2025, some AI-generated materials linked Muslims to bestiality or promoted racist conspiracies concerning demographics. Other examples repurposed old racialised memes – including one of a white male “sweeping” away Black and South Asian Muslim men and women from Europe.


The post How AI-generated images fuel anti-Muslim, anti-migrant sentiment online appeared first on TELL MAMA.
