Videos and images of mass shootings, kidnapped civilians and soldiers and other violence linked with Hamas’ attack on Israel are being widely shared on X, formerly known as Twitter, in violation of the company’s own rules against inciting violence.
POLITICO’s review of Elon Musk’s social media platform in the wake of Hamas’ attacks, which began on October 7, discovered scores of videos that allegedly showed militants murdering civilians and Israeli soldiers; viral hashtags associated with the ongoing violence that praised Hamas’ activities; and social media posts that included graphic pictures of those killed and antisemitic hate speech.
Such extremist material was also accessible on other social media platforms, most notably Telegram. But the volume of terrorist-related content circulating on X was significantly higher than on other platforms, according to analysis by POLITICO and two outside researchers who independently reviewed the tech companies’ responses to the Middle East crisis.
“There is a huge prevalence of extremely graphic violent material on X,” said Adam Hadley, director of Tech Against Terrorism, a nonprofit organization that works with social media platforms and governments to combat how terrorist organizations spread their propaganda online. “This doesn’t appear to be the same on other large platforms.”
Hadley and Moustafa Ayad, executive director for Africa, the Middle East and Asia for the Institute for Strategic Dialogue, a think tank that tracks online extremism, reviewed how graphic content tied to the unfolding violence spread across social media.
A representative for X did not respond to a request for comment. The company’s internal rules say users cannot promote violent acts or share propaganda related to terrorist activities. “There is no place on X for violent and hateful entities,” the firm’s policy says.
Under the European Union’s new social media rules, known as the Digital Services Act, large social media platforms like X must also combat the spread of hate speech — including content related to terrorist groups — or face fines of up to 6 percent of annual global revenue. Musk has said X would comply with the 27-country bloc’s rules, despite the billionaire’s free speech ethos and his firing of much of X’s global content moderation team.
Yet in the days following Hamas’ widespread attacks on Israel, which have left hundreds of people dead, POLITICO easily found graphic images and videos on X in violation of both the EU and X’s separate rules.
The content included grainy footage of militants gunning down Israeli soldiers, other social media posts of alleged Hamas fighters desecrating the bodies of victims, and videos of beheadings that, while promoted as taken from the most recent attacks, had, in fact, been reused from earlier jihadi violence in Syria.
Hamas-related hashtags praising the ongoing violence had also begun to trend on X, even though much of that content included graphic imagery or promoted terrorist attacks in violation of X’s own terms of service, according to POLITICO’s review of the platform.
While such gruesome material is banned under all of the tech companies’ internal policies, those firms’ executives and European regulators still find themselves in a difficult position when deciding how to respond to the ongoing conflict in the Middle East.
Alongside the graphic violence shared online, people across the world have also taken to social media to voice their support for different sides of the conflict. Much of this content represents political speech and does not meet the threshold of promoting terrorism. With the violence spreading, tech giants’ content moderation teams and regulators must draw the fine line between what constitutes legitimate speech and what veers into jihadi propaganda.