
Report Shows X is Still Displaying Ads Alongside Harmful Content

A new report has found that X continues to display ads alongside harmful and/or offensive content, this time in relation to controversial commentary around the recent race riots in the U.K.

According to a new report from The Center for Countering Digital Hate (CCDH), X has recently displayed ads from major brands alongside misinformation relating to the unrest.

As per the CCDH:

“Elon Musk’s social media platform X was running ads near posts from five key U.K. accounts pushing lies and hate in the wake of the Southport attack. The CCDH found that these accounts amassed 260 million views in the week following the Southport attack on 29 July, and that X is presenting ads for well-known brands including GlaxoSmithKline, the British Medical Association, Betfred and the International Olympic Committee near their content.”

Example from the CCDH report

As you can see in this example from the CCDH report, X is displaying in-feed ads alongside potentially harmful and offensive content related to the riots.

The CCDH also notes that many of the more incendiary profiles posting about the riots are also part of X’s Creator Ad Revenue Share Program, which means that X is effectively paying these users to post controversial and divisive remarks.

X has sought to dismiss similar claims from the CCDH in the past, arguing that the group’s reports manipulate X’s ad-serving system, and as such are not indicative of real user experiences. But even if that were the case, the CCDH’s reports do seemingly show that X can display ads and promotions alongside this kind of material, which is a key reason why many advertisers are now scaling back or halting their X spend.

This new report won’t help to boost X’s standing in this respect, nor will the fact that Elon Musk himself is amplifying comments from controversial figures, including Tommy Robinson, whose anti-Islamic commentary is believed to have played a significant role in sparking the recent riots.

Which is the crux of the whole matter here. Elon Musk, who’s determined to amplify and share whatever he feels like in the app, is also arguing that he’s being forced to censor content by governments and/or big corporations, under threat of bans and restrictions on his business. But the reality is that Musk is indeed free to say whatever he likes; he just can’t do so without consequences, and the consequences of amplifying hate may well impact his business.

That seems to be the sticking point. Elon’s argument is that there’s a broader push to limit free speech, driven by shadowy government figures, but in most of the cases that Musk has presented, the actual motivation has been the public good, as determined by the relevant regional government.

Is that overreach? Well, in some cases, it may be, as authoritarian governments seek to control the flow of information. But in others, elected officials may be seeking to quell unrest or other harms, which can relate to content being amplified on X.

As such, the balance here is more about the impacts of allowing or amplifying such content on your platform, as opposed to a blanket “free speech” argument. In this respect, Elon can allow people to post whatever they like, but advertisers can, conversely, choose not to advertise in response.

Which, I suspect, is where most of Musk’s legal challenges will end up. Musk has already lost one case to the CCDH, while he’s also pursuing legal action against various other groups that have found X to be unsafe for paid promotions.

This latest CCDH report may trigger yet another legal response from X, but the weight of evidence does suggest that X is amplifying harmful content, and that ads will, invariably, be displayed nearby in some form.



