Despite repeated assurances from X (formerly Twitter) that its ad placement tools provide maximum brand safety, ensuring that paid promotions don’t appear alongside harmful or offensive content in the app, more and more advertisers continue to raise concerns about X’s revised “freedom of speech, not reach” approach.
Today, Hyundai announced that it’s pausing its ad spend on X, after finding that its ads were being displayed alongside pro-Nazi content.
This comes just days after NBC published a report showing that at least 150 blue checkmark profiles on the app, along with thousands of unpaid accounts, have posted and/or amplified pro-Nazi content on X in recent months.
X denied the NBC report earlier in the week, labeling it a “gotcha” article that “lacked extensive research, investigation and transparency.” Yet now, another big X advertiser has been confronted with the exact problem highlighted in the report, which X has acknowledged. It has suspended the profile in question while it works with Hyundai to address its concerns.
But again, these issues keep happening, which suggests that X’s new approach to free speech is unsustainable, at least in terms of meeting advertiser expectations.
Under X’s “freedom of speech, not reach” approach, more content that violates X’s policies is now left active on the app, as opposed to being removed by X’s moderators, though its reach is restricted to limit any impact. X also claims that posts hit with these reach penalties are not eligible to have ads displayed alongside them. Yet various independent analyses have found that brand promotions are indeed being displayed alongside such content, which means that either the offending material isn’t being detected by X’s systems, or X’s ad placement controls aren’t working as expected.
The broader concern for X is that, having cut 80% of its total staff, including many moderation and safety employees, the platform is simply no longer equipped to handle the level of detection and enforcement required to police its rules. That means many rule-breaking posts are being missed entirely, with X instead relying on AI, and its crowdsourced Community Notes, to do much of the heavy lifting in this respect.
Experts warn that this approach won’t work.
Every platform uses AI to moderate content to varying degrees, though it’s generally acknowledged that such systems aren’t good enough on their own, and that human moderators remain a necessary expense.
And based on mandated EU disclosures, we know that other platforms maintain a better moderator-to-user ratio than X.
According to the most recent EU moderation reports, TikTok has one human moderation staff member for every 22,000 users, while Meta is slightly worse, at one for every 38,000. X has one moderator for every 55,000 EU users.
So while X claims that its staff cuts have left it well equipped to handle its moderation requirements, the numbers suggest that it’s now relying far more heavily on its other, non-staff systems and processes.
Safety analysts also contend that X’s Community Notes are not effective in this respect, given the parameters around how notes are displayed and how long they take to appear, which leaves significant gaps in overall enforcement.
And based on Elon Musk’s own repeated statements and stances, it seems he would prefer to have virtually no moderation at all.
Musk’s long-held view is that all perspectives should have a chance to be presented on the app, with users then able to debate the merits of each and decide for themselves what’s true and what’s not. Which, in theory, should lead to greater awareness through civic participation, but in practice it also means that opportunistic misinformation peddlers and misguided internet sleuths are able to gain traction with random theories that are incorrect, harmful, and often dangerous to both groups and individuals.
Last week, for example, after a man stabbed several people at a shopping center in Australia, a verified X account misidentified the attacker and broadcast the wrong person’s name and details to millions of people across the app.
It used to be that blue checkmark accounts were the ones you could trust for accurate information on the app, which was often the reason those accounts were verified in the first place. But the incident underscores the erosion of trust resulting from X’s changes, under which conspiracy theorists can now rapidly amplify baseless ideas on the app for just a few dollars a month.
Worse still, Musk himself often engages with conspiracy-related content, which he admits he doesn’t fact-check in any way before sharing. And as the holder of the most-followed profile on the app, he arguably poses the greatest risk of causing such harm, yet he’s also one of the app’s policy decision makers.
Which seems like a dangerous mix.
It’s also one that, somehow, still sees ads displayed alongside such content in the app. And yet, this week, ad measurement platform DoubleVerify issued an apology for misreporting X’s brand safety measurement data, restating X’s actual brand safety rate at “99.99%.” That would mean this kind of brand exposure is limited to just 0.01% of all ads displayed on the app.
So is this tiny margin of error the source of these repeated concerns, or is X’s brand safety actually significantly worse than that figure suggests?
On balance, it seems that X still has some issues to clear up, especially when you consider that the Hyundai placement issue was only addressed after Hyundai flagged it to X; it was not caught by X’s own systems.
And with X’s ad revenue still down 50%, a significant cost squeeze is also coming for the app, which could make hiring more staff for this element a challenging fix.