Why is X hiring content moderation staff again?


• X announces new content moderation hiring ahead of 2024 election.
• The platform is also allowing election advertising for the first time since 2019.
• The ads themselves will be subject to content moderation rules.


You could be forgiven for thinking that content moderation had become the red-headed stepchild of social media platforms over the course of 2022-23.


From the bonfire of the moderators that took place shortly after Elon Musk took over Twitter, to moves on other platforms to cut back their content moderation teams during the year of the Great Tech Layoff, it would be reasonable to conclude that we as a society were done caring whether the things we saw on social media were factually true – or merely interesting and reassuring, reinforcing the view of our bubble as the only correct view to take.


But that move to a moderation-lite social media world has analysts concerned that the 2024 presidential election might well be the least in touch with verified fact in modern political history – even taking into account the continuing absurd rants of Donald Trump and the attempted insurrection at the Capitol building on January 6, 2021.


That’s because in 2023 we’ve seen the rise of not one but two phenomena that complement one another, creating a perfect storm of potential misinformation and disinformation.



The bonfire of content moderation.


On the one hand, as we’ve said, we’ve witnessed social media platforms letting go of their disinformation teams. Twitter led the trend shortly after the arrival of Elon Musk, who gutted the department, unblocked several controversial right-wing accounts (up to and including Donald Trump’s – but stopping short of Alex “Sandy Hook was a false flag” Jones), and boasted to Fox News host Tucker Carlson about how few staff you needed to run Twitter if you didn’t care about it being an advocate platform, or about “censorship.”


[embed]https://www.youtube.com/watch?v=RNuCFgO8MXg[/embed]


Turns out you might need more staff than you thought you did…

Much more recently, Meta followed suit, laying off a significant number of members of a global misinformation and disinformation team that worked across Facebook and Instagram and would have taken responsibility for tackling misinformation on Threads too.


These layoffs in social media disinformation teams can only serve to make social media as a whole a more dangerously fact-lite experience – which is worrying when a 2021 Pew Research poll found that 71% of Americans got at least some of their news about the world directly from social media.


And on the other hand, that trend towards stripping away the guardrails around actual facts has coincided with the rise of a technology that can convincingly blur the walls of reality, rendering facts obsolete in service to a narrative of choice.



That technology of course is generative AI, with its power to generate deepfake images and video, which can put people into situations they’ve never been in, manipulate the perception of events, and potentially gaslight an entire section of the electorate into believing things which never happened.


As a retrospective example, former President Trump’s claim that unprecedented hordes of people watched his 2017 inauguration speech could, were it repeated in the event of a second inauguration in 2025, be bolstered by footage that it would be excruciatingly difficult to prove had been faked just to salve the ego of a president.


President Trump’s inauguration in 2017. In the world of GenAI deepfakes, the gaps would be filled with “fake people.”


Combine the two – the removal of content moderation teams, the people whose job it is to tell us the difference between fact, opinion, and blatant falsehood in a medium used frequently by a majority of Americans, with the emergence of a technology that can bolster lies with practically incontrovertible video and still imagery – and from an observer’s point of view there’s very little difference left between truth and lies. That combination has a lot of analysts and political observers worried about the state of American democracy.


That’s especially piquant as details emerge of how close America came to the fall of democracy on January 6, 2021, protected as it seemingly was by the constitutional conscience of Vice President Mike Pence.



New hires and political ads.


All of which Sturm und Drang will be at least moderately assuaged by news coming out of X that it is hiring for its safety and election teams, and – for better or worse – that it will allow political advertisements on the platform again for the first time since 2019.


It’s a big move from Elon Musk’s platform, which could be said to have lived through interesting times since he first took the helm, including a couple of mass exoduses to the likes of Mastodon and Threads, and plummeting ad revenue.


According to a company blog post this week, the enigmatic platform will apply its “civic integrity policy” – prohibiting the use of the platform for “manipulating or interfering in elections,” including posting content that could mislead people about how, when or where to participate in civic processes such as voting – for a “limited period of time before and during an election.”


“We’re updating this policy to make sure we strike the right balance between tackling the most harmful types of content,” the blog continued, “those that could intimidate or deceive people into surrendering their right to participate in a civic process—and not censoring political debate.”


We will have to wait to see what X regards as political debate that shouldn’t be censored – it recently gave Donald Trump an alternative to turning up to the first Republican primary debate and giving the nation the courtesy of a chance to see him grilled alongside his nominal competitors. Instead, Trump was on X, giving a one-on-one interview to Tucker Carlson.


But the blog announced that posts deemed to violate the civic integrity policy would be labelled as such, and their reach “restricted.” Not removed, but restricted – which is arguably less content moderation and more content flagging.


It would be easy for a political cynic to make the case that X is relenting on political advertising just at the point when the rollercoaster of its last year has seen most of its traditional long-term advertisers either reduce or entirely remove their ad commitments. But whatever the reason behind the move, political advertising on the platform will be subject to several constraints.


What constraints?


Guardrails, like content moderation staff, save you from plunging and falling and crashing and burning.


“This will include prohibiting the promotion of false or misleading content, including false or misleading information intended to undermine public confidence in an election, while seeking to preserve free and open political discourse.”


While the same cynics might raise an eyebrow at this part of the blog in light of the recent Trump interview with Carlson hosted on X, any move towards greater scrutiny of the electoral process and of the peddling of false or misleading content must be welcomed after the previous shedding of disinformation moderators across social media. The return of political advertising may be less wholeheartedly welcome.


Need new ad revenue? Try political ads.


In the cage-fight of one-upmanship between X’s Musk and Meta’s Zuckerberg, observers will now wait to see whether Meta rethinks its policy of laying off those responsible for checking the truth of content as the election (or incarceration, or conceivably both) grows closer.

