How Meta should tackle abortion misinformation on Facebook and Instagram.



In March, a member of an anti-abortion Facebook group shared a post describing what it claimed was “pro-abortion logic”: “We don’t want you to be poor, starved or unwanted. So we’ll just kill you instead.”



That same month, another Facebook user shared a link to a news article covering a South Carolina bill that would have criminalized abortion as homicide, thus making it eligible for the death penalty. In the caption, the user criticized lawmakers’ logic that “it’s wrong to kill so we are going to kill you.” On Instagram, another post struck the same tone, criticizing the idea of being “so pro-life” that “we’ll kill you dead if you get an abortion.”






These posts represent opposite sides of the abortion debate, but they faced the same fate. After Meta’s automated hostile speech classifier flagged the posts, human reviewers determined that they constituted death threats, violating Meta’s violence and incitement policy. The posts were removed from Facebook and Instagram. In response to appeals from the users who shared the content, Meta ultimately restored all three posts (in one case after four reviews), admitting that they should never have been removed since they did not incite violence.












But the story doesn’t end there. Now these posts are at the center of Oversight Board cases that will establish guidelines for how Meta should moderate abortion-related content. Specifically, the board—a panel of 22 researchers, advocates, and policymakers who advise Meta on content moderation—is considering how the tech giant “should treat content that uses the word ‘kill’ while discussing abortion and its legality.” But beyond the specific moderation issues within the cases, the board will also evaluate how Meta’s enforcement practices shape the conversation about abortion in America. The outcome could have profound implications for abortion access.






The Oversight Board is expected to announce its recommendations (shaped in part by public comments) in the coming weeks. While the recommendations are nonbinding, the cases themselves reflect a broader issue: Meta is not prepared to moderate content on abortion. Unlike YouTube, Meta has no policies that specifically address abortion, according to the Institute for Strategic Dialogue. When asked about the guidelines the platform currently uses to moderate abortion-related content, a Meta spokesperson said in an email that posts must follow existing rules, “including those on: prescription drugs, misinformation, coordinating harm, bullying and harassment, violence and incitement, and violent and graphic content.” But applying this patchwork of more general policies risks inconsistencies. It’s notable that for all three of the posts the Oversight Board is currently considering, applying the violence and incitement policy was inappropriate. Existing policies don’t account for the specific challenges posed by abortion content—Meta’s policy against “harmful health misinformation,” for example, doesn’t mention reproductive health.









Meanwhile, abortion misinformation abounds. The Center for Countering Digital Hate found that Facebook ads promoting abortion pill reversal, a dangerous and unsubstantiated procedure, were shown 18.4 million times to women and girls as young as 13. Media Matters reported that Meta showed more than 800 political ads that contained abortion misinformation, resulting in over 37.6 million impressions. According to ISD, an ad showing an animation of a fetus being dismembered in the womb and having its skull crushed, in violation of Meta’s violent and graphic content policy, was active for more than a month. (In an email, the Meta spokesperson said all ads must comply with the platform’s ads policy, and noted that the platform prohibits ads that “repeatedly use shocking imagery to further a point of view.”)









At the same time, advocates have struggled to disseminate accurate information about abortion under Meta’s policies. We Testify, an organization that uses storytelling to shift perceptions of abortion, relies on social media to share up-to-date information about abortion access. When it tried to run an ad with information from the World Health Organization explaining how to undergo a self-managed abortion, a procedure that is more critical than ever after Dobbs, the submission was denied. Such content moderation “has a real impact on us because we’re not always able to fight it,” said Emma Hernández, communications manager at We Testify, which has only six staff members.













There’s no question that content moderation presents an enormous challenge, especially when it comes to a topic as polarizing as abortion. But there are several steps the company could take to facilitate productive and accurate discussions about reproductive health. To start, Meta could reframe abortion in the context of public health, said Isabel Jones, a researcher at ISD. Meta developed detailed policies about COVID-19 content, but while abortion is also an urgent public health issue, she said, it’s not being treated the same way.






Just as Meta pointed users toward accurate information through the COVID-19 Information Center, it could elevate facts about abortion from medical experts. Recommendation algorithms could prioritize credible sources while limiting the visibility of unreliable content. Posts that discuss abortion could be accompanied by an information panel, as outlined in YouTube’s policies. Jackie Rotman, founder of the Center for Intimacy Justice, suggested that Meta could institute a third-party verification process for reproductive health organizations that would protect their content from arbitrary moderation.



Due to Meta’s opaqueness surrounding content moderation, however, researchers are limited in the recommendations they can provide. Because platforms don’t share results from internal experiments or details about their moderation procedures, it’s difficult for researchers to assess the effectiveness of certain moderation interventions and hold platforms accountable, Jones said. Simple questions about whether posts are flagged by a human or an automated system, for example, are often impossible to answer. This lack of transparency is just as frustrating for advocates, who must interpret Meta’s ambiguous policies to predict whether they will end up “in Facebook jail,” said Angela Vasquez-Giroux, vice president of communications and research at NARAL Pro-Choice America. In their public comment in response to the Oversight Board cases, the Center for Democracy and Technology and the American Civil Liberties Union called on Meta to provide documentation about the process underlying its moderation decisions and about how human moderators are trained.











It’s hard not to be cynical about whether the Oversight Board cases will lead to tangible change. The board can make recommendations, but it’s up to Meta to implement them, and the board’s jurisdiction doesn’t extend to advertising (a major area of concern for researchers and advocates). Still, the cases make clear that Meta’s current approach of reducing a topic as complex as abortion to a handful of general policy domains has not worked, and it’s time to hold the platform accountable.

Meta must modify its existing policies—or, even better, create new ones—that reflect the realities of abortion access in the U.S. The platform should develop abortion-specific misinformation guidelines that address both health and procedural issues. On the health front, Meta should consider medical misinformation related to issues like abortion pill reversal and the impact of abortion on fertility, whereas procedural policies should consider how to address misinformation about abortion access and legality, especially in the context of rapidly changing laws. Among other changes to general guidelines, Meta’s adult products or services policy, which currently lists “family planning” as a permitted category, should specifically address abortion to eliminate ambiguities; the prescription drugs policy should account for medication abortion. These changes must be made not only to Meta’s policies but to the underlying moderation classifiers and content recommendation algorithms too. Policies should also be responsive to the volatile state of abortion rights, and Meta should create pathways for advocates, researchers, and health care providers to communicate emerging concerns.



While more than a year has passed since the Supreme Court overturned Roe v. Wade, Meta’s abortion policies remain stuck in a pre-Dobbs era. Despite the limitations of the Oversight Board, its abortion cases have the best chance yet of spurring substantive change. When it took the cases on, the board said it would “assess whether Meta’s policies or its enforcement practices may be limiting discussion about abortion.” The answer is almost certainly yes. The real challenge comes with convincing the tech giant to take responsibility for its effect on reproductive rights.




Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.






