Ordered to remove unsuitable books from their libraries, school administrators in Iowa outsourced the job to AI. So long, The Color Purple, Beloved, The Handmaid’s Tale …
Tue 22 Aug 2023 13.01 BST
What do you get when you combine artificial intelligence with human stupidity? There are, unfortunately, numerous responses to that question. But in this particular case the answer can be found in Iowa’s Mason City Community School District, where school administrators are using ChatGPT to help them ban books.
Ahead of the new school year, school staff have been busy trying to comply with a new state law, Senate File 496, the Parental Rights and Transparency Act, requiring every book in Iowa public school libraries to be “age appropriate” and devoid of “descriptions or visual depictions of a sex act”. Of course, nobody wants hardcore porn in school libraries, but this sweeping law, which also restricts education about gender identity and sexual orientation, isn’t trying to prevent that nonexistent problem: it’s about indoctrination. Republicans don’t want kids learning anything that goes against their narrow worldview, so over the past couple of years they’ve gone on a censorship orgy, trying to ban everything from gender studies to psychology to African American studies.
A hallmark of Republican legislation is its ambiguity. The party can’t explicitly decree “we want to ban everything we don’t like”, because that would be blatantly unconstitutional. So instead, it couches its legislation in vague language like that found in this book-banning law. There is very little guidance in the legislation as to what constitutes a description of a sex act. (Would “the two elephants mated” count, for example?) The only real pointer given is that sex in religious texts like the Bible is absolutely fine and exempt from the law. That ambiguity gives Republicans plausible deniability: we’re not banning books, we’re protecting children! It also tends to make people over-comply for fear of violating the law. One state senator told the Iowa Capital Dispatch that she knew a teacher who had removed every book from her classroom to make sure she was in compliance with the new law. Others have resorted to using AI to help them navigate the legislation.
“It is simply not feasible to read every book and filter for these new requirements,” Bridgette Exman, the assistant superintendent of Mason City School District, said in a statement quoted by Iowa newspaper the Gazette. “Therefore, we are using what we believe is a defensible process to identify books that should be removed from collections.”
How many books are we talking about? Hundreds, thousands, tens of thousands? Nope. The answer to this question is the same as the answer to life, the universe and everything: 42. The district had compiled a list of 42 titles, drawn from banned-book lists across the US, that needed review. Nobody, let alone an educator, could possibly read 42 books! Hence the need for technology.
So how does ChatGPT, a generative AI tool that is incapable of critical thought and whose processes, training method and underlying training datasets are worryingly opaque, figure out which books are too lewd for the eyes of young Iowans? Nobody knows. It’s not clear, for example, if ChatGPT has actually “read” the books it has been asked about. All we know is that an administrator typed “does [x] book contain ‘a description or depiction of a sex act’?” into ChatGPT, then waited for a reply. ChatGPT, which doesn’t have a moral compass, did its job diligently and without protest: it identified 19 books as being too scandalous and they were pulled from the shelves. The banned books included The Color Purple by Alice Walker, Beloved by Toni Morrison and The Handmaid’s Tale by Margaret Atwood.
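For what it’s worth, the district’s entire “defensible process” could be automated in a few lines of code. The sketch below is purely illustrative, and everything in it is an assumption rather than something reported from Mason City – the openai Python package, the model name and the list of titles are all stand-ins – but it captures the essence of what was described: ask a chatbot a yes/no question about a book it may never have “read”, and treat whatever comes back as a verdict.

```python
# Hypothetical sketch only: the package, model and titles are assumptions,
# not details reported by the district.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# Invented sample of titles standing in for the district's list of 42.
titles = ["The Color Purple", "Beloved", "The Handmaid's Tale"]

for title in titles:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f'Does "{title}" contain a description or depiction of a sex act?',
        }],
    )
    # Whatever text the model generates becomes the "verdict".
    print(title, "->", response["choices"][0]["message"]["content"])
```

The telling thing is what a loop like that leaves out: at no point does anything in it actually read a book.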
While dystopian, Exman’s tactics are smart in their own way. It is, as she put it, a “defensible process”. If the powers-that-be later find a contraband book in the library, school administrators can simply blame ChatGPT. What, one wonders, do the people behind ChatGPT make of all this? They keep going on about how AI is going to advance humanity. And yet, as cases like this demonstrate, AI is far more likely to be harnessed to advance the views and agenda of powerful people. Welcome to a future where the computer constantly says no.
• Arwa Mahdawi is a Guardian columnist