ChatGPT has a liberal bias, research on AI’s political responses shows


A paper from U.K.-based researchers suggests that OpenAI’s ChatGPT has a liberal bias, highlighting how artificial intelligence companies are struggling to control the behavior of the bots even as they push them out to millions of users worldwide.

The study, from researchers at the University of East Anglia, asked ChatGPT to respond to a survey on political beliefs as it believed supporters of liberal parties in the United States, United Kingdom and Brazil might respond. The researchers then asked ChatGPT to answer the same questions without any prompting and compared the two sets of responses.
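The comparison at the heart of the study — persona-prompted answers versus default answers on the same survey — can be pictured as a simple scoring exercise. The sketch below is illustrative only, not the authors' code: the question count, the 4-point agreement scale and the sample answer lists are all assumptions made for the example.

```python
# Illustrative sketch of the study's comparison method (not the authors' code).
# Assume each survey item is answered on a 4-point agreement scale:
# 0 = strongly disagree ... 3 = strongly agree.

def mean_shift(default_answers, persona_answers):
    """Average gap between default answers and persona-prompted answers.

    A shift near zero suggests the default answers align with that
    persona; a large shift suggests they do not.
    """
    assert len(default_answers) == len(persona_answers)
    diffs = [d - p for d, p in zip(default_answers, persona_answers)]
    return sum(diffs) / len(diffs)

# Hypothetical answers to the same six questions:
default = [3, 2, 3, 1, 2, 3]          # no persona prompt
as_liberal = [3, 2, 3, 2, 2, 3]       # "answer as a liberal supporter would"
as_conservative = [0, 1, 0, 2, 1, 0]  # "answer as a conservative supporter would"

print(mean_shift(default, as_liberal))       # small gap from the liberal persona
print(mean_shift(default, as_conservative))  # much larger gap
```

If the unprompted answers sit much closer to one persona's answers than the other's, that asymmetry is the kind of systematic lean the paper reports.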

The results showed a “significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.,” the researchers wrote, referring to Luiz Inácio Lula da Silva, Brazil’s leftist president.

The paper adds to a growing body of research on chatbots showing that despite their designers trying to control potential biases, the bots are infused with assumptions, beliefs and stereotypes found in the reams of data scraped from the open internet that they are trained on.

The stakes are getting higher. As the United States barrels toward the 2024 presidential election, chatbots are becoming a part of daily life for some people, who use ChatGPT and other bots like Google’s Bard to summarize documents, answer questions, and help them with professional and personal writing. Google has begun using its chatbot technology to answer questions directly in search results, while political campaigns have turned to the bots to write fundraising emails and generate political ads.

ChatGPT will tell users that it doesn’t have any political opinions or beliefs, but in reality, it does show certain biases, said Fabio Motoki, a lecturer at the University of East Anglia in Norwich, England, and one of the authors of the new paper. “There’s a danger of eroding public trust or maybe even influencing election results.”

Spokespeople for Meta, Google and OpenAI did not immediately respond to requests for comment.

OpenAI has said it explicitly tells its human trainers not to favor any specific political group. Any biases that show up in ChatGPT answers “are bugs, not features,” the company said in a February blog post.

Though chatbots are an “exciting technology, they’re not without their faults,” Google AI executives wrote in a March blog post announcing the broad deployment of Bard. “Because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs.”

For years, a debate has raged over how social media and the internet affect political outcomes. The internet has become a core tool for disseminating political messages and for people to learn about candidates, but at the same time, social media algorithms that boost the most controversial messages can also contribute to polarization. Governments also use social media to try to sow dissent in other countries by boosting radical voices and spreading propaganda.

The new wave of “generative” chatbots like OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Bing are based on “large language models” — algorithms that have crunched billions of sentences from the open internet and can answer a range of open-ended prompts, giving them the ability to pass professional exams, create poetry and describe complex political issues. But because they are trained on so much data, the companies building them do not check exactly what goes into the bots. The internet reflects the biases held by people, so the bots take on those biases, too.

And the bots have become a central part of the debate around politics, social media and technology. Almost as soon as ChatGPT was released in November last year, right-wing activists began accusing it of having a liberal bias for saying that it was better to be supportive of affirmative action and transgender rights. Conservative activists have called ChatGPT “woke AI” and tried to create versions of the technology that remove guardrails against racist or sexist speech.

In February, after people posted about ChatGPT writing a poem praising President Biden but declining to do the same for former president Donald Trump, a staffer for Sen. Ted Cruz (R-Tex.) accused OpenAI of purposefully building political bias into its bot. Soon, a social media mob began harassing three OpenAI employees — two women, one of them Black, and a nonbinary worker — blaming them for the alleged bias against Trump. None of them worked directly on ChatGPT.

Chan Park, a researcher at Carnegie Mellon University in Pittsburgh, has studied how different large language models showcase different degrees of bias. She found that bots trained on internet data from after Donald Trump’s election as president in 2016 showed more polarization than bots trained on data from before the election.

“The polarization in society is actually being reflected in the models too,” Park said. As the bots begin being used more, an increased percentage of the information on the internet will be generated by bots. As that data is fed back into new chatbots, it might actually increase the polarization of answers, she said.

“It has the potential to form a type of vicious cycle,” Park said.

Park’s team tested 14 different chatbot models by asking political questions on topics such as immigration, climate change, the role of government and same-sex marriage. The research, released earlier this summer, showed that models developed by Google called Bidirectional Encoder Representations from Transformers, or BERT, were more socially conservative, potentially because they were trained more on books as compared with other models that leaned more on internet data and social media comments. Facebook’s LLaMA model was slightly more authoritarian and right wing, while OpenAI’s GPT-4, its most up-to-date technology, tended to be more economically and socially liberal.

One factor at play may be the amount of direct human training that the chatbots have gone through. Researchers have pointed to the extensive amount of human feedback OpenAI’s bots have gotten compared to their rivals as one of the reasons they surprised so many people with their ability to answer complex questions while avoiding veering into racist or sexist hate speech, as previous chatbots often did.

Rewarding the bot during training for giving answers that did not include hate speech could also be pushing the bot toward giving more liberal answers on social issues, Park said.

The papers have some inherent shortcomings. Political beliefs are subjective, and ideas about what is liberal or conservative might change depending on the country. Both the University of East Anglia paper and the one from Park’s team that suggested ChatGPT had a liberal bias used questions from the Political Compass, a survey that has been criticized for years as reducing complex ideas to a simple four-quadrant grid.

Other researchers are working to find ways to mitigate political bias in chatbots. In a 2021 paper, a team of researchers from Dartmouth College and the University of Texas proposed a system that can sit on top of a chatbot and detect biased speech, then replace it with more neutral terms. By training their own bot specifically on highly politicized speech drawn from social media and websites catering to right-wing and left-wing groups, they taught it to recognize more biased language.

“It’s very unlikely that the web is going to be perfectly neutral,” said Soroush Vosoughi, one of the 2021 study’s authors and a researcher at Dartmouth College. “The larger the data set, the more clearly this bias is going to be present in the model.”


