Is ChatGPT Ethical in Media? Experts Share Their Thoughts


As a tech journalist and communications consultant who specializes in technology integration, I am always eager to jump into any conversation around artificial intelligence and media ethics. And right now, a lot of media professionals are afraid of how AI is going to impact their livelihoods.

If you search TikTok for the combination of #ChatGPT, #layoffs and #writers, you'll find a handful of videos from copywriters and marketing professionals who say their employers let them go in order to replace them with AI-focused technology. There are also writers saying that AI won't take jobs but that writers will need to adapt to working with it. But is ChatGPT ethical in media? What about AI?

My perspective has always been that AI’s job is to support humans, not replace them. 

Machines can’t learn

In order to understand why AI can’t (or shouldn’t) replace humans, you have to understand how machine learning works. The thing is, machines don’t actually learn. 

David J. Gunkel, Ph.D., is a professor of media studies in the Communication Studies department at Northern Illinois University and the author of An Introduction to Communication and Artificial Intelligence.

“Machines don’t learn in the way we normally think about learning—it’s a term that was applied by computer scientists who were sort of groping around for terminology to explain, basically, applied statistics, if you really wanted to get very technical about it,” Gunkel explains. “So what the large language models and other machine learning systems do is they set up a neural network, which is modeled on a rudimentary mathematical understanding of the brain and the neuron and how it works.” 

Basically, the machines look at large amounts of data and learn how to make predictions based on patterns in the data. And sometimes the results are a little off. For example, I was writing a policy and procedure manual for a business client, and I asked what his corrective action policy was. He asked an AI, and it suggested that management conduct a “root cause analysis to determine the underlying factors that contributed to the problem. This analysis can help to identify the specific changes needed to prevent the problem from recurring.” 

I ended up just writing the policy myself. 
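
To see what Gunkel means by “applied statistics,” consider a deliberately tiny sketch in Python. Real large language models use neural networks with billions of parameters, but the underlying idea is the same: count patterns in training text, then predict what is statistically most likely to come next. The toy corpus and every name below are mine, purely for illustration.

```python
from collections import Counter, defaultdict

# Toy "language model": it doesn't understand anything; it just counts
# which word follows which in the training text and predicts the most
# statistically likely follower.
corpus = "the cat sat on the mat the cat ate the food".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` seen in training."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))  # -> "cat" (seen twice, vs. "mat" and "food" once each)
```

The prediction is pattern-matching, not comprehension, which is exactly why the output can sound fluent and still miss the point, as it did with my client's corrective action policy.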

AI tools in journalism

OtterAI

Jenna Dooley is the news director at WNIJ, an NPR affiliate station in DeKalb, Illinois. For years, the reporters in her newsroom have used OtterAI, an online assistant that records audio and automatically transcribes it, to turn interviews into text, and it has saved them endless hours and headaches. 

“Traditionally before AI, what you would do is you’d come back [and] you’d have anywhere from a 10-minute interview to a two-hour interview and it would be on a tape,” Dooley says. “You used to have to ‘log the tape,’ is what they call it. And that was a real-time exercise of sitting, listening to a few seconds and typing it out, listening for a few more seconds [and] typing it out so that you could make your own manual transcription of the interview.”

“Logging tape was obviously really slow and you couldn’t even start writing your story until you’ve done your transcriptions,” Dooley says. “It’s much faster to be able to just go to that transcript and say ‘okay, here’s the line I want to use. Here’s where I want to use my quote.’” 
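
OtterAI's internals aren't public, but the underlying task, automatic speech-to-text, is easy to sketch. As an illustration only (not how OtterAI actually works), here is how a newsroom might script the same job with OpenAI's open-source Whisper model; the filename is a placeholder.

```python
import whisper  # open-source speech-to-text model: pip install openai-whisper

# Load a small pretrained model; "medium" or "large" trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a recorded interview ("interview.mp3" is a placeholder filename).
result = model.transcribe("interview.mp3")

# Each segment comes back with timestamps, so a reporter can jump straight
# to the quote they want instead of "logging tape" in real time.
for segment in result["segments"]:
    print(f"[{segment['start']:7.1f}s] {segment['text']}")
```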

YESEO

WNIJ also uses a tool called YESEO that was developed at the Reynolds Journalism Institute (RJI). YESEO is an AI tool in Slack that reads your articles and gives you keywords and headline suggestions. 

RJI fellow Ryan Restivo, who developed the app, says he came up with the idea for YESEO while working at Newsday, when he noticed that some of the paper's stories weren't appearing on the first page of Google results. He knew it was likely that other newsrooms had better search engine optimization, or SEO, practices, and he wanted to build a tool to help journalists reach their audiences. 

“We talked about [why we didn’t make the first page and] we made a Google sheet that looked at all the things the competitors did that were on the page versus what we had,” Restivo says. “We didn’t have any of the relevant information that was going to be surfaced in any of these searches… that’s where I got the inspiration for the idea.”

YESEO is unique because a media professional developed it for other media professionals, meaning it was designed with media ethics in mind. One issue that came up in the development of the app is data privacy for newsrooms. YESEO is built on OpenAI's application programming interface (API), which allows businesses to integrate large language models like GPT-3 into their own applications. Restivo wanted to make sure that the stories newsrooms submitted would not be used to train the AI, so he confirmed that data sent through the API would not be used for training unless YESEO explicitly opted in. 

“When I’m dealing with the privacy implications [of] these unpublished stories that are super valuable that nobody wants [anyone] else to see, and [all] the other stories that are getting entered into the system, I want to protect people’s data at all costs,” Restivo says.
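
YESEO's own code isn't public, but the general pattern of building on OpenAI's API is simple to sketch. The following is a hypothetical example, not Restivo's implementation: the model name, prompt and function are all placeholders of mine. Notably, OpenAI has stated that data sent through the API is not used to train its models unless the customer opts in, which is the guarantee Restivo relied on.

```python
from openai import OpenAI  # official client library: pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def suggest_seo(article_text: str) -> str:
    """Ask a model for keywords and headline ideas for an unpublished draft.

    Hypothetical sketch only; YESEO's actual prompts and model are not public.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are an SEO assistant for a newsroom."},
            {"role": "user",
             "content": "Suggest 5 keywords and 3 headlines for this draft:\n"
                        + article_text},
        ],
    )
    return response.choices[0].message.content


print(suggest_seo("DeKalb council approves new budget..."))  # placeholder draft
```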

The impact of AI on human writers

This month, TikToker Emily Hanley posted a video saying that ChatGPT took her copywriting job and that she had been offered an interview for a job where she would train AI to replace her. 

Grace Alexander is a full-time copywriter who has lost clients to AI. She usually has a roster of clients, and in May, one of her clients dropped her out of the blue because they wanted to try out AI content writing. 

“The company I was working for that I was doing the project for actually got rid of almost all of the freelancers and took everything in-house because they were like, ‘Oh, we can just use ChatGPT,’” Alexander recalls.

Gunkel does not think that organizational staffing cuts will be permanent. 

“I think they’re gonna end up hiring a lot of them back in different positions,” Gunkel says. “The smart money is on creating really effective human-AI teams that can work together to generate content for publication.” 

This prediction might be correct. Although Alexander didn’t have work for the month of June, the company she worked for seems to want the human touch back. 

“They let me go for a month,” Alexander says. “They’ve already sent out feelers like, ‘Do you have availability for July?’ So I think I’m going to get my job back.” 

Are ChatGPT and AI ethical? 

Media organizations will likely use some form of AI in the near future. But ethically, using AI is still uncharted territory. Dooley says that newsrooms may benefit from adopting a code of ethics. 

“I had just seen a kind of ethics policy the [Radio Television Digital News Association] had put out,” Dooley says. “Just like we have a code of ethics for our news reporting, their suggestion was to develop [a code for ethics in AI] within a newsroom.”

One consideration is transparency. The Houston Times has a page on its website explaining how and when it uses AI tools to generate content. 

This is not the case for “pink-slime” outlets, organizations that represent themselves as local news in order to support political candidates or policies. The owner of Local Government Information Services, a pink-slime outlet based in Illinois, told Columbia Journalism Review that its various media outlets use software that examines regional data to algorithmically generate most of their stories.

“Unfortunately, we’re gonna see a lot more of this because the algorithms make the development of this kind of content far easier, far simpler and far less labor intensive,” Gunkel says. “So not only will you have a lot of aggregated content that will not be easy to trace back to its original sources… but also the prevalence and the proliferation of a lot of disinformation and misinformation.” 


