While hiking in Costa Rica, Keast consumed AI podcasts talking about the software’s existential risk to humanity. At home in Mill Valley, Calif., he’s spent hours online in fiery group discussions about whether AI chatbots should be used in the classroom. In the car, Keast queried his kids for their thoughts on the software until they begged him to stop.
“They’re like: ‘You got to get a life, this is getting crazy,’” he said. “But [AI] totally transformed my whole professional experience.”
Keast isn’t alone. The rise of AI chatbots has sowed confusion and panic among educators who worry they are ill-equipped to incorporate the technology into their classes and fear a stark rise in plagiarism and reduced learning. Absent guidance from university administrators on how to deal with the software, many teachers are taking matters into their own hands, turning to listservs, webinars and professional conferences to fill gaps in their knowledge, with some shelling out their own money to attend conference sessions that are packed to the brim.
Even with this ad hoc education, there is little consensus among educators: for every professor who touts the tool’s wonders, there’s another who says it will bring about doom.
The lack of consistency worries them. When students come back to campus this fall, some teachers will allow AI, but others will ban it. Some universities will have modified their dishonesty policies to take AI into account, while others avoid the subject. Teachers may rely on inadequate AI-writing detection tools and risk wrongly accusing students, or opt for student surveillance software to ensure original work.
For Keast, who teaches at the City College of San Francisco, there’s only one word to describe the next semester.
After ChatGPT was released to the public on Nov. 30, 2022, it created a stir. The AI chatbot could spit out lifelike responses to any question — crafting essays, finishing computer code or writing poems.
Educators knew immediately they were facing a generational shift for the classroom. Many professors worried that students would use it for homework and tests. Others compared the technology to the calculator, arguing teachers would have to adapt their assignments to account for the tool.
Institutions such as Sciences Po, a university in Paris, and RV University in Bangalore, India, banned ChatGPT, concerned it would undermine learning and encourage cheating. Professors at schools such as the Wharton School at the University of Pennsylvania and Ithaca College in New York allowed it, arguing that students should be proficient in it.
Tools to detect AI-written content have added to the turmoil. They are notoriously unreliable and have resulted in what students say are false accusations of cheating and failing grades. OpenAI, the maker of ChatGPT, unveiled an AI-detection tool in January, but quietly scrapped it on July 20 due to its “low rate of accuracy.” One of the most prominent tools to detect AI-written text, created by plagiarism detection company Turnitin.com, frequently flagged human writing as AI-generated, according to a Washington Post examination.
Representatives from OpenAI pointed to an online post stating they “are currently researching more effective provenance techniques for text.” Turnitin.com did not respond to a request for comment.
Students are adjusting their behavior to protect themselves amid the uncertainty.
Jessica Zimny, a student at Midwestern State University in Wichita Falls, Tex., said she was wrongly accused of using AI to cheat this summer. A 302-word post she wrote for a political science class assignment was flagged as 67 percent AI-written, according to Turnitin.com’s detection tool — resulting in her professor giving her a zero.
Zimny, 20, said she pleaded her case to her professor, the head of the school’s political science department and a university dean, to no avail.
Now, she screen-records herself doing assignments — capturing ironclad proof she did the work in case she is ever accused again, she said.
“I don’t like the idea that people are thinking that my work is copied, or that I don’t do my own things originally,” Zimny, a fine arts student, said. “It just makes me mad and upset and I just don’t want that to happen again.”
All of this has left professors hungry for guidance, knowing their students will be using ChatGPT when the fall rolls around, said Anna Mills, a writing teacher at the College of Marin who sits on a joint AI task force of the Modern Language Association (MLA) and the Conference on College Composition and Communication (CCCC).
Because universities aren’t providing much help, professors are flocking to informal online discussion groups, professional development webinars and conferences for information.
When Mills spoke on a webinar about AI in writing hosted by the MLA and CCCC in late July, a time when many teachers might be in the throes of summer break, more than 3,000 people signed up and more than 1,700 ultimately tuned in — unusual numbers for the groups’ trainings.
“It speaks to the sense of anxiety,” Mills said. In fact, a survey of 456 college educators conducted by the task force in March and April found that professors’ biggest worries about AI are its role in fostering plagiarism, the inability to reliably detect AI-written text and the fear that the technology will keep students from learning to write and developing critical thinking skills.
Mills and her task force colleagues are trying to clear up misconceptions. They explain that it is not easy to recognize AI-generated text and caution against using software to crack down on student plagiarism. Mills said AI is not only a tool for cheating but can be harnessed to spur critical thinking and learning.
“People are overwhelmed and recognizing that this new situation demands a lot of time and careful attention, and it’s very complex,” she added. “There are not easy answers to it.”
Marc Watkins, an academic innovation fellow and writing lecturer at the University of Mississippi, said teachers are keenly aware that if they don’t learn more about AI, they may rob their students of a tool that could aid learning. That’s why they’re seeking professional development on their own, even if they have to pay for it or take time away from their families.
Watkins, who helped create an AI-focused professional development course at his university, recalled a lecture he gave on how to use AI in the classroom at a conference in Nashville this summer. The interest was so intense, he said, that more than 200 registered educators clamored for roughly 70 seats, forcing conference officials to shut the door early to prevent overcrowding.
Watkins advises professors to take a few steps. They should rid themselves of the notion that banning ChatGPT will do much, since the tool is publicly available. Instead, they should set limits on how it can be used in class and talk with students early in the semester about the ways chatbots could foster nuanced thinking on an assignment.
For example, Watkins said, ChatGPT can help students brainstorm questions they go on to investigate, or create counterarguments to strengthen their essays.
But several professors added that getting educators on the same page is a daunting task, one unlikely to be accomplished by the fall semester. Professional development modules must be developed to explain how teachers should talk to students about AI, how to incorporate it into learning and what to do when a student’s entire submission is flagged as chatbot-written.
Watkins said that if colleges don’t figure out how to deal with AI quickly, they may fall back on surveillance tools, as some did during the pandemic, that track students’ keystrokes, eye movements and screen activity to ensure students are doing the work.
“It sounds like hell to me,” he said.