No panic here: TWU prepares for ChatGPT

A drawing of a Viking warrior created by artificial intelligence.

Feb. 20, 2023 – DENTON – The world, as it often does, is losing its mind.

The concern (or outright freak-out) erupted in November with the introduction of a dramatic advance in artificial intelligence, better known as AI to friends and enemies.

"It scares the hell out of me," one commentator said in an overheated video rant. "It's going to eliminate half the work force. Robots are going to take over."

The trigger of this terror is not the deadly robot army of a malevolent AI in Daniel Wilson's novel Robopocalypse. It's a thing called ChatGPT.

Yeah, doesn't sound too scary.

Okay, so maybe ChatGPT will never be a villain in the Marvel Cinematic Universe, but it does raise serious and legitimate concerns.

ChatGPT is an AI-powered website that mimics human conversation, writes and debugs computer programs and writes screenplays, poetry, and stories. It has also written essays and research papers, and answered test questions. Its cousin, DALL-E, can even create art.

AI's ability to create music, art and stories has raised eyebrows, but it's that bit about essays and papers and tests that set off klaxons in the halls of academia. Responses ranged from bombastic doomsaying to excitement about its possibilities.

"There was a lot of anxiety about it," said Genevieve West, PhD, chair of the Texas Woman's University Department of Language, Culture & Gender Studies, and professor of English. "What does it mean for teachers in the classroom? What does it mean for programs, and particularly writing programs where we are teaching students how to use sources ethically? Then also thinking about academic dishonesty, and what that looks like at the university level?"

However, just as calculators and Wikipedia once horrified teachers, ChatGPT and its inevitable progeny are here to stay.

"Kids in school today are going into jobs where not everyone they work with is human," Richard Culatta, CEO of the International Society for Technology in Education, told USA TODAY.

In other words, you better get used to it. And you better deal with it.

Which is what West is doing. She organized a department workshop led not by an outside computer expert, but by one of TWU's own. Daniel Ernst, PhD, associate professor of English, teaches courses on science and technical communication and public rhetoric, which is the study of persuasion and communication.

"I was an English major in college, but in grad school I studied automated language technologies," Ernst said. "This is a perfect synthesis of the linguistic and the numerical. Both are languages, and this technology is the fusion of both. I think that there's an opportunity for people in the liberal arts to study and contribute to the scholarly conversation around some of these technologies.

"Ironically, a lot of computer science scholars and academics have similarly felt some fear, because one thing that ChatGPT is really good at is coding," Ernst said. "You can ask it in natural language to write you lines of code and it'll do it. That's the real revolution. There's some fear in the computer science world as well."

The fear of AI runs deep, and it has long been the stuff of human nightmares. But the technology also gives us really cool stuff, and people are suckers for cool stuff. AI is inextricably bound up in humans' desire for innovation and their misgivings about the consequences.

"The development of full artificial intelligence could spell the end of the human race," physicist Stephen Hawking said in 2014. As he explained in Brief Answers To The Big Questions in 2018, “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Perhaps the first warning was in an article by Samuel Butler, "Darwin Among the Machines," which contemplated a machine-ruled future:

We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.

Butler's article was published in 1863 – 160 years ago.

The 1889 novel The Wreck of a World by William Grove, the 1920 play R.U.R. by Karel Čapek, and the 1934 film Master of the World told of sentient robots rebelling against humans.

In 1950, Alan Turing's article, "Computing Machinery and Intelligence," considered whether machines can think. That same year, legendary science fiction author Isaac Asimov's collection of short stories, I, Robot, postulated the Three Laws of Robotics (a robot may not injure, or allow by inaction, a human to come to harm; a robot must obey orders from humans except where such orders would conflict with the First Law; and a robot must protect its own existence as long as that does not conflict with the First or Second Law). In the mid-1950s, computer scientist John McCarthy coined the term Artificial Intelligence, "the science and engineering of making intelligent machines."

In 1967, two decades before The Terminator and three decades before The Matrix, Harlan Ellison's short story "I Have No Mouth, and I Must Scream" told of a self-aware supercomputer that commits genocide against the human race. In the 1970 movie Colossus: The Forbin Project, a supercomputer designed to protect the United States seized control of the world's nuclear arsenal and, with it, the world. Colossus even demanded the execution of meddling computer scientists under threat of nuclear retaliation.

Sure could have used Asimov's Three Laws then.

But neither dystopian paranoia nor reasonable caution has slowed the pace of AI development, hurling progress at a society not always ready for what's next.

The concern in academia over ChatGPT is less dramatic but no less significant: dishonesty, as in a student turning in an essay created by a computer.

"A lot of the fears educators have expressed about plagiarism and academic dishonesty are legitimate," Ernst said. "I don't think we can call what ChatGPT does plagiarism because it's not copying someone else's words. The text it generates is unique. Now, is it academically dishonest to generate text and pass it off as your own? I think, yeah, you can make that argument."

Banning AI usage, Ernst believes, is not the answer, nor is software that allegedly can detect text generated by ChatGPT.

"The irony, of course, is it relies on AI technology to detect AI technology," Ernst said. "From what I've seen, these programs are unreliable and inconsistent. The fear I have is if educators start adopting the use of these, it's going to create antagonism between teacher and student, and there's going to be false accusations. There have already been students accused of using AI when they haven't.

"I think it's ultimately just futile to ban ChatGPT," he added. "I'd rather accept this as a new tool, much like we accepted the Internet, which I'm sure also freaked out a lot of educators. Curricula has adapted because that's what we've done for millennia in education. If you go back to ancient civilizations, a lot of people perceived writing as a threat to our ability to recall and memorize things. Fighting these technologies is going to take more energy and resources than it would to rethink how to teach with them. To be clear, I think there's going to be a lot of growing pains, but it's not going to end education and I don't think it's going to atrophy student writing ability. We have an opportunity to rethink some of the ways that we assign writing and assess student writing and ability."

Among those new ways could be much more interactive teaching. For example, discussing a poem in class and having ChatGPT analyze it. Students could then critique ChatGPT's analysis, or explain where its reading was wrong or shallower than it could have been.

Fact is, ChatGPT is not infallible, and it lacks the in-depth understanding that humans possess.

"If you're an expert in something, you can pretty easily see when it makes errors, and it does make errors," Ernst said. "It's an inch-deep, mile-wide kind of thing because it's been fed the entire Internet. It knows lots of things pretty well. I don't think it's going to threaten, professional writers, scholars and educators."

It's that lack of depth that is the AI's weakness.

"I look at some of the stuff that ChatGPT produces, and it's not horrible, but it's also not particularly good," West said. "I have a friend who asked it to write country song lyrics, and it did. And it was hilariously bad. Even with some of the more traditional expository writing I've seen it produce, it's superficial. There's not a lot of depth. There are no specifics. It makes claims, but doesn't really support them. So I think there are going to be some interesting things to do with it, because often we struggle as faculty, especially if we've created a new writing assignment, to give students a sample to critique in class. I can see having ChatGPT generate a couple of pages and have students critique it. You're not critiquing a former student or a student in the class, but you're looking at the limits of what AI can do."

Why, then, are some schools considering banning ChatGPT? In January, USA TODAY posed that question to ChatGPT:

“It is possible that some schools or educational institutions may have policies in place that prohibit the use of certain technology in the classroom or during exams, particularly if it is deemed to be a distraction or a potential aid for cheating," ChatGPT responded. "Additionally, the use of AI models such as ChatGPT raises ethical concerns around the potential misuse of the technology, such as impersonation or spreading misinformation. Therefore, some schools may choose to ban or restrict the use of such technologies as a precautionary measure."

The challenge to education, however, is only going to increase. In January, Microsoft announced a multibillion-dollar investment in OpenAI, the company behind ChatGPT. Microsoft plans to deploy ChatGPT into Bing, Word, PowerPoint, and Outlook.

But what Ernst would like to see is the development of policies for the use of ChatGPT, similar to those on conventional research and sources, and to explore the potential it presents.

"I could see it becoming a new feature of our published writing," Ernst said. "We could cite it. It's not going to replace all writing. It could replace the boilerplate, the sort of clear-your-throat writing that we don't like doing in the first place. I think it will revolutionize writing on an amateur level, and I do think it will change the way we teach writing. That's where I think the concerns are most legitimate. I think we will adapt, and we will come up with new ways. We will redefine writing, but writing will still exist."

"As a department, we are going to create a policy that addresses AI technology in the classroom, so students know what the expectations are," West said. "We owe them that upfront. I don't see us as a department saying, no, we don't use this and we don't encourage the use of it."

Among the possibilities are entirely new teaching methods and tools that have only existed in science fiction.

"I could see imagine educators assigning students to create a dialogue with ChatGPT. Instead of the essay, we have the dialogue," Ernst said. "Create a dialogue with a generative AI and assess students on the kinds of insights and observations they can extract from the generative AI. That's going to be a new, exciting genre. Someone built AI with various historical characters so you can talk to Einstein or Socrates. Have students generate a dialogue with AI Socrates. We assign reading, and that's good. I think that that is a good way to learn and I'm never going to get rid of that. But this is a new synthesis of interacting with text. When you read a book, you can't ask it questions, but now you can sort of ask Socrates, in a sense, questions."

Fortunately, Ernst is not alone in his beliefs. The New York Times recently published an article by Kevin Roose, "Don't Ban ChatGPT in Schools, Teach With It," which argued that ChatGPT's "potential as an educational tool outweighs its risks." Scientific American concurred in an article, "How ChatGPT Can Improve Education, Not Threaten It."

"I want to try to have a positive outlook," Ernst said, "rather than just throwing my hands up and saying, the computers are coming for us."

"I feel like there's excitement mixed in with the anxiety," West said. "I wouldn't say the anxiety has completely disappeared, but I'm definitely more knowledgeable and I'm actually excited about the possibilities. I'm kind of curious to see where it takes us."

Media Contact

David Pyke
Digital Content Manager
940-898-3325
dpyke@twu.edu
