Artificial Intelligence

AI, like ChatGPT, is creating teaching challenges on college campuses

Plagiarism is nothing new, but the role artificial intelligence is playing in it is now a concern at colleges across the country.

Scripps News

There’s a new frontier in the world of higher education: artificial intelligence, specifically AI that can generate text, like ChatGPT. It can write whole essays when a user simply asks a question or specifies a topic.

However, its use can be hard to detect, because the AI generates a unique response each time; no two are identical. That raises real questions about how students complete some class assignments.
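
Why do the responses differ? Text generators pick each next word by sampling from a probability distribution rather than always taking the single most likely choice, so repeated runs of the same prompt tend to diverge. The toy Python sketch below illustrates only that sampling idea; the tiny word table and the generate function are invented for illustration and bear no resemblance to ChatGPT's actual model.

import random

# A toy next-word table: each word maps to possible continuations
# with weights. This stands in, very loosely, for the probability
# distributions a real text generator computes.
TOY_MODEL = {
    "the": [("essay", 0.5), ("student", 0.3), ("professor", 0.2)],
    "essay": [("argues", 0.6), ("explores", 0.4)],
    "student": [("writes", 0.7), ("asks", 0.3)],
    "professor": [("assigns", 0.5), ("grades", 0.5)],
}

def generate(start, length=3):
    """Sample a short phrase, picking each next word at random."""
    words = [start]
    for _ in range(length):
        options = TOY_MODEL.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options)
        # random.choices samples according to the weights, so two
        # runs of this function usually yield different phrases.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the essay argues" or "the student asks"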

For perspective, we turned to three professors at three different universities.

English professor Paul Fyfe at North Carolina State University: “We think about AI in the wrong way.”

Lee Tiedrich, a professor at Duke Law School: “I'm an optimist when it comes to technology.”

Caleb Husmann, who teaches political science at William Peace University: “The bottom line is I want them to learn.”

ChatGPT, though, can complicate that effort.

“Nobody's got the answer at this point. We're just figuring it out,” Husmann said. “And I think being candid with your students always goes well.”

Altering class instruction is another option.

“We like debates and simulations and like games and stuff like that -- that ChatGPT can't, you know, do a simulation as a lobbyist for you,” Husmann said. “But it doesn't change the fact that a classic essay is a fundamental part of a college education.”

That’s something English professor Paul Fyfe decided to tackle several years ago.

“I've been interested in text generating for a few years, and starting in Fall 2020, have been assigning students to try and experiment with it, requiring it to be used to ‘cheat’ on their final papers as a way of thinking about the potential downstream consequences,” he said.

That may sound like a win for the students. As they found out, though, it was anything but.

“A lot of them are like, ‘Wow, we're going to get away with something. This is going to be so easy. I'm just going to press a button,’” Fyfe said, “and all of them realize it doesn't work that way—that the AI is troublesome.”

That’s partly because what the AI generates can be vague or just plain wrong: it draws on information from all kinds of online sources, some of which contain inaccuracies.

“To pretend that it doesn't exist, I don't think is the right approach,” said Lee Tiedrich, a distinguished faculty fellow at Duke Law School who focuses on ethical technology.

She said embracing ChatGPT may be a better approach.

“I think we need to have a national learning moment about artificial intelligence, which is something I've been saying for a while, because students need to understand how it works,” Tiedrich said. “What our students need to learn is, ‘What are some of the beneficial uses? How can it help you with research? But what are the responsibilities that come with using the technology?’”

It's a responsibility that students may be facing more now than ever before.