
Artificial Intelligence is changing how ASU handles academic integrity

The recent rise of text-generating chatbots like OpenAI’s ChatGPT is reshaping how colleges guard against cheating

Arizona State University is changing the way it looks at academic integrity in response to the recent rise of text-generating chatbots like OpenAI’s ChatGPT.

After its release, ChatGPT became the fastest-growing consumer app in history, reaching an estimated 100 million monthly users just two months after launch. In response, a host of major tech companies, including Microsoft, Google, and Adobe, have announced their own efforts to branch into the artificial intelligence field.

With OpenAI and other tech companies releasing programs and services built on generative AI, members of ASU’s Academic Integrity Board have been trying to decide where to draw the line between honest work with AI assistance and outright cheating.

Melanie Alvarez, an assistant dean at the Walter Cronkite School of Journalism and Mass Communication and a member of ASU’s Academic Integrity Board, said the board is hard at work determining a course of action on the use of artificial intelligence in student work.

“You know, spell check is AI. Grammarly is AI,” Alvarez said. “We encourage students to use those to help in their academic success. And why should we limit this technology for students when they can learn more… and embrace different ways of creating content?”

Alvarez said she remains hopeful that students who use artificial intelligence will use it to improve their work, not to do the work for them.

“It may give you great ideas, but the hope would be in a journalism school that you actually take that advice and then do the work yourself,” Alvarez said. “Those are exciting ways to streamline workflows. But for us here at Cronkite, we want to make sure that the work you submit is your own.”

ASU is not the only institution concerned about AI’s effects on academic integrity. Anti-plagiarism services like Turnitin, which checks submitted work against a database of existing writing, have also put up defenses. Turnitin recently announced on its website that its systems can detect content written by generative AI like ChatGPT.

Canvas, the system ASU students use to submit classwork, sends documents through Turnitin during the submission process to evaluate their originality.
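Turnitin does not publish the details of its matching system, but the core idea, checking a submission against a database of prior documents, can be sketched in a few lines. The example below is an illustration only, using TF-IDF cosine similarity from the scikit-learn library with an invented corpus and threshold; it is not Turnitin’s actual algorithm.

```python
# Illustration only: a toy similarity check in the spirit of
# corpus-based plagiarism detection, NOT Turnitin's actual algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-in for a database of previously submitted work.
corpus = [
    "The printing press transformed how information spread through Europe.",
    "Academic integrity policies protect the value of a university degree.",
]
submission = "Academic integrity policies protect the value of a degree."

# Vectorize everything with TF-IDF, then compare the submission
# against each document on record.
vectorizer = TfidfVectorizer().fit(corpus + [submission])
scores = cosine_similarity(
    vectorizer.transform([submission]), vectorizer.transform(corpus)
)[0]

THRESHOLD = 0.8  # invented cutoff for this example
for doc, score in zip(corpus, scores):
    if score > THRESHOLD:
        print(f"Possible match (similarity {score:.2f}): {doc!r}")
```

Real services operate at a vastly larger scale with more sophisticated matching, but flagging anything too similar to a document already on record is the heart of the approach.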

Ethical concerns about AI have also been raised by ASU professors such as Retha Hill, who said she sees OpenAI’s recent decision to stop disclosing the data it uses to train its generative AI language models as problematic.

Until recently, OpenAI disclosed that its models were trained on large swaths of text from the internet. The company’s chief scientist, Ilya Sutskever, said in an interview with The Verge that the decision to stop sharing training details was due to the growing competitiveness of the field.

Hill, the executive director of the New Media Innovation and Entrepreneurship Lab at ASU, raised concerns about where OpenAI is getting the information to train GPT-4, its latest language model. 

Still, Hill said that over time, the competition will benefit users by producing more AI systems to choose from.

“Right now, ChatGPT is sexy. It’s the one out there that everybody’s buzzing about,” Hill said. “But so was the Model T at one point, and then Buick and Chevy came along with a whole lot more cars.”

The recent growth of generative AI has also sparked concerns about people losing their jobs to automation – what would take humans hours, days, or even weeks to complete can now be generated in the blink of an eye by ChatGPT and similar tools.

With experience in a field where AI automation is being introduced, creative marketing director Caiden Laubach said people shouldn’t fear for their job security.

“The AI generation tools right now don’t have layers or iterative abilities. And that’s what humans bring to the table,” Laubach said. “It’s just that humans are the bookends now rather than all the stuff in between.”

Laubach, the creative marketing director for Arizona-based design firm Design Pickle, said he is optimistic about the future of artificial intelligence, particularly in the design industry.

“AI enables clients to see a sort of computerized mockup before it goes to a human designer. It helps people communicate what they want a little bit better and gives the designers a kind of jumping-off point,” Laubach said. “And what we’ve seen is that it expedites the process a lot.”

Amid controversy about the ethics of AI-generated art through services like OpenAI’s DALL-E, Stable Diffusion, and Midjourney, Adobe has announced an art generation service called Firefly, which is currently in beta.

Users will be able to enter a text prompt describing what they want, and Firefly will use artificial intelligence to generate a work that matches the prompt as closely as possible. Most AI art generation services work this way.
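Firefly was still in beta without a public API at the time of writing, but Stable Diffusion, one of the services named above, can be run through Hugging Face’s diffusers library, and the prompt-to-image workflow looks much the same everywhere. A minimal sketch, with the checkpoint and prompt chosen purely for illustration:

```python
# Minimal prompt-to-image sketch using Stable Diffusion via the
# diffusers library; Firefly's own interface is not public, so this
# stands in for the general workflow the article describes.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# The pipeline denoises random noise toward an image that matches
# the text prompt as closely as the model can manage.
prompt = "a watercolor mockup of a desert sunset logo"
image = pipe(prompt).images[0]
image.save("mockup.png")
```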

Laubach said that Adobe is “pioneering an ethical sourcing model” by allowing artists to choose whether or not their art is used to train Adobe’s new Firefly AI service.

He said he is glad to see that companies like Adobe are weighing the implications for artists, like whether they get paid when their work is used to train an AI model. That is not standard practice today: most current models rely on “scraping” the internet for artwork to bolster their generation abilities.

Alongside the Microsoft Office suite, ASU gives its students free access to Adobe’s Creative Cloud suite, which may soon include the Firefly generation service.

Though the recent developments in artificial intelligence have brought it to the forefront of public consciousness, AI has been lurking in the shadows of computer science and engineering for decades.

What has changed is the kind of AI being developed – until recently, most AI programs were narrowly focused on a single task, like playing chess or converting speech to text.

ChatGPT and similar programs represent a newer, more general-purpose type of AI that can perform tasks across different fields, such as writing code or writing poetry, a step toward what researchers call artificial general intelligence.

These new AI programs have been able to handle a greater variety of tasks and requests, but “there are no guarantees about anything that they do,” ASU Professor Subbarao Kambhampati said during a talk on generative artificial intelligence for Manthan India.

As for chatbots’ ability to generate text, Kambhampati said they don’t understand context; any large language model is just guessing.

“It’s important to understand that ChatGPT or any other LLMs are just trying to complete your prompt by repeatedly predicting the next word, given the previous 3000 words,” he said.
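That loop is simple enough to demonstrate. The sketch below uses GPT-2, a small, openly available predecessor of ChatGPT’s underlying model, through Hugging Face’s transformers library; ChatGPT’s own model and decoding settings are not public, and GPT-2’s context window is 1,024 tokens rather than the several thousand words Kambhampati describes.

```python
# "Repeatedly predicting the next word": greedy next-token generation
# with GPT-2, a small public model, standing in for ChatGPT.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Academic integrity in the age of AI"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# One token at a time: score every word in the vocabulary given the
# context so far, append the single most likely one, and repeat.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = logits[0, -1].argmax()  # most probable next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Nothing in the loop checks whether the output is true; the model only ever asks which token is most likely to come next, which is Kambhampati’s point.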

Kambhampati also addressed the concerns people have raised about generative AI passing tests designed for humans.

“ChatGPT is essentially able to pass all the standardized tests, including those by law schools,” he said. “The way we evaluate people’s understanding has to change because we have these shortcuts now. And then, of course, we will realize that these are no longer reasonable tests.”

Since its release in November 2022, OpenAI’s ChatGPT has set off an avalanche of new developments in the AI field. ChatGPT continues to improve alongside Google’s Bard and Microsoft’s Bing Chat, with Meta having entered the AI space as well.

GPT-4, the next generation of the model ChatGPT is based on, has been in development for some time and “demonstrates remarkable capabilities on a variety of domains and tasks,” according to a research paper by Microsoft researchers.

The research group cited several tests they performed with the new model and said it can produce “outputs that are essentially indistinguishable from (or even better than) what humans could produce.”

"Tracking the providence of human versus machine origin may be valuable for mitigating potential confusion, deception, or harm with regard to types and uses of content,” the group wrote about further generative AI development.

ASU does not currently have a policy regarding AI-generated content, but members of the Academic Integrity Board are aware of its potential.

“It took time to go from writing to printing, from printing to telegrams, to using wires to carry information,” said Alvarez. “It’s beyond a lot of people’s wildest dreams, and I think we’re ready for the next shift. It’s going to be exciting.”

