

AI returns!

By KC Meredyk

Last semester, Wheel reporter Leah Keith wrote “Are Robots Taking Over?” For the article, Keith asked several campus faculty and staff members for their opinions on AI use. Interviewees expressed mixed views, with members of the Writing Center warning of its potential drawbacks. At the time, St. Kate’s had no official policy on AI use. The university has since implemented a policy on AI writing tools. St. Kate’s academic integrity policy states, “Using generative AI tools (e.g., ChatGPT) on assignments without permission, in improper ways, or without citation or affirmation is considered a violation of the St. Catherine University academic integrity policy.”

This policy leaves the decision of whether to allow AI use in the classroom up to individual professors. Dr. Kristen Lillvis, chair of Art and Art History and Literature, Language and Writing, is teaching Writing in the Digital Age this semester. For the class, Lillvis and their students will use ChatGPT on assignments to explore AI writing tools’ potential uses and limitations. Lillvis emphasizes critical AI literacy, a principle similar to information literacy: one should evaluate what AI produces and use it appropriately. OpenAI, the developer of ChatGPT, shares this sentiment; in its guidance on potential classroom uses of the program, the company suggests using it to teach about bias and AI literacy. Lillvis also expects that St. Kate’s social justice focus will lead to more discussion of AI as biases in the data it is trained on become apparent.

Dr. Steven Wandler, director of Writing and Professional Communications at St. Kate’s, has led faculty meetings about AI use. According to Wandler, there are a variety of perspectives among faculty on the use of AI, with no consensus. Wandler notes that in the current moment, AI is a hot topic in academia.

“I think that St. Kate’s will develop a policy in the near future that requires instructors to articulate to students how they can, and how they cannot, use AI in their courses.” Wandler continues, “I think we’ll have a policy that every class must have, and must clearly state, its policy on AI use in the context of that class.”

Wandler acknowledges that AI could speed up student progression in fields with less emphasis on writing skills. For his classes, however, Wandler bans AI use.

“Since I … mainly teach courses that involve deliberately working through the writing process, thoughtfully analyzing ideas and texts and critically thinking about claims and arguments, using AI to do any of those things would be detrimental and defeat the purpose of the class.” Wandler continues, “I don’t want something like ChatGPT to do those things for them. These writing abilities are so important to develop that using AI to do them—or, more accurately, to mimic doing them—for us would take away from vital aspects of learning, at least in my classes.”

Opening page of AI Classifier, developed by OpenAI, the maker of ChatGPT. AI Classifier was intended as a way to detect the use of tools like ChatGPT. Credit: OpenAI

Currently, there is no reliable way to detect students’ use of AI writing software. Detection tools have been created, such as AI Classifier from the makers of ChatGPT and the AI detection feature offered through Turnitin, which Vanderbilt University adopted and later disabled. Both struggled with accuracy, turning up false positives, particularly for non-native English speakers and neurodivergent people. In one case, a colleague accused a Purdue professor’s writing of being AI-generated. Both tools have since been shut down or switched off. As a result, many educational institutions have altered their policies to assume that students will be using AI and finding ways around any potential blocks.

The statement at the top of OpenAI’s page explains that the AI Classifier it developed for detecting AI use in writing has been taken down due to its low accuracy rate. Credit: OpenAI

I ran a brief experiment with ChatGPT while writing this article, asking it to come up with a title for this piece. I gave it three different prompts and had it generate three responses; in two cases, it gave me a repeat. None of the responses were creative: all were positive toward AI, followed a clear formula and used similar language. AI likely will not be coming for any writer’s job; it cannot replicate the creativity of an actual person. But if it finds mainstream success, it will change writing for everyone.

A series of screenshots demonstrating how ChatGPT works. I entered the prompt, “Come up with a title for a student news article about university policy on AI,” and generated three responses. ChatGPT came up with two distinct titles: “Navigating the Future: University Policies in the Age of AI” and “Navigating the Future: Exploring AI Policies at Our University.” Credit: OpenAI

In many ways, the jury is still out on AI. There is considerable back and forth over how AI writing tools may affect students’ abilities as writers, with some arguing the tools will significantly weaken communication skills and others claiming they will make students better writers if used as teaching aids. AI writing is here and does not look like it is leaving, so learning to work with the technology and developing AI literacy skills seems the best path forward.
