Without a clearly defined set of guidelines governing the use of artificial intelligence in classrooms at the University of Arizona, instructors and students are working together to explore the technology's potential benefits and risks.
These policies (or lack thereof) reflect a marked difference in approach between the UA and Arizona State University, which recently announced a partnership with OpenAI. ASU was the first higher education institution to partner with OpenAI, an arrangement that granted the school access to ChatGPT Enterprise.
While its peer institution in the state takes strides to merge AI and education, the UA is still grappling with how to implement course policies that integrate AI. At a town hall held Feb. 6 at the UA, technology experts, university instructors and UA students came together to discuss perceptions of and potential uses for AI in different learning environments.
The UA currently leaves AI guidelines to the discretion of individual instructors. However, a mix of fear, optimism and hesitation persists among some university community members over how AI can be effectively employed.
Greg Heileman, vice provost for undergraduate education and a professor of electrical and computer engineering, said effective AI integration requires clear communication with students about classroom policies, something that has historically been lacking when it comes to this technology. That communication gap exists not just between students and instructors, but also among instructors and faculty across the university.
The lack of a coherent policy for AI can be attributed, in part, to different perceptions of the technology across disciplines.
Emily Jo Schwaller, an assistant professor of practice in the University Center for Assessment, Teaching, and Technology, recognized that professionals in the humanities may have a completely different outlook from those in STEM, which leads to discrepancies in instruction from class to class.
“I went to two different disciplines, one was more STEM-based, and someone there said something like ‘I’m a little nervous to say this, but I’m actually a little worried about AI,’” Schwaller said. “And I went to [the] humanities department and they said ‘I’m afraid to say this, but I actually really like AI.’ It’s interesting that on our campus there’s this huge concern about even being able to voice your opinion within the discipline, and I think it really is rooted in […] how we think about writing within those disciplines, or how we think about writing in general.”
These contradictory approaches can create confusion for students who, in the span of a day, might attend three or four classes, each with different guidelines for the use of AI.
Other key focal points in these community-wide discussions are accessibility, equity and representation. While AI has the capacity to serve as an equalizer in many settings, some of the panelists also commented that, historically, AI and the conversations around it have excluded or discriminated against certain groups.
“AI actually kind of leveled the playing field for individuals with disabilities […] for example, if grammar is an issue for me, there’s a program that’s out there that will help me with my grammar, and it’s AI generated,” said Dawn Hunziker, associate director of the UA Disability Resource Center. “There’s dictation that’s built right into Microsoft products that everybody’s using to write emails, write documents. I’m gonna have carpal tunnel surgery, and I can use that dictation software in order to continue doing the work that I’m expected to do, and it’s all AI. There’s even a program called Seeing AI, where, as a blind individual, I can use my phone to scan this room, and it’ll tell me how many people are in it.”
Schwaller also expressed concern that AI has shown the capacity to erase linguistic diversity, due in large part to the fact that it is "predominantly coded by certain populations" and is not representative of the diverse populations who may be using it.
These problems are apparent in Turnitin, an online plagiarism detection service that has recently faced backlash over reportedly false AI-detection flags that disproportionately affected certain student populations.
A 2023 Stanford study found that such software holds a clear bias against writing by non-native English speakers. According to the report, GPT detectors frequently misclassified writing assignments completed by non-native English speakers as AI-generated. This finding supports the concerns voiced by many members of the university community that AI use in the classroom might lead to a less equitable or representative learning environment.
Educating instructors, not just students, on the merits and uses of AI is an important next step in the UA's integration of AI into the classroom.
“We can’t expect professors to just come out of the gate knowing how to use AI or what it’s all about,” said Bryan Carter, a professor in Africana Studies. “We need to educate ourselves and professors going through this on how to maybe reorient some of their assignments […] some of our exercises that will incorporate AI as a tool, as opposed to assuming that we know how it works, and how to guard against it.”
This education involves continuing and enhancing existing resources and workshops on campus. The UA offers a variety of trainings and course resources for instructors and students alike, on topics ranging from AI literacy to AI ethics and equity. The UA's AI Access & Integrity Working Group, made up of smaller teams of faculty, staff and students from across campus, continues to hold conversations like the Feb. 6 town hall in order to encourage continued discourse, curiosity and policy change.
Arizona Sonoran News is a news service of the University of Arizona School of Journalism.