Eric Zuniga, Campus Carrier news editor
James Fox, Campus Carrier deputy news editor
Berry has recently begun efforts to define its relationship with artificial intelligence (AI), establishing committees to set policies on AI use and academic integrity as well as developing resources on AI usage for faculty.
Berry made AI a priority for this academic year at the institutional level, according to Provost David Slade, after faculty members began leading informal conversations about the technology last year. Before classes began, Edward Watson, vice president for digital innovation at the American Association of Colleges and Universities (AAC&U), was invited to lead a one-day conference for Berry faculty on AI.
“In the previous year, very few of us felt equipped to even provide guidance, what to do, how to think about this,” Slade said. “[Watson] gave a plenary talk and gave a workshop. We had some Berry professors and a staff person to do a panel to really try to frame some of [the conversation] around AI for the year. We had never done an opening conference like that before.”
This year, Berry is participating with over 100 other schools in an AAC&U institute on AI led by Watson, who has co-written a book on teaching with AI. Slade said the institute, which will meet online throughout the year, is meant to help colleges work together to define their stance on the technology.
“It really gives us resources, structure and also some accountability toward an action plan,” Slade said. “We’re at a place where [AAC&U] thought, we didn’t just show up and say, ‘what is AI and how should we use it?’ We’ve got a little bit more of a sense of what our questions are.”
Associate Professor of History Christy Snider is part of the group from Berry participating in the institute.
“This group has not changed my mind about allowing students to [use AI],” Snider said. “It’s interesting to be in a group and talk with other faculty members and people at other institutions about different ways faculty across the nation are considering AI and its role in college and the benefits and the problems with it.”
Berry’s committees on AI use are still in the early stages of their work. The Provost’s Office has been leading efforts to provide information about AI for faculty, with plans to host events on the topic and a book study in the spring. The office is working on developing modules about AI use that faculty can include in their courses.
“It’s less about trying to teach faculty how to think about it in a certain way; it’s more convening some faculty discussions around it,” Slade said. “We have high value for faculty having freedom in how they go about their teaching. For me, some of the areas we especially need to give support and maybe leadership in has to do with, are there areas that are off-limits, at least as far as we can tell?”
Recommended student learning goals for AI are also in the works, according to Slade.
“These are learning goals that departments could use, different programs could use, even LifeWorks offices could use,” Slade said. “It’s not just learning tips and tricks. How do you use it ethically, and how do you use it effectively and understand what’s at stake in both aspects of that?”
Academic integrity is one of the main concerns many faculty members have about AI in higher education. Statistics indicate that cheating rates have risen as new AI tools have become available in recent years. At Middlebury College, a liberal arts school in Vermont, the percentage of students who self-reported violating the honor code rose from 35% in 2019 to 65% in 2024, according to a report in the Chronicle of Higher Education.
Berry has formed an academic integrity committee this year that will review current policies and suggest new ones with AI in mind.
“We have a separate committee that is evaluating our academic integrity policy as a whole, not just about AI,” Slade said.
Slade hopes the new AI resources will help faculty set clear expectations about how AI can be used by students.
“Expectations have to be clear; there have to be ongoing discussions about it,” Slade said. “It may be that in many classes, we have to adjust things. We need to not set students up for failure by being blind to how certain assignments might be tempting.”
The AI committee is also hosting focus groups and conducting surveys to understand how students are using the technology. The results from these efforts will be used to develop Berry’s AI learning outcomes.
“We don’t just want to launch into our learning goals without stopping a little bit first and listening to where our students are coming from,” Slade said.
Sophomore physics major Madison Pierce believes that AI is often overused in ways that can hamper learning.
“For a lot of STEM classes, you really can’t use AI to properly learn the material — it’s just to give you answers,” Pierce said. “Students who use AI to do their homework in especially STEM classes, they don’t get good grades. All they’re doing is getting good grades on the homework and then failing the tests.”
Snider has allowed her students to use AI in some of their coursework, provided they follow guidelines about citations. She said that students sometimes still struggle to assess the accuracy of work produced by AI.
“I have students who have to write papers based on historical documents, and they’re not always fact-checking the things they’re having AI produce,” Snider said. “Figuring out how to allow students to use AI in a way that still ensures they are producing accurate work is the thing I’m probably thinking most about now.”
The recent growth of the AI industry has prompted concerns about equity and environmental issues. AI models rely on massive amounts of computing power, which puts strains on water and energy resources. The free versions of AI models produce lower quality results than paid versions, according to Slade, raising worries about the equity of AI access in education.
Slade believes Berry should be an ideal place to discuss all ramifications of AI.
“I think that we have to be very cautious to attach our educational model — the very core of our mission — to something that is just being used to exploit capital, but I think there’s more to it than that,” Slade said. “We should be the perfect place to sit down and have a reasonable conversation about that and maybe even debate.”
Slade said that Berry’s AI efforts will be important in giving students and faculty the ability to think critically about the technology.
“Without it, we leave our faculty out there just to figure it out on their own; we’re also not having any kind of conversation with students,” Slade said. “I think Berry is very well poised to equip students to think about this as effective agents, getting things done, but also to do this in a way where you maintain your sense of character and understand the implications of what’s at stake for our community.”
