sAInt
vincent
and the future of learning
Story: Rob Biertempfel | Photography: Liz Palmer
At Saint Vincent College, AI is not just a buzzword—it’s becoming a tool for both students and professors to explore, challenge, and integrate into their learning. From education students using AI-powered lesson planning tools to writing professors guiding students through the ethical and creative implications of AI-generated text, faculty members across disciplines are finding ways to engage with artificial intelligence while encouraging critical thinking. But how does this technology fit into the Saint Vincent classroom? Dr. Tracy McNelly, Dr. Sara Lindey, and Mallory Truckenmiller Saylor, C’19, talk about how AI is shaping education at SVC—and where it might be headed next.
Is this technology effective?
Decide for yourself—the paragraph you just finished reading was generated by AI.
McNelly: We’ve embraced a bunch of AI tools for our prospective teachers. We’re training them to use those tools to gather ideas and find information about how to best support students.
There’s a tool called Magic School that has [abilities] like assessment builders. Students might need to develop a rubric for an assignment they’re creating for their classroom. They can use this tool to develop that rubric with the understanding that sometimes AI is biased and incorrect, so it’s kind of used as a basis. Let’s say I’m creating a lesson, and I have students in my classroom with specific disabilities. I can prompt AI to [generate] ideas for scaffolding the lesson. That’s a good way to use it—as an idea generator and a starting point, especially for new teachers.
Lindey: I agree that people need some common knowledge and a foundation. As a novice, it’s helpful to see what’s already been said and done.
McNelly: Back when I first started teaching, we didn’t even have the internet. (laughs) Things have changed so much, and I don’t think [using AI] is frowned upon because there are so many good ideas out there. I don’t think it’s taking away our creativity. It’s just giving us some guidance on how to think through things. Schools are embracing these tools, so it would be a disservice to our pre-service teachers if we didn’t prepare them for what other teachers across the US are using right now in their classrooms.
Dr. Aaron Sams, a faculty member in our department, told me he is part of an advisory committee for the Pennsylvania Department of Education that is looking at the creation of an endorsement in AI for teachers. It would [consist of] a series of courses and workshops that would then lead to an endorsement. The Pennsylvania Department of Education recognizes these tools are here and they’re not going away, so we should explore how educators can use them ethically and responsibly within their school systems.
Truckenmiller Saylor: One thing that is really valuable about our community at Saint Vincent is that, because we’re small and our class sizes are small, we have a lot of room to collaborate with students on AI and its introduction into our classrooms.
As members of the Core Writing Team, Sara and I help organize the curriculum we teach to all of our freshmen in their first writing class at Saint Vincent. We have a big task looming over us: how to introduce students to writing with AI. We try to involve the students as much as possible in the way we think about AI, so it’s very collaborative.
The other day, my students and I had our discussion about AI. I tried to make a space where they could share their thoughts and be vulnerable about what they think of AI, their anxiety surrounding this new technology, and what they’ll be expected to do with it in the workforce. We decided we can experiment together and navigate this new ethical question together, but we have to be transparent with each other about what we think and how we’re using it, such as how you might be innovating an essay with AI or injecting AI into it.
We always come back to the root principle that their ideas are more important than this technology and that AI should supplement them. Ultimately, what we’re doing here is training them to be critical thinkers and contributors to their future professions, which can’t happen if they rely too much on AI.
Lindey: When you’re talking about writing in a humanities class, it’s rare that a chatbot will startle the student into a new way of thinking. It’s more likely to present what’s already been said, which is often biased and limited to what’s freely represented on the internet; it does not go beyond paywalls to academic articles. AI has a limited knowledge base, and it’ll present you with something that might anesthetize you instead of enlivening you. Now, I do think AI is amazing at detecting cancer and finding the safest driving route to the airport. But in terms of becoming a path toward creative, original, insightful thinking, that’s just not what it’s best at. That’s why we have no problem with AI being used by student writers.
Truckenmiller Saylor: We show students what AI is and how they can use it for simple things like checking grammar or starting an outline. Our role also is to show them how they can be innovators and better thinkers. The most exciting conversations we’re having in our classrooms come when our students realize they are smarter than the chatbot. Then they’re not so intimidated, and when they bring that awareness to bear, it gives them a unique perspective on whatever I’m asking them to write about. That has been empowering for a lot of students.
We teach them what a large language model is, what some of the ethical concerns are, how it is made, and what it builds upon. After that, we discuss what they can offer that AI can’t. How can they make our world better without relying too much on AI? We’re trying to show them how to maximize AI’s abilities without using it as a crutch.
Lindey: Exactly. AI can’t do everything, but let it be really good at what it can do.
“AI is great at noticing patterns. It’s great at creating a large volume of ideas that you can then organize around. The goal is not, ‘How can we be better with AI?’ but, ‘How can we use AI for its strengths?’”
— Mallory Truckenmiller Saylor
Lindey: Let it pattern match and see what it comes up with at the speed of a computer, so that our left brain doesn’t have to filter through all that information. Maybe AI can show us where we need to put newly constructed barrier reefs for the coral to grow on—that’s amazing, yeah, but AI has limits.
McNelly: I recently read that Governor Shapiro is investigating cyber schools that operate mostly with AI. We don’t have [robot teachers] yet, and, hopefully, we won’t ever have them, because teaching is about humanity, right? It’s being present with one another, lots of different people in the same space.
However, teaching is more than just planning lessons. Teachers’ jobs are enormous today. What we hope AI brings is the ability to alleviate some of the extraneous things teachers have to think about, give them ideas, and then allow them to make that fit for the students.
Lindey: Yeah, I hope AI can free us from the burden and bureaucracy of paperwork so we can be present when we’re present, and endorse our creative enterprises.
McNelly: Yeah, absolutely.
Lindey: So, where does it go from here? I think it’s an open chessboard right now, because a lot of faculty rightfully have different perspectives about it. It’s good for students to encounter those different perspectives and have different experiences in classrooms—different tolerances, different levels of inquiry, different ethical questions. I mean, I’m deeply concerned about how the information is produced and collected and how the sausage is made. At Saint Vincent, our students are probably going to continue to experience a diversity of perspectives, and that’s probably good.
“Artificial intelligence methods are demonstrating great potential in healthcare, including in the early diagnosis of complex diseases like Alzheimer’s. I have worked with collaborators at the University of Virginia and the University of Pennsylvania to develop AI and machine learning algorithms for early prediction and progression of Alzheimer’s disease and related dementias (ADRD). These methods must consider the disparities that we observe in this disease, with more older females and African American patients being diagnosed with dementia than older white males. Therefore, we had to build fair and equitable AI algorithms to avoid having them exacerbate existing biases endemic in the healthcare system. Our algorithms were able to uncover specific social determinants of health that were important in the early progression of ADRD, especially education, indicating that a patient’s education is important in the manifestation and progression of the disease. AI is the future of our world, and therefore, we must be careful to develop AI tools that are fair and equitable to ensure a better future.”
— Dr. Mary Regina Boland, C’10, assistant professor of data science
“I took a cautious approach to AI. During the fall 2024 semester, I led a group that read the book Staying Human in an Era of Artificial Intelligence by Joe Vukov. He used a Catholic lens to think about how the widespread adoption of AI affects our humanity. I’ve had those thoughts in my mind, and that’s contributed to my hesitancy.
Something I've learned from my own work with ChatGPT and other generative AI is the more expertise you have on a given topic, the more useful AI is for you. That's my rationale for keeping it out of my intro-level classes. I want students building that expertise on their own and not using AI as a crutch.
I'm experimenting with it in my upper-level classes. In fall 2024, I taught an Advanced Business Analytics class, and we used it for coding. It was a good opportunity for me to show how I use AI professionally, using it in place of a Google search.
In spring 2025, I taught an Econometrics class that included a semester-long research paper. I had a couple of what I called “AI lab days,” in which the students kind of taught themselves what was directly relevant to their research paper using their favorite AI. I was there as sort of a guardrail to make sure there were no hallucinations [misleading information generated by AI]. It’s an effective use of AI—using it to learn, not to replace learning.”
— Dr. Justin Petrovich, C’14, assistant professor of statistics and business analytics and chair of the Marketing, Analytics and Global Commerce Department
“I have experimented with ChatGPT in several of my physics laboratories to help students understand the limits of large language models such as OpenAI’s GPT. Students need to recognize how to responsibly use these models, and that requires an understanding of the models’ limitations and the unavoidable effects of relying on AI tools.
As an example of model limitations, I suggested students give ChatGPT my rubric for their short writing assignments in Physics 1 Lab and a copy of their assignment and ask it to grade the assignment based on the rubric. What they discovered almost every time was that the model could give some feedback on each grading category, and even assign a score to the category, but it could not add up the numbers for a final grade at the end, despite many, many re-tries. The lesson? ChatGPT is a language model, not a computational model, so it is unable to do even basic addition, let alone solve physics problems!
A major concern I have about student use of AI models is the temptation to short-circuit their intellectual formation by offloading certain tasks to the AI, especially when it comes to writing. We’ve already outsourced our memories to the internet, where we can find basic information anytime we want. The problem with outsourcing our thinking and writing is that we give up parts of what make us human—creativity, growth and ownership of our voice.”
— Father Michael Antonacci, O.S.B., PhD, C’07, S’14, assistant professor of physics
“I teach a business technology course, and I've been using ChatGPT almost since it was released. I mostly use it to try to help my students complete some of their assignments. Structured Query Language (SQL) is a popular programming language for extracting information out of a relational database. We use MySQL, the free, open-source version of it, in MIS.
My students can’t cheat on their assignments by using ChatGPT because the AI doesn't know the data model for my assignment—I change it every year. They use AI interactively to ask questions, and it helps them with the syntax. SQL is a programming language, so when they get an error, instead of asking me about it, they'll ask ChatGPT to figure it out. I show them some good prompts because the better the prompts, the better the responses from ChatGPT.
In fall 2022, before ChatGPT was released, I had fifty students in my Business Technology class. The students visited my office forty-eight times and sent me ninety-two emails with questions, and the average grade on their assignments was 84.22%. In spring 2023, when we started using ChatGPT, there were nineteen office visits and forty-eight emails, yet the average score went up to 87.58%. In fall 2023, there were fourteen office visits and fifty-two emails, and the average grade was 88.45%.”
— Robert Markley Jr., instructor of business administration
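A minimal sketch of the kind of exchange Markley describes, using a hypothetical schema (his actual data model changes every year). Under MySQL's default ONLY_FULL_GROUP_BY setting, the first query fails, and pasting the error message into ChatGPT typically surfaces the missing clause:

    -- Hypothetical tables for illustration; not the course's data model.
    -- First attempt: aggregates SUM(o.total) without a GROUP BY,
    -- so MySQL rejects the nonaggregated column c.name:
    --   SELECT c.name, SUM(o.total)
    --   FROM customers c JOIN orders o ON o.customer_id = c.id;
    --
    -- Corrected query after asking ChatGPT about the error:
    SELECT c.name,
           SUM(o.total) AS lifetime_spend
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name;

The point is the workflow, not this particular query: the student supplies the schema and the error text, and the model explains the syntax rather than answering the assignment.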
“I know some of our students have been playing around with AI and using it—the toothpaste is out of the tube. We can’t just ignore the technology. I think it's appropriate for us as instructors to face those challenges head-on and talk about how to use it ethically and appropriately. I chose to have students use ChatGPT a little bit in my Science Writing class, so they can get firsthand experience of what it's good for and what it's not good for.
I’ve read some literature in scientific journals about large language model AI tools for scientific writing. In ACS Nano, a journal of the American Chemical Society, there was an opinion piece that laid out the appropriate and ethical uses of AI for scientific writing. I had all the students in my class read this article, and then we discussed it.
In science, we're supposed to be objective, look at data, and draw on our breadth of knowledge to make connections and be creative about the conclusions we draw. By their nature, these large language model AI tools use previously established information. They draw a connection to the logical next step, but that isn't always the creative choice, so they're really ineffective at being creative.
Another downfall is that AI often does not credit the sources of its information, and sometimes it pulls information that is not true. I asked ChatGPT to write a passage for me and then cite its sources. Some of the citations it produced were papers I could not find, which makes me very concerned that they are made-up sources.
On the other hand, AI can help you overcome writer's block if you use it for idea generation. Let’s say I want to write an introduction for a research project about making electrodes from pyrolyzed bread. ChatGPT can give me an outline for what that introduction would look like and a bulleted list of topics I might want to talk about. It can give me a place to start and apply my creativity and acumen as a researcher.
It can save a lot of time for us in the writing and research process. AI also can be very effective at proofreading and editing your document. It will not change any of your ideas, but it will change sentence structure and make your document sound much more professional. It does a really nice job for scientific writing.
Some of my colleagues in creative writing say, “Oh, it makes you sound like a robot.” Well, in scientific writing, we don't really mind sounding like a robot at times! The focus of my Science Writing class is striving for clarity and brevity. AI can be really helpful with that.
When I started playing around with ChatGPT, I was like, “Oh, my gosh, this is really powerful—but it could be dangerous.” I walked around campus and asked students if they’d ever let ChatGPT write something for them. I got a lot of looks, like, “Is he trying to catch me?” After a little coaxing, a lot of students told me they were using it. Unless we confront it head-on and talk about appropriate usage, we run the risk of students getting themselves into trouble by putting out false information.
One thing I keep emphasizing to students is that anything that AI generates shouldn't be your final product. It's a step. It's a tool to help your creativity.”
— Dr. Mitch Taylor, assistant professor of chemistry