For the second time, I have been accused of using AI to write an essay. Both times, after I experienced extreme anxiety over an accusation that could jeopardize my future, my professors concluded that my writing was just good. One professor even acknowledged that AI detectors are unreliable and that some of the best students may appear to be using AI simply because they prioritize professionalism, clarity and strong writing—skills that we are actively taught to develop.
This is not just my experience. Many talented writers on campus have faced the same baseless accusations. Students who excel in writing are being wrongly flagged, forced to defend their own work and subjected to unnecessary stress. Villanova’s academic integrity policy regarding AI is failing its students, targeting writers who should be recognized for their abilities rather than presumed guilty of wrongdoing.
Sophomore Cali Carss pointed out the confusion this policy creates.
“I feel like the AI part of our academic integrity policy is too vague, which leads to a lot of variation between teachers in how it’s enforced,” Carss said.
This inconsistency puts students in a precarious position, one in which their academic standing depends not on clear rules but on individual professors’ interpretations.
Villanova’s policies are not intended to punish strong writers. The administration, faculty and student government have worked together to address AI’s impact on academics with the goal of maintaining integrity—not discouraging student success. Villanova values learning, growth and excellence.
However, while the policy is well-intentioned, its implementation has led to unintended consequences. The permitted use of unreliable AI detection tools and the assumption of guilt put students in an unfair position.
This is a fundamental problem with Villanova’s academic integrity policy regarding AI: it assumes guilt first and places the burden of proof on the accused student. Many accusations stem from professors running assignments through AI detection tools, tools that, ironically, are AI themselves. Leading universities like MIT and Yale prohibit faculty from using these detection tools because they are unreliable. Even OpenAI, the creator of ChatGPT, has acknowledged that all AI detection tools are unreliable. In fact, OpenAI pulled its own AI detection tool from the internet because it was too inaccurate to be useful. These tools have falsely flagged historic documents like the Declaration of Independence, The Communist Manifesto and the U.S. Constitution as AI-generated. False positives are common, yet at Villanova, students are forced to defend themselves against these flawed results.
Villanova claims it does not endorse a specific AI detection tool, but it allows them to be used as evidence in academic integrity cases.
According to an email from Vice Provost for Teaching and Learning Randy Weinstein, “The University has not endorsed or purchased a tool. We have provided guidance to faculty on AI and highlighted that these detection tools are not 100% reliable.”
Weinstein also noted that professors may ask students to verify their work by submitting drafts, comparing it to past assignments or explaining their writing process, which is a far better way to handle suspicion of AI use. Despite this, Villanova still permits professors to use AI detection tools as part of academic integrity investigations, creating an environment in which unreliable technology can help dictate a student’s academic standing.
The policy also offers students little recourse if they are accused of AI use, aside from a formal appeal process that requires significant time and effort.
“All students accused of an academic integrity violation can appeal the violation and/or any grade penalty that is assigned through their respective processes,” Senior Vice Provost for Academics Craig Wheeland said. “All student records are strictly confidential. If a student believes they are wrongly accused, they can discuss the matter with the faculty member and utilize the official appeal processes.”
Why must good writers endure stress and anxiety just to prove their innocence? In our legal system, the standard is “innocent until proven guilty.” Why does Villanova seem to assume the opposite? Why does the burden of proof fall on the student rather than the accuser? Why must I dedicate time and energy to defending work I have already spent significant effort creating? And beyond that—why would I use AI when I’m a writer for the newspaper? I love writing. I take great pride in my work. Yet, my ability is treated as a red flag rather than an achievement.
The most troubling part of this entire issue is that my work was questioned because it was well-written. Other skilled writers have faced the same scrutiny. Are we expected to simplify or weaken our writing just to avoid accusations?
This policy creates a hostile environment for students and for professors.
“I feel like my professor is doubting my abilities,” an anonymous student said. “I’m uncomfortable in class now. Even if they believe my work is original, the fact that I have been accused at all means that bias may linger against my writing in the future.”
Meanwhile, professors who rely on these faulty AI detectors may be put in the difficult position of having to accuse students without solid evidence, straining their relationships with those they are meant to teach.
AI misuse is a legitimate concern on college campuses, but Villanova’s policy on AI, or lack thereof, is not the solution.
Instead of punishing strong writers and relying on flawed detection methods, the university needs to reassess its approach. False accusations harm students, erode trust between professors and students, and discourage academic excellence. Villanova should not be a place where students fear their own abilities.
Villanova has the opportunity to lead by example by prioritizing fairness and academic integrity in a way that does not undermine student success. But until real changes are made, students will continue to fear that their best work may be used against them.
I put this article through five AI detectors—GPTZero (10.75%), Copyleaks (100%), Quillbot (0%), Grammarly (5%) and Undetectable AI (40%)—and received five completely different results.
According to these AI detectors, I am simultaneously human, artificial and somewhere in between. It’s Schrödinger’s article, if you will.