Banning AI not the answer
Like many professors and teachers, I find myself confronting the role of artificial intelligence in the classroom. But I am not alone: students are concerned too. They worry about being accused of using artificial intelligence, and professors seem just as worried that students are actually doing it.
All too often, it seems as if students are presumed guilty until proven innocent. Guilt assigned.
Right now, it seems as if professors are becoming prosecutors. Each investigation, which can feel like an inquisition, unfolds rigidly because that is what policy dictates, and the entire process takes place within the confines of the university. Students are thus rightly anxious about defending themselves against the lurking threat of an accusation.
A screen displays guidelines for using artificial intelligence in Casey Cuny's English class at a high school in Santa Clarita, Calif., in August 2025. Brandon University professor Jonathan Allan writes that when it comes to determining acceptable uses for AI, "a solution that outright bans the use of AI is naive at best, cruel at worst." (The Associated Press files)
Despite this, or perhaps because of it, I have tried to relieve some of these anxieties by being clear about the acceptable uses of AI and about which AI detector is being used. My introductory course has students use Grammarly, which generates a report on the actual writing of the paper and includes built-in AI detection.
Even though Grammarly provides these services, what interests me more is how it supports writers. My students and I use Grammarly to improve our writing, build confidence and prepare for a job market that will require us to write, and write well, while ultimately becoming less reliant on the tool itself.
The problem is that a solution that outright bans the use of AI is naive at best, cruel at worst. Grammarly, as used in my classroom, is about helping students gain confidence in their writing. Students are rarely taught the rules of grammar, let alone style. They might know they have grammar problems, but they don’t understand why. Grammarly supports their learning.
AI tools can also increase accessibility, whether it’s a voice reading course materials aloud or a rephrasing of a confusing sentence. AI might also help someone who struggles with tone in writing: what sounds angry to one person may not read that way to someone else.
Another student might not understand what a metaphor is and could ask an AI to explain it using an analogy relevant to their field. A science student, for example, might ask about metaphors in science and be told about biochemistry’s lock-and-key metaphor, in which the enzyme is the lock and the substrate the key. The explanation may not be perfect, but it may well help that student understand.
Universities are already under increased scrutiny: debates about how free speech really is on campus, questions about whether there is genuine diversity of opinion, worries about “basket-weaving” courses and grade inflation, to name but a few. Now they face the prospect of degrees being earned through AI. This is part of the reason policies are developed outlining what use of AI is permitted. In many cases, the rule is none. Don’t use it. Don’t think about it.
But are universities holding themselves to the same standards as their students? This is an important question to consider as policies are developed and refined to ensure that students behave ethically.
How can universities have one set of rules for students and not for themselves? If we do not allow students to use AI, do we ensure that the policies the university develops are genuinely written by humans? Or are press releases about the latest research being quickly and easily written by generative AI? Is the letter to alumni asking for support now written by an AI trained to do just that? Is the acceptance letter for a graduate program now generated by AI? At what point is AI used to create the long lists and short lists for hiring?
As policies are developed for students, they must also apply to the institution. Presentations made to the board of governors cannot be generated by AI, nor can the university’s budget; if they are, that must be acknowledged. We cannot have one rule for one set of citizens in the university and another for the rest, especially when the latter group is tasked with policing the use of AI in the classroom.
If universities genuinely believe in the importance of “educating the imagination,” to borrow Northrop Frye’s words, then we must lead by example. That does not mean banning AI; it means being truthful and transparent about its use. It also means we must all learn, collectively, about AI: its uses, its limits, its harms and its benefits.
» Jonathan Allan is a professor in the department of English, drama and creative writing, and gender and women’s studies at Brandon University.