Why Universities Fear AI More Than Students Do
University AI bans aren’t really about integrity; they signal a deeper crisis: higher education struggles to clearly define what it teaches and what its exams actually measure.
This essay is a collaboration between Mila Agius and Sam Illinworth, professor and poet, and author of the popular newsletter Slow AI.
Sam writes about a reflective, intentional, and human-centered approach to artificial intelligence, one that resists mindless acceleration and the rush toward ever-faster, less thoughtful technological development.
This article does not defend cheating. It argues that blanket AI bans expose a deeper institutional failure. Not a student problem, but a system problem. A system that can no longer clearly explain what it is teaching, or what it is actually testing.
In January 2023, France’s Sciences Po became one of the first high-profile European institutions to take a hard line. The university announced that using ChatGPT and other AI tools without transparent referencing would be treated as academic misconduct, with disciplinary consequences that could include exclusion. [1]
After that, the pattern across higher education was not a single universal ban but a fast-moving wave of strict limits, mostly aimed at assessed work and exams. Some institutions banned AI by default unless a course explicitly allowed it. Some created separate misconduct categories. Others updated wording around authorship, originality, and unfair advantage.
In Denmark, Aarhus University addressed the exam issue in plain language: an exam must be completed independently, and with AI you are not working alone. DTU sets a similar baseline rule for written exams: generative AI is not allowed unless the exam format explicitly permits it. [2][3]
In Lithuania, Vilnius University reportedly expelled around ten students over improper AI use, with the central violation framed not as the mere presence of a tool, but the failure to disclose its use. [4]
It spread like bureaucratic panic. But look closer and the picture shifts. This is not only a fight for academic integrity. It is a defensive reflex from a system that suddenly discovered how fragile some of its assessment rituals really are.
Round One: Official Reasons Versus Real-World Effects
The banning angle:
Universities usually build their case around three arguments: plagiarism, unequal access, and skill erosion.
In practice, this often becomes a rule that sounds like this: if AI is not explicitly allowed, then submitting AI-written text as your own is misconduct. Birkbeck states this very directly: students must not submit work generated by an AI chatbot for assessment unless the task explicitly permits it, and failing to disclose relevant use can be treated as an academic offence similar to plagiarism or contract cheating. [5]
The counterpunch:
But bans create secrecy. Instead of teaching responsible use, they push students into hiding.
And this is not just theory. Reporting on student behavior keeps returning to the same dynamic: students become reluctant to ask what is allowed, because admitting any AI use can get them branded as cheaters. The result is silence, guesswork, and a growing grey zone. [6]
Round result: a ban produces hidden use instead of responsible use. Score is one to zero for integration.
Round Two: The Invisible Core Skill Problem
The banning angle:
Universities insist AI threatens critical thinking, writing, and argumentation. Cambridge has even reported cases under a specific AI-related misconduct category. [7]
The counterpunch:
Here is the uncomfortable paradox. If a single prompt can break an exam, that often says less about AI’s power and more about the exam’s weakness.
A lot of higher education assessment still rewards information reproduction and tidy packaging, exactly the tasks AI performs quickly and well. The problem is not knowledge. It is that when assessments privilege polished output over judgment, AI exposes the gap immediately.
And that leads to the question universities avoid. What is the core skill each discipline trains, and how does the university actually test it?
In fintech, the core skill is making the invisible visible. Money flows, risk, compliance contours, hidden incentives. In journalism, it is asking questions that reveal systemic contradictions, not rewriting press releases. In medicine, it is making decisions in uncertainty and taking responsibility for the choice.
Round result: AI does not destroy strong education. It destroys the illusion that mechanical reproduction of knowledge is valuable on its own. Score is two to zero.
Round Three: Control as Reflex
The banning angle:
When policy cannot keep up with reality, the system grabs the most familiar lever: control. More supervision, harsher sanctions, more locked-down assessment formats. Reports describe a return to in-person exams, proctoring tools, restricted browsers, and suspicion by default. [6][8]
The counterpunch:
This looks less like adaptation and more like a defensive spasm. Universities are heavy structures. Accreditation, standards, credit systems, committees. Real reform drags paperwork behind it like a chain. So it feels easier to bite, ban, punish, and tighten rules than to redesign assessment so it tests higher-order work.
Round result: a frozen structure can manage reflexive defense better than transformation. Score is three to zero.
Round Four: Double Standards That Eat Trust
The banning angle:
Many lecturers instinctively see AI as a shortcut for students to avoid reading and assignments. Real student stories repeat the same pattern: professors assume the worst. [8]
At the same time, a different tension appears. Faculty and professionals use AI for their own tasks, but students are told not to touch it, or the boundaries are explained in a way that makes safe use impossible.
The counterpunch:
This is not a conspiracy. It is group resistance protecting the old order. The system defends roles, rituals, and symbolic status. AI answers faster, it is always available, it adapts to the student’s level, and it does not lecture you.
Students experience this as a simple contradiction. If the tool is genuinely useful, why are they being forced to pretend it does not exist?
Round result: double standards erode trust faster than any chatbot ever could. Score is four to zero.
Knockout: The Alternative Model Already Exists
Some institutions are already moving in a different direction, but the real shift is not simply allowing AI. It is slowing down how learning is framed and assessed.
Oxford’s guidance emphasises safe and responsible use rather than denial. UC Riverside positions AI tools as thinking partners that stress-test reasoning rather than produce answers. European initiatives such as the AI Leaders project focus on shared frameworks, disclosure, and ethical practice. [9][10][11]
What links these approaches is restraint. AI is not treated as a shortcut to efficiency, but as a deliberately constrained instrument. Students are required to explain why they used it, what it contributed, where it misled them, and which judgments remained human. The assessment focus moves from the surface quality of output to the quality of decision making.
This is the Slow AI approach in practice. It introduces friction on purpose. It values reflection over speed, traceability over polish, and accountability over fluency. When AI is used slowly, it stops being a threat to learning and becomes a tool that exposes thinking rather than replacing it.
That is why bans fail. They respond to speed with prohibition instead of responding to fragility with redesign.
Verdict
Universities are not facing an AI problem. They are facing an assessment problem that AI has made visible.
When a system relies on rituals of authorship, presentation, and compliance, it becomes brittle. AI breaks those rituals easily. The response has been control. More rules, more suspicion, more locked-down formats. That path preserves appearances while trust erodes.
The alternative is harder but clearer. Redesign assessment so that judgment, responsibility, and explanation sit at the centre. Require students to show how they think, not just what they submit. Use AI slowly, transparently, and critically, so that its limits become part of the learning rather than something to hide.
The real question is not how universities can protect themselves from AI. It is whether they are willing to slow down enough to teach what automation cannot perform. Reflection. Context. Consequence. Ownership of decisions.
If higher education avoids that reckoning, AI will not make it obsolete. It will simply reveal how little of its current practice needed to exist in the first place.
If you enjoyed this guest post, visit Slow AI by Sam Illinworth, where you can get weekly Slow AI prompts that help you think with more intention, create with less pressure, and develop a calmer, more human way of using AI.
Sources
[1] “Top French university bans use of ChatGPT to prevent plagiarism” (Reuters, 27 Jan 2023).
[2] “Ny kunstig intelligent chatbot kan besvare alle dine spørgsmål – løste AU-lektors opgave til et 7-tal” (“New artificially intelligent chatbot can answer all your questions; it solved an AU lecturer’s assignment at a grade of 7”) (Omnibus AU).
[3] “Use of aids and materials at written exams” (DTU Student).
[4] “Vilnius University expels ten students for improper AI use” (LRT).
[5] “AI guidance” (Birkbeck, University of London).
[6] “AI like ChatGPT and Grammarly blur the lines of cheating. Here’s how schools are trying to keep up” (AP News).
[7] “Cambridge University introduces new AI-related academic misconduct category” (Varsity).
[8] “ChatGPT is wreaking havoc on education. Schools are scrambling to catch up” (Business Insider, Aug 2023).
[9] “Guidance for the safe and responsible use of generative AI tools” (University of Oxford, Sept 2025).
[10] “Customer stories: Gemini” (Google Blog, Sinha, 20 Mar 2025).
[11] “AI Leaders Project” (Global Focus Magazine, Cetina-Presuel, 17 Jun 2025).
AEO: Key questions answered
What is the article about?
This article argues that university bans on AI tools do not solve academic integrity problems. Instead, they expose deeper weaknesses in how higher education defines and assesses learning.
Why are universities banning AI tools like ChatGPT?
Universities usually justify bans by citing plagiarism, unequal access, and skill erosion. In practice, these bans function as a defensive response to assessment systems that reward polished output rather than judgment or reasoning.
Does the article defend cheating with AI?
No. The article explicitly rejects cheating. It argues that blanket bans push AI use underground instead of teaching responsible, transparent, and accountable use.
What problem does AI reveal in higher education?
AI exposes the fragility of assessments that rely on reproduction, formatting, and surface-level compliance. If a single prompt can undermine an exam, the problem lies in the assessment design, not the tool.
What alternative does the article propose?
The article advocates for redesigning assessment around judgment, explanation, responsibility, and decision-making. AI should be used slowly, transparently, and critically as a thinking partner, not as a shortcut.
Q&A: Reader questions
Why do AI bans often fail in practice?
Because they create secrecy rather than responsibility. Students hide AI use instead of learning how to disclose, critique, and reflect on it, which undermines trust and academic integrity.
Is AI the real threat to critical thinking?
No. The article argues that weak assessment models are the real threat. Strong education systems that test reasoning and judgment are not undermined by AI.
Why do students perceive AI rules as unfair?
Because of double standards. Faculty and professionals openly use AI, while students are often told to avoid it or navigate unclear boundaries, which erodes trust.
Are any universities handling AI differently?
Yes. Institutions like Oxford and UC Riverside focus on safe, responsible use rather than denial. European initiatives such as the AI Leaders project develop shared ethical frameworks instead of bans.
What is meant by “Slow AI” in education?
Slow AI is an approach that introduces friction deliberately. Students must explain why they used AI, what it contributed, where it failed, and which decisions remained human. The focus shifts from output quality to thinking quality.
What is the article’s final conclusion?
Universities are not facing an AI crisis. They are facing an assessment crisis that AI has made visible. The real question is whether higher education is willing to teach what automation cannot replace: judgment, context, responsibility, and ownership of decisions.