Today I’m reading an article by, I quote, “Mary Curnock Cook CBE, who chairs the Dyson Institute and is a Trustee at HEPI, and Bess Brennan, Chief of University Partnerships with Cadmus, which is running a series of collaborative events with UK university leaders about the challenges and opportunities of generative AI in higher education.” HEPI (The Higher Education Policy Institute, 2020) is a key educational think tank; as they explain, it is “UK-wide, independent and non-partisan. We are funded by organisations and universities that wish to see a vibrant higher education debate as well as through our own events.”
The article is called “From AI prohibition to integration – or why universities must pick up the pace,” and you can see from the title where it is going. It is a summary of the Teaching and Learning Forum held by Cadmus and King’s College in June 2025, about “the mismatch between the pace of technological and social change facing universities and the slow speed of institutional adaptation when it comes to AI.” I’m not going to repeat the main ideas we are all familiar with but, basically, the forum dealt with the situation we have been facing for the last two and a half years: students are massively using AI, and this has forced us to change our approach to assessment, with a far closer focus on classroom exercises, away from the internet.
Generative AIs (or, rather, LLMs) are, moreover, multiplying, with ChatGPT now accompanied by Claude, Copilot, Gemini and DeepSeek. Anthropic’s Claude for Education, “which helps students by guiding their reasoning process, rather than simply providing answers,” seems as scary as the rest, if not more. There was a general acknowledgement in the forum that the introduction of generative AI into higher education has been forced by students, and is not at all a choice we, the teachers, would have freely embraced. The change, moreover, has been very quick, giving us hardly any time to react. The article presents Birmingham City University’s Cadmus implementation (“saving 735.2 hours of academic staff time while improving student outcomes”) as a success story. Cadmus (https://cadmus.io/), which ran this forum, is a private platform that promises to improve assessment while avoiding LLMs. It seems to be a few steps ahead of Moodle as the ultimate tool in digitalising assessment. As if we teachers no longer knew how to collect exercises and grade them.
I’d like to focus on the three recommendations from the forum that the article gathers. The first one is “Address systemic inequalities, not just assessment design,” which seems plain common sense. Digital media give an advantage to the more privileged students who have access to updated computers and smartphones, the required bandwidth, and so on. The ones who lack these tools suffer from (attention!) ‘digital poverty.’ The second suggestion, “Reduce high-stakes assessment,” is another common-sense one: “reduce reliance on high-stakes exams in favour of diverse and more authentic assessment methods.” I’m surprised by this because, precisely, the problem with generative AI has surfaced because, in the fifteen years since the implementation of the ECTS-based degrees, we have generally been replacing exams with other types of exercises done at home.
Now, here’s the suggestion that bothers me: “Co-create with students as partners.” I’m quoting: “Students are driving the pace of change – they are already using AI. They need to be partners in designing solutions, not just recipients of policies.” The action recommended is “Involve students in co-designing assessments, rubrics and AI policies. Create bi-directional dialogue about learning experiences and empower students to share learning strategies. Build trust through transparency and genuine partnership.” So that you understand my position: in my MA classes I have totally delegated to students the responsibility for their assessment. They have a rubric and they can rate their own performance. If they exaggerate their merits, then I intervene, but this has been unnecessary so far. In my BA classes, I used to ask students to rate their classroom participation mark, also on the basis of a rubric. I speak in the past tense because I did that with my second-year students. Now that I’m teaching fourth year, I’m using a completely different model for classroom participation, one which requires my input but which I’m willing to rethink with them.
So, yes, students ARE my partners, if only because I cannot teach if they don’t want to collaborate with me. They can even rate my performance in the semestral surveys, though, as my school knows very well from my constant protests, those surveys need to be modified so that they reflect the opinion of the whole class and not only of the 30% who bother to fill them in and who, in some cases, do not even attend the sessions. Despite the power that rating surveys have placed in the students’ hands, which may affect our obtaining the five-yearly salary supplement for quality teaching, higher education is not designed to be a full, equal partnership. The power imbalance on which assessment depends cannot be done away with, though it can be reduced.
Yet, what bothers me most is not assessment, but pedagogy. For me, the idea that students’ opinions on how they should be educated have the same weight as the teachers’ is not acceptable. I know that some of my former teachers, particularly in secondary school, would smirk if they read this, because I was the kind of cocky student always protesting against exams, which I have hated with a passion all my life. I do believe that students must express their opinion about assessment if they find it unfair, biased, or faulty in any way, but I’m not willing to accept impositions.
The use of generative AI is an imposition, an implicit one that is becoming explicit with the collusion of universities under pressure from think tanks and businesses. It follows from the imposition of lower standards, which we have also gradually agreed to. I refer to the resistance to reading, which we have meekly accepted as a sign of the times and of our obsolescence as Literature teachers. We have been reducing our reading lists, and now we’re caving in to the use of generative AI. There is nothing beyond this except the death of the Humanities.
I had a disagreement on Bluesky with the very friendly Prof. Jon Jackson (@iamjonjackson.bsky.social), Senior Lecturer in Software Engineering and Management at Queen Mary University of London. He actually recommended the article I’ve been commenting on. I wrote that “Students are not driving the pace of change – they are imposing their misuse of AI and destroying key practices in learning and research. At least the lazier students. The more motivated are not happy with the use of AI, either. Why this total connivance??” He replied that although in “Language, literature, and the arts for example, I totally get how GenAI can be viewed as a scourge + an affront to the dignity of human creativity,” in software engineering the more motivated students “will (and should) have a very different view on GenAI compared to students of English literature, for example. Context is super important….” I ended our brief exchange agreeing that, yes, context is key, which is why the use of AI feels like a frontal attack against the Humanities.
There is no way, then, that I can see an advantage in collaborating with students to decide how they can use AI for assessment in the English Literature classroom specifically. It would be tantamount to collaborating in the obliteration of the academic practices I’ve been defending for most of my life, since I was a secondary education student dreaming of becoming a university teacher. Other revolutions in academic life, introduced by the gradual digitalisation of catalogues, databases, and texts, have been shared by students and teachers alike, and have improved our work in and outside the classroom. Indeed, we teachers know that we are far more proficient at finding secondary sources and information online than our students for, naturally, we have more practice. What generative AI has brought into higher education has nothing to do with improvement, but plenty to do with the destruction of our thinking processes, and I will not accept it.
I’m not a digital Taliban, and I do see the advantages of using generative AI in some fields and for some purposes. Since Google introduced Gemini, I find myself checking doubts and information with it, in a way only slightly more advanced than simply using Google itself. I have asked ChatGPT occasionally about primary and secondary sources. I have not, however, used LLMs to write or correct my texts, grade students’ exercises, write book reviews or do peer reviewing; in short, for anything that requires the use of my thinking abilities. Yesterday I read a comment by a student noting that she’d rather have an AI grade her exams, for teachers get tired as they grade and become noticeably unfair after a few hours. Imagine, however, a situation in which the student writes the exercises using ChatGPT and the teachers, glad to free up their time for research, also use ChatGPT for marking. How is that acceptable pedagogy? What kind of twisted educational collaboration is this?
I always tell my students that using ChatGPT is like having a friend help you with your essays, and that might be, in the end, a key issue. Jeremy Ettinghausen, a teacher who explored the 12,000 questions that three of his (male) students had asked ChatGPT over 18 months, came to the conclusion that these three students, who “are not friendless loners,” are “typing into the void with only an algorithm to keep them company.” They “are increasingly turning to computers to answer the questions that they would once have asked another person. ChatGPT may get things wrong, it may be telling us what we want to hear and it may be glazing us, but it never judges, is always approachable and seems to know everything. We’ve stepped into a hall of mirrors, and apparently we like what we see.” ChatGPT and other AIs are causing what my good friend Carme Torras called a ‘sentimental mutation’ in her novel of that title (translated as The Vestigial Heart). Carme, a robotics engineer, imagined a humanity weakened by dependence on ubiquitous robotic assistants provided with AI. We now have the basic AI, and I wouldn’t be surprised if it soon jumps from the smartphone or the laptop into humanoid robotic companions.
I don’t want to collaborate with AI-dependent students on assessment, but I’m open to suggestions from those who resist AI and see how its misuse is destroying the Humanities. I insist on the word ‘misuse’, for there are indeed positive ways to use AI. The point is that these positive uses should always exclude replacing human intellectual and artistic creativity. AI should help us, not replace us.