I was supposed to take part in a seminar next week, which I’ll now have to miss, with a talk about how to use AI correctly. In that talk I was going to describe, once more, how the late Iain M. Banks presents AI in his Culture novels (see https://en.wikipedia.org/wiki/Culture_series).

            The Culture is a post-scarcity, anarcho-socialist society, born of a confederation of space-faring humanoid civilizations which agree that survival in their gigantic spaceships, orbitals, and the empty planets and asteroids they occupy requires superior management skills. They, therefore, entrust said management to their AIs, nicknamed the Minds, which are lodged in the computers running the spaceships. The Minds progressively help the Culture’s citizens to get rid of money, obligatory work and property, so that these lucky, carefree individuals eventually reach a utopian state in which they can live as they please.

            Banks is very optimistic, believing that the Minds can efficiently run a stable economy (Culture individuals are not interested in consumption) and that the citizens can find satisfaction in leisure, with most occupying their time in fulfilling endeavours with no need for aggression, possessiveness or dominion. The Culture is far from perfect, as its enemies constantly point out, but it shows how AI can be used for human liberation: as an aid to reach social stability, get rid of the jobs nobody should be forced to do, free the citizens from the compulsions that often make life unbearable here on Earth, and guarantee health and full control over the body. Banks says practically nothing about how the Culture’s citizens are actually educated, but there is a general supposition that once patriarchy and capitalism are eliminated, education can be based on mutual respect and the development of each person’s capacities.

            In contrast, the way AI is progressing here on Earth has already become a nightmare, dominated by corporate interests based on greed. AI is being used in many different ways that make human life easier, but I refer here to the generative AI that is depriving many people of their jobs in the creative professions and threatening to end the human capacity to process thought. As Brian Klaas wrote yesterday (https://www.forkingpaths.co/p/the-death-of-the-student-essayand), “In the formative stages of education, we are now at risk of stripping away the core competency that makes our species thrive: learning not what to think, but how to think.” Klaas, a college teacher, refers in particular to “a new genre of essay that other academics reading this will instantly recognize, a clumsy collaboration between students and Silicon Valley. I call it glittering sludge.”

            Yet, here I want to go far beyond the problem of students’ cheating and into a much larger issue, prompted by the teaching innovation workshop I attended yesterday at my school, the Facultat de Filosofia i Lletres of the UAB.

            The workshop was not exclusively devoted to AI. Other issues were discussed, such as absenteeism, gamification, how to connect teaching with practice (from experiences with museums to contacts with local authorities) and so on. Yet, inevitably, whenever teachers gather together, AI looms large. One specific round table quite scared me. The two colleagues who totally misunderstood the process of detecting AI-generated student exercises were a warning that many of us, ageing boomers, are overwhelmed by a situation we can hardly process. Yet, I was far more disturbed (this is the word) by the proposals of two colleagues who suggested that we integrate AI into our teaching and research, respectively.

            A colleague from the Philosophy Department described how a new syllabus for a core subject now integrates four different levels of AI use for the different exercises. It is taken for granted that students will use AI, so the teachers are now asking them to specify what services or programmes they have used for each exercise. I have no idea why she thinks this strategy is feminist and linked to Haraway’s concept of the cyborg. Who knows. A colleague from my own Department showed how to use Google’s NotebookLM (https://notebooklm.google/), a research assistant integrated with the LLM (Large Language Model) Gemini. He made the point that research is time-consuming, especially in the Humanities, as we need to read a lot, and NotebookLM can offer valuable assistance to ease our burden. Coming from him, an excellent academic with the patience of Job to dig into old texts, that sounded truly menacing. I’ll try to explain why.

            As an SF lover and a fan of Banks, I am indeed in favour of using AI. I’m not in favour, though, of misusing it. We have been using AI for decades now, for instance to help us locate bibliography through online databases and catalogues, though those AIs are more opaque than the newer pseudo-human, chatty assistants, from Alexa to ChatGPT. I myself have recently asked ChatGPT to make a list of the characters in a novel I’m studying and to find the title of a film whose plot I barely recalled. I do understand that NotebookLM can be very useful to locate key features of a text instead of having to read it once more, to summarize secondary sources we might not want to read entirely, to give better shape to our unstructured thoughts and so on. It can offer shortcuts.

            I don’t want, however, this AI assistant to end up being too close a collaborator or even a co-author, as flesh-and-blood research assistants have been to so many cheeky professors. We might all get there, in the same way we’ve been using Google to find or check information, but right now, at this stage, I still want to do the dirty work myself. My colleague argued that with so many teaching and admin commitments we hardly have time for research, hence his use of NotebookLM, but my impression is that we should be slowing down instead of rushing the publication of our research. That, of course, is my privilege as a researcher currently doing little teaching and no admin work.

            What had escaped me, and what I suddenly realized during the workshop, is that the problem we are facing is far more serious than I thought. Someone mentioned how, when calculators appeared, everyone feared we would lose the capacity to do basic math, and everyone in the room laughed out loud. Well, we have indeed lost that capacity. With LLMs and research assistants like NotebookLM we will lose the capacity to think, organize information, extrapolate insights from our sources and, generally, write argumentative essays, which are the basis of education and scholarship in the Humanities. This is a Faustian bargain that has already contaminated education at all levels and that threatens to engulf academic writing, at least, I insist, in the Humanities. There is some hope in the return of the in-class exam, but this is too basic an exercise for higher education.

            Not so long ago, scholars used to write academic books without using computers, but we cannot go back to that past stage, just as we cannot ditch calculators. Yet, the problem is that the moment we use computers we succumb to the lure of AI. If we stop asking students for exercises written at home using computers, with citations from secondary sources, because we worry that they will use ChatGPT or similar tools, we are cutting the transmission belt that carries thought and knowledge forward. This is extremely serious.

            We have been using writing and the scholarly method based on quoting authorities (bibliography, secondary sources) to pass on what we know. So far, we have trusted that each generation would produce persons capable of moving human knowledge forward, but since 30 November 2022, when ChatGPT was launched, we have been staring into an abyss. Either we allow students to use AI as much as they wish, and we risk being replaced by generations of, excuse me, illiterate idiots, or we suppress all contact with AI, which might mean losing the scholarly methods that have served humankind for thousands of years.

            Am I exaggerating? After all, we have planes, trains and cars, but we still walk, and we have not really lost much by no longer enslaving horses (quite the opposite). Still, I am dismayed. This is no longer a question of students cheating their way into degrees: it’s a question of destroying the very foundations of scholarly life. In a few years’ time, if not right now, only AIs will ‘read’ our publications; we will be cited by students who will never read us and who will not understand how bibliography is produced. Academic literary criticism might soon be as dead as the dodo (or Renaissance pastoral poetry), which means that we urgently need to rethink what we do and what for. I can perhaps imagine a future with no literary criticism whatsoever, but the AI curse is extending to all disciplines, and I simply cannot accept a future without, for instance, historiography or ethics. Or indeed, science.

            In a way, I have already given up, since I have stopped teaching students to write papers (as I used to do in Victorian Literature) and am now teaching them how to write reviews with no secondary sources (in Contemporary Literature). I was wondering yesterday what’s going to happen to the BA dissertation in just two years’ time if we stop teaching students to write papers for fear of ChatGPT. If our undergrads don’t read secondary sources and don’t write papers, they won’t even understand the purpose of the BA dissertation, much less the method to write it. If that is suppressed, what kind of exercises will students do at MA level? Will PhD dissertations eventually be abandoned? If I insist on spending two years writing a book, even though ChatGPT or NotebookLM could help me do it in under two months, am I being stubbornly stupid? Will we eventually get used to reading AI-generated scholarship?

            We are now facing the consequences of two very wrong ethical choices. First, generative AI has been made available regardless of the consequences. Second, people (above all, young students) have started using it, also regardless of the consequences. This is typical of current capitalism: we are dying of cancer in droves because many corporations sell us toxic products but also because we choose to consume them; we can no longer afford housing because we prefer being tourists in other people’s lands; and we’ll kill the planet rather than stop climate change because it’s too much effort to give up exaggerated consumption. The apex of stupidity, however, is building AIs that will deprive us in just one generation of the capacity to think, which is possibly what capitalism has wanted all along.

            Welcome, then, to the new dark age of stupid, supposing climate change does not kill us first.