Today I’m using my post as an excuse to read an article titled “Against the Uncritical Adoption of ‘AI’ Technologies in Academia” by Olivia Guest and 18 other authors based mostly in the Netherlands. The text can be found in a pre-print repository (https://philarchive.org/rec/GUEATU), where it was filed on 7 September of this year. The abstract warns that “under the banner of progress” many noxious products, such as tobacco, combustion engines, or social media, have been accepted with no reflection on the consequences, as we are doing now with AI. The authors, who had already published an open letter on this issue (see https://openletter.earth/open-letter-stop-the-uncritical-adoption-of-ai-technologies-in-academia-b65bba1e), now call “on our employers to reverse and rethink their stance on uncritically adopting AI technologies.” Their aim is to convince universities to “counter the technology industry’s marketing, hype, and harm; and to safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity.” I share their worries 100%, hence my summary of their article for the benefit of my readers (it has a very long bibliography, in case this interests you).

I’ll highlight a couple of opening statements, though it’s hard to choose among so much that is valuable:

  • “The technology industry is taking advantage of us, sometimes even speaking through us, to convince our students that these AI technologies are useful (or necessary) and not harmful.”
  • “When it comes to the AI technology industry, we refuse their frames, reject their addictive and brittle technology, and demand that the sanctity of the university both as an institution and a set of values be restored.”

The main points raised and discussed are (in my own paraphrase):

a) the fact that AI terminology, which has been around since the 1950s, is purposefully confusing, part of the corporations’ strategy of using buzzwords to sell products by creating hype.

b) as teachers we lack the tech know-how to properly understand current AI systems and explain them in a nuanced way to our students (besides, “In the present, AI has no fixed meaning. It can be anything from a field of study to a piece of software”). Check: how many of you truly understand that ChatGPT is an LLM (Large Language Model), that is, software built on an ANN (Artificial Neural Network) trained to predict text? Even that much has to be taken on trust, because OpenAI is largely opaque about how its models work and has not released their source code, training data, or weights (arguably an infringement of basic scientific norms).

c) any rejection of AI places you in the camp of old-style, backward Luddites. Your expertise is rejected in favour of the companies promoting the use of AI, which you can’t oppose (a BlueSky user told me that by forbidding my students to use AI I am seriously damaging their future professional prospects). The words ‘ethics’ and ‘transparency’ are often invoked to curb our resistance, but they don’t come from experts in AI application: they come from AI manufacturers. Greenwashing also plays a key role. The authors call “on the educational technology community to demystify AI systems and instead approach those with more criticality and humility.”

d) AI is not unstoppable. Besides, if we contribute to its colonization of the university, we contribute to eroding the future, to deskilling “students and ourselves,” and to furthering environmental destruction.

e) the past failures of technological promises to deliver should warn us that AI too will inevitably fail. Its bubble will burst, leaving in its wake the destruction of many jobs and institutions of education. I’ll add that, if you recall, primary school classes were digitalized in years past, only for teachers to conclude that this has been harming children. The Scandinavians, always at the forefront, are already emptying their classrooms of digital devices.

f) anthropomorphism has made us jump to the very wrong conclusion that LLMs like ChatGPT are akin to persons in the way they interact with humans. Actually, LLMs simulate human language on the basis of the text they have been trained on and the prompts they are fed, but are BY NO MEANS independent, conscious thinkers.

g) we are integrating AI into our teaching (and research!) on the wrong assumption that ALL students use ChatGPT, when actually many understand its dangers and how it undermines the acquisition of key skills. I always tell my students that using ChatGPT to write an exercise is like having somebody else do it for you. Some are happy cheating that way, but many are not. Teaching students about AI technologies and their dangers is one thing; using those technologies uncritically is quite another.

          Finally, the authors provide five principles to protect “the ecosystem of human knowledge”:

1) Honesty (“we do not secretly use AI technologies without disclosure” and “one does not make unfounded claims about the presumed capabilities of AI technologies”)

2) Scrupulousness (we only use “AI products whose functionality is well-specified and validated for its specific scientific usage”)

3) Transparency (AI technologies are “open source and computationally reproducible”)

4) Independence (research is “unbiased by AI companies’ agendas” and “potential conflicts of interest are declared”)

5) Responsibility (our use of AI products must not harm people, animals, or the environment, nor be “in violation of legal guidelines” such as copyright, data privacy, or labour laws)

          The authors end by noting that “the academic world has well-known structural incentives to cheat,” which incline not only students but also researchers to use ChatGPT. I recently had to tell a young researcher who asked whether it’s fine to write abstracts using ChatGPT that perhaps they should stop writing papers altogether… The authors, clever as they are, see the direct link between fascism and AI, for fascistic regimes always attack academic integrity and prefer voters to be uneducated and uncritical. I find the analogy with tobacco most pertinent: knowing the mortal risks, we can now choose to smoke or not at our own peril. The difference between AI and tobacco is that we needn’t wait centuries to know that AI is addictive and harmful: we already have the evidence.

I’ll add a few more ideas:

  • what we have now is not true AI, by which I mean a non-organic, digital intelligence capable of making autonomous, sentient decisions on the basis of a consciousness, BUT this will come. It might take one year or one century, but the ‘singularity’ will happen. By giving in to current LLMs, we further prepare the ground for real AI to take over human life when it does appear (please read and watch science fiction, which is full of that type of cautionary tale).
  • AI, in its current basic form or in a more advanced one, can be a force for good if it frees humans from onerous occupations, daily hazards and medical problems. BUT for that it’s essential that its economic benefits flow back to society and that AI is not controlled by corporations. What is wrong in the current development of AI is not AI per se but that its growth is in the hands of greedy, ultra-capitalist, inhuman, fascist, patriarchal companies run by a handful of uncaring, unempathetic men (mostly white and living in the USA). There are wonderful uses to which AI can be put (for instance, communication with animals), but the current model is based on eliciting dependence, which leads to financial gain (soon we’ll all have to pay subscriptions) and predatory practices at all levels.
  • AI can have a place in education and research, but only as an aid, never as a replacement for the acquisition of academic skills. When I started studying in 1984, library catalogues were collections of typed cards printed on paper; now, library catalogues are searchable digital tools. I still need the same set of skills to find bibliography but, of course, the resources at my fingertips are much larger. However, if you ask ChatGPT to write a bibliography for you, then you use no skills whatsoever, and will most likely end up with a list full of fake entries (the famous hallucinations). You might use current AI as a far more advanced version of the typical Google search, but even so, you need to be careful, for LLMs are trained to please their users and will give you information designed to please you, even if that information is wrong.
  • LLMs like ChatGPT are always extrapolating from already existing knowledge and cannot innovate. So far, only human brains have the capacity to put 2+2 together and get a 5 instead of a 4, that is to say, to have an insight that leads to a new idea. LLMs voraciously ingest the documents we researchers produce in order to cut and paste and pick and mix from them, which is why tech companies are currently the biggest thieves of intellectual property. YET, what current AI produces is necessarily derivative. That may be enough to pass a course, but not to produce quality research, or to train one’s own neurones.

          This semester I’m teaching students to write book reviews and, thinking of last year’s experience with the same subject, I fully trust they will not use ChatGPT to write their reviews. I’ve built trust by telling them ‘surely, you won’t let ChatGPT tell you what your opinion of a book is’, so that they take pride in expressing their point of view about the texts they need to review. I believe this is crucial: we need to invite students to produce more personal exercises that do not simply parrot what we tell them, or what other researchers have claimed.

          If students find uses for ChatGPT which do not harm their ability to acquire academic skills, I’m listening, but I’ll certainly bear in mind the impeccable dissection of the problem that Guest et al. present, and the five principles that should rule a transparent, ethical avoidance of current AI in academia. Try to imagine what would happen if, instead of AI, students had started using some mind-enhancing illegal drug, and ask whether we would so easily welcome it into our classrooms, for this is the closest analogy to what is happening right now in the university.