These days I have been proofreading my forthcoming book Passionate Professing: The Context and Practice of English Literature (Universidad de Jaén), which gathers together an essay and a selection of posts from this blog up to 2020. I worry that the volume is already outdated because of its many references to plagiarism and its absence of any thoughts on ChatGPT. I must note, however, that the first news about OpenAI’s chatbot started appearing only last December, not even a year ago (ChatGPT was launched on 30 November 2022). Like all universities, here at UAB we are struggling to adapt to the new situation, though some Departments appear to be more concerned than others. Mine, the English and German Department, has been quick to understand the depth of the problem, perhaps because we read the Anglophone press and so received an early warning.

            My own warning is that no matter how often ChatGPT may fail to deliver the expected results today, it will soon learn. It is important to understand that OpenAI and the other companies running AIs do not know very well how their own AI actually works. In principle, ChatGPT is defined as “a large language model–based chatbot”, which is far from being an independent artificial general intelligence, but we need to worry about how far it can go. When a user asked ChatGPT “Why are you so helpful? What do you want in return?”, the bot cheekily replied, “As a language model trained by OpenAI, I don’t have wants or desires like a human has. But if you really want to help, you could give me the exact location of John Connor”. I was truly alarmed by the sick joke (for those of you non-nerds, John Connor is, in The Terminator franchise, the human leader in the future war against the machines led by the AI Skynet).

            Another chilling moment came from a warning by various mycologists that mushroom guides written by AIs and sold on Amazon contain misleading information which could cause death by poisoning if you pick and eat the wrong mushroom. Apparently, Amazon is now so flooded with AI-authored books that it has forbidden those publishing them from issuing more than three books a day. Add to this the recent lawsuit against OpenAI by George R.R. Martin, John Grisham and other major authors (the list runs to 17 names affiliated with the Authors’ Guild) to stop ChatGPT from appropriating their works to write others. Apparently, Martin came across a prequel to his saga A Song of Ice and Fire, the origin of the TV series Game of Thrones, written by ChatGPT without his consent. Martin has never authorized fan fiction and, understandably, he is far from pleased with this blatant appropriation.

            The field of higher education is split between those who abhor every aspect of ChatGPT (like yours truly) and those who think that ChatGPT can, and even should, be integrated into the classroom. The argument of the latter is that the growth of ChatGPT cannot be stopped and that, arguably, preventing students from cheating with it requires taking charge of its use and exploiting it, as we do, for instance, with bibliographical databases. As I have noted, these are still early days for ChatGPT, but my own experience of using it for academic research was a total disappointment. I asked ChatGPT to provide me with a list of literary works in which a secondary character plays a significant, plot-defining role, and a list of secondary sources on the concept of the ‘secondary character’. The list of literary works was of limited interest and, as I quickly noticed, no matter how many variations I introduced in my request, ChatGPT always focused on the same books. My own list (for a book I want to write in the near future) is far more interesting.

            As regards the secondary sources, ChatGPT suggested consulting the usual databases (which I had already done) but, when I insisted, it provided me with a list of sources. At first, I was very happy to have found so many sources I didn’t know on the theme of the secondary character, but by the time I had checked about five I realised they were invented. The authors did exist, but ChatGPT had extrapolated from their publications the titles (not complete bibliographical references) of non-existent works. The bot had gathered whatever it found about secondary characters and come up with a counterfeit bibliography of no use. In a similar vein, you might enjoy Elif Batuman’s chronicle of her difficulties in getting ChatGPT to find a quotation in Proust’s series In Search of Lost Time. ChatGPT offered a variety of inexact quotations and even lied about the availability of the original French text, claiming it was still under copyright when it is not. This playfulness, if it is playfulness, begins to sound like wilfulness.

            The situation as regards the assessment of our students’ work is that plagiarism has now taken a back seat to… what? We don’t even have a label for what happens when a student presents a text generated by AI. Anti-plagiarism tools like Turnitin are of no use in detecting AI-authored text, whereas the new tools, such as ZeroGPT, give false results. The six essays submitted by my students last June that, according to this app, were not written by humans turned out to have been written by students who had simply not read the corresponding novels. ZeroGPT flagged them because their style was as robotic as ChatGPT’s.

            One thing that caught my attention last week, in any case, is that not only BA students but also MA students are using ChatGPT for their dissertations. A BA student boasted that ChatGPT had written the second half of his dissertation after he had procrastinated for too long. He got a B- for that, which is in itself interesting: ChatGPT may help you pass, but not get As. An MA student likewise boasted that ChatGPT had helped her finish her dissertation, for which it had also provided the main line of argumentation. As I have always said, there are many ways to fool a teacher and this is just the newest one; I’m just sorry that so many students reject the chance to be educated and focus instead on getting a degree certificate that proves nothing.

            I am not teaching this semester and will have to wait until the next one to see how things work in my Victorian Literature class. We, the two teachers in charge, have decided to have students write the shorter essays in class (they’re not quite exams, because the students choose the passages to comment on and prepare the essay at home), but still ask for the usual 2000-word essay with four secondary sources. I assume that ChatGPT can easily generate second-year-level papers but, unlike other colleagues, I don’t want to eliminate that kind of exercise and return to exams, which I hate. Cheaters will always cheat, but I still believe in personal integrity. If I catch a student using ChatGPT I will be disappointed rather than annoyed or angry. As for those who will successfully cheat on me: shame on you, and I’m sorry you’re missing the chance to learn. Skynet scores a victory, and poor John Connor loses once more.

            I am not against the use of AI for creative purposes, as long as this is acknowledged. Consider for a moment how ridiculous it would be for audiovisual FX specialists to claim that computer-generated images do not exist and that everything is their own painstaking handiwork. In the same way, I see no obstacle to artists of all kinds applying AI to the production of new imagery, though this is already creating singular problems. Recently, the US Copyright Office review board determined that the AI-generated image ‘Théâtre d’Opéra Spatial’, the winner of the 2022 Colorado State Fair annual art competition, could not be copyrighted because protection “excludes works produced by non-humans”. Artist Jason Allen, who had used the AI platform Midjourney to create the image, alleged that he was the author because he “entered a series of prompts, adjusted the scene, selected portions to focus on, and dictated the tone of the image”. The board adamantly replied that the image “lacks human authorship, and the Office will not register it”. This appears to me to be a mistake, since Midjourney does not spontaneously generate images and depends on human artists to come up with creative ideas. Allen thus appears to be the author. I would call Allen a cheater if he had denied having used Midjourney at all, or if the Colorado State Fair’s annual art competition had forbidden the use of AI. Perhaps there should be a separate event, and a separate artistic circuit, for AI-generated art.

            Authorship, as you can see, is a key word in the matter of ChatGPT, too. Just as Midjourney does not generate art unless it is prompted, ChatGPT does not generate writing without instructions from humans. Prompting is another key word here, for whereas I use a computer programme to write my posts, and to translate them into Spanish, I cannot prompt Word to write by itself (well, I do in the case of translation, but the translation comes from my own text). I could start using ChatGPT to write the posts in this blog, but then my authorship would be so radically diminished that I could not call myself an author (just a prompter?).

            Perhaps I am here destroying my own argument in support of Jason Allen’s authorship of ‘Théâtre d’Opéra Spatial’, but I believe that in writing you cannot claim as yours a text which you have only revised. I already have doubts about whether I am the full author of the Spanish version of my blog, as I use Word’s automatic translation feature to generate it, but so far copyright legislation acknowledges translated texts as the work of the original author (and of the translator, if there is a human translator; Word, Google and DeepL do not get copyright for translated texts). I would, however, be cheating on you, my reader, if I passed off as mine posts generated by ChatGPT, for they would not really be my work. If, say, I asked a student to write this week’s post using a few ideas of mine, the post would not really be mine; likewise, whatever is written with ChatGPT cannot be claimed as one’s own, at least not in the way copyright operates today (I’m beginning to think Allen can claim authorship but not copyright for his image…).

            An important point to recall is that purely AI-generated images or texts do not exist, since bots like ChatGPT are always prompted by humans. Some kind of collaboration between human and computer might be acceptable, though this muddles the distinction between authorship and copyright. As for students, if the use of ChatGPT is forbidden because we require full human authorship, then submitting an AI-generated text is cheating. The day we start testing what students can do with ChatGPT and other bots may come, but it is not a day I am looking forward to.