Yesterday, AI researchers published a new paper entitled Language Models are Few-Shot Learners. It introduces GPT-3 (Generative Pretrained Transformer 3), the follow-up to last year’s GPT-2, which at the time of its release was the largest language model in existence. GPT-2 was particularly impactful because of the cycle of media hype and consternation that followed when the organisation responsible, OpenAI, decided not to release the full version of the model publicly, citing concerns about potential harmful impacts. Media outlets duly took up the topic of the AI ‘too dangerous to release’, and promulgated the possibility of swathes of computer-generated text, essentially indistinguishable from human-written content except by the most attentive reader or a suitable detection algorithm, swamping online spaces and even being used for targeted trolling, political misinformation and the validation of armies of bots.
But GPT-2 now looks like a small wonder. Its successor, GPT-3, is much, much, much bigger. While the full version of GPT-2 – finally made available to the public late last year – had 1.5 billion parameters, the new model has 175 billion at its disposal to create a hugely more powerful language model. That’s roughly ten times the size of anything that has come before. For comparison, I was able to use a much more limited version of GPT-2, released back in the midst of the media blitz, with only 117 million parameters (only!), to generate fake reports of clinical trials, paragraphs emulating my PhD thesis, and a new version of Wittgenstein’s Tractatus. More recently, I’ve used the larger GPT-2 model to create aphorisms in the styles of a range of philosophers, and a few paragraphs of an essay about David Hume and rational decision-making. I warned that academics should be aware of these developments, particularly in their roles as editors, peer reviewers and markers of student work.
This larger model is likely to produce more human-like text, which is harder to distinguish from human-written content either by readers or by algorithms. It is also very adept at turning its hand to new tasks, and can be prompted to do so through simple text interactions. (This is not entirely novel: try TalktoTransformer.com to play around with GPT-2 [update (03/07/20): TalktoTransformer has now been taken down by its creator because of the incredible cost of running it as a free service. Huge thanks are due to Adam King for all of his efforts in allowing so many people to engage with this technology. I hope some readers will be able to support his new paid project Inferkit], and you’ll see that it is fairly straightforward to persuade it to adopt a particular style or to perform question-answering, list-writing, etc., simply by offering it a short snippet of that style to emulate or by writing “Question: … ; Answer:”.) According to the new paper, GPT-3, without any fine-tuning for particular tasks, “achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic” (Brown et al. 2020). It still has plenty of weaknesses and limitations, but those are eroding.
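To make that prompting trick concrete, here is a minimal sketch of the same idea run locally, using the small public GPT-2 model through the Hugging Face transformers library. The library, the model name and the prompt are my own choices for illustration; nothing like this appears in the paper itself.

```python
# A minimal sketch of prompt-based steering of GPT-2 (illustrative only).
# Assumes the Hugging Face `transformers` library and the small public "gpt2" model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Steer the model by example: a short question-and-answer prompt in the
# "Question: ... ; Answer:" pattern mentioned above.
prompt = (
    "Question: Who wrote the Tractatus Logico-Philosophicus?\n"
    "Answer: Ludwig Wittgenstein.\n"
    "Question: What is the categorical imperative?\n"
    "Answer:"
)

result = generator(prompt, max_length=80, num_return_sequences=1)
print(result[0]["generated_text"])
```

Swap the prompt for a few aphorisms, or the opening of a thesis chapter, and the model will gamely continue in kind – which is all the ‘adaptation’ on display here really amounts to.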
The authors are clearly aware, as they were with the release of GPT-2, that these large language models pose dangers. AI is a dual-use technology. Like electricity, the steam engine, and many other general-purpose technologies, it can almost always be applied both to help solve a given problem and to make that problem much worse. Sometimes this happens from both ends simultaneously. A language model can generate fake news. But AI tools can also be used to detect it. Language models will make many tasks easier, quicker, cheaper and less onerous, while at the same time undermining trust in the products of those same tasks and many others. This is before we get to issues of bias and misrepresentation: because these language models are trained on a huge corpus of human-written text, they will tend to pick up, replicate and amplify our biases, prejudices, proclivities and misconceptions. The authors write: “Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting. Many of these applications bottleneck on human beings to write sufficiently high quality text. Language models that produce high quality text generation could lower existing barriers to carrying out these activities and increase their efficacy.” (Brown et al. 2020, p. 34) They note that the exceptional performance of GPT-3 in producing text which humans cannot distinguish from human-written content is “a concerning milestone in this regard” (ibid.).
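On the detection side of that dual use, here is an equally rough sketch of the sort of tool I have in mind, using OpenAI’s RoBERTa-based GPT-2 output detector through the same transformers library. The model name and the form of its labels are assumptions on my part for illustration, not anything described in the paper.

```python
# Hedged sketch: scoring whether a passage looks machine-generated,
# using a RoBERTa-based GPT-2 output detector (model name assumed).
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

sample = "The results of the clinical trial were overwhelmingly positive..."
prediction = detector(sample)[0]

# The classifier returns a label (e.g. "Real" or "Fake") and a confidence score.
print(f"{prediction['label']}: {prediction['score']:.2f}")
```

Detectors like this are far from perfect, and the paper’s own finding that human readers struggle to spot GPT-3’s output suggests this arms race will only get harder.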
As language models get more sophisticated and their ability to generate convincingly human-like text improves past the point at which even an attentive reader can distinguish human from computer-generated text, we must be ready to interrogate the source and ensure the provenance of new work. We might also need to get serious about discussing whether we want to see philosophical work generated automatically by computers in our fields and our journals, whether we want to take measures to prevent this, and what the value of some automatic philosophy might be. Finally (and I disclose my conflict of interest as someone who has now published AI-generated content three times on this site), we might want to ask who owns the material which GPT-2, GPT-3 and their ilk may generate.