Imitating Imitation: a response to Floridi & Chiriatti

In their 2020 paper, Floridi and Chiriatti subject the giant language model GPT-3 to three tests: mathematical, semantic, and ethical. I show that these tests are ill-suited to prove the points Floridi and Chiriatti are trying to make. To understand both how such giant language models respond to questions and what their ethical and societal impacts may be, we should attend to how they actually function.

The Stochastic Masquerade and the Streisand Effect

What does Google have in common with Barbra Streisand? Since Google fired AI ethicists Margaret Mitchell and Timnit Gebru, our attention should turn to the paper they didn’t want us to read: “On Stochastic Parrots”. Will the attempts to suppress this paper lead to it being overlooked, or will Google share Barbra Streisand’s fate?

Automatic Gadfly: Socrates by Machine

Recently, I’ve been experimenting with creating philosophical work using massive machine learning language models such as GPT-2, sometimes prompting them to adopt specific philosophers’ styles and sometimes just letting them run. I’ve generated essay text, clinical trial reports and aphorisms in different philosophers’ styles. After reading Justin Weinberg’s post on the …

Dual Use Technology and GPT-3

Yesterday, AI researchers published a new paper entitled “Language Models are Few-Shot Learners”. This paper introduces GPT-3 (Generative Pretrained Transformer 3), the follow-up to last year’s GPT-2, which was the largest language model available when it was released. GPT-2 was particularly impactful because of a cycle of media hype and consternation …

Machine Evidence: Trial by AI

Take a look at the following snippets from descriptions of clinical trials, thinking about how you’d rate the quality and strength of the evidence that comes from each: 1: We conducted a clinical trial in which erythropoietin (EPO) was administered daily to patients with severe acne, resulting in clinically significant …