The Jurassic Critique of Micozzi on Evidence Hierarchies

AI21 Labs have just released a public demo of their giant language model, Jurassic-1. At 178bn parameters, it rivals GPT-3. When I fed it my own work, it generated some interesting and potentially novel views on evidence hierarchies… and then attributed them to CAM researcher Marc Micozzi! Is Jurassic Micozzi’s critique of evidential pluralism in medicine sound?

Imitating Imitation: a response to Floridi & Chiriatti

In their 2020 paper, Floridi and Chiriatti subject the giant language model GPT-3 to three tests: mathematical, semantic and ethical. I show that these tests are ill-suited to establish the points Floridi and Chiriatti are trying to make. We should attend to how such giant language models function in order to understand both their responses to questions and their ethical and societal impacts.

The Stochastic Masquerade and the Streisand Effect

What does Google have in common with Barbra Streisand? Since Google fired AI ethicists Margaret Mitchell and Timnit Gebru, our attention should turn to what they don’t want us to read: “On the Dangers of Stochastic Parrots”. Will the attempts to suppress this paper lead to it being overlooked, or will Google face Barbra Streisand’s fate?

Automatic Gadfly: Socrates by Machine

Recently, I’ve been experimenting with creating philosophical work using massive machine learning language models such as GPT-2, sometimes prompting them to adopt specific philosophers’ styles and sometimes simply letting them run. I’ve generated essay text, clinical trial reports and aphorisms in different philosophers’ styles. After reading Justin Weinberg’s post on the …

Dual Use Technology and GPT-3

Yesterday, AI researchers published a new paper entitled Language Models are Few-Shot Learners. This paper introduces GPT-3 (Generative Pretrained Transformer 3), the follow-up to last year’s GPT-2, which was the largest language model available at the time of its release. GPT-2 was particularly impactful because of a cycle of media hype and consternation …