Imitating Imitation: a response to Floridi & Chiriatti

In their 2020 paper, Floridi and Chiriatti subject the giant language model GPT-3 to three tests: mathematical, semantic, and ethical. I show that these tests are ill-suited to support the points Floridi and Chiriatti are trying to make. To understand both how such giant language models respond to questions and what their ethical and societal impacts are, we should attend to how they actually function.

Automatic Gadfly: Socrates by Machine

Recently, I’ve been experimenting with creating philosophical work using massive machine learning language models such as GPT-2, sometimes prompting the model to adopt a specific philosopher’s style and sometimes just letting it run. I’ve generated essay text, clinical trial reports, and aphorisms in the styles of different philosophers. After reading Justin Weinberg’s post on the …