Machine Evidence II: The Abstract Setting

A recent study by Gao et al. (2022) validates the warning of ‘Machine Evidence’ (Blunt, 2019) that language models would soon become capable of beating detection attempts by human peer reviewers. This piece looks at the near-term steps that journal editors and conference organisers can take to prevent AI-generated abstracts from bypassing their screening processes, along with a warning about the long-term viability of those strategies.
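By way of illustration, one such near-term step might look like the sketch below: scoring each submitted abstract with the RoBERTa-based GPT-2 output detector, the class of tool Gao et al. (2022) tested. The model name, label strings and flagging threshold here are assumptions for illustration, not an endorsement of any particular detector.

```python
# Sketch: flag possibly machine-generated abstracts for manual review.
# Assumes the public RoBERTa-based GPT-2 output detector on Hugging Face,
# whose labels are reported as "Real" (human) and "Fake" (machine).
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

def screen_abstract(text: str, threshold: float = 0.9) -> bool:
    """Return True if the abstract should be flagged for editorial review."""
    result = detector(text, truncation=True)[0]
    # Flag rather than auto-reject: detectors produce false positives,
    # so a human editor should make the final call.
    return result["label"] == "Fake" and result["score"] >= threshold

if screen_abstract("We conducted a randomised controlled trial of ..."):
    print("Flag: likely machine-generated; route to human review")
```

Even this stopgap illustrates the long-term problem the piece warns about: a fixed classifier is itself a training target, and models tuned to evade it will erode its usefulness over time.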

The Jurassic Critique of Micozzi on Evidence Hierarchies

AI21 Labs have just released a public demo of their giant language model, Jurassic-1. At 178bn parameters, it rivals GPT-3. When I fed it my own work, it generated some interesting and potentially novel views on evidence hierarchies… and then attributed them to CAM researcher Marc Micozzi! Is Jurassic Micozzi’s critique of evidential pluralism in medicine sound?

Imitating Imitation: a response to Floridi & Chiriatti

In their 2020 paper, Floridi and Chiriatti subject the giant language model GPT-3 to three tests: mathematical, semantic and ethical. I show that these tests, as configured, cannot prove the points Floridi and Chiriatti want them to. We should attend to how such giant language models actually function if we want to understand both their responses to questions and their ethical and societal impacts.

The Stochastic Masquerade and the Streisand Effect

What does Google have in common with Barbra Streisand? Since Google fired AI ethicists Margaret Mitchell and Timnit Gebru, our attention should turn to what they don’t want us to read: “On the Dangers of Stochastic Parrots”. Will the attempts to suppress this paper lead to it being overlooked, or will Google face Barbra Streisand’s fate?