‘I did write that text’: Ownership and Authorship Claims by Language Models

Will large language models acknowledge authorship of their own generated texts? Will a language model claim authorship or ownership of texts that it did not create? A mistaken understanding of ChatGPT’s abilities raises a distinctive problem for intellectual property rights.

Machine Evidence II: The Abstract Setting

A recent study by Gao et al. (2022) validates the warning of ‘Machine Evidence’ (Blunt, 2019) that language models would soon become capable of beating detection attempts by human peer reviewers. This piece looks at the near-term steps that journal editors and conference organisers can take to prevent AI-generated abstracts from bypassing their screening processes, along with a warning about the long-term viability of those strategies.

The Jurassic Critique of Micozzi on Evidence Hierarchies

AI21 Labs have just released a public demo of their giant language model, Jurassic-1. At 178bn parameters, it rivals GPT-3. When I fed it my own work, it generated some interesting and potentially novel views on evidence hierarchies… and then attributed them to CAM researcher Marc Micozzi! Is Jurassic Micozzi’s critique of evidential pluralism in medicine sound?

Imitating Imitation: a response to Floridi & Chiriatti

In their 2020 paper, Floridi and Chiriatti subject the giant language model GPT-3 to three tests: mathematical, semantic and ethical. I show that these tests are misconfigured to prove the points Floridi and Chiriatti are trying to make. We should attend to how such giant language models function in order to understand both their responses to questions and their ethical and societal impacts.