Four Myths about Generative AI in Education

I outline four common misconceptions about Generative AI that are widespread in Higher Education debates about these tools: that the use of AI in writing can be reliably and practically detected; that text produced by GenAI is bland, repetitive, or predictable; that GenAI tools struggle to cite sources accurately; and that more creative or reflective assessments are harder to complete using AI.

‘I did write that text’: Ownership and Authorship Claims by Language Models

Will large language models acknowledge authorship of the texts they generate? Will a language model claim authorship or ownership of texts it did not create? Misunderstanding ChatGPT's abilities in this respect raises a distinctive problem for intellectual property rights.

Machine Evidence II: The Abstract Setting

A recent study by Gao et al. (2022) confirms the warning in 'Machine Evidence' (Blunt, 2019) that language models would soon become capable of evading detection by human peer reviewers. This piece examines the near-term steps that journal editors and conference organisers can take to prevent AI-generated abstracts from bypassing their screening processes, along with a caution about the long-term viability of those strategies.