What skills do social scientists need to adapt to generative AI, and how should educators approach teaching them? A narrow focus on training everyone in technical skills is misguided – what’s needed is authorial voice, leadership and management skills, and the critical force of the social sciences.
Can a language model outperform old Edward Lear in describing philosophers in limerick form? “There once was a Scotsman named Hume…”
Will large language models acknowledge authorship of their own generated texts? Will a language model claim authorship or ownership of texts it did not create? A common misunderstanding of ChatGPT’s abilities raises a distinctive problem for intellectual property rights.
What is the relationship between perplexity, creativity and novelty? Following on from ‘Perplexing Perplexity’, I set out to demonstrate that high perplexity texts are not always creative, and to showcase ChatGPT’s ability to work with and even generate novel words – culminating in the Tale of Zonkamoozle.
Detectors such as GPTZero use the property of ‘perplexity’ to try to detect AI authorship of texts. But I show that by writing in a deliberately dull style, or by engineering the prompt given to a language model, we can easily and systematically fool such detectors into labelling AI text as human and vice versa.
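The ‘perplexity’ such detectors rely on is simply the exponentiated average negative log-probability a language model assigns to each token of a text: predictable prose scores low, unusual prose scores high. A minimal sketch, using hypothetical per-token probabilities rather than a real model:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    a language model assigns to each token in the text."""
    n = len(token_probs)
    avg_neg_logprob = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_neg_logprob)

# Hypothetical per-token probabilities (not from a real model):
predictable = [0.9, 0.8, 0.95, 0.85]   # dull, formulaic phrasing
surprising  = [0.1, 0.05, 0.2, 0.08]   # unusual word choices

print(perplexity(predictable))  # low perplexity, reads as "AI-like"
print(perplexity(surprising))   # high perplexity, reads as "human-like"
```

This is why the trick works in both directions: a human writing in a flat, formulaic register drives perplexity down towards the “AI” range, while prompting a model to use eccentric vocabulary drives it up towards the “human” range.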
A recent study by Gao et al. (2022) validates the warning of ‘Machine Evidence’ (Blunt, 2019) that language models would soon become capable of beating detection attempts by human peer reviewers. This piece looks at the near-term steps that journal editors and conference organisers can take to prevent AI-generated abstracts from bypassing their screening processes, along with a warning about the long-term viability of those strategies.
Can a new iteration of GPT-3 write pop songs, raps and limericks…about Immanuel Kant’s Categorical Imperative? It is a moral duty to find out.
Can DALL·E 2 create images of ancient philosophers like Plato, Aristotle and Immanuel Kant as they’d look in modern-day universities? Sort of. Should it? Definitely not.
Would you choose a black box AI surgeon with a 90% success rate over a human surgeon with 80% success? The answer exposes a fundamental and harmful assumption within dominant models of medical evidence.
DALL·E 2 offers a far more powerful image-generation AI than the popular open-access ‘Craiyon’/‘DALL·E Mini’ model. How does DALL·E 2 compare to DALL·E Mini’s visions of hierarchies and pyramids of evidence?