NotebookLM: Cast with Care
Google’s NotebookLM “deep dive” feature is rapidly gaining popularity. I subject three of my academic papers to the deep-dive treatment and reveal its tendency to subvert content for a happier ending.
I outline four common misconceptions about Generative AI that are widespread in Higher Education debates about these tools: that it is possible and practical to detect the use of AI in writing; that text produced by GenAI is bland, repetitive, or predictable; that GenAI tools struggle to cite sources accurately; and that more creative or reflective assessments are harder to complete using AI.
As language models are fine-tuned to acquire more capabilities, we continue to seek new tasks to push the limits of Generative AI. When it comes to setting cryptic crossword clues, GPT-4 so far fails the test quite spectacularly.
What skills do social scientists need to adapt to generative AI, and how should educators approach teaching them? A narrow focus on training everyone in technical skills is misguided – what’s needed is authorial voice, leadership and management skills, and the critical force of the social sciences.
Can a language model outperform old Edward Lear in describing philosophers in limerick form? “There once was a Scotsman named Hume…”
Will large language models acknowledge authorship of their own generated texts? Will a language model claim authorship or ownership of texts it did not create? A mistaken understanding of ChatGPT’s abilities throws up a distinctive problem for intellectual property rights.
What is the relationship between perplexity, creativity and novelty? Following on from ‘Perplexing Perplexity’, I set out to demonstrate that high-perplexity texts are not always creative, and to showcase ChatGPT’s ability to work with and even generate novel words – culminating in the Tale of Zonkamoozle.
Detectors such as GPTZero use the property of ‘perplexity’ to try to detect AI authorship of texts. But I show that by writing in a deliberately dull style, or by engineering the prompt given to a language model, we can easily and systematically fool such detectors into labelling AI text as human and vice versa.
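To make the mechanism concrete, here is a minimal sketch of the kind of perplexity score such detectors rely on, computed with GPT-2 via the Hugging Face transformers library. The model choice and example sentences are my own illustrative assumptions, not GPTZero’s actual internals.

```python
# A minimal sketch of perplexity scoring of the kind detectors
# like GPTZero rely on. GPT-2 and the example texts are
# illustrative assumptions, not any detector's actual internals.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated average negative log-likelihood under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input IDs as labels makes the model return the
        # cross-entropy loss over its next-token predictions.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Dull, predictable prose scores low; unusual phrasing scores high,
# regardless of whether a human or a model wrote it.
print(perplexity("The cat sat on the mat."))
print(perplexity("Zonkamoozle vaulted sideways through the marmalade dusk."))
```

Because the score measures only how predictable a text is to a reference model, deliberately dull human prose can fall below any fixed ‘AI’ threshold while machine prose prompted towards novelty rises above it – which is exactly the weakness the post exploits.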
A recent study by Gao et al. (2022) validates the warning of ‘Machine Evidence’ (Blunt, 2019) that language models would soon become capable of beating detection attempts by human peer reviewers. This piece looks at the near-term steps that journal editors and conference organisers can take to prevent AI-generated abstracts from bypassing their screening processes, along with a warning about the long-term viability of those strategies.
Can a new iteration of GPT-3 write pop songs, raps and limericks…about Immanuel Kant’s Categorical Imperative? It is a moral duty to find out.