How does a machine learning algorithm picture hierarchies of evidence and evidence-based medicine – and what do these visions of evidence tell us about the way we understand, order and assemble the information we use to guide clinical practice?
AI21 Labs have just released a public demo of their giant language model, Jurassic-1. At 178bn parameters, it rivals GPT-3. When I fed it my own work, it generated some interesting and potentially novel views on evidence hierarchies… and then attributed them to CAM researcher Marc Micozzi! Is "Jurassic Micozzi's" critique of evidential pluralism in medicine sound?
In their 2020 paper, Floridi and Chiriatti subject the giant language model GPT-3 to three tests: mathematical, semantic and ethical. I show that these tests are ill-configured to prove the points Floridi and Chiriatti want to make. We should attend to how such giant language models function in order to understand both their responses to questions and their ethical and societal impacts.