DALL·E is a machine learning system created in January 2021, based on a 12-billion-parameter version of the giant language model GPT-3. The ability to repurpose the language model in these interesting capacities has been a talking point, as tech firms such as Google have pushed the idea of ‘foundation models’, a controversial rebrand to say the least. Now the follow-up, DALL·E 2, is impressing many people with its achievements. As many languish on the waitlist to play with OpenAI’s model, an open-source implementation known as DALL·E mini has proved an attention-grabbing diversion.
As always, prompting a model like DALL·E mini with abstract ideas produces some rather unusual outputs. If we use Google Image search to find pictures of, for instance, “hierarchies of evidence” or “evidence-based medicine”, we get a wall of evidence pyramids in response – with a few Venn diagrams in the mix for good measure. DALL·E mini’s interpretation was a little different, emphasising the hierarchical structure over simple rank ordering. A reminder, then, that when we think in terms of hierarchies, we can think in more nuanced ways about how the under-ranked elements support those above them – following not just a linear ranking but branching pathways to a complete evidence base.
Here are some of DALL·E mini’s evidence hierarchies. As ever with AI-generated content, attribution and ownership are deep in the grey. I claim nothing.
Unlike the corporate-brochure Venn diagrams and pyramid schemas that characterise the imagery of ‘Evidence-Based Medicine’, DALL·E mini put the practitioner at the centre of its imagining of the concept:
But when we think about “evidence” in the abstract, it is still not clinical trials or medical practice that come to mind as the paradigmatic example. Our visions of evidence remain rooted in the courtroom.