About

I am a philosopher of medicine and Artificial Intelligence (AI), focusing on questions at the intersection of evidence and ethics. I am Assistant Professorial Lecturer and Deputy Director of LSE100 at the London School of Economics.

Chris J Blunt

My research focuses on Evidence-Based Medicine, hierarchies of evidence, and other problems related to evidence and ethics in biomedical research, medical genomics, machine learning, and complementary & alternative medicine. My work was profiled in Science News.

I received my PhD from LSE in 2016, with a thesis on ‘Hierarchies of Evidence’, available here. Hierarchies of evidence are a popular tool for appraising evidence in medical practice, and a common way of teaching evidence appraisal to medical students.

My work is now structured as a series of papers that form the basis for an upcoming book on evidence appraisal in medicine. These include The Authority of Evidence-Based Medicine, The Dismal Disease, The Positivity Machine, The Avoidable Scandal, and The Parachute Problem. In addition, several chapters of the series Philosophy of Diagnosis are currently available online (I, II, III). I also work on problems related to evidence in education research (see Minding the Gaps) and on philosophical questions in AI, particularly relating to language models (see Imitating Imitation).

As Deputy Director of LSE100, I work on the delivery of LSE’s flagship interdisciplinary course, which is studied by all undergraduates in their first year. LSE100 delivers interdisciplinary seminars accompanied by bespoke online content featuring LSE academics. It serves as a forum for students and academics of all disciplines to meet and discuss major societal challenges, and as a laboratory for pedagogic innovation in higher education.

In the 2019, 2020 and upcoming academic years, LSE100 has focused on the social, political, economic and philosophical challenges and changes driven by AI. The module, entitled “How can we control AI?”, raises questions such as: Is AI biased by design? Should criminal justice be automated? Will self-driving cars fundamentally transform our built environment? Can privacy survive AI?

Recently, my interests in AI and medicine have converged in a pair of new projects: (1) on the philosophy of diagnostics, which brings the differences and commonalities between AI and clinician diagnosis into focus; and (2) on the disparities between information-theoretic and socio-political conceptualisations of anonymity and privacy, with respect to the data access needed to train new machine learning models for diagnostics and therapeutics.

I have a PGCE in higher education and am a Fellow of the Higher Education Academy. Alongside LSE100, I have taught a range of courses in LSE’s Philosophy department at undergraduate and master’s level, and I supervise MA dissertations in the Philosophy of Artificial Intelligence.

I maintain WritePhilosophy.com, a resource for teachers and students of Philosophy. The site has a wealth of articles covering topics from philosophical argumentation to elementary logic.

This site is an ongoing repository for all of my work. I have criticised academic publishing models extensively in the past, so I offer all of my research freely here.

I am always interested to hear about challenges faced in the analysis of evidence by both academics and practitioners working in healthcare and in artificial intelligence research. I am part of ongoing projects with colleagues working in fields including rheumatology, orthopaedic surgery, physiotherapy, and machine learning. My philosophical work is driven by engagement with the challenges medical practitioners face. If you have an interest in evidence hierarchies or philosophical challenges in medical evidence, do get in touch.

Contact: c.j.blunt <at> lse.ac.uk