Echoes of Evidence

Do you have an idea in your mind of what makes good medical evidence? Do you have ideas about the kind of evidence you’d pay attention to, and the kind of evidence you can safely disregard? Or ideas about the evidence that would strongly sway your beliefs and actions, and the evidence that would have a weak effect, or no effect, on your beliefs? Is there an archetype of high-quality evidence in there somewhere? Are there, at the back of your mind, examples of evidence that only the credulous, the gullible, or the underinformed would accept?

Many of us have pictures in our heads of what good and bad medical evidence looks like. We have ideas and notions about what makes strong evidence. For many people working in medical practice and policy today, those ideas of strong evidence are bound up with the idea that evidence from Randomized Controlled Trials (RCTs) is the ideal of strong, high-quality evidence in medicine. Some repeat the mantra that RCTs are the “gold standard” of medical evidence. Others worry about depending on data from individual trials, and prefer to lean on evidence from meta-analyses, systematic reviews, Cochrane reviews or guidelines—ways of drawing together the evidence from multiple RCTs into larger, stronger, higher-quality syntheses.

Those same practitioners and policy-makers, and others besides, often have ideas of bad evidence equally embedded in their thinking. Bad evidence might just be expert opinion. It might be biological reasoning and theorizing about why a treatment should work, without hard data to back up that rationale. Or bad evidence might bring to mind uncontrolled data, anecdotes, case series and case studies—or even observational research in its entirety. Alternatively, the fear of data mining might prompt us to regard subgroup analyses as dangerous and vulnerable to both bias and exploitation.

For some people, these ideas about evidence are firmly held beliefs or doctrines. They form the fundamental plank of a coherent ideology about medical evidence. For others, they are shortcuts or heuristics—acknowledged to be fallible, but used on an everyday basis because time and resource constraints make it impossible to go deeper. For yet others, these ideas are dimly held views or opinions, picked up in conversations with colleagues, from occasional lectures by proponents of ‘Evidence-Based Medicine’ (EBM), or from a distantly remembered course on evidence. Such ideas might not be firmly held, or the basis of any larger theory or framework of evidence. But they provide a reference point which allows some level of critical review of evidence when it is encountered.

I want to suggest, contrary to what you might think, that those ideas and assumptions in the minds of practitioners and policy-makers (and sometimes patients as well)—the ones that may be in your mind at the moment—are incredibly influential in determining the kind of care patients receive. This is true even when these ideas are not held as anything like sacred: even if they haven’t been thought through in detail by those who hold them, are dimly regarded and applied from a forgotten corner of the mind, or are not taken very seriously and serve only as a shortcut for getting the gist of the medical evidence when time is scarce.

How can these ideas wield substantial influence over patient care even when weakly held? Precisely because our ideas about evidence are applied over and over, in succession, at many stages of a process. Even a mild preference compounds: a filter that passes unorthodox evidence, say, four times out of five looks forgiving on its own, but applied at five successive stages it lets barely a third of that evidence through. How many hands has evidence been through before it reaches you? How many times could someone have rejected, toned down, or put a qualifier on the evidence by the time you read or hear about it? How many times could people have accelerated, emphasized or enhanced the power of a finding by the time it finds its way to you?

Perhaps you get your evidence from the source—from the published results of a study in a medical journal, unfiltered by reviews, reports, media interpretation or pharmaceutical company reps’ spin. How many people have touched that evidence?

It’s tempting to think that only the researchers’ ideas of evidence have shaped what you’ll see in their study report. Those researchers have particular ideas about what counts as strong, high-quality evidence. Presumably they want to (and are incentivized to) produce work that matches those ideas. If they share your ideas of quality of evidence, then they probably wanted to design their study in a way that matched. They probably wanted to conduct a randomized controlled trial, to keep researchers and participants blind to treatment allocation as far as possible, to conduct a large, well-powered trial and perform standard statistical analyses. They might not have been able to meet those criteria. The question they wanted to ask might not have been particularly amenable to that kind of study. But researchers who share your ideas of evidence might well take that as a sign that they should pursue a slightly different (or even very different) question—one that would allow them to provide the kind of evidence they want to see, and that they think you need.

But researchers rarely choose and carry out a study entirely unaided. Research is expensive. Their research might be funded by a pharmaceutical company, a government regulator, a private donor, or a funding body. Each of those funding sources has numerous stages at which it can push the evidence that ends up being created towards the ideals it has in mind. Those ideals are quite likely to match the ones in the heads of doctors and policy-makers. Regulators have specific ideas about what evidence should count for a treatment to be licensed, recommended and paid for. Those ideas affect the kinds of study they’ll be willing to commission, as well as the kinds of studies pharmaceutical companies are incentivized to fund and produce. Private donors and funding bodies want to see evidence that is well-regarded—high-quality, strong evidence. The ideas of high-quality, strong evidence that are prevalent in medicine will feed into the kind of study that gets funded.

So far, we’ve spoken only about the effects on which studies end up being performed. Researchers design studies to meet their (and your) ideas of strong evidence. Funding bodies fund research which matches those ideas—and they channel that funding towards researchers who agree with their ideas of strong evidence. Let’s imagine that each of these participants has only a soft, vague sense of what they think good evidence in medicine looks like. They don’t have hard and fast rules about what makes good evidence, just a few preconceptions about the evidence they’d prefer. A funding body likes to fund RCTs. They’re not particularly against funding other kinds of work. They like to fund and build relationships with researchers who think similarly to them—researchers who also like to run RCTs. Those researchers tend to take a particular research question or problem and think about how it could be approached through an RCT. Or they choose research questions which are already amenable to that kind of research. Even if those tendencies are quite minimal, they are mutually reinforcing. The intense competition for funding, and the competitive advantage of matching even a weak notion of quality, give those minimal tendencies a far stronger combined impact.

The next major stage at which more hands and heads get involved is publication. Competition for space in the most prestigious journals is intense. The editors of those journals, and the peer-reviewers who recommend whether a paper should be published, also have ideas of evidence in mind. Some journals have adopted formal systems which rank different kinds of evidence, or have made statements about the forms of evidence they do and don’t publish. Because they want to attract readers and sponsors, and because editors and peer-reviewers are largely drawn from the population of practitioners and policy-makers, they generally share the same notions of strong evidence as their readers do. If you’re reading the most prestigious journals, then the standards of evidence in your mind—even if they’re loosely held or regarded only as a shortcut, not to be taken too seriously—have already been applied in earnest several times over in making publication decisions, and a paper which doesn’t conform to them stands little chance of appearing. Prestigious journals don’t need to compromise those standards for unconventional research papers.

That knowledge feeds back to researchers and funding bodies. Both want to see their papers receive prestigious billing and enjoy a smooth road to publication. When funding decisions are being made, research questions chosen and studies designed, publication cannot be far from the minds of those involved.

All of these interacting and overlapping ideas of evidence have already been applied—to determine what research gets performed, and where and whether it gets disseminated—before you catch sight of the papers. And that’s assuming your habit is simply to browse through your journals of choice and read every paper relevant to your field or interests, without applying your own filter to decide which of the published papers are significant enough to peruse. If, instead, papers come to your notice through secondary reports or media coverage, through social media posts, or because a pharmaceutical rep, colleague or patient brought them to your attention, then there are even more filters, applying the same concepts, ideas and assumptions about evidence on top of everything else.