Philosophical critique exposes flaws in medical evidence hierarchies

[Image: graphic of an evidence hierarchy]
Evidence hierarchies, one version shown, classify types of studies according to the strength of evidence they provide. But a recent paper challenges the assumptions behind these hierarchies.

Immanuel Kant was famous for writing critiques.

He earned his status as the premier philosopher of modern times with such works as Critique of Pure Reason, Critique of Practical Reason and Critique of Judgment. It might have been helpful for medical science if he had also written a critique of evidence.

Scientific research supposedly provides reliable evidence for physicians to apply to treating patients. In recent years "evidence-based medicine" has been the guiding buzzword for clinical practice. But not all "evidence" is created equal. Many experts therefore advocate the use of an evidence hierarchy: a ladder or pyramid classifying different types of studies in order of their evidentiary strength. Anecdotes, for instance, might occupy the lowest level of the evidence pyramid. At the apex you'd typically find randomized controlled clinical trials, or perhaps meta-analyses, which combine multiple studies in a single analysis.

Kant died in 1804, so it’s hard to say what he would have thought about evidence hierarchy pyramids. But at least one modern-day philosopher thinks they’re bunk.

In a Ph.D. thesis submitted in September 2015 to the London School of Economics, philosopher of medicine Christopher Blunt analyzes evidence-based medicine’s evidence hierarchies in considerable depth (requiring 79,599 words). He notes that such hierarchies have been formally adopted by many prominent medicine-related organizations, such as the World Health Organization and the U.S. Preventive Services Task Force. But philosophical assessment of such hierarchies has generally focused on randomized clinical trials. It “has largely neglected the questions of what hierarchies are, what assumptions they require, and how they affect clinical practice,” Blunt asserts.

Throughout his thesis, Blunt examines the facts and logic underlying the development, use and interpretation of medical evidence hierarchies. He finds that “hierarchies in general embed untenable philosophical assumptions….” And he reaches a sobering conclusion: “Hierarchies are a poor basis for the application of evidence in clinical practice. The Evidence-Based Medicine movement should move beyond them and explore alternative tools for appraising the overall evidence for therapeutic claims.”

Each chapter of Blunt’s thesis confronts some aspect of evidence hierarchies that suggests the need for skepticism. For one thing, dozens of such hierarchies have been proposed (Blunt counts more than 80). There is no obvious way to judge which is the best one. Furthermore, developers of different hierarchies suggest different ways of interpreting them. Not to mention that various hierarchy versions don’t always agree on what “evidence” even means or what “counts” as evidence.

It’s not even clear that evidence hierarchies are really ways of ranking evidence. They actually rank methodologies — clinical trials supposedly being in some sense a better methodology than observational studies, for instance. But it’s not necessarily true that a “better” method always produces superior evidence. A poorly conducted clinical trial may produce evidence inferior to that of a high-quality observational study. And sometimes two clinical trials disagree — they can’t both be right.
