Flossing and the Art of Scientific Investigation
It’s bad enough that expertise is under attack these days from populist political movements that dismiss specialist opinion as just another establishment ruse. But lately expertise is being criticized from another direction, too — from would-be defenders of science.
Consider the recent controversy over flossing. In August, a widely read Associated Press report suggested that, contrary to the advice of dentists everywhere, flossing didn’t necessarily foster good oral health. The report looked at 25 studies that had generally compared toothbrushing and flossing with toothbrushing alone and concluded that the evidence for the benefits of flossing was weak.
In response, the Department of Health and Human Services, the American Dental Association and the Academy of General Dentistry reaffirmed the importance of interdental cleaning. But confusion persists: A lot of people now mistakenly think that “science” doesn’t support flossing.
What explains this confusion? Misconceptions about the relation between scientific research, evidence and expertise.
In the case of flossing’s benefits, the supposedly weak evidence cited by The Associated Press was the absence of support in the form of definitive randomized controlled trials, the so-called gold standard of scientific research. Why was there so little of this support? Because the kind of long-term randomized controlled trial needed to properly evaluate flossing is hardly, if ever, conducted, for the simple reason that such studies are hard to implement. For one thing, it’s unlikely that an institutional review board would approve as ethical a trial in which, for example, people don’t floss for three years: it is considered unethical to run a randomized controlled trial unless experts are genuinely uncertain about what works.

And dentists know from a range of evidence, including clinical experience, that interdental cleaning is critical to oral health and that flossing, properly done, works. Yet the notion has taken hold that such expertise is fatally subjective and that only randomized controlled trials provide real knowledge.
The opposition between randomized controlled trials and expert opinion was fueled by the rise in the 1990s of the evidence-based medicine movement, which placed such trials atop a hierarchy of scientific methods, with expert opinion situated at the bottom. The doctor David Sackett, a father of the movement, once wrote that “progress towards the truth is impaired in the presence of an expert.”
But while all doctors agree about the importance of gauging the quality of evidence, many feel that a hierarchy of methods is simplistic. As the doctor Mark Tonelli has argued, distinct forms of knowledge can’t be judged by the same standards: what a patient prefers on the basis of personal experience; what a doctor thinks on the basis of clinical experience; and what clinical research has discovered — each of these is valuable in its own way. While scientists concur that randomized trials are ideal for evaluating the average effects of treatments, such precision isn’t necessary when the benefits are obvious or clear from other data.
Clinical expertise and rigorous evaluation also differ in their utility at different stages of scientific inquiry. For discovery and explanation, as the clinical epidemiologist Jan Vandenbroucke has argued, practitioners’ instincts, observations and case studies are most useful, whereas randomized controlled trials are least useful. Expertise and systematic evaluation are partners, not rivals.
Distrusting expertise makes it easy to confuse an absence of randomized evaluations with an absence of knowledge. And this leads to the false belief that knowledge of what works in social policy, education or fighting terrorism can come only from randomized evaluations. But by that logic (as a spoof scientific article claimed), we don’t know if parachutes really work because we have no randomized controlled trials of them.
Antagonism toward expertise can also waste time and effort by spurring researchers to test the efficacy of things we already know work. In the field of international development, for example, a recent study investigated the relationship between prescription glasses and school performance. A randomly selected group of Chinese children with poor eyesight were given glasses … and subsequently got better grades. Imagine: Kids who could see did better in school!
The cult of randomized controlled trials also neglects a rich body of potential hypotheses. In the field of talk therapy, for example, many psychologists believe that dismissing a century of clinical observation and knowledge as anecdotal, as research-driven schools like cognitive behavioral therapy have sometimes done, has weakened the bonds between clinical discovery and scholarly evaluation. The psychiatrist Drew Westen says the field is too often testing “uninformed hunches,” rather than ideas that therapists have developed over years of actual practice.
Experiments, of course, are invaluable and have, in the past, shown the consensus opinion of experts to be wrong. But those who fetishize this methodology, as the flossing example shows, can also impair progress toward the truth. A strong demand for evidence is a good thing. But nurturing a more nuanced view of expertise should be part of that demand.