Topic: I love this Blog

Edited by redonkulous on Fri 05/14/10 03:42 PM
Here is a particularly fun article on one of my favorite blogs; thought I'd share.
http://scienceblogs.com/insolence/

"I don't want knowledge. I want certainty!" --David Bowie, from Law (Earthlings on Fire)

If there's one universal trait among humans, it seems to be an unquenchable thirst for certainty. This should come as no surprise to those committed to science and rational thinking, because there is a profound conflict between our human desire for certainty and the uncertainty of scientific knowledge. The reason is that the conclusions of science are always provisional. They are always subject to change based on new evidence. Although by no means the only reason, this craving for certainty that the human mind appears to demand is likely a major force that drives people into the arms of religion, even radical religions that have clearly irrational views, such as the idea that flying planes into large buildings and killing thousands of people is a one-way ticket to heaven.

However, this craving for certainty isn't limited to religion. As anyone who accepts science as the basis of medical therapy knows, a lot of the same psychology is at work in medicine as well. Although I'm not going to discuss this phenomenon primarily in the context of unscientific and pseudoscientific quackery in the "alternative" medicine world, I think it's instructive as an example. Much of quackery involves substituting the certainty of belief for the provisional nature of science. Examples abound. Perhaps my two favorite examples are Hulda Clark, who attributed all cancer and serious disease to a common liver fluke, and Robert O. Young, who believes that virtually all disease is due to "excess acid." Time and time again, if you look carefully at "alt-med" concepts and the therapies that derive from them, you find simplicity tarted up in complicated-sounding jargon. Homeopathy, for instance, is at its heart nothing more than sympathetic magic, with its concept of "like cures like," combined with the principle of contagion, with its concept that water somehow has a "memory" of the therapeutic substances with which it's come in contact but can somehow manage, as Tim Minchin so hilariously put it, to forget all the poo it's been in contact with. Reiki and other "energy healing" modalities can be summed up as "wishing makes it so," with "intent" having the power to manipulate some fantastical life energy to heal people. It's faith healing, pure and simple.

The simplicity of these concepts at their core makes them stubbornly resistant to evidence. Indeed, when scientific evidence meets a strong belief, the evidence usually loses. In some cases, it does more than just lose; the scientific evidence only hardens the position of believers. We see this very commonly in the anti-vaccine movement, where the more evidence is presented against a vaccine-autism link, the more deeply anti-vaccine activists seem to dig in their heels to resist it, cherry picking and twisting evidence, launching ad hominem attacks on their foes, and moving the goalposts faster than science can kick the evidence through the uprights. The same is true for any number of pseudoscientific beliefs. We see it all the time in quackery, where even the failure of a tumor to shrink in response to treatment can lead patients to conclude that the tumor, although still there, just can't hurt them. 9/11 Truthers, creationists, Holocaust deniers, moon hoaxers: they all engage in the same sort of desperate resistance to science. They're not alone, though.
Even those who in general accept science-based medicine can be prone to the same tendency to dismiss evidence that conflicts with their beliefs. About three weeks ago, I saw an article by Christie Aschwanden discussing just this problem. The article was entitled Convincing the Public to Accept New Medical Guidelines, and I feel it could almost have been written by Orac, only minus Orac's inimitable and obligatory "insolence" and snark. To set up its point that persuading people to accept the results of new medical science is exceedingly difficult, the article starts with the example of long distance runners who believe that taking ibuprofen (or "vitamin I") before a long run reduces their pain and inflammation resulting from the run:

They call it "vitamin I." Among runners of ultra-long-distance races, ibuprofen use is so common that when scientist David Nieman tried to study the drug's use at the Western States Endurance Run in California's Sierra Nevada mountains he could hardly find participants willing to run the grueling 100-mile race without it.

Nieman, director of the Human Performance Lab at Appalachian State University, eventually did recruit the subjects he needed for the study, comparing pain and inflammation in runners who took ibuprofen during the race with those who didn't, and the results were unequivocal. Ibuprofen failed to reduce muscle pain or soreness, and blood tests revealed that ibuprofen takers actually experienced greater levels of inflammation than those who eschewed the drug. "There is absolutely no reason for runners to be using ibuprofen," Nieman says.

The following year, Nieman returned to the Western States race and presented his findings to runners. Afterward, he asked whether his study results would change their habits. The answer was a resounding no. "They really, really think it's helping," Nieman says. "Even in the face of data showing that it doesn't help, they still use it."

As is pointed out, this is no anomaly. Aschwanden uses as another example a topic that's become a favorite of mine over the last six months or so, since the USPSTF released revised guidelines for mammographic screening. Take a look at what she says about the reaction:

This recommendation, along with the call for mammograms in women age 50 and older to be done every two years, rather than annually, seemed like a radical change to many observers. Oncologist Marisa C. Weiss, founder of Breastcancer.org, called the guidelines "a huge step backwards." If the new guidelines are adopted, "Countless American women may die needlessly from breast cancer," the American College of Radiology said. "We got letters saying we have blood on our hands," says Barbara Brenner, a breast cancer survivor and executive director of the San Francisco advocacy group Breast Cancer Action, which joined several other advocacy groups in backing the new recommendations. Brenner says the new guidelines strike a reasonable balance between mammography's risks and benefits.

I discussed the guidelines and the reactions to them myself multiple times. Let's put it this way. I'm in the business, so to speak, and even I was shocked at the vehement reaction from not just patients and patient advocacy groups, but my very own colleagues. I was particularly disgusted by the reaction of the American College of Radiology, which was nothing more than blatant fear mongering that intentionally frightened women into thinking that the new guidelines would lead to their deaths from breast cancer.
Radiologists dug foxholes from which to protect their turf. Even some of my colleagues were very resistant to the guidelines, and in fact I was in the distinct minority at my own institution in cautiously supporting the new guidelines--with some misgivings. At least I managed to be a moderating force that kept the press release we ended up issuing from being too critical of the new guidelines. As much as we'd like to pretend otherwise, even science-based medical practitioners can fall prey to craving the certainty of known and accepted guidelines over the uncertainty of the new. And if it's so hard to get physicians to accept new guidelines and new science, imagine how hard it is to get patients to accept them.

There is abundant evidence of how humans defend their views against evidence that would contradict them, and it's not just the observational evidence that you or I see every day. Scientists often fall prey to what University of California, Berkeley, social psychologist Robert J. MacCoun calls the "truth wins" assumption. This assumption, stated simply, is that when the truth is correctly stated, it will be universally recognized. Those of us who make it one of our major activities to combat pseudoscience know, of course, that the truth doesn't always win. Heck, I'm not even sure it wins a majority of the time--or even close to a majority of the time. The problem is that the "truth" often runs into a buzzsaw: a phenomenon that philosophers call naive realism. This phenomenon, boiled down to its essence, is the belief that whatever one believes, one believes it simply because it's true. In the service of naive realism, we all construct mental models that help us make sense of the world. When the "truth wins" assumption meets naive realism, guess what usually wins?

At the risk of misusing the word, I'll just point out that the truth is that we all filter everything we learn through the structure of our own beliefs and the mental models we construct to support those beliefs. I like to think of science as a powerful means of penetrating the structure of those mental models, but that's probably not a good analogy. That's because, for science to work at changing our preconceptions, we have to have the validity of science already strongly incorporated into the structure of our own mental models. If it's not, then science is more likely to bounce harmlessly off of the force field our beliefs create to repel it. (Yes, I'm a geek.) As a result, when people see studies that confirm their beliefs, they tend to view them as unbiased and well designed, while if a study's conclusions contradict a person's beliefs, that person is likely to see the study as biased or poorly done. As MacCoun puts it, "If a researcher produces a finding that confirms what I already believe, then of course it's correct. Conversely, when we encounter a finding we don't like, we have a need to explain it away."

There's also another strategy that people use to dismiss science that doesn't conform to their beliefs. I hadn't thought of this one before, but it seems obvious in retrospect after I encountered a recent study that suggested it. That mechanism is to start to lose faith in science itself as a means of making sense of nature and the world. The study was by Geoffrey D. Munro of Towson University in Maryland and appeared in the Journal of Applied Social Psychology under the title The Scientific Impotence Excuse: Discounting Belief-Threatening Scientific Abstracts.
There were two main hypotheses and two studies included within this overall study. Basically, the hypothesis was that encountering evidence that conflicts with one's belief system would tend to make the subject move toward a belief that science can't study the hypothesis under consideration, a hypothesis known as the "scientific impotence" discounting hypothesis. In essence, science is dismissed as "impotent" to study the issue where belief conflicts with evidence. In addition:

The scientific impotence method of discounting scientific research that disconfirms a belief is certainly worrisome to scientists who tout the importance of objectivity. Even more worrisome, however, is the possibility that scientific impotence discounting might generalize beyond a specific topic to which a person has strong beliefs. In other words, once a person engages in the scientific impotence discounting process, does this erode the belief that scientific methods can answer any question? From the standpoint of the theory of cognitive dissonance (Festinger, 1957), the answer to this question could very well be "Yes."

And:

Using the scientific impotence excuse for one and only one topic as a result of exposure to belief-disconfirming information about that topic might put the individual at risk for having to acknowledge that the system of beliefs is somewhat biased and possibly hypocritical. Thus, to avoid this negative self-view, the person might arrive at the more consistent--and seemingly less biased--argument that science is impotent to address a variety of topics, one of which happens to be the topic in question.

To test these hypotheses, Munro had a group of students recruited for his study read various abstracts (created by the investigators) that confirmed or challenged their beliefs regarding homosexuality and whether homosexuality predisposes to mental illness. It turned out that those who read belief-challenging abstracts were more prone to use the scientific impotence excuse, while those who read belief-confirming abstracts were not. Controls that substituted other terms for "homosexual" demonstrated that it was the belief-disconfirming nature of the abstracts that was associated with use of the scientific impotence excuse.

A second study followed up and examined more subjects. The methodology was the same as in the first study, except that additional measures were taken to see whether exposure to belief-disconfirming abstracts was associated with generalization of the belief in scientific impotence. In essence, Munro found that, relative to those reading belief-confirming evidence, participants reading belief-disconfirming evidence indicated more belief that the topic could not be studied scientifically and more belief that a series of other unrelated topics also could not be studied scientifically. He concluded that being presented with belief-disconfirming scientific evidence may lead to an erosion of belief in the efficacy of scientific methods, also noting:

A number of scientific issues (e.g., global warming, evolution, stem-cell research) have extended beyond the scientific laboratories and academic journals and into the cultural consciousness. Because of their divisive and politicized nature, scientific conclusions that might inform these issues are often met with resistance by partisans on one side or the other.
That is, when one has strong beliefs about such topics, scientific conclusions that are inconsistent with the beliefs may have no impact in altering those beliefs. In fact, scientific conclusions that are inconsistent with strong beliefs may even reduce one's confidence in the scientific process more generally. Thus, in addition to the ongoing focus on creating and improving techniques that would improve understanding of the scientific process among schoolchildren, college students, and the general population, some attention should also be given to understanding how misconceptions about science are the result of belief-resistance processes and developing techniques that might short-circuit these processes.

On a strictly anecdotal level, I've seen this time and time again in the alt-med movement. A particularly good example is homeopathy. How many times have we seen homeopaths, when confronted with scientific evidence finding that their magic water is no more effective at anything than a placebo, claim that their magic can't be evaluated by randomized, double-blind clinical trials (RCTs)? The excuses are legion: RCTs are too regimented; they don't take into account the "individualization" of homeopathic treatment; unblinded "pragmatic" trials are better; or anecdotal evidence trumps RCT evidence. Believers in alt-med then often generalize this scientific impotence discounting to many other areas of woo, claiming, for example, that science can't adequately measure that magical mystical life energy field known as qi or even, most incredibly, that subjecting their woo to science will guarantee that it fails. Unfortunately, when science is discounted this way, it allows believers in pseudoscience to dismiss science as "just another religion." A good rule of thumb is that when you see such a dismissal, you know you're dealing with belief, not science.

Sadly, though, even physicians ostensibly dedicated to science-based medicine all too easily fall prey to this fallacy, although they usually don't dismiss science as being inadequate or unable to study the question at hand. Rather, they wield their preexisting belief systems and mental frameworks like a talisman to protect them from having to let disconfirming data force them to change their beliefs. Alternatively, they dismiss science itself as "just another belief." Perhaps the most egregious example I've seen of this in a long time occurred, not surprisingly, during the mammogram debate of six months ago, when Dr. John Lewin, a breast imaging specialist from Diversified Radiology of Colorado and medical director of the Rose Breast Center in Denver, so infamously said, "Just the way there are Democrats and Republicans, there are people who are against mammography. They aren't evil people. They really believe that mammography is not as important."

Despite the 'nym, Orac is actually human. I get it. I get how hard it is to change one's views. I even understand the tendency to dismiss disconfirming evidence. What I like to think distinguishes me from pseudoscientists is that I do change my mind on scientific issues as the evidence merits. Perhaps the best example of this is the aforementioned USPSTF mammography screening kerfuffle. For the longest time, I bought into the idea that screening was an almost universal good. Then, over the last two or three years, I've become increasingly aware of the problems of lead time bias, length bias, the Will Rogers Effect, and overdiagnosis.
This has led me to adjust my views about screening mammography. I haven't adjusted them all the way to the USPSTF recommendations, but I am much more open to the changes in the guidelines published late last year, even to the point that encountering the resistance of my colleagues led me to feel as though I were an anomaly.

Skepticism and science are hard in that they tend to go against some of the most deeply ingrained human traits there are, in particular the need for certainty and an intolerance of ambiguity. Also in play is our tendency to cling to our beliefs, no matter what, as though having to change them somehow devalues or dishonors us. Skepticism, critical thinking, and science can help us overcome these tendencies, but it's difficult. In the end, though, we need to strive to live up to the immortal words of Tim Minchin when describing how he'd change his mind about even homeopathy if presented with adequate evidence (I know I cited this fairly recently, but it's worth citing one more time):

Science adjusts its beliefs based on what's observed.
Faith is the denial of observation so that belief can be preserved.
If you show me that, say, homeopathy works,
Then I will change my mind.
I'll spin on a ****ing dime.
I'll be embarrassed as hell,
But I will run through the streets yelling
It's a miracle! Take physics and bin it!
Water has memory!
And while its memory of a long lost drop of onion juice is infinite,
It somehow forgets all the poo it's had in it!
You show me that it works and how it works,
And when I've recovered from the shock
I will take a compass and carve Fancy That on the side of my ****.

Actually, as I've said before, I'd probably leave out the genital self-mutilation. (Wait! Scratch the word "probably.") Actually, Minchin may be a bit too flippant about the difficulty of changing one's mind. Even so, show me, for example, strong evidence that vaccines are associated with autistic regression, and I might not spin on a dime, but eventually, if the evidence is of a quality and quantity to cast serious doubt on the existing scientific evidence that does not support a vaccine-autism link, I will adjust my views to fit the evidence and science. That's just what it takes. No one said it would be easy, but the rewards of living in reality make it worth the struggle against our own human nature. In the end, I want knowledge, and science is the best way to get it about the natural world. Certainty is nice, but I can live without it.

REFERENCE: Munro, G. D. (2010). The scientific impotence excuse: Discounting belief-threatening scientific abstracts. Journal of Applied Social Psychology, 40(3), 579-600. DOI: 10.1111/j.1559-1816.2010.00588.x

Edit: Fixed my link.