December 10, 2009

MEDICINE IS A CONSUMER GOOD, NOT A SCIENCE:

Mammogram Math (JOHN ALLEN PAULOS, 12/13/09, NY Times Magazine)

A little vignette with made-up numbers may shed some light. Assume there is a screening test for a certain cancer that is 95 percent accurate; that is, if someone has the cancer, the test will be positive 95 percent of the time. Let’s also assume that if someone doesn’t have the cancer, the test will be positive just 1 percent of the time. Assume further that 0.5 percent — one out of 200 people — actually have this type of cancer. Now imagine that you’ve taken the test and that your doctor somberly intones that you’ve tested positive. Does this mean you’re likely to have the cancer? Surprisingly, the answer is no.
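Formally, this is an application of Bayes' theorem. As a sketch of the computation the next paragraph walks through in frequency terms, using the article's assumed rates (95 percent sensitivity, 1 percent false-positive rate, 0.5 percent prevalence):

```latex
P(\text{cancer} \mid +)
  = \frac{P(+ \mid \text{cancer})\,P(\text{cancer})}
         {P(+ \mid \text{cancer})\,P(\text{cancer}) + P(+ \mid \text{no cancer})\,P(\text{no cancer})}
  = \frac{0.95 \times 0.005}{0.95 \times 0.005 + 0.01 \times 0.995}
  \approx 0.32
```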

To see why, let’s suppose 100,000 screenings for this cancer are conducted. Of these, how many are positive? On average, 500 of these 100,000 people (0.5 percent of 100,000) will have cancer, and so, since 95 percent of these 500 people will test positive, we will have, on average, 475 positive tests (.95 x 500). Of the 99,500 people without cancer, 1 percent will test positive for a total of 995 false-positive tests (.01 x 99,500 = 995). Thus of the total of 1,470 positive tests (995 + 475 = 1,470), most of them (995) will be false positives, and so the probability of having this cancer given that you tested positive for it is only 475/1,470, or about 32 percent! This is to be contrasted with the probability that you will test positive given that you have the cancer, which by assumption is 95 percent.
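A few lines of Python (not part of the article; the inputs are simply its stated assumptions) reproduce the count:

```python
# Verify the article's arithmetic: screen 100,000 people and count.
sensitivity = 0.95          # P(test positive | cancer)
false_positive_rate = 0.01  # P(test positive | no cancer)
prevalence = 0.005          # 1 in 200 actually have the cancer
n = 100_000

with_cancer = prevalence * n                                # 500 people
true_positives = sensitivity * with_cancer                  # 475 positive tests
false_positives = false_positive_rate * (n - with_cancer)   # 995 positive tests

ppv = true_positives / (true_positives + false_positives)
print(f"Total positive tests: {true_positives + false_positives:.0f}")  # 1470
print(f"P(cancer | positive test) = {ppv:.1%}")                         # 32.3%
```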

The arithmetic may be trivial, but the answer is decidedly counterintuitive and hence easy to reject or ignore. Most people don’t naturally think probabilistically, nor do they respond appropriately to very large or very small numbers. For many, the only probability values they know are “50-50” and “one in a million.” Whatever the probabilities associated with a medical test, the fact remains that there will commonly be a high percentage of false positives when screening for rare conditions. Moreover, these false positives will receive further treatments, a good percentage of which will have harmful consequences. This is especially likely with repeated testing over decades.
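The rare-condition point can be made concrete. Holding the hypothetical test's accuracy fixed and varying only how rare the condition is, the share of positives that are true positives collapses; a minimal sketch:

```python
# How the predictive value of a positive test falls as the condition
# gets rarer, with the test's accuracy held fixed.
sensitivity, false_positive_rate = 0.95, 0.01

for prevalence in (0.1, 0.01, 0.005, 0.001):
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    ppv = sensitivity * prevalence / p_positive
    print(f"prevalence {prevalence:>6.3f}: P(cancer | positive) = {ppv:.1%}")
```

With these assumed rates, a positive result means roughly a 91 percent chance of cancer when 1 in 10 people have the condition, but under 9 percent when 1 in 1,000 do.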

Another concern is measurement. Since we calculate the length of survival from the time of diagnosis, ever more sensitive screening starts the clock ticking sooner. As a result, survival times can appear to be longer even if the earlier diagnosis has no real effect on survival.
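This is the lead-time effect in survival statistics. A toy illustration, with made-up ages in the spirit of the article's made-up numbers:

```python
# Lead-time bias: earlier detection stretches "survival from diagnosis"
# even when the date of death is completely unchanged.
age_at_death = 70

age_diagnosed_by_symptoms = 67   # cancer found late, from symptoms
age_diagnosed_by_screening = 62  # same cancer found 5 years earlier

print(f"Survival, late diagnosis:  {age_at_death - age_diagnosed_by_symptoms} years")
print(f"Survival, early diagnosis: {age_at_death - age_diagnosed_by_screening} years")
# The screened patient "survives" 8 years instead of 3, yet dies at
# exactly the same age: the extra years are measurement, not benefit.
```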

Cognitive biases also make it difficult to see the competing desiderata the panel was charged with balancing. One such bias is the availability heuristic, the tendency to estimate the frequency of a phenomenon by how easily it comes to mind. People can much more readily picture a friend dying of cancer than they can call up images of anonymous people suffering from the consequences of testing. Another bias is the anchoring effect, the tendency to be overly influenced by any initially proposed number. People quickly become anchored to such a number, whether it makes sense or not (“we use only 10 percent of our brains”), and they’re reluctant to abandon it. If accustomed to annual mammograms, they’re likely for that reason alone to resist biennial (or even semiannual) ones.

Whatever the role of these biases, the bottom line is that the new recommendations are evidence-based.


No one cares what makes sense; they just like being treated.

Posted by Orrin Judd at December 10, 2009 3:46 PM