November 19, 2009


Dartmouth research examines the value of cancer screening (Dartmouth Medical School news release, February 2002)

As people consider the merits or drawbacks of cancer screening, a Dartmouth Medical School study weighs in with some new observations, based on a statistical analysis of past trials, that may help put cancer screening in better perspective.

The conventional way deaths were classified may have caused misclassifications that biased study results in favor of screening, Dartmouth researchers demonstrated. They suggest an additional method of tallying all deaths to help avoid the misinterpretations that can lead investigators to overestimate or underestimate the value of cancer screening.

The findings are reported in the Feb. 6 issue of the Journal of the National Cancer Institute by Dartmouth Medical School professors William C. Black, MD, of radiology and of community and family medicine, and H. Gilbert Welch, MD, of medicine and of community and family medicine, and former medical resident David Haggstrom, MD.

Classifying the cause of death by specific disease is the most widely accepted procedure in randomized trials that assess cancer screening. However, two biases--sticky-diagnosis bias and slippery-linkage bias--affect such classification and can alter the assessment of screening value, the researchers found.

The validity of disease-specific mortality assumes that the cause of death can be accurately determined. An alternative end point, all-cause mortality, depends only on an accurate determination of deaths and when they occur; therefore it is unaffected by misclassifications in the cause of death.

People making decisions about screening want pertinent information about what it means for them, explained Black, a member of a national expert panel that assesses cancer evidence. He uses a shark analogy popular among his peers: instructions and aids for protecting yourself from a shark attack are meaningless if you never go in the water.

Similarly, people have to understand how likely they are to be at risk for certain cancers when they decide to be screened for them. "They should be asking their physicians if this screening intervention is likely to increase their life expectancy," Black says. And their physicians, in turn, hope that screening studies take as much information as possible into account.

He and his colleagues compared the two mortality groups in 12 randomized studies of cancer screening for which both disease-specific and all-cause mortality could be determined. These trials involved screening for cancer of the breast, colon or lung.

In five of the 12 trials, the two mortality end points suggested opposite effects of screening. The researchers attributed these discrepancies to the two forms of bias that affect cause of death classifications. In one form, called sticky-diagnosis bias, deaths from other causes in the screened group are falsely attributed to cancer because that cancer was detected by screening. This type of misclassification biases the disease-specific mortality results against screening.

In the second form, called slippery-linkage bias, deaths from the screening process or subsequent treatment are falsely attributed to other causes. For example, if an invasive evaluation causes a patient to have a fatal heart attack, the death may be attributed to a heart attack rather than linked back to the screening that prompted the evaluation. This misclassification tilts the disease-specific mortality results in favor of screening.
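
The two biases pull in opposite directions, and a toy calculation makes the mechanics concrete. The numbers below are hypothetical, chosen only for illustration; they are not from the study. The point is that relabeling causes of death moves the disease-specific count but cannot move the all-cause count.

```python
# Toy illustration (hypothetical numbers, not from the study) of how
# cause-of-death misclassification distorts disease-specific mortality
# while leaving all-cause mortality untouched.

cancer_deaths = 50   # deaths truly caused by the screened-for cancer
workup_deaths = 10   # deaths caused by screening workup or treatment
other_deaths = 200   # deaths from unrelated causes

# Slippery-linkage bias: the 10 workup deaths get recorded under
# "other causes" instead of being linked to the screening cascade.
# Sticky-diagnosis bias: a few unrelated deaths get recorded as
# cancer deaths because the person carried a screen-detected diagnosis.
sticky_mislabeled = 5

true_disease_specific = cancer_deaths + workup_deaths        # 60
recorded_disease_specific = cancer_deaths + sticky_mislabeled  # 55

# All-cause mortality just counts every death, whatever the label.
all_cause = cancer_deaths + workup_deaths + other_deaths     # 260

print(f"true disease-specific deaths:     {true_disease_specific}")
print(f"recorded disease-specific deaths: {recorded_disease_specific}")
print(f"all-cause deaths (label-proof):   {all_cause}")
```

Here the recorded disease-specific count understates the true harm (55 vs. 60) because slippery linkage hides more deaths than sticky diagnosis adds, while the all-cause total is the same no matter how the death certificates are filled out.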

Both forms of bias affected the randomized screening trials, according to the analysis, but the Dartmouth researchers argue that slippery-linkage bias had a larger effect. The concept of "slippery linkage" has been hinted at before but never previously defined, notes Black, who says he and his colleagues are among the first to investigate the impact of this bias.

Integrating both types of mortality classification can help avoid flaws in screening assessment, according to the researchers. They conclude that all-cause mortality should always be analyzed and reported along with disease-specific mortality to ensure that major harms or benefits of screening are not missed due to misclassification in the cause of death. "All-cause mortality also puts the magnitude of expected benefit from screening into an appropriate perspective for prospective decision making," they say.

It would be worth your requesting a complimentary copy of this month's edition of the publication Dartmouth Medicine just for the profile of these two guys, "Are We Hunting Too Hard?" (Jennifer Durgin, Summer 2005, Dartmouth Medicine), and their heretical work on cancer screening. It's an article of faith for folks that science has liberated us from religion and superstition and extended our lives immeasurably, but skeptics, like Richard Lewontin, have no trouble demonstrating that such is not the case. Indeed, simple improvements in hygiene and caloric intake likely account for nearly all of our gains in longevity and declines in mortality.

Here is how Ms. Durgin explains the work Dr. Black has done:

All cancers are not created equal. Some grow rapidly and invade other tissue, others grow slowly and remain noninvasive, and some don't grow at all or may even recede. Many of the cancers that doctors are finding and treating today, says Black, are what's called "pseudodisease"--tumors that will never cause harm, let alone death. The trouble is that pseudodisease is nearly impossible to identify for sure in an individual who is still living, because the medical community doesn't know enough about some cancers to predict how they will behave over time. So it's safer, they reason, to label a questionable abnormality as "cancer" and to treat it, than it is to risk its growing out of control. Only after an untreated person dies from other causes can a cancer be declared pseudodisease. Only then is it clear that treatment of the cancer would have provided no benefit, only potential harm. In other words, you can't tell an "overdiagnosed," or overtreated, person from a person who has been cured. "One of the biggest downsides to cancer screening is overdiagnosis, but you don't know which people have been overdiagnosed," says Black. "And so a person who has been overdiagnosed will think they've been cured."

Meanwhile, as Ms. Durgin tells us, early on in his book, Should I Be Tested for Cancer?: Maybe Not and Here's Why, Dr. Welch "takes on the concepts of overdiagnosis and pseudodisease, using prostate cancer as an example."

"The most compelling evidence that pseudodisease is a real problem comes from our national experience with prostate cancer," Welch writes. Prostate cancer is the second-leading cause of cancer-related death in American men, and over the last 30 years, more and more of it has been found. In 1975, about 100,000 new cases were diagnosed; in 2003, about 220,000. At first glance, one might conclude that prostate cancer is on the rise. However, if cancer is "really increasing," says Welch, "you'd expect death rates to rise."

And that hasn't happened with prostate cancer. The death rate has remained more or less constant, hovering around 30,000 deaths per year in the U.S., with a slight decline in recent years. [...] Regardless of the small changes in the death rate, Welch believes that most of the new cases represent "nothing more than pseudodisease: disease that would never progress far enough to cause symptoms--or flat-out would never progress at all."

"But what, you might ask, is the harm in finding all this pseudodisease?" Welch writes. "Simply put: unnecessary treatment. Most of the million men whose prostate cancer is found because of superior screening have to undergo some sort of treatment, whether radical surgery or radiation. ... [A]nd many experience significant complications: 17% need additional treatment because they have difficulty urinating following surgery; 28% must wear pads because they have the opposite problem--they cannot hold their urine; and more than half are bothered by a loss of sexual function."

In addition to causing harm from unneeded treatment, overdiagnosis can distort our perceptions of how well certain cancer treatments work, says Black. Because very slow-growing and potentially harmless cancers are relatively easy to control and eliminate, finding and treating more of them makes therapies seem more effective. "We see that [such cancers] behave well when we treat them, and we falsely attribute their good behavior to our treatment," explains Black. "By definition, survival only pertains to people who are diagnosed with the disease. So when you have overdiagnosis, survival is very, very misleading. Survival is going to be very long in people who are overdiagnosed," as well as in people whose cancers are found very early. For these reasons, Black, Welch, and others at DMS are critical of using survival rates--such as the oft-cited five-year survival rate--as a measure of the effectiveness of screening. Looking at death rates is a better way of evaluating screening, they argue, but even that approach has problems.
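
The arithmetic behind that survival argument is simple enough to sketch. With purely hypothetical numbers (not from the article), adding overdiagnosed cases to the denominator raises the five-year survival rate even though not a single death is prevented:

```python
# Hypothetical illustration: overdiagnosis inflates 5-year survival
# without changing the number of deaths. Numbers are invented.

# Without screening: only cancers that cause symptoms get diagnosed.
diagnosed = 1000
deaths_5yr = 400
survival_no_screen = (diagnosed - deaths_5yr) / diagnosed        # 0.60

# With screening: the same lethal cancers are found, plus 1000
# overdiagnosed "pseudodisease" cases that were never going to kill.
overdiagnosed = 1000
diagnosed_screen = diagnosed + overdiagnosed
survival_screen = (diagnosed_screen - deaths_5yr) / diagnosed_screen  # 0.80

print(f"5-year survival without screening: {survival_no_screen:.0%}")
print(f"5-year survival with screening:    {survival_screen:.0%}")
print(f"deaths in both scenarios:          {deaths_5yr}")
```

Survival jumps from 60% to 80% purely because the denominator grew, which is exactly why Black and Welch argue that survival rates cannot, by themselves, show that screening saves lives.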

"You can't just look at disease-specific mortality, because we're not sure what causes death in a lot of people," Black explains. "There are a lot of deaths that are difficult to determine the cause of, and you can bias your results strongly in one direction or the other with disease-specific mortality."

The best measure of a cancer screening technique, Black and Welch contend, is total deaths--known as all-cause mortality. [...]

For example, among the trials Black and Welch examined was the well-known 1989 Swedish Two-County mammography study, which reported that mammography reduces breast-cancer mortality. But when Black and Welch looked at the number of deaths from all causes in the screened group and the non-screened group, they found that there were actually slightly more deaths in the screened population.

Such work has earned the doctors and Dartmouth a reputation as "the center of antagonism for screening" and produced much angst in the medical community, but as Dr. Welch says, there is:

"...a real theology here...and I understand where it comes from. The idea is so appealing; earlier is better. Prevention is better than cure. Finding small, bad breast cancers must be good." But the closest Welch can come to the "truth" about cancer screening, he says, is that "the effects of screening are probably mixed in general. Very few are helped. And very few are hurt. And most [screenings] have no effect."

Of course, it's precisely because they're arguing with the theology of scientism that their message is so unwelcome.

[originally posted: 7/11/05]

Posted by Orrin Judd at November 19, 2009 7:30 AM