Which study is right? Investigating the impact of screening on breast cancer mortality


A splashy headline in The Washington Post caught my attention: “Breast cancer death rate dropped 58 percent over 44 years in U.S.” A Stanford Medicine news story reports that this victorious conclusion is based on “a new multicenter study led by Stanford Medicine clinicians and biomedical data scientists.” Using observational data, clinical trial data, and simulation modeling, the researchers found that approximately 25 percent of the reduction in breast cancer mortality was associated with screening mammography. The remainder was attributed to improvements in breast cancer treatment.

So, is it time to celebrate?

I wish I could say I was popping open a bottle of champagne, but instead, I’m writing this article. When I see these kinds of headlines, I’m reminded of the adage, “If it sounds too good to be true, it probably is.” It can be tempting to succumb to shiny object syndrome and latch onto the promising headline of the latest news story. In light of this recent development, however, there are two landmark studies of mammography and breast cancer mortality that we would be wise to remember.

First, in the U.K., a randomized controlled trial assigned women aged 39 to 41 either to an intervention group offered annual mammography or to a control group. After roughly 10 years of follow-up, the researchers compared the groups and found no statistically significant difference in breast cancer mortality. These results were published in The Lancet in 2006.

Second, a Canadian study assessed breast cancer incidence and mortality over the course of a 25-year follow-up period. The women in this study, aged 40 to 59, were randomly assigned either to a mammography arm or to a control arm. During the entire study period, 3,250 women in the mammography arm and 3,133 women in the control arm were diagnosed with breast cancer, and 500 and 505 of them, respectively, died of the disease. The researchers therefore concluded that in women aged 40 to 59, annual mammography did not reduce mortality from breast cancer. These results were published in The BMJ in 2014.

The big question is, why is there such a wild discrepancy between the conclusion reached by the new Stanford Medicine-led study and the prior results of the U.K. and Canadian studies? The short answer is that the researchers used different methodologies to investigate the extent to which screening mammography has impacted breast cancer mortality. Both the U.K. and Canadian studies were randomized trials, whereas the Stanford Medicine-led study was based on simulation modeling.

All scientific research has limitations, but randomized controlled trials (RCTs) are the gold standard for a reason. They’re specifically designed to minimize bias, and randomization helps balance confounding variables between groups that could otherwise distort the results. The outcomes of RCTs are therefore generally considered more reliable than those of other kinds of scientific research.

Models, on the other hand, are prone to leading us to spurious conclusions. Author, programmer, and consultant Tyler Vigen cleverly demonstrates this on his website. Using software he designed to scour enormous data sets, Vigen has discovered numerous unlikely statistical correlations. For example, milk consumption correlates with the divorce rate in Colorado. The project is tongue-in-cheek, but the bottom line is critically important. When it comes to big data, we need to remember one of the most fundamental research principles: correlation is not causation.
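To see how easily this happens, here is a minimal, hypothetical sketch in Python (this is not Vigen’s actual software, just an illustration of the principle): two series that have nothing to do with each other, but that both drift over time, will frequently show a strong correlation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Two independent random walks standing in for, say, annual milk consumption
# and a state's divorce rate. Neither influences the other; both simply drift.
years = 30
milk = np.cumsum(rng.normal(loc=0.5, scale=1.0, size=years))
divorce = np.cumsum(rng.normal(loc=0.5, scale=1.0, size=years))

# The raw series will often correlate strongly...
r_raw = np.corrcoef(milk, divorce)[0, 1]

# ...while the year-to-year changes usually do not, suggesting the apparent
# relationship is an artifact of shared drift rather than a causal link.
r_diff = np.corrcoef(np.diff(milk), np.diff(divorce))[0, 1]

print(f"correlation of raw series: {r_raw:.2f}")
print(f"correlation of year-over-year changes: {r_diff:.2f}")
```

The point is not the specific numbers but the pattern: trending data correlate almost by default, and only careful study design, such as randomization, can separate coincidence from cause.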

This is not to say that simulation modeling should be disregarded entirely. After all, as British statistician George Box famously quipped, “All models are wrong, but some are useful.” Since some models are useful, there is great potential for them to enrich scientific understanding. However, we need to ensure that this approach doesn’t surreptitiously replace the scientific method. In The Art of Statistics, another renowned British statistician, David Spiegelhalter, notes that while Box appreciated the power of models, he also understood “the danger of actually starting to believe in them too much.”

Whether you lean towards relying on the results from the U.K. and Canadian randomized trials or the results of the Stanford Medicine-led study based on simulation modeling, the impact of mammography on breast cancer mortality rates still hasn’t been definitively settled. It’s great to see people galvanizing around women’s health issues like breast cancer (which, of course, is not only a women’s health issue), but the irony is, it doesn’t exactly feel like we’re getting closer to the truth.

Shannon Casey is a physician assistant.





