How Deeply Flawed Studies on Abortion and Breast Cancer Become Anti-Choice Fodder


The anti-choice movement has been making a lot of noise over a new study out of China, published in the journal Cancer Causes & Control, that purports to show a 44 percent increase in breast cancer risk for women who have had an abortion, with the risk increasing after each subsequent abortion. The study claims this may help explain the “alarming” rise in breast cancer in China over the past 20 years, which parallels the one-child policy introduced in 1979.

But the study’s methodology and data appear seriously flawed, and the results likely reflect “recall bias,” which would invalidate the study’s findings. Recall bias is a common hazard in case-control studies, which use questionnaires or interviews to gather historical data from participants. Results can be skewed or inaccurate because people tend to forget past events or neglect to mention them, especially when they are uncomfortable sharing the information with researchers. For example, underreporting occurs when people are asked about substance use, criminal offenses, family background, or school performance.

Recall bias is even more of a problem when it comes to reporting reproductive history, especially past abortions. In the United States, only 47 percent of abortions were reported in the largest and most recent fertility survey (from 2002). A 1996 analysis cited numerous studies on the topic and found that, as a likely result of abortion stigma, women reported only 20 to 80 percent of their abortions. (The wide range is due to varying interview circumstances, geographic locations, or demographic characteristics of the women.) A significant body of evidence has accumulated on abortion underreporting, going back to the early days of legal abortion in 1960s eastern Europe, as documented by Christopher Tietze and Stanley K. Henshaw:

The classic example is the Fertility and Family Planning Study of 1966, conducted in Hungary a decade after the legalization of abortion. In that survey, the numbers of abortions reported by the respondents for the years 1960-65 corresponded to only 50-60 percent of the number actually performed. A comparable level of underreporting was also noted in 1977.

But how does an apparent association arise between abortion and breast cancer (ABC)? In case-control studies on the topic, researchers select and divide women into two groups: women with breast cancer (the “cases”) and women without the disease (the “controls”). Women in both groups are then asked whether they have had an abortion, to see whether abortion is more common among those with the disease. However, cancer patients are strongly motivated to remember and share their full medical history in the search for answers (this is called “rumination bias”), while women acting as controls have no stake in the outcome and so are less likely to mention past abortions. They are even less likely to report several past abortions because of the increased stigma. The result is a flawed study: it will appear that women with breast cancer had more abortions than those in the control group, when they probably didn’t.

This “differential recall” is a known risk in case-control studies in general, although few studies have been done to measure the effect in studies on the ABC association. A 1991 analysis in Sweden compared two studies: one that used women’s abortion records from a national registry, and a case-control study that relied on women self-reporting abortions that were also recorded in the registry. In the end, 27.1 percent of controls underreported past abortions, compared to 20.8 percent of cases (see Rookus/Leeuwen letter). In another study, conducted in the Netherlands in 1996, case and control groups were interviewed in different regions of the country. The correlation between induced abortion and breast cancer was very strong in regions with a predominantly Roman Catholic population, but much weaker in regions with less abortion stigma. Although the sample size of women who had abortions was small in the Catholic regions, women in those areas also underreported contraceptive use to a greater extent than women in more liberal regions.
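Those Swedish registry figures are enough to manufacture a spurious association on their own. As a rough sketch (the 10 percent true abortion prevalence below is an illustrative assumption, not a figure from the study), applying the reported underreporting rates of 20.8 percent for cases and 27.1 percent for controls to two groups with identical true abortion histories yields an observed odds ratio above 1:

```python
# Sketch: how the differential underreporting rates from the Swedish
# registry comparison (20.8% of cases vs. 27.1% of controls failing to
# report a past abortion) distort a case-control result. The 10% true
# abortion prevalence is an illustrative assumption.

def odds_ratio(p_cases: float, p_controls: float) -> float:
    """Odds ratio for exposure prevalence p_cases vs. p_controls."""
    return (p_cases / (1 - p_cases)) / (p_controls / (1 - p_controls))

true_prevalence = 0.10  # assumed identical in both groups, so true OR = 1.0
reported_cases = true_prevalence * (1 - 0.208)     # cases report 79.2% of abortions
reported_controls = true_prevalence * (1 - 0.271)  # controls report only 72.9%

print(f"true OR:     {odds_ratio(true_prevalence, true_prevalence):.2f}")
print(f"observed OR: {odds_ratio(reported_cases, reported_controls):.2f}")
```

In this sketch, differential recall alone produces roughly a 9 percent apparent increase in risk where none exists; a wider reporting gap between cases and controls would inflate the figure further.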

Anti-choice activist Dr. Joel Brind has been promoting the ABC association for over two decades. He claims that the Chinese study “neutralized” the recall bias argument. But Brind missed—or chose not to mention—that the journal article contained a confusing error, one that helped to hide the study’s own recall bias shortcomings. Early on, the study authors say:

The lack of a social stigma associated with induced abortion in China may limit the amount of underreporting.

But later in the study, the authors say:

[T]he self-reported number of IA [induced abortions] will probably be underestimated, as the stigma of abortion still exists in China, especially when a woman has more than two IAs. Therefore, this underestimation will inevitably create spurious associations between IA and breast cancer, especially for more IAs.

These two contradictory statements should never have gotten past the peer reviewers.

Regardless of whether abortion is stigmatized in China and to what degree, abortion is still underreported even in countries where abortion is more widely accepted, such as Estonia and Hungary. But the study authors are probably right in their second statement: Abortion stigma does exist in China. An increasing number of young unmarried women are having abortions—often multiple abortions—but there is a stigma in China against premarital sex and an even bigger stigma against out-of-wedlock pregnancies. In these circumstances, it would be very surprising indeed if young Chinese women were not underreporting their abortions. Further, since the study authors admit that abortion stigma in China is more pronounced for subsequent abortions, this would explain the rising association that the authors found between multiple abortions and breast cancer—because women in control groups would be increasingly less likely to report their second or third abortions.

Another type of study, called a “cohort study,” is considered more reliable than case-control studies. In a typical cohort study, researchers spend many years following large numbers of women, some of whom have had abortions, to see which ones develop breast cancer later. Recall bias is not an issue because abortion data is drawn from public records. The result is an accurate comparison of breast cancer rates between women who had abortions and women who did not. Out of at least nine cohort studies done since 1996, not one has found a statistically significant association between abortion and breast cancer, and some found negative associations—meaning abortion might actually protect against breast cancer.

The Chinese study was not a cohort study or even a case-control study. It was a meta-analysis, which combines the results of numerous studies on the same topic to produce a pooled average. The authors found 36 previous Chinese studies on the ABC association and combined their results to arrive at an “odds ratio” of 1.44, which means a 44 percent increased risk of breast cancer for women who had one abortion. However, the authors used 34 case-control studies and only two cohort studies (not included in the nine mentioned above). Neither cohort study showed a statistically significant ABC association. Further, six of the case-control studies that were rated as having the highest-quality methodology, according to the authors’ own evaluation, also showed no correlation. In other words, the supposed ABC association arose solely from the weakest 26 studies selected for the meta-analysis, some of which were not even published in peer-reviewed journals.
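The pooling step of a meta-analysis can be sketched as a fixed-effect, inverse-variance average of the studies’ log odds ratios. The numbers below are invented for illustration (they are not the study’s data): a handful of sound studies showing no effect, swamped by a larger number of recall-biased ones.

```python
import math

# Hypothetical (invented) inputs: 6 unbiased studies with OR = 1.0 and
# 26 recall-biased studies with OR = 1.5, each given as
# (odds ratio, variance of its log odds ratio).
studies = [(1.0, 0.04)] * 6 + [(1.5, 0.09)] * 26

# Fixed-effect weights are the inverse variances of the log odds ratios.
weights = [1 / var for _, var in studies]
log_ors = [math.log(or_) for or_, _ in studies]
pooled_log_or = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)

print(f"pooled OR: {math.exp(pooled_log_or):.2f}")
```

Even though the six sound studies show no association at all, the pooled odds ratio comes out around 1.3: garbage in, garbage out.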

The major weakness of meta-analyses has a popular acronym—GIGO. It means “garbage in, garbage out.” In other words, if most of the studies you add to the mix are seriously flawed, your pooled result will be worthless as well. To their credit, the study’s authors make clear that induced abortion is not confirmed as a causal risk factor for breast cancer and that their own results should be interpreted with caution. In fact, the scientific community has already dismissed abortion as a risk factor based on the best studies. Given that the correlation only shows up in case-control studies but never cohort studies, it’s highly likely to be an artifact of recall bias.

Although correlation does not equal causation, anti-choice advocates are using the Chinese study to jump to the unwarranted conclusion that abortion causes an increased breast cancer risk. Unfortunately, the study authors never mention other possible risk factors that could help explain the recent rise in breast cancer in China, let alone why they should be rejected in favor of abortion. These include:

  • Fewer full-term pregnancies (one or two) because of the one-child policy;
  • Economic development leading to more affluence and rising body weight (as found in one of the two Chinese cohort studies);
  • Increased industrialization and dramatically increased exposure to environmental toxins in a country with few environmental controls; and
  • Improved protocols for cancer testing, leading to more diagnoses of breast cancer.

Because the study focuses only on China, it also obscures the lack of association between breast cancer and abortion in many other countries. For example, western Europe has low abortion rates and high breast cancer rates, while Russia has high abortion rates and moderate breast cancer rates. It is unreasonable to assume the existence of an ABC association when it’s found inconsistently and depends more on geography or study methodology. Further, if there really were a causal connection, it would show up more robustly across most studies, instead of being all over the map.

The study’s ABC association was quite weak in comparison to major risk factors for breast cancer, such as advanced age, a family history of breast cancer, or childlessness. In a specific population such as women in China, weak associations can turn up purely by chance and are therefore likely to be random and meaningless. For example, if you compared the population of storks with the rates of childbirth outside hospitals in various countries, a correlation would appear in some of them. That does not mean that storks deliver babies in some countries but not in others. It just means that you can find a correlation between almost anything if you’re determined to find it.

The promotion of flawed studies to try to prove that abortion leads to breast cancer is a political effort spearheaded by anti-choice groups and individuals, who primarily use these studies to reinforce abortion stigma and frighten women. The studies may also be a vehicle to smuggle in dogmatic beliefs under the guise of objectivity and the scientific method. As such, they irresponsibly advance an anti-choice agenda at the expense of science and women’s welfare.

Stanley Henshaw and Dr. Christian Fiala supported the author in writing this article.


Follow Joyce Arthur on Twitter: @joycearthur

