Anti-Sex Ed Curriculum Makes the List: Don’t Blame Obama, Blame the System

A recently updated list of federally approved “evidence-based” teen pregnancy prevention programs has been causing a stir. Rather than blaming Obama for this, we’d all do better to recognize that it was the result of a fundamentally flawed system sorely in need of review and repair.


A recently updated list of federally approved “evidence-based” teen pregnancy prevention programs has been causing a stir. This list specifies the programs that are eligible for federal funds and serves as the cornerstone of President Obama’s Teen Pregnancy Prevention Initiative. Among the three programs making the list for the first time is the abstinence-only-until-marriage program Heritage Keepers Abstinence Education. Our friends and fellow advocates in the adolescent sexual health promotion field have denounced this program as medically inaccurate, biased, fear- and shame-based, and otherwise inappropriate for the classroom. Here we all agree, completely. A program like this has no place in our schools and communities, and especially not with government funding.

But we take issue with criticisms of the Obama administration for “backroom deals and secrecy,” “political expediency,” and “blatant hypocrisy,” among other barbs and arrows recently launched by understandably frustrated advocates. Rather than blaming Obama for this unfortunate development, we’d all do better to recognize that it was the result of a fundamentally flawed system operating according to explicit, agreed-upon rules: a system sorely in need of review and repair.

What’s wrong with this system? Simply put, it is based on a fundamental misunderstanding of the nature of scientific evidence and its appropriate use. To earn a place on the list, a program needs to produce only one statistically significant outcome in one evaluation study, no matter how many outcomes were tested across how many studies. Yet it is a well-known principle of research statistics that the likelihood of a false finding increases as the number of outcomes tested increases. If a program has no effect, then for every twenty outcomes tested, one can be expected to be incorrectly identified as a statistically significant effect by chance alone. Even testing just two outcomes pushes the probability of a false finding of effectiveness beyond the traditionally tolerated five percent level: with two independent tests, the chance of at least one false positive is roughly ten percent. The technical name for exploiting this principle to obtain a statistically significant finding is “fishing for significance.”
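To make that arithmetic concrete, here is a minimal simulation sketch in Python. It assumes a hypothetical program with no true effect whose evaluation tests several independent outcomes at the conventional five percent level; the function name, trial count, and outcome counts are illustrative assumptions, not features of the actual evidence review.

```python
# Minimal sketch of "fishing for significance": a hypothetical program with
# NO true effect "makes the list" if any one of its tested outcomes reaches
# p < 0.05. All parameters here are illustrative assumptions.
import random

def false_listing_rate(num_outcomes: int, alpha: float = 0.05,
                       trials: int = 100_000) -> float:
    """Fraction of evaluations of a no-effect program that produce at least
    one 'statistically significant' outcome purely by chance."""
    hits = 0
    for _ in range(trials):
        # Under the null hypothesis, each outcome's p-value is uniform on [0, 1].
        if any(random.random() < alpha for _ in range(num_outcomes)):
            hits += 1
    return hits / trials

for k in (1, 2, 5, 20):
    analytic = 1 - (1 - 0.05) ** k  # exact rate for independent tests
    print(f"{k:>2} outcomes tested: simulated {false_listing_rate(k):.3f}, "
          f"analytic {analytic:.3f}")
```

With twenty independent outcomes, the false-listing rate climbs to roughly 64 percent, which is why a standard of “one significant outcome anywhere” is so easy to satisfy by chance.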

And this is just one of the more blatant of numerous problems with the evidence review system currently in place; these problems and their implications are described in more detail elsewhere. Suffice it to say that under current “evidence-based” standards of effectiveness, a Mickey Mouse cartoon could be listed as an effective teen pregnancy prevention program with just a moderate amount of evaluation creativity and persistence. Perhaps it is no surprise, then, that upon release of the original version of this list in 2010, the independent, non-partisan research-use watchdog Coalition for Evidence-Based Policy commented that “HHS’s evidence-based teen pregnancy prevention program is an excellent first step, but only 2 of 28 approved models have strong evidence of effectiveness.”

The biggest challenge for research, and for its use in this area, is that we as a field need to move away from asking simplistic, out-of-context yes/no questions about the effectiveness of individual name-brand curricula. Questions of that kind inevitably lead to the picking and choosing of isolated favorable findings. Instead, we can do a better job of critically weighing and integrating the entire body of relevant program evaluation evidence, together with the broader body of scientific research on adolescent health and development, as it informs a set of general principles of effective and responsible comprehensive sexuality education.

To complement this more encompassing view of evidence, while recognizing the understandable demand among funders and program providers for simple, straightforward guidance on program development and selection, we propose a move to standards-based lists. There are now many excellent sets of standards and guidelines for comprehensive sexuality education from groups such as SIECUS, UNESCO, and IPPF, as well as the newly developed National Sexuality Education Standards. These standards represent an enormous improvement over what currently passes for comprehensive sexuality education, and they enjoy widespread support from mainstream health and education organizations. Any of them could serve as the basis of an objective and systematic process for rating curricula and other programs on the most important content and process criteria.

California has already provided a model of such a system, based on its Sexual Health Education Accountability Act and the related California Education Code. These basic standards for comprehensive sexuality education provide 45 explicit criteria that serve as the foundation for an objective and systematic process used to rate curricula in the state. California’s successful experience with this system could help inform the adoption of a similar system in other states, and in federal program review as well; the same approach could be applied to any of the existing sets of standards.
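To give a concrete, if deliberately simplified, picture of what such a rating process looks like, here is a short Python sketch in which a curriculum is checked against an explicit list of content and process criteria. The criteria and curriculum shown are invented for illustration; California’s actual instrument defines 45 criteria, and any of the national standards could supply the checklist instead.

```python
# Hypothetical illustration of a standards-based rating process: a curriculum
# is scored against an explicit checklist of content and process criteria.
# The criteria below are invented examples, not items from California's
# actual 45-criterion instrument.
from dataclasses import dataclass

@dataclass
class Criterion:
    description: str
    met: bool

def rate_curriculum(name: str, criteria: list[Criterion]) -> float:
    """Print a transparent per-criterion report and return the share met."""
    met = sum(c.met for c in criteria)
    print(f"{name}: {met} of {len(criteria)} criteria met")
    for c in criteria:
        print(f"  [{'x' if c.met else ' '}] {c.description}")
    return met / len(criteria)

score = rate_curriculum("Example Curriculum", [
    Criterion("All medical information is accurate and current", True),
    Criterion("Covers both abstinence and contraception", False),
    Criterion("Free of fear-based and shame-based content", True),
])
print(f"Overall rating: {score:.0%}")
```

The value of this approach over a single significance test is transparency: every element of the score can be inspected, debated, and traced back to a published standard.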

Advocates for Youth has promised to challenge the existing evidence-based paradigm and to “advocate for a recalibration of the current balance towards a vision of sex education that is evidence-informed and rights based.” We enthusiastically support this new focus, and will help however we can.