Is the 21st Century Cures Act a Solution or a Problem?

Robert Kaplan, The Regulatory Review: May 7, 2019.

In December 2016—a time when the U.S. Congress could barely agree on anything—the U.S. House of Representatives and the U.S. Senate came together to pass the 21st Century Cures Act. Championed by Representative Fred Upton (R-Mich.) and Representative Diana DeGette (D-Colo.), the Act uses 312 pages to outline a plan to accelerate the licensing and delivery of medical cures. It includes many attractive features. But, as I will show here, it also contains provisions that could increase risks to patients.

The Act was attractive because it provides about $6.3 billion in funding, mostly for the National Institutes of Health (NIH), the major supplier of research funding for American universities and research institutions. Although the NIH faces very little political opposition, the agency had been deprived of adequate funding for at least a decade.

Beyond support for the NIH, the Act was appealing because it provides funding for mental health care. It endorses parity in payment for physical health and mental health services in most private and some federal health insurance plans, and it strengthens suicide prevention programs. It also provides much-needed funding to confront the opioid epidemic.

The 21st Century Cures Act had the support of patients, researchers, universities, and a broad political constituency. But there is a catch. In several key ways—notably the use of surrogate markers instead of stronger forms of evidence to demonstrate drug benefits—the Act threatens public health by lowering U.S. Food and Drug Administration (FDA) standards for new drugs and medical devices.

The Act was originated and promoted by the major pharmaceutical companies, which employed over 1,300 lobbyists to promote the bill. The companies were concerned that FDA used strict methodological standards to evaluate the efficacy and safety of new pharmaceutical products. The companies argued that zealous concern for efficacy and safety deprived patients of new cures, and they succeeded in adding provisions to the Act that allow FDA to become less rigid.

Opponents were concerned that the Act deemphasized methodologies that have long been used to evaluate the safety and efficacy of new drugs. Indeed, supporters of the Act hoped to reduce reliance on the double-blind randomized clinical trial, the gold standard for establishing that medicines cause improvements in health outcome. Pharmaceutical companies regarded these methods as outdated and suggested that new drug licensing should depend on preclinical studies, including animal studies, case histories, and in some cases just the clinical experience of doctors. As it turns out, the companies’ suggestion became reality. The Act now includes a provision that allows for the consideration of “real world evidence,” which includes “sources other than randomized clinical trials.” Some of these alternative methods are associated with established biases.

Many health care researchers, including me, believe that relaxing methodological standards will put the public at risk. Risks take several forms. All drugs have potential benefits and side effects. Several systematic reviews suggest that as standards for conducting and reporting clinical trials tightened, the studies became less likely to show that treatments offer benefits to patients. Overestimating treatment benefits may harm patients by subjecting them to side effects for treatments that may not help them. When side effects are underestimated, patients may not be aware of the potential harms their medicines may cause.

Here is the crux of the issue: People use medications because they want to live longer and to feel better. Over the last few decades, FDA has put greater emphasis on measures of health outcome that are important to patients, such as length of life and quality of life.

However, drugs and devices are often evaluated on the basis of surrogate markers, including clinical lab tests that evaluate blood chemistry or tumor characteristics. These surrogates can be important if they are closely associated with health outcomes. But the surrogates are often uncorrelated with measures that are meaningful to patients.

For example, glycosylated hemoglobin is a good marker of diabetes control. Yet in some systematic clinical trials, patients who achieve lower glycosylated hemoglobin through aggressive medical management have a higher probability of death than those receiving usual care.

Diana Zuckerman and her colleagues at the National Center for Health Research recently studied the approval of new cancer drugs by FDA between 2008 and 2012. Among 54 new drug approvals, 36 had been evaluated on the basis of surrogate markers. In cancer studies, the surrogate measure is often tumor shrinkage. We might assume that a drug that shrinks tumors—the surrogate measure—should help people live longer, higher-quality lives—the outcome. Yet Zuckerman and her colleagues found that, for 18 of the 36 drugs, there was no evidence of improved life expectancy. The manufacturers never reported data on survival for another 13 of the drugs. It can be assumed that the companies would have reported improved survival data if such evidence existed. So, for 31 of the 36 newly approved cancer drugs, there was apparently no evidence that the treatment increased life expectancy.

The 18 drugs that did not improve life expectancy would still be valuable if they improved quality of life. But Zuckerman and her colleagues found that only one was associated with any evidence of improved life quality. Of the 18 drugs, 15 did not improve quality of life, and the remaining two drugs actually made quality of life worse. Even though the great majority of these new cancer drugs were unassociated with any benefits from the patient's perspective, many continue to be used and are sold at a very high price. One of the drugs that reduces quality of life and does not increase life expectancy is sold for approximately $170,000 per person per year.
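The tallies above can be checked with a short sketch. The counts come from the Zuckerman figures cited in this article; the variable names themselves are only illustrative:

```python
# Counts from the Zuckerman et al. review of FDA cancer-drug
# approvals, 2008-2012 (as cited above). Names are illustrative.
approvals = 54
surrogate_based = 36         # approved on the basis of surrogate markers

no_survival_benefit = 18     # evaluated, but no improved life expectancy
survival_unreported = 13     # manufacturers never reported survival data

# Drugs with no apparent evidence of longer life:
no_evidence_longer_life = no_survival_benefit + survival_unreported
print(no_evidence_longer_life, "of", surrogate_based)  # 31 of 36

# Quality-of-life findings among the 18 drugs with no survival benefit:
quality_of_life = {"improved": 1, "no change": 15, "worse": 2}
assert sum(quality_of_life.values()) == no_survival_benefit
```

The point of the tally is simply that the two breakdowns (31 of 36, and 1 + 15 + 2 = 18) are internally consistent with the counts the study reports.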

There are at least three ways that focusing on surrogate markers rather than health outcomes can have a large negative impact on how we appraise the net benefit of medical interventions.[…]
