Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks

Liz Szabo, Kaiser Health News


Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could “outthink cancer.” Others say computer systems that read X-rays will make radiologists obsolete.

“There’s nothing that I’ve seen in my 30-plus years studying medicine that could be as impactful and transformative” as AI, said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the U.S. Food and Drug Administration—which has approved more than 40 AI products in the past five years—says “the potential of digital health is nothing short of revolutionary.”

Yet many health industry experts fear AI-based products won’t be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra “fail fast and fix it later,” is putting patients at risk—and that regulators aren’t doing enough to keep consumers safe.

[…]

Relaxed AI Standards At The FDA

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market “moderate-risk” products with no clinical testing as long as they’re deemed similar to existing devices.
In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed “substantially equivalent” to products marketed before 1976.

AI products cleared by the FDA today are largely “locked,” so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA’s Center for Devices and Radiological Health. The FDA has not yet authorized “unlocked” AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA’s pilot “pre-certification” program, launched in 2017, is designed to “reduce the time and cost of market entry for software developers,” imposing the “least burdensome” system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb, then the FDA commissioner, said in 2017 that government regulators need to make sure their approach to innovative products “is efficient and that it fosters, not impedes, innovation.”

Under the plan, the FDA would pre-certify companies that “demonstrate a culture of quality and organizational excellence,” which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a “streamlined” review—or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products’ safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, Fitbit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. “We definitely don’t want patients to be hurt,” said Patel, who noted that devices cleared through pre-certification can be recalled if needed. “There are a lot of guardrails still in place.”

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. “People could be harmed because something wasn’t required to be proven accurate or safe before it is widely used.”

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

“The honor system is not a regulatory regime,” said Jesse Ehrenfeld, who chairs the physician group’s board of trustees. In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency’s ability to ensure company safety reports are “accurate, timely and based on all available information.”

[…]
