Two new studies find shortfalls in the U.S. Food and Drug Administration's approval process for heart devices such as pacemakers and stents.
Safety targets often were not clearly spelled out in the research submitted by device makers, and important patient information was missing, according to one study conducted by researchers from the FDA and Boston's Beth Israel Deaconess Medical Center.
A separate analysis by researchers at the University of California, San Francisco, found heart devices frequently got the FDA's blessing based on research done outside the United States in small groups of patients. Many device studies lacked standards most scientists expect: randomization and a clear goal.
Dr. Jeffrey Shuren, the FDA's acting device center director, said the agency is taking a close look at its device program and making changes. It wants manufacturers to adhere to tougher research guidelines that will be out in 2010, Shuren said.
The FDA, the chief U.S. watchdog on device safety, approves products ranging from wrinkle fillers to artificial knees. Heart devices fall into a category of high-risk devices that require the toughest review before they can be marketed. They include implantable defibrillators, valves and stents, which are tiny mesh-metal tubes used to prop open arteries.
The new studies, published in separate medical journals, cap a year of scrutiny and criticism for the FDA's medical devices division. In August, the head of that division resigned, months after scientists under his leadership alleged they were pressured to approve certain products. The year began with congressional investigators saying the FDA should take immediate steps to make sure the riskiest devices are approved through the most stringent process.
The new studies did not examine the safety of the approved devices, and did not look for differences in the approval process for items that were later recalled. Global sales for heart and blood vessel devices were nearly $76.7 billion in 2008, according to market research firm BCC Research.
One of the new studies, published online Tuesday in the American Journal of Therapeutics, found about 40 percent of pivotal studies lacked precise targets for how safety would be measured. Studies also failed to fully account for what happened to all patients enrolled in the research and omitted important information on patients such as how many had heart disease or diabetes.
"Companies need to better define precisely what they're measuring and at what time point they intend to measure it," said study co-author Dr. William Maisel, director of the nonprofit Medical Device Safety Institute at Beth Israel Deaconess Medical Center. The analysis looked at the research behind 88 heart and blood vessel devices.
Maisel was an FDA consultant and another Beth Israel author was in an FDA fellowship program when the study was done. The FDA cleared their participation after conflict-of-interest screening. Three other authors are FDA staffers.
The second study appears in Wednesday's Journal of the American Medical Association. Researchers from the University of California, San Francisco, examined summaries of the research behind 78 heart and blood vessel devices. They found that many devices were approved based on small studies (300 patients on average) and that two-thirds were approved on the results of just one study.
"We were surprised at the number of devices approved without high quality evidence," said study co-author Dr. Rita Redberg. The research was supported by the university's medical school.
Both studies looked at devices approved from 2000 to 2007.
Much is at stake with device approval. In 2008, the U.S. Supreme Court found that federal law bars patients from suing manufacturers for injuries caused by FDA-approved devices.
In contrast, consumers can sue drugmakers over FDA-approved drugs. Drugmakers seeking approval of new drugs must generally submit large, rigorous randomized studies.
Redberg said she believes new leaders at the FDA want to improve the approval process. She joined an advisory committee to the FDA on devices last year, after the years covered by the study.
In both new studies, the researchers looked only at so-called premarket approvals. They didn't include devices cleared through an alternative FDA process, "510(k) submissions," used for less risky devices that are substantially similar to approved devices. The FDA has asked the Institute of Medicine to review its 510(k) process, following criticism from safety advocates and government watchdogs.
FDA officials said the University of California researchers looked only at summaries of device approvals, rather than the full research. The FDA said the researchers also made faulty assumptions about device research, which is inherently different from drug research.
The most rigorous research randomly assigns patients to get either the experimental treatment or a standard treatment, or sometimes a placebo. Patients and sometimes doctors are "blinded," meaning they don't know which patients receive the experimental treatment.
FDA officials said requiring randomized studies for second and third generation devices would delay bringing engineering refinements to the market. They said it's often impossible to conduct blinded studies with devices.