By John Reichard, CQ HealthBeat Editor
September 18, 2009 -- With Congress poised to spend possibly billions of dollars in coming years on research to help doctors identify the most worthwhile medical treatments for various conditions, how doctors respond to that research is key to the hopes of policy makers that it will "bend the curve" in health spending growth. An analysis released Friday at a meeting of the Medicare Payment Advisory Commission (MedPAC) made clear that while doctors are receptive to the studies, how they are carried out, written, and distributed will determine how much impact they have.
The research will only be useful if doctors know about it and find it "accessible" and credible, said MedPAC staffer Joan Sokolovsky. The staff findings were based on six focus groups MedPAC staff conducted with doctors in Baltimore, Chicago and Seattle in July and included a mix of primary care and specialty physicians.
"In general, the current initiatives are not well understood by practicing physicians," she noted. Perhaps the most surprising finding from the groups was that some doctors do not want any information comparing medical treatments for the same condition, Sokolovsky said.
Those who were opposed to the research said they got all the information they needed from journals, conferences, and drug sales representatives and expressed worry that the research would lead to "cookie cutter" medicine not properly adapted to the needs of individual patients, the staffer said. These physicians worried that the studies would lead to mandatory guidelines from the government and private insurers about how they should practice medicine. And they said personal experience was sufficient to make treatment decisions.
But Sokolovsky said those doctors were in the minority. The majority welcomed more comparative effectiveness research, or "CER," saying they lacked data to help them pick which of the varying approaches to treating the same medical condition works the best.
They wanted data comparing drugs, devices and medical procedures and said treatments now considered as "best practices" were not always based on medical evidence. But they said comparative studies needed to take into account the fact that subpopulations might respond differently to the same treatment.
They also expressed concern about the costs of the studies and what impact they might have on medical innovation, though one point of view was that the studies would spur rather than lessen innovation because they would result in fewer "me-too" products not shown to have distinctive value.
Doctors wanted the study descriptions to be concise and easy to read, with the ability to dig deeper if they wanted more data. And results should be written in a way that they can be read via e-mail using Blackberries and similar devices.
Trusted sources of comparative effectiveness research included not just specialty societies but also the Food and Drug Administration, the National Institutes of Health, and the Centers for Disease Control and Prevention. But the general view of doctors in the focus groups was that all research has biases—that "even the government could be biased toward less expensive treatments," Sokolovsky said. "Transparency" is key; doctors said researchers must report conflicts of interest, and details on research design, study methods and all results from a study.
The analysis found, too, that doctors wanted the studies to focus on high-priced, new technologies before they are widely diffused in clinical practice.
While doctors in the groups wanted more data from head-to-head comparisons of treatments, the emerging federal research agenda as recommended by the Institute of Medicine is lighter on those studies than some analysts expected.
Staff noted that a list of the 100 highest-priority research topics released by the Institute of Medicine in June was light on head-to-head comparisons of treatments. Half of the topics evaluate some aspect of the health care delivery system, a third address racial and ethnic disparities and a fifth address patients' functional limitations and disabilities, the analysis found.
MedPAC Executive Director Mark Miller said "we expected to see a lot more drug-drug, device-device, medical-treatment-versus-surgical" treatment comparisons in the IOM topics. Miller expressed curiosity about commissioner reactions to the topics.
"I had exactly the same response to the IOM list," replied commissioner Thomas M. Dean, a South Dakota family practice physician. "I was really surprised at how vague or kind of non-focused that some of the recommendations were and I certainly expected. . .much more specifics and at least from a clinical point of view that's what we would need," he said.
"We can't make good decisions if we don't get good data," he said.
MedPAC Vice Chairman Francis J. Crosson, an executive with the Kaiser Permanente Medical Group, observed that the studies have greater impact in Kaiser medical groups if they come from close peers. "Physicians tend to trust the judgments of individuals in their own specialty who have strong reputations," he said. Kaiser has tried to use those types of individuals not just to promulgate findings but to develop them, he said.
It would be useful to have expert panels "standing behind" the recommendations from the research, Crosson counseled. Often physicians "turn right to the back page and say, 'Okay, whose recommendation is this?' and they look for a name they can trust."