Over the weekend, Gina Kolata, a New York Times health reporter, wrote a piece on the work of the new Center for Medicare and Medicaid Innovation (CMMI) that was created as part of health reform to test potential improvements in health care organization and delivery.

Interestingly, the slant of the story was the lack of rigor in the work of CMMI, specifically its failure to use true experimental research designs, those in which participants are randomized to experimental conditions. These designs, often referred to as randomized controlled trials in bio-medicine, yield the results that can be interpreted most authoritatively as evidence that the treatment caused the outcome, that is, causal inferences.

Now, I have been critical of the work of CMMI (see Groundhog Day and Good Judgment Comes from Experience), and of the Centers for Medicare and Medicaid Services (CMS) more generally, in its efforts to answer critical questions about improving health care in America, particularly for the older adult Medicare beneficiaries for whom the agency is uniquely responsible.

I also am a strong believer in truly experimental research that allows the strongest inferences about causality, free of the arguments and confounding effects of other approaches. (And I should also admit that I am married to a New York Times editor, own stock in the company, and recently served as a volunteer reviewer of the second round of Innovation Challenge grant applications to CMMI. Perhaps equal and opposite conflicts of interest?)

But I never would have thought that a main critique of CMS/CMMI’s work would be its failure to use randomized experimental designs. As the nation races to implement health care delivery system reform while the nature of that new delivery system is simultaneously evolving based on experience on the ground, the usual metaphor is that we are trying to strap jet engines on the health care airplane while it is already flying (and the new engines, wings, and guidance system keep changing).

The last thing that I think CMMI research needs is the relatively rigid and slow process used in the most rigorous experimental research, which will almost certainly give us the answer to last year’s question several years from now.

The article cites researchers from outside of health care and the “surprising” results of the Oregon Medicaid “experiment” (where, in a limited program expansion a few years ago, the state enrolled those it could afford to cover from the eligible population via a random lottery, leaving the unlucky uncovered to be a control condition).

But I don’t think any serious health services researchers were surprised that giving beneficiaries Medicaid coverage led to increased use of emergency rooms. Habits and choices as to locus of care are hard to change, at least in the short term, so immediately providing insurance coverage mostly makes it easier for a low-income population, often facing complex social and health care challenges, to go to the emergency room without fear of charges they can’t pay.

If there is going to be an effect of insurance coverage reducing use of emergency and acute services, it will only be when there is time and enough improvement in primary care services to help beneficiaries actually get healthier and be better able to control their health challenges.

One of the principles we use at the John A. Hartford Foundation is to design projects with “the end in mind.” Thus the purpose of a health services research project is to inform decision makers, in a timely and adequately credible manner, about the choices they face. Given that a decision is going to be made, some information is better than none. So, yes—when decision makers are very skeptical, when the burden of custom, usual thinking, and comfort is high, and when one can afford the very high prices—experiments are helpful.

In one of its most recent truly experimental research projects, Medicare Health Support, CMS randomly assigned Medicare beneficiaries to receive different versions of telephonic care management. The rigorous experiment took years, cost millions of dollars, and simply replicated the failure of the prior set of research projects in the Medicare Chronic Care Demonstration, which used pretty much the same kinds of interventions (or in some cases better ones) and also got very limited positive results. No serious research literature suggested that weak interventions such as those tested would work. As I asked of CMS leaders at the time, why would anyone think that a class of intervention models that had failed in less rigorous testing would be more effective in more rigorous testing? (It almost always works the other way.)

The Hartford Foundation still holds the world record for the largest experimental research trial in the treatment of depression in older adults for our IMPACT project (along with our co-funders, the California HealthCare Foundation, the Hogg Foundation for Mental Health, and The Robert Wood Johnson Foundation), in which we randomly assigned older adults to collaborative care or left them to the mercies of usual care. (As an aside, while the collaborative care treatment radically improved health outcomes compared to controls, cost reduction took more time to manifest.) And we have funded other true experiments where we thought it was necessary.

But despite the success of the IMPACT experiment, CMS has been unwilling to translate the model into benefits or requirements for delivery organizations.

Another provision of health reform is that CMS itself, through the Secretary of Health and Human Services, is delegated very substantial authority to make changes to coverage and benefit design (i.e., changes to the delivery system) based on its best judgment of the available evidence.

In other words, the decision makers that CMS has to convince are themselves. And the decisions they have to make are not about what is the very, very best, but rather what is better for now.

Which direction leads toward improvement? No demonstration or experiment is going to tell us what is best for all time; part of what we need in health care is a commitment to regular, well-resourced learning and continuing improvement. (See Declaration of Innovation)

So the real issue in CMMI’s research efforts is not the kind of designs they are using in their work, but rather the content of the interventions they are testing. Changing health outcomes of people with multiple chronic conditions is very hard. For example, using experimental designs and quasi-experimental designs, the Hartford Foundation (and others) has repeatedly failed to improve the costs and outcomes for older adults through efforts to improve primary care.

Getting to the best intervention model possible requires distilling the best available evidence and blending it with judgment, wisdom, and flexibility. Instead, CMMI has been forced to test certain models by the legislation itself, which favored models backed by members of Congress, and to test others by the realpolitik of where power and influence lie in health care.

CMS doesn’t need better research designs. It needs more people who are truly expert in the care of older adults to help it design models and learn from the demonstrations in the field.