Health AGEnda

Care Transitions Evaluation Is Premature and Confusing

Posted in category Care Models, Health Policy, Hospital Readmissions, Uncategorized

9 comments

As Orson Welles might have said: “We will evaluate no program before its time.”

One of the first things you learn in “foundation school” is how easy it is to kill even great programs by evaluating them before they are ready.

Nothing innovative starts working on day one as well as it will with practice, adjustment, and refinement. Even more deadly is an evaluation with low-cost methods that doesn’t really provide the information you want and need. One of the painful lessons I’ve learned is to always buy the highest quality and therefore most expensive evaluation you can afford, because it’s cheaper in the long run.

These are two of the cardinal rules violated by a recently released evaluation of the Community-Based Care Transitions Program (CCTP) conducted by Econometrica on behalf of the Center for Medicare and Medicaid Innovation (CMMI).

This report is premature and very confusing. I believe it generates much more heat than light about the program. Yet it is already being picked up in the media and formed into a narrative of failure. Examples include recent coverage in the Washington Post politics section, the U.S. News & World Report health section, and McKnight’s.

That narrative of failure is wrong, and I hope that by explaining why, we can contribute to better thinking about this innovation and about the process of developing and testing models of care that can improve the lives of older adults and lower costs. Other stakeholders in the program are also formulating their replies. For example, N4A, the organization of area agencies on aging, offers this critique, and Joanne Lynn at the Altarum Institute is also working on a reply. It’s interesting to note that the editor of McKnight’s, James M. Berklan, weighed in today on their story, writing that it’s too early to reach any conclusions about the program.

I should be transparent: The Community-Based Care Transition Program is something in which I and the John A. Hartford Foundation have tremendous pride. It was a $500 million (reduced to $300 million) provision in the Affordable Care Act under section 3026 creating a program to pay community-based agencies to deliver care transition support services to at-risk Medicare beneficiaries in collaboration with hospitals with high readmission rates.

It was placed in the U.S. Senate’s bill that eventually became law through the reconciliation process by U.S. Sen. Michael Bennet of Colorado, who was a close observer of Eric Coleman’s work to develop and test the Care Transitions Intervention (CTI) (see an early version of Sen. Bennet’s legislation). The Hartford Foundation funded CTI’s randomized clinical trial as part of our Geriatric Interdisciplinary Teams in Practice initiative way back in 2001, and we have been funding its dissemination ever since.

We are currently funding a grant related to the CCTP to help Dr. Coleman’s technical assistance and training center become financially self-sustaining as it helps organizations implement the Care Transitions Intervention, the specific clinical intervention used by the majority of sites in the CCTP.

Policy makers may be interested in the $17 billion in payments attributed to Medicare readmissions each year, some of which are certainly preventable. But I am principally concerned by the harm and suffering that each readmission signifies for patients and families.

Hospital readmissions are important to us primarily because they are a very visible sign of the failures of our health care system (hospital care that leaves older adults with newly acquired frailties, poor discharge planning and communication between hospitals and the outside world, and weak systems of outpatient care that leave patients with little choice but to call 911 when something goes wrong).

Moreover, work on readmissions reduction and the CCTP comes out of a long line of work addressing the very high rates of rapid hospital readmission for older adults, led by many grantees in the geriatrics community, including Mary Naylor and her Transitional Care Model (TCM), Mark Williams and the Society of Hospital Medicine’s Project BOOST, as well as Eric Coleman. Thanks to Eric’s successful randomized clinical trial of the Care Transitions Intervention, published in the Archives of Internal Medicine, we learned that readmission rates could be reduced by a fairly lean intervention that coaches patients and families to build the skills and confidence they need to manage their own care. It is also worth noting that this successful trial followed several years of pilot and feasibility testing of the model before going to the “gold standard” evaluation methodology of a real experiment.

So what is wrong with this new evaluation and why am I even talking about it?

To answer the second part of my rhetorical question first: while I would like to ignore the results and wait for a better evaluation, the report was posted on the CMS website on Jan. 2, was picked up by Kaiser Health News on Jan. 9, and has been spreading ever since. Failing to respond would leave what I believe is a misimpression in the minds of readers and could lead to policy mistakes later.

So turning to the evaluation itself … The first thing is to acknowledge that the report’s authors clearly state in multiple places and in different ways that the data are very preliminary and the conclusions need to be reexamined with more data.

On page 67, they write:

Again, however, these findings are based on a limited number of hospitals that were operational in the first full year of CCTP operation. No hospitals had a full year of program operation, and many of the first three cohorts had been operational for only a few months. In addition, the results reported are based on the early experience of only 47 CCTP sites and their hospital partners; an additional 54 CCTP sites entered the program in early 2013. Future analyses will be able to incorporate data on all the CCTP sites.

So I suspect that none of my critiques will be news to the authors. They were given a job by CMS and they did it, as far as I can tell. They are not to blame for the fact that it wasn’t a job that anyone should really want done. However, I do think that there were some errors in the design of the evaluation Econometrica executed that are very problematic.

Let me make this more concrete and clarify a few issues. First, unlike prior experimental trials of readmission reduction interventions, this evaluation does not look at what happened to the patients who got the treatment, specifically.

This evaluation is at the level of hospitals: it compares the overall readmission rate for all Medicare Fee-for-Service beneficiaries before and after the intervention started against the same before-and-after change at a similar group of hospitals that did not get the CCTP.

So it really, really matters how many patients got the intervention at a particular hospital if we are to have confidence in the general finding that there was no statistically significant reduction in readmissions at the hospital level. If not very many patients received the intervention at a particular hospital, then not even the best intervention in the world could shift its overall rates.

Three cohorts of sites started offering services to patients leaving partner hospitals during 2012. The first cohort of seven sites served patients for between nine and 11 months; the second cohort of 22 sites served patients for between five and eight months; and the third cohort of 18 sites served patients for between three and five months. That is not much time to practice and refine all of the moving parts. The third cohort probably should have been dropped from the analysis entirely, and much of the second as well.

But deeper into the numbers, how many patients were actually served anyway? The report doesn’t say—something very odd to which I will return. We can do some estimating, however. The average site set as its enrollment target somewhere around 7,000 patients over two years, or 291 per month. Sounds pretty big. However, the average community-based organization (CBO) was working with four different hospitals, so that 291 becomes an average of 73 people per month. Moreover, while we don’t know exactly how badly sites were doing in terms of meeting their enrollment targets, we know that overall it has been a struggle, and especially so at the beginning of the program.

The thing that is most innovative about CCTP is not the interventions to reduce readmission themselves—CTI, TCM, and Project BOOST have all had years of development and good evidence. What is innovative is the creation of new partnership arrangements between community-based organizations, largely area agencies on aging, and hospitals—arrangements where only CBOs could get paid.

The CCTP sites frequently report inadequate communications, difficulty accessing cases in a timely way, and surprise at how hard it was to bridge the CBO-Hospital gap. Thinking of this from the hospital side, I can see how it would be provocative. Who are these people? What do they think we are doing wrong? Why do they think they can do better? And why oh why is CMS paying these people $300+ per person they serve when CMS should be paying us?

So maybe the number of patients enrolled at any single hospital in a month was half of the target estimate. Instead of 73 patients a month, it would have been approximately 40. Even if a CCTP site did a great job targeting the right people at really high risk of readmission and had a really big impact on their outcomes, the difference at the hospital level might be perhaps six prevented readmissions per hospital per month. (With 40 people served, assume 30 percent are readmitted if there were no CCTP services; that’s 12. Assume only 15 percent are readmitted with CCTP services [a big effect]; that’s six people who would not be readmitted.)

Thus, we are expecting to find an effect in the overall readmission rate for all Fee-For-Service Medicare beneficiaries based on perhaps 66 prevented readmissions per hospital in the longest-serving cohort, or only 18 in the shortest-serving cohort. And these are not small hospitals: the median hospital falls in the 200-500 bed range, and about 15 percent of the hospitals involved have more than 500 beds.
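That back-of-envelope arithmetic can be laid out in a few lines of Python. All of the inputs (the 7,000-patient enrollment target, four hospitals per CBO, a hypothetical 40 enrollees per hospital per month, and the assumed 30 percent and 15 percent readmission rates) are the rough figures used above, not data from the report:

```python
# Back-of-envelope: how few prevented readmissions the hospital-level
# analysis is being asked to detect. All inputs are rough assumptions
# from the discussion above, not figures from the Econometrica report.

target_per_site = 7000 / 24              # ~291 enrollees per site per month
hospitals_per_cbo = 4
target_per_hospital = target_per_site / hospitals_per_cbo   # ~73 per month

served = 40                              # assume roughly half the target

baseline_rate = 0.30                     # assumed readmission rate without CCTP
treated_rate = 0.15                      # assumed rate with CCTP (a big effect)
prevented_per_month = served * (baseline_rate - treated_rate)   # 6 per month

print(f"Target per hospital per month: {target_per_hospital:.0f}")
print(f"Prevented readmissions per hospital per month: {prevented_per_month:.0f}")
print(f"Longest-serving cohort (11 months): {prevented_per_month * 11:.0f}")
print(f"Shortest-serving cohort (3 months): {prevented_per_month * 3:.0f}")
```

At hospitals discharging thousands of Medicare Fee-for-Service patients a year, a shift of six readmissions a month is a very small signal to find in overall rates.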

There is no statistical power analysis in the report to tell us whether effect sizes like this can be detected, or how many months of data would be needed to produce reliable estimates on purely statistical grounds, but I think it is pretty unlikely that this is a reasonable test.
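To make the power concern concrete, here is a minimal sketch of the kind of sample-size calculation the report omits, using the standard normal-approximation formula for comparing two proportions. The baseline rate of 20 percent and the 1.2-point shift (roughly six prevented readmissions among a hypothetical 500 monthly FFS discharges) are illustrative assumptions, not figures from the report:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group needed to detect a difference
    between two proportions (two-sided test, normal approximation).
    Defaults correspond to alpha = 0.05 and 80 percent power."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: a 20% baseline readmission rate, shifted down by
# 1.2 percentage points (six prevented readmissions per ~500 discharges).
n = n_per_group(0.20, 0.188)
print(f"Discharges needed per group: {n}")
```

Under these assumptions, each group would need on the order of 17,000 discharges before such a shift could reliably be distinguished from noise, which is far more than a few months of data from a single hospital can provide.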

It is also unclear to me whether hospital-level readmission rates are even a good question to ask about the impact of the CCTP, at least at this point. Perhaps an analogy might be helpful: there are many medications, such as those for hypertension, that show this same pattern of results. When patients take anti-hypertensive medications, they do better. However, if you look at the population rates of hypertension, the results are not so good. Should we conclude that, because national hypertension rates are not where we want them, the drugs don’t work or that insurance shouldn’t pay for them?

Obviously not.

I think it is most sensible that the CCTP should be evaluated on the same basis: people who get served as compared to similar people who didn’t. In fact, it is paid that way—CBOs providing CCTP services only get paid for enrolled cases, not for every Medicare Fee-For-Service beneficiary in a region or even served at a hospital.

So given that program costs are tied to the number served, the implied cost-benefit ratio at the population level seems to be comparing radishes to cucumbers. It just turns out that hospital-level data are cheaper and more convenient to examine: they don’t require any on-the-ground data collection from patients to find out what happens to them, or the construction of a comparison group.

But I mentioned earlier that I would return to the point that this evaluation does not tell us how many people were served. And it does not—I’ve searched carefully and asked others. In fact, it tells us nothing about the people served or the people not served.

The data sources for the evaluation are brief, general interviews with each of the 47 participating sites (“How would you rate your progress?” “Do you plan to make changes?” “What are your challenges?”); a list of the type and size of the “partnering” hospitals provided by the CCTP sites; administrative data on the region (e.g., MDs/1,000); and, of course, examination of claims data at the individual hospitals using pre-programmed statistical routines and software to look at all-patient readmission rates (as well as emergency department use and observation use).

In conducting an early, preliminary evaluation (known as a formative evaluation in the business, because it helps you shape or reshape the “form” of your program), you really need more concrete information at the level of the services being provided, not uninterpretable early “final results.”

If you were running a CCTP site, wouldn’t you want to know what patients thought, what hospital staff thought, as well as operational things such as how quickly professionals visited patients in their homes after hospital discharge? I might also be curious about the readmission rates and reasons for patients I actually served, but I don’t think it would be very helpful to rush directly to the end to find out what the “population” level benefit was in places where I was still struggling to get services to people who need them.

However, that kind of data collection is hard and expensive. And as we should all know by now, you get what you pay for.

I’ve gone on way too long, even for our own blog, but there is one more point I can’t help but make. Everyone knows the old joke about the hikers facing a charging bear, where one stops to put on his running shoes and the other says, “What are you doing? Do you think you can outrun the bear?” And the first hiker looks up from tying his shoes and says, “I don’t have to run faster than the bear . . . I just have to run faster than you!”

Well, the application here is that Econometrica’s very sophisticated analysis of changes in readmissions rates due to CCTP also controlled for local activity of ACOs, medical homes, and other innovative models. This means that they estimated the impact of these innovations as well as CCTP.

While they don’t report specifically on the estimated effects of these other variables, I think we can deduce from their silence that there were no discernible effects of those innovations either—even though readmission rates are declining overall. Is it the CCTP that doesn’t work (and as I noted, it’s too soon to tell), or is this evaluation methodology just flawed?

Food for thought.

9 thoughts on “Care Transitions Evaluation Is Premature and Confusing”

  1. Thank you so much for this very thoughtful and informative blog – it directly responds to this terrible and unfair negative press CCTP has received. The CCTP program leaders at CMS seem to have fallen for these negative perceptions and very faulty measures of success as well. I am so appreciative to see your response and hope you can get the press that picked up the original news to publish your excellent rebuttal. We have been trying to think how and where to respond – you have done it and done it so well. This just needs to be widely spread so those who saw the first news will also see your great counterpoint!!!

  2. “But I am principally concerned by the harm and suffering that each readmission signifies for patients and families.”

    Agree! Very informative post, thank you!

  3. Thank you, Dr. Langston, for your critique of the analysis methods that may have led to incorrect conclusions about the CCTP. I agree with your points about the under-powered sample and the attribution errors that can occur when readmissions rates are used as a proxy for what is actually occurring at the individual (patient/family) level. Better data collection and analysis methods should be developed and utilized. I am hopeful that your response will be disseminated to other media outlets.

  4. Dr. Langston, excellent points! A similar situation arose in the evaluation of Rapid Response Teams, in which an RCT found no efficacy/effectiveness. Dr. Don Berwick from the Institute for Healthcare Improvement delivered a keynote on the topic “Eating Soup with a Fork”; this presentation identified the areas that must be evaluated if we are to improve the quality and safety of healthcare.

    http://www.ihi.org/education/WebTraining/OnDemand/SoupwithFork/Pages/default.aspx

  5. Pingback: Letters To The Editor: Chronic Care Transitions, Proton Therapy, California’s Caregivers | Health Insurance Exchange

  6. Thanks Chris for a thoughtful and informative analysis. One of your key points I appreciated so much and which has been sadly overlooked as so much focus is on payment, is the question about how the patients themselves fared – those served especially, but then also those not served and what happened to them. Those stories, perhaps small in number in research terms, will be invaluable as we learn more about these demos from the voices of the consumers themselves. As always, thanks for your insights.

  7. Chris,

    this is an outstanding commentary and, as a front line CCTP site leader from the hospital side, as well as an established multicenter clinical trialist, I can tell you your analysis is right on target. We do absolutely need to get beyond heart-warming stories to rigorous testing, but we need to make sure that the testing methodology applied is truly rigorous.

    Again, many thanks for your thoughtful input.

    Jeff

    • Thanks Jeff, it means a great deal that someone with your research experience is also concerned about this methodology. I believe that Lewin is collecting some other information and obviously we will just have more months of data from more sites in another year. But based on this example, I am concerned whether the eventual evaluation work will be appropriate and sufficiently transparent so that we can all understand how conclusions are being drawn.

  8. Pingback: Health Wonk Review: Super Bowl Edition « Healthcare Economist
