Last week, the Journal of the American Medical Association (JAMA) published a large and well-designed study of a post-hospital readmission reduction program called the “virtual ward,” which grew up in the UK and was tested by our cousins to the north in Toronto.

The model partakes of some elements of other evidence-based work done by John A. Hartford Foundation grantees, including Mary Naylor’s Transitional Care Model, the Society of Hospital Medicine's Project BOOST, and Eric Coleman's Care Transitions Intervention.

A press release and JAMA Report video are available for those who don’t subscribe to JAMA.

Unfortunately, the results of the study were negative. As the press release headline trumpeted: “Following Hospital Discharge, Use of a ‘Virtual Ward’ Model of Care Does Not Reduce Readmissions, Risk of Death.” The intervention reduced readmission rates among high-risk patients only from 24.6 percent to 21.2 percent, a difference that did not reach statistical significance.
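To see why a drop of a few percentage points can still count as a null result, here is a minimal sketch of a two-proportion z-test. The arm sizes are my assumption, not figures from the study; with roughly 1,000 high-risk patients per arm, the reported rates fall short of conventional significance:

```python
import math

# Hypothetical arm sizes -- the study's actual Ns are not given in this post.
n_treat, n_ctrl = 1000, 1000
p_treat, p_ctrl = 0.212, 0.246  # readmission rates reported above

# Two-proportion z-test with a pooled standard error
pooled = (p_treat * n_treat + p_ctrl * n_ctrl) / (n_treat + n_ctrl)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_treat + 1 / n_ctrl))
z = (p_ctrl - p_treat) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided

print(f"z = {z:.2f}, p = {p_value:.3f}")  # roughly z = 1.81, p = 0.07
```

A 3.4-point difference looks clinically meaningful, but under these assumed sample sizes it would not clear the usual p < .05 bar, which is exactly the kind of result the headline describes.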

(As an aside, I was struck by the fact that while enrollment in the study was open to pretty much all comers older than 18 who lived in the region and spoke English, the study population defined as “at risk” wound up with an average age of 71! It may not be clear to everyone yet, but readmission is not random with respect to age—it is a geriatric problem.)

Virtual Ward staff meet to discuss patients. From the JAMA Report video, “Providing Team Based Care After Hospital Discharge Did Not Reduce Readmission Rates Or Deaths.”

In some ways, this publication is great. It is essential to publish studies with negative results so that publication bias doesn't lull us into a false belief in the effectiveness of what we do. We don't want to hide null results in health services research (HSR) any more than we want to make drugs look successful by hiding the bulk of the trials that failed.

However, on reading this paper and thinking about other recent negative studies, I was struck by the thought that a health services intervention is, in many ways, far more complicated than a drug, and that such interventions really warrant a more theoretically grounded kind of test.

(Another recent example was an August JAMA report of a negative result from a well-designed study of the efficacy of a brief counseling intervention for problem drug use in low-income clinics in Seattle. And, of course, we discussed the negative RAND study of patient-centered medical homes extensively on Health AGEnda earlier this year.)

Part of the problem with the current approach to health services research lies in thinking of HSR interventions as if they are drugs. In drug trials, there is a very applied question regarding whether the drug works or not, but this comes at the final stage of research. Before a drug reaches that stage, there have been countless theoretically informed tests of the genes, molecular pathways, and biochemical mechanisms to guide drug development.

In health services research, we seem to be jumping to the efficacy/effectiveness testing of very complex interventions without the underpinning of theory to guide our work and make results (positive or negative) interpretable.

While I understand that HSR is a much more applied discipline than the social science in which I trained, I think these negative studies show why this atheoretical approach is a problem. In theory-based science, one derives testable predictions about observable phenomena from the theory, along with a methodology for collecting observations. I would argue that in this framework, as long as the methodology is sufficiently credible, both positive and negative results are informative: each adds or subtracts credibility from the theory.

It is also possible on the basis of theory to describe a very specific causal pathway of one’s proposed effect and test that as part of the overall study. For example, one might hypothesize that a particular gene expression is related to disease. One can manipulate the expression of the gene to test the basic relationship.

But one might also hypothesize that the pathway to an effect of the gene was through its regulation of another gene. That pathway might or might not be true despite an overall effect. And that mediation pathway is informative and useful knowledge as well.
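The same move is available to health services researchers: a hypothesized pathway can be tested with a simple mediation analysis alongside the overall effect. Here is a minimal sketch with simulated data; the variable names (coaching, self-efficacy) are invented for illustration, not measures from any of these studies:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated data: coaching is hypothesized to work through self-efficacy.
coaching = rng.binomial(1, 0.5, n)                    # intervention arm
self_efficacy = 0.8 * coaching + rng.normal(0, 1, n)  # hypothesized mediator
outcome = -0.5 * self_efficacy + rng.normal(0, 1, n)  # e.g., symptom burden

# Total effect: outcome regressed on coaching alone
total = sm.OLS(outcome, sm.add_constant(coaching)).fit()

# Direct effect: outcome regressed on coaching, controlling for the mediator
X = sm.add_constant(np.column_stack([coaching, self_efficacy]))
direct = sm.OLS(outcome, X).fit()

print(f"total effect:  {total.params[1]:+.3f}")
print(f"direct effect: {direct.params[1]:+.3f}")
# If the direct effect shrinks toward zero once the mediator is controlled,
# the data are consistent with the effect running through that pathway.
```

Whether or not the coefficient shrinks, we learn something about the theory, which is the point: the pathway itself becomes a testable claim rather than an article of faith.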

This is the level of theoretical description that is missing in these studies, and it would have suggested measures beyond overall efficacy/effectiveness that would have made these results much more valuable. While there are times when one has a clear and specific applied question—head-to-head comparisons of interventions in common practice that have different costs and risks—in geriatric care, we aren't at that point. We have to admit to ourselves that we really know very little about how and why our intervention models work, and that we have an obligation to learn more deeply as we go rather than treat our interventions as if they were fixed, unalterable, and handed down to us by some higher power.

Moreover, I think our approach in HSR often represents overconfidence in the interventions we are testing. I know that in those cases where we at the Hartford Foundation have failed to collect important process-of-care information and to measure mediating pathways, it has been because we were too sure that an intervention would “just work.”

For example, in the “virtual ward” study, the design did not capture the specifics of the care delivered to patients (number of calls, number of home visits, etc.) except for a small sample, making it impossible for the investigators to look for different outcomes among people who received “higher” or “lower” doses of the intervention. (As an aside, one of the tacit assumptions of the study, despite the relative novelty of the model, was clearly that the intervention would always be implemented faithfully. Hence there was little attention to measuring, or even describing, what was actually done.)
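Had those process data been captured for everyone, a dose-response check would have been straightforward. A hypothetical sketch, assuming per-patient counts of calls and home visits had been recorded (none of these variables exist in the actual dataset):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 800

# Hypothetical process-of-care data the study could have collected
calls = rng.poisson(3, n)        # phone calls per patient
home_visits = rng.poisson(1, n)  # home visits per patient
dose = calls + 2 * home_visits   # crude intensity score

# Simulated outcome in which higher dose lowers the odds of readmission
readmitted = rng.binomial(1, 1 / (1 + np.exp(1.0 + 0.15 * dose)))

# Dose-response: does intervention intensity predict readmission?
model = sm.Logit(readmitted, sm.add_constant(dose)).fit(disp=0)
print(model.params)  # coefficient on dose should recover roughly -0.15
```

A null overall result paired with a clear dose-response gradient tells a very different story than a null result alone, but without the process measures, no one can ask the question.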

However, returning to the point about theory, an example might be helpful. I think Eric Coleman has laid out a fairly clear theory of how his post-discharge intervention provides benefits:

  • Medication reconciliation. There are high rates of medication errors and confusion among patients, family, and providers post-discharge. So restarting missing meds, stopping duplicates, and helping people take meds as intended has a beneficial, biologically mediated effect (a toy version of such a reconciliation check is sketched after this list). This study does not report how the “virtual ward” helps with medications, whether there are medication errors in Canada, or what the difference between intervention and control might be.
  • Patient/family knowledge of early warning signs, or red flags. Education about signs and symptoms that are early warnings or very dangerous can help people take early appropriate action (for example, call a provider and have a dose increased or decreased). Education about what might be red flags also reduces unwarranted fear of those things that are not early warning signs and increases trust in providers and confidence in self-management. We don’t know if the “virtual ward” addressed red flags and we don’t know if patients and families learned anything.
  • The coaching approach. Providing coaching is hypothesized to increase self-efficacy for self-management, which improves the effectiveness of patient/family interactions with the health care system and probably directly reduces calls to 911 that are ultimately caused by confusion, fear, and lack of trust.
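To make the first bullet concrete, here is a toy sketch of the kind of reconciliation check involved; every drug name and list here is invented:

```python
# Toy medication reconciliation: all lists and names are invented examples.
discharge_meds = {"lisinopril 10mg", "metoprolol 25mg", "furosemide 40mg"}
home_meds = {"lisinopril 10mg", "lisinopril 20mg", "aspirin 81mg"}

omitted = discharge_meds - home_meds    # prescribed, but not being taken
unlisted = home_meds - discharge_meds   # taken, but not on the discharge list
drug_names = [m.split()[0] for m in home_meds]
duplicates = {m for m in home_meds if drug_names.count(m.split()[0]) > 1}

print("restart or confirm:", omitted)        # metoprolol, furosemide
print("flag for review:", unlisted)          # lisinopril 20mg, aspirin
print("possible duplications:", duplicates)  # both lisinopril entries
```

The point is not the code but the measurement: a study that records how many such discrepancies were found and resolved can connect the intervention's hypothesized mechanism to its outcome.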

This theoretical level of description gives us things to measure beyond overall effectiveness, the potential to understand why the intervention might not generalize to a population or situation, and suggestions as to directions for improvement. Medical journals may not want to give space to it, but I think we need to be more thoughtful about the theoretical rationale for various health services interventions, to measure as if theory mattered as well as practical outcomes, and to use this knowledge to drive more rapid improvements in the care of older adults.

As Kurt Lewin, one of my academic heroes, put it: “There's nothing so practical as good theory.”

Editor’s Note: For our regular readers, starting on Oct. 16, the Health AGEnda blog will be updated with a new post each Thursday. We will supplement these weekly posts with additional, time-sensitive blogs to cover breaking news, announcements, and other developments.