We swim in an ever-expanding ocean of electronic data.

Every Google search or credit card transaction can be aggregated and analyzed by companies that often seem to know what we want before we do. New and cheaper analytic tools and more sophisticated modeling make it possible to individualize and target, as well as to spot broad trends across populations.

In health care, the explosion of electronic health records is adding to the sea of data that already exists from billing and claims. This raises a number of questions, including:

  • Can health care catch up to finance and other sectors in harnessing the power of this “big data” to meet the needs of real people in real hospitals, doctor’s offices, and homes?
  • Can it improve care, in particular for older adults and others with complex chronic conditions who are more vulnerable to the effects of adverse events and medical error?
  • Can it help us become more efficient with our resources?

Along with other funders, we joined the Gordon and Betty Moore Foundation, the lead supporter, in funding a recent theme issue of the journal Health Affairs that tackles numerous topics related to the potential promise and problems of big data in health care.

Big data—referring not only to huge volumes of information, but also to its variety and the velocity at which it can be analyzed and used—raises many issues, ranging from the ethical to the technical. Health Affairs organized a briefing of some of the issue’s authors to elucidate these issues and spur further conversations, especially among policy makers and health system leaders. The briefing featured panel sessions covering three main themes that arose from the articles. (You can view a recording of the event on the Health Affairs website.)

First, at the site of care, clinicians now have access to data through electronic health records (and eventually through other data sources, possibly directly from patients themselves) that could transform care. Analytics and predictive modeling could help in areas such as identifying and intervening with high-cost patients, reducing avoidable readmissions, improving triage, predicting decompensation (when a patient’s condition worsens), preventing adverse events, and optimizing treatment for diseases affecting multiple organ systems, just to name a few.

Second, big data raises the possibility of radically changing the research enterprise in ways that can lead to higher-value care. While randomized clinical trials may be the gold standard in building evidence, they can’t be our only approach (we’ve written about this before). We additionally need rapid ways of looking at and using data to make quality improvements, and then feed those findings back into more traditional evaluation. We should consider the value of observational research—seeing where the data lead us—in addition to traditional hypothesis-driven inquiries. And technology allows patients and families to generate data and become more involved in making sure the use of that data, in both research and quality improvement efforts, is relevant and useful.

The third theme is the role of government in the production and availability of data, developing and improving standards, and ensuring protections for consumers. The topic of big data raises huge policy implications that many parts of government need to address, including managing and linking its own data resources and making them as available as possible. There is a tremendous need for data transparency, including from publicly funded demonstration projects, for which the data and results are too often delayed or suppressed.

Of course, each of these themes comes with its array of problems. Privacy and consent issues are paramount. You may have heard about the recent dustup when people found out Facebook was conducting research on its users. These issues need to be wrestled with on the policy side and even in our conceptualization of data ownership. “Data trusts” that come with the same kind of fiduciary standards as financial trusts may be one possible approach.

For clinicians to use big data in decision making, their health systems need an infrastructure for collecting and transmitting relevant data, including more sophisticated ways of sharing data through “nodes” that cross academic centers, health care delivery systems, and other health system components.

Even more important, in my opinion, is the need for the right culture, one rooted in the principles of a rapid learning health system as described by Lynn Etheredge. (It’s interesting to note that Etheredge believes that using these new big-data approaches should be a high priority in the large, publicly funded Medicare and Medicaid programs. As he rightly points out, older adults have largely been left out of randomized controlled trials, and so could truly benefit from real-time clinical data aggregation that can inform better decisions.)

Finally, clinicians need decision-support and other tools to help them grapple with all this big data, and their education needs to prepare them for this different way of working. As former Beeson Scholar Harlan Krumholz describes it, medicine is largely viewed as deterministic—doctors are supposed to diagnose a problem and then treat it. In reality, medicine is probabilistic at best and can greatly benefit from the massive amounts of data now flowing.

I think he sums it up well in his article: “Incorporating big data and next-generation analytics into clinical and population health research and practice will require not only new data sources but also new thinking, training, and tools.”