Bringing the Health Picture Into Focus


A common theme in the world of “big data” is that it’s worthless if you don’t know what to do with it.

The truth is, I’m not really a fan of the term “big data” when dealing with health care.

To me, big data is taking trillions of web searches and looking for patterns, like who is going to win the next election. In health care, I think the industry is dealing more with “a whole lot of data.” The primary difference between the two is that the aggregators are working with many discrete data points, such as lab results, medications, heights, and weights, that all work together to create a “picture” of the patient.

It’s like a sign I walked by the other day, made up of a number of small lights. Up close I could see each light and what color it was, but it wasn’t until I stepped away that I could see it was a picture of a person; all the dots worked together to form the image.

The challenge, then, is taking those discrete data points and creating a picture of the patient. With the correct data model in place, the work of population health becomes much easier.

Consider how you might represent a claim. You could easily store it as a single table with all the claim information, but that would not be very useful from a data perspective. Instead, you need to deconstruct the claim and add discrete data to the patient’s record to create a usable set of points. A diagnosis from the claim can be added to the patient’s problem list, and procedures from the claim can be added to the list of services the patient has received; the data becomes more usable as that list grows.
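To make this concrete, here is a minimal sketch of that deconstruction, assuming a simplified claim structure; the field names and codes are illustrative, not any particular payer’s or EMR’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    # The longitudinal "picture" built up from many discrete points
    problem_list: list = field(default_factory=list)  # diagnoses
    services: list = field(default_factory=list)      # procedures received

def deconstruct_claim(claim: dict, record: PatientRecord) -> None:
    """Break one claim into discrete points on the patient's record."""
    for dx in claim.get("diagnosis_codes", []):       # e.g. ICD-10 codes
        if dx not in record.problem_list:
            record.problem_list.append(dx)
    for line in claim.get("claim_lines", []):         # one line per procedure
        record.services.append({
            "procedure": line["procedure_code"],      # e.g. CPT code
            "date": line["service_date"],
        })

record = PatientRecord()
deconstruct_claim(
    {"diagnosis_codes": ["E11.9"],                    # type 2 diabetes
     "claim_lines": [{"procedure_code": "83036",      # hemoglobin A1c test
                      "service_date": "2016-03-14"}]},
    record,
)
```

Each claim processed this way adds a few more dots to the picture.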

Bringing this pixelated picture of the patient into view relies heavily on increasing interoperability. From a technical perspective, the focus can then shift to making good use of the data instead of struggling with how to get it. At its highest level, interoperability will help us achieve the triple aim, along with physician engagement, and address the inefficiencies that are plaguing the health care data sector.

So, What Happens Next?

A shift is underway from the traditional electronic medical record (EMR) toward a more holistic care management solution. EMRs were originally created to automate the processes in an office, such as recording visit notes and securing payment, but they were not focused on the care of the patient beyond each specific visit.

In practice, accountable care needs more than an EMR; it needs a suite of aggregation protocols to provide patients the best care management solutions available. It requires multiple systems to work together, moving beyond interoperability focused solely on the sharing of data and toward workflow interoperability: the ability to have multiple systems work together seamlessly.

I believe there are three key steps to ensure all interested parties develop successful interoperability – for patients, providers and everyone involved in health care systems.


Share Data

A few years ago, my father had a heart attack while traveling back east. I flew out to drive him home with his trunk full of medical records. In this case, my dad was the aggregator and transport mechanism for medical information. A patient should be at the center of medical care, but he should not be the database of his conditions.

It’s like the pharmaceutical commercials on TV that say, “Tell your doctor if you have cancer, HIV, or another disease.” I suspect most people think, “Shouldn’t my doctor know that?” But without the ability to share health data, the doctor may not.

Interoperability of data benefits patients in two ways. One is the reduced costs and improved quality mentioned earlier; the other is that it takes the burden off the patient as the aggregator of information.

Creators of data need to realize sharing data does not take away a competitive advantage, but facilitates better care of a patient and reduced costs in the health care system. It is very much the concept of a “rising tide lifts all ships.” Changing the mindset of provider organizations to be open to sharing data is key.

Change Dataflow

When it comes to population health management, just getting your arms around the huge data sets involved is a big challenge, and it presents itself in a number of ways.

Much of the health care field has migrated, or is in the process of migrating, to EMRs. In theory, this should make it easier to aggregate data from multiple sources; in practice, unfortunately, it is not working out that way.

Even with meaningful use, there is no standard way to retrieve information from a specific EMR. The emergence of health care Application Programming Interfaces (APIs) will most likely help, but that depends on each vendor’s development cycle. An API can be visualized as the pipes coming in and out of a building; the bigger challenge is how those pipes connect to a broader infrastructure that allows information to move between organizations.
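To illustrate, here is a hedged sketch of pulling data through one of those pipes, assuming a FHIR-style API (FHIR being one emerging standard for health care APIs). The endpoint and patient ID are hypothetical, and a real connection would also require authentication.

```python
import requests

# Hypothetical FHIR endpoint; real EMR vendors differ in base URL,
# auth scheme, and supported FHIR version.
BASE_URL = "https://emr.example.org/fhir"

def fetch_observations(patient_id: str, loinc_code: str) -> list:
    """Pull lab results for one patient via a standard FHIR search."""
    resp = requests.get(
        f"{BASE_URL}/Observation",
        params={"patient": patient_id, "code": loinc_code},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

# LOINC 4548-4 is hemoglobin A1c; the patient ID is illustrative.
a1c_results = fetch_observations("12345", "4548-4")
```

The API standardizes the pipe into each building; it says nothing about the plumbing between buildings.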

That missing infrastructure has led to point-to-point integrations, which are very inefficient. If a population health service organization wants to aggregate data, the aggregation has to occur within its own solution, and an individual interface will most likely have to be built for each EMR. Health Information Exchanges (HIEs) were supposed to help with this, but have not for a variety of reasons. To address the deficiency, clinical data needs the equivalent of the clearinghouses that vet and route claims and eligibility information. It is still too early to tell whether HIEs will fill that role, especially given their funding challenges.
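The arithmetic behind that inefficiency is simple: connecting n systems pairwise takes on the order of n*(n-1)/2 interfaces, while routing everything through a shared hub or clearinghouse takes only n. A quick sketch:

```python
def interfaces_needed(n_systems: int) -> tuple:
    """Interface counts: every-pair connections vs. a shared hub."""
    point_to_point = n_systems * (n_systems - 1) // 2  # each pair wired directly
    hub_and_spoke = n_systems                          # one connection per system
    return point_to_point, hub_and_spoke

# Ten EMRs: 45 point-to-point interfaces, but only 10 through a hub.
print(interfaces_needed(10))  # -> (45, 10)
```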

Fix the Lack of Standardization

Every EMR implementation or workflow is different, and that is a huge problem for collecting usable data. The problem shows up in many ways: values arrive in different formats and units of measure, and, even more challenging, some data is not captured in any standard form at all. Even the rollout of APIs will not help here, since an API provides access to the data but does not guarantee the data is in the correct format, usable, or meaningful.
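As one illustration of the format problem, here is a minimal sketch of normalizing incoming values to canonical units. The conversion table is deliberately tiny, and the analyte names are illustrative.

```python
# Conversion factors to a canonical unit, per analyte (illustrative values).
TO_CANONICAL = {
    ("glucose", "mg/dL"): 1.0,     # canonical unit for glucose: mg/dL
    ("glucose", "mmol/L"): 18.0,   # 1 mmol/L of glucose is about 18 mg/dL
    ("weight", "kg"): 1.0,         # canonical unit for weight: kg
    ("weight", "lb"): 0.4536,
}

def normalize(analyte: str, value: float, unit: str) -> float:
    """Convert an incoming value to the canonical unit for its analyte."""
    try:
        return value * TO_CANONICAL[(analyte, unit)]
    except KeyError:
        raise ValueError(f"no conversion for {analyte} in {unit!r}")

print(normalize("glucose", 7.0, "mmol/L"))  # 126.0 mg/dL
```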

For example, population health looks closely at screenings, whether colonoscopies, other cancer screenings, or similar services. In one EMR this information has a discrete field, in another it is buried in free text, and in yet another the data has no home at all. This inconsistency has been a big challenge, because the data is not presented in a usable format.
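Here is a hedged sketch of coping with that inconsistency, assuming a simplified chart structure; the keyword scan below is a crude stand-in for the natural language processing a real system would need to mine free text.

```python
import re

def colonoscopy_on_record(chart: dict) -> bool:
    """Look for evidence of a colonoscopy, however the source EMR stores it."""
    # Case 1: a discrete screening field (the easy case)
    for screening in chart.get("screenings", []):
        if screening.get("type") == "colonoscopy":
            return True
    # Case 2: buried in free text; a crude keyword scan as a fallback
    for note in chart.get("notes", []):
        if re.search(r"colonoscopy", note, re.IGNORECASE):
            return True
    # Case 3: the data may simply have no home in this EMR at all
    return False

print(colonoscopy_on_record({"notes": ["Colonoscopy performed 3/2014, normal."]}))
```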

Likewise, making sure data is comparable when it comes from different sources is another instance of the lack of standardization: is an A1C result from Lab A the same as an A1C result from Lab B? And how is a program supposed to know which patient a given data point belongs to? Since privacy protections removed the patient’s Social Security number as the default identifier, there is no longer a common ID across these systems.
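Without a common ID, systems fall back on demographics. Here is a minimal sketch of a deterministic match key built from demographic fields; a real master patient index would use probabilistic matching, and the choice of fields here is illustrative.

```python
from hashlib import sha256

def match_key(first: str, last: str, dob: str, zip_code: str) -> str:
    """Build a deterministic match key from normalized demographics."""
    normalized = "|".join(
        part.strip().lower() for part in (first, last, dob, zip_code)
    )
    return sha256(normalized.encode()).hexdigest()

# Records from two systems link only if all four fields agree after
# normalization, which is why typos and name changes make real-world
# patient matching so hard.
key_a = match_key("Jane", "Doe", "1960-05-02", "63101")
key_b = match_key("JANE", "Doe ", "1960-05-02", "63101")
assert key_a == key_b
```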

Lack of standardization has also led to duplicates in the data. A single lab result may arrive on a data feed from the lab itself, through an EMR interface, and on a claim (the claim won’t carry the result, but will show the test was done). To make sense of this data, there has to be a way to determine which records are duplicates and which are similar but separate tests, and the difference needs to be recognizable so those duplicate data points can be flagged.
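One way to sketch that deduplication, assuming each record carries its source and a standard test code; the priority ordering below is an illustrative policy (prefer feeds that carry the actual result), not an industry standard.

```python
# Source priority when the "same" test arrives on multiple feeds: a lab
# feed carries the actual result, an EMR interface usually does too, and
# a claim only proves the test was performed.
PRIORITY = {"lab_feed": 0, "emr_interface": 1, "claim": 2}

def deduplicate(results: list) -> list:
    """Collapse records describing the same test on the same day."""
    best = {}
    for rec in results:
        key = (rec["patient_id"], rec["loinc"], rec["date"])  # same test, same day
        if key not in best or PRIORITY[rec["source"]] < PRIORITY[best[key]["source"]]:
            best[key] = rec
    return list(best.values())

feed = [
    {"patient_id": "12345", "loinc": "4548-4", "date": "2016-03-14",
     "source": "claim", "value": None},
    {"patient_id": "12345", "loinc": "4548-4", "date": "2016-03-14",
     "source": "lab_feed", "value": 6.8},
]
print(deduplicate(feed))  # keeps the lab feed record, which has the result
```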

When medical data is shared in a way that brings the entire health landscape surrounding each patient into focus, everyone involved benefits. Standardizing the data makes its aggregation manageable, and that first step enables the next one: using the data to positively influence health outcomes.

Keith Blankenship is Vice President of Technical Consulting, Lumeris.
