A Guide to Governing Healthcare Claims Data Successfully: Lessons from OSF HealthCare


This is the second and final part of our series on lessons OSF HealthCare learned about managing and governing claims data using a data warehouse. The first part includes lessons on implementing the right infrastructure, the right team, and the right solutions to ensure that claims data becomes a valuable asset in any ACO or population health management effort. This second part focuses on the most effective methodologies for developing a system to manage claims data, as well as the ideal database architecture, and outlines some of the challenges organizations can expect to face when using claims data to manage population health.

From the moment OSF HealthCare was selected as a Pioneer ACO, understanding our population and its associated risk became an urgent priority. It is impossible to overstate this urgency. After all, we committed that by 2020, 75 percent of our primary care patients would be part of a value-based program, and we established a healthcare transformation task force to guide that transformation.

With this much at stake, we at OSF had to consider several things as we kicked off this vital initiative:

  • Our success or failure would depend on our ability to manage data effectively.
  • Although clinical data would prove essential, claims data would be the key to jump-starting our efforts.
  • Our EHR didn’t have the functionality to support this effort.

We decided to build a data warehouse in-house to help us reach our goals.

An Essential Foundation: Beneficiary Claims Data

A lot of healthcare providers say that they need to understand their population better, but as we sat down to tackle this problem in a real-world implementation, we had to ask ourselves what that really meant. Specifically: What key data did we need to focus on to jump-start this process?

The answer was simple. The foundation of our understanding would rest on beneficiary claims data.

Beneficiary data was the single most important category of information for us to start with because, without it, we could not know which patients we were accountable for. Focusing first on who our patients are might sound like common sense—but it is by no means easy. The challenge comes from taking the beneficiary information from the payers we work with and matching it to our existing patient information within the EHR. Patient matching is a complex process and one that requires master patient indexing functionality (which we call master data management). Building this master patient index into our data warehouse was an essential step in our success. Once the link between beneficiary data and the EHR was created, our clinicians could start targeting specific patients to identify and fill gaps in care.
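To make the patient-matching step concrete, here is a minimal sketch of deterministic matching between a payer beneficiary file and EHR patient records. The field names, match key, and `match_beneficiaries` helper are hypothetical; a real master data management process would add probabilistic scoring, address history, and manual review queues. This only illustrates the basic linking idea.

```python
# Minimal illustration of deterministic patient matching between a payer
# beneficiary file and EHR patient records. Field names and matching rules
# are hypothetical examples, not OSF's actual MDM implementation.

def normalize(record):
    """Build a simple match key from name, date of birth, and sex."""
    return (
        record["last_name"].strip().upper(),
        record["first_name"].strip().upper()[:1],
        record["dob"],          # e.g., "1948-03-17"
        record["sex"].upper(),
    )

def match_beneficiaries(beneficiaries, ehr_patients):
    """Return (matched pairs, unmatched beneficiaries) using exact key matching."""
    ehr_index = {normalize(p): p for p in ehr_patients}
    matched, unmatched = [], []
    for b in beneficiaries:
        patient = ehr_index.get(normalize(b))
        if patient:
            matched.append((b["beneficiary_id"], patient["mrn"]))
        else:
            unmatched.append(b)   # route to manual review
    return matched, unmatched
```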

Identifying the Business Need and Aligning with Leaders

Identifying the business need the data warehouse would address and aligning with the business leaders and their staff were critical to the success of our initiative. Simply aggregating data into a data warehouse would not be effective. Instead, we had to define what value the data warehouse would provide and what need it would fill. We realized, of course, that it wouldn’t be possible to define everything exactly at the outset—definitions can (and should) evolve as the project evolves—but coming up with the best definition possible was a critical first step for us.

This process wasn’t something we could just leave up to a project manager. Instead, we had to engage the business leader who would ultimately be responsible for the use of the data, whether that was the chief clinical officer, the director of care management, or a leader over some other aspect of population health management. These leaders needed to understand the value of the initiative, define what value the data in the warehouse would provide, and keep the ultimate goal in mind. They also needed to align their staff with this goal. Their involvement was key to ensuring that the data we provided would be useful to the staff who relied on it.

Ensuring Value from the Data Warehouse

One of the main reasons it is so important to keep specific business needs top of mind is that data warehousing efforts can consume a lot of resources. We had to constantly ensure that the value we were getting from our data warehouse exceeded the overhead cost it generated.

Consider this example. As a provider, when you embark on a data warehouse initiative, you have to determine how much data you’re going to bring into the warehouse. The more data you bring in, the greater the effort required. The amount of claims data you need will vary based on the market you serve. If you’re in a big market with a lot of competition, you likely won’t have a complete picture of a patient from EHR data alone unless you load a large volume of claims data. In contrast, if you have the greatest market share in a small city, you won’t need to load as much.

As we implemented our data warehouse, we constantly referred back to the business need:

  • Did we simply need to know who our patients were and understand the attribution so our care teams could begin working with them?
  • Was our purpose to find care gaps, understand utilization, or capture leakage?
  • What level of detail in the data would be needed for care management? What details would be needed for managing cost structure?

Defining our goals and business needs up front helped us to not overwhelm ourselves with too much data too soon. It helped us to make sure we were focusing our resources on the right efforts for each stage of our project.

Prioritizing the Use of Claims Data

One important consideration in our project was that we didn’t want to wait until the data warehouse was pristine, complete, and perfect to start managing our population. Setting the expectation that being directionally correct was good enough for a first step made all the difference. We needed to be able to load data incrementally and begin deriving value as soon as possible.

So, we had to prioritize, and we did so using the Pareto principle, or 80/20 rule: What would give us the biggest return for the smallest amount of effort?

As mentioned above, our highest-priority deliverable was patient matching and assignment. We had to know who our patients were. Then, once we identified the patients, the priority became to know which patients carried the highest risk. After that, our priority would be to uncover the cost of their care … and so forth.

The important point here is that we didn’t have to load the totality of available claims data in order to get started. We first linked the beneficiary data to clinical data in our EHR. From there, we continued to bring more data into the warehouse, and our analysts were able to start creating lists of patients who exceeded certain risk thresholds and whom caregivers might consider working with.

Think of it as drastically reducing the amount of hay when looking for the needle; instead of a haystack, physicians were given a handful of hay to sift through. Again, the goal was to be directionally correct. This quick turnaround to the caregivers was important in engaging them in the initiative. The information they received didn’t have to be perfect and 100 percent complete to be useful.
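As an illustration of those early worklists, the sketch below filters linked patients by a risk score and returns a short, ranked list for care teams. The `risk_score` field, the threshold, and the list size are placeholders for this example, not OSF’s actual risk model.

```python
# Hypothetical sketch of turning linked beneficiary/EHR data into a short,
# directionally correct worklist: keep only patients above a risk threshold
# and sort so care teams see the highest-risk patients first.

RISK_THRESHOLD = 0.8   # placeholder cutoff, not a clinical recommendation

def build_worklist(linked_patients, threshold=RISK_THRESHOLD, limit=50):
    high_risk = [p for p in linked_patients if p["risk_score"] >= threshold]
    high_risk.sort(key=lambda p: p["risk_score"], reverse=True)
    return high_risk[:limit]   # a handful of hay, not the haystack

worklist = build_worklist([
    {"mrn": "12345", "name": "Example Patient", "risk_score": 0.91},
    {"mrn": "67890", "name": "Another Patient", "risk_score": 0.42},
])
```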

Loading Claims Data

Our next priority was loading claims data. Again, it was important for us to identify what kinds of claims data would deliver the most value so that we could decide what data to load first. For example, we determined that inpatient data was more relevant for our purposes than outpatient data and that Medicare Parts A and B data was more critical than pharmacy data. We planned to eventually load all of this data, but we weighed the immediate value of each type of data against the effort that would be required to load it. We continued to develop the data warehouse in parallel with these efforts; our information became more complete and our ability to target high-risk patients more accurate.
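One simple way to picture this weighing of value against effort is to rank candidate claims feeds by an estimated value-to-effort ratio, as in the illustrative sketch below. The feeds listed and the scores assigned are invented for the example, not our actual scoring.

```python
# Illustrative only: ranking candidate claims feeds by estimated value versus
# load effort, mirroring the idea of weighing immediate value against the
# work required to load each data type.

feeds = [
    {"name": "Medicare Part A (inpatient)", "value": 9, "effort": 4},
    {"name": "Medicare Part B (professional)", "value": 8, "effort": 5},
    {"name": "Outpatient claims", "value": 6, "effort": 5},
    {"name": "Pharmacy (Part D)", "value": 5, "effort": 7},
]

# Load the highest value-per-effort feeds first.
load_order = sorted(feeds, key=lambda f: f["value"] / f["effort"], reverse=True)
for rank, feed in enumerate(load_order, start=1):
    print(f"{rank}. {feed['name']}")
```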

Lessons from Building and Managing a Data Warehouse

In short, we knew to start with beneficiary claims data, using patient matching to identify the patients in our ACO, and we knew we needed a data warehouse. We saw that identifying the business need and getting the leaders to support the data warehouse effort was critical to success. This forced us to constantly refer back to what issues the data warehouse would be addressing in practical terms, which in turn helped us to focus our resources on the right efforts at each stage. We identified priorities for the data using the Pareto principle and, thus guided, loaded the claims data.

We learned these practical lessons while going through the process of building and managing a data warehouse. In the remainder of this commentary, we’ll share the most effective methodologies for developing a system to manage claims data and outline some of the challenges organizations can expect to face when using claims data to manage population health.

Project Methodologies and Important Team Members

One important lesson we learned about building and maintaining a data warehouse is this: We needed an agile methodology to kick-start the project, but we shifted to a Waterfall (PMI) methodology to maintain the initiative as it matured. Agility was essential in the beginning so that we could load data quickly, grow the initiative incrementally, and get data into the hands of clinicians as it became available so they could act on it. In short, we needed to move fast and devise the program as we went along.

Once we attained a steady state with the data from a particular payer, we shifted to a Waterfall approach. The data warehouse had become the one reliable source in the organization for linked beneficiary and clinical information—essentially, it was serving as the system of record—and, as such, we had to build and maintain quality controls around it. Therefore, we established a monthly change-control process to ensure quality. We began performing regression testing and impact and gap analysis, and we established other quality-control processes such as exception reports and ETL logs.
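As one concrete example of the kind of quality control described above, the sketch below compares row counts between a source extract and the warehouse load and appends an exception report when they diverge. The file, table, and function names are hypothetical, not part of our actual tooling.

```python
# A minimal sketch of a monthly load check: compare row counts between the
# source extract and the warehouse table and log an exception on mismatch.
# Paths, table names, and the reporting format are placeholders.

import csv
import sqlite3
from datetime import date

def check_claims_load(extract_path, db_path, table="claims_fact"):
    with open(extract_path, newline="") as f:
        source_rows = sum(1 for _ in csv.reader(f)) - 1   # subtract header row

    with sqlite3.connect(db_path) as conn:
        loaded_rows = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

    if source_rows != loaded_rows:
        with open("exception_report.txt", "a") as report:
            report.write(
                f"{date.today()}: {table} mismatch "
                f"(source={source_rows}, loaded={loaded_rows})\n"
            )
    return source_rows == loaded_rows
```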

Throughout this process, we also formed and matured a data governance group. Like our project management methodology, our data governance began informally and became more structured as the initiative progressed. Our core team for both data governance and project development consisted of a business/outcomes analyst from both OSF HealthCare and the payer, a population health business lead, an IT analyst (again, one from our organization and one from the payer), a data architect, and a dedicated project manager. Generally, the payer representatives were only involved in the early parts of the project, but we also learned how important it is to maintain touch points with them. Data formats and other items changed every year, and staying engaged with the payers helped us deal with those changes.

Database Architecture and Development

A foundational consideration for any data warehousing project is which database architecture to choose. For managing clinical data, we use a late-binding data warehouse approach, which adapts easily to the volatility of clinical use cases and business rules. But claims data is very standardized with stable business rules, so a star schema architecture was the ideal choice for our claims data warehouse.
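For readers unfamiliar with the pattern, the sketch below shows a deliberately simplified star schema for claims: a single fact table keyed to patient, provider, and date dimensions. It uses SQLite purely for illustration, and the tables and columns are a minimal stand-in rather than our actual warehouse design.

```python
# A simplified claims star schema, sketched as SQLite DDL: one fact table
# keyed to patient, provider, and date dimensions. Real claims warehouses
# carry many more dimensions and attributes; this only illustrates the shape.

import sqlite3

DDL = """
CREATE TABLE dim_patient (
    patient_key INTEGER PRIMARY KEY,
    mrn TEXT,
    beneficiary_id TEXT,
    birth_date TEXT,
    sex TEXT
);
CREATE TABLE dim_provider (
    provider_key INTEGER PRIMARY KEY,
    npi TEXT,
    specialty TEXT
);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,   -- e.g., 20240131
    calendar_date TEXT,
    plan_year INTEGER
);
CREATE TABLE claims_fact (
    claim_id TEXT,
    patient_key INTEGER REFERENCES dim_patient(patient_key),
    provider_key INTEGER REFERENCES dim_provider(provider_key),
    service_date_key INTEGER REFERENCES dim_date(date_key),
    claim_type TEXT,                -- inpatient, professional, pharmacy
    allowed_amount REAL,
    paid_amount REAL
);
"""

with sqlite3.connect("claims_demo.db") as conn:
    conn.executescript(DDL)
```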

Keeping our data warehouse’s architecture scalable, robust, and sustainable has been an important focus for us. Sustainability is critical, especially because we have dedicated ourselves to data-driven improvement for the long haul. As we began our initiative, we planned ahead for sustainability by establishing a concrete vision for master data management. Because of our agile approach, we didn’t have every detail nailed down from the beginning, but we had a solid vision and strategy for how we wanted the data to evolve. We also adopted tools and standards to enable our team to keep the development process efficient. Tools and standards also ensured that a new team member could take up the work if another team member moved on or needed help.

Security was, of course, a prime concern as we developed the data warehouse. We won’t share an exhaustive list of security requirements here. A specific aspect of planning for claims data security that we do want to highlight, however, is the importance of archiving each payer’s data separately. Some payers require that we delete all of their data once their arrangement with us is finished. Keeping the data separate makes this important security process much simpler. We keep the backups of the data separate as well for the same reason.
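The sketch below illustrates the idea of keeping each payer’s archived data physically separate so it can be purged cleanly when an arrangement ends. The directory layout and helper functions are placeholders, not our actual archive process.

```python
# Hypothetical sketch: archive each payer's extracts under a payer-specific
# directory so a payer's data (and its backups) can be deleted cleanly when
# the arrangement ends. Paths are placeholders.

import shutil
from pathlib import Path

ARCHIVE_ROOT = Path("/data/archive")

def archive_extract(payer_id, extract_file):
    """Store an extract under its payer-specific archive directory."""
    payer_dir = ARCHIVE_ROOT / payer_id
    payer_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(extract_file, payer_dir)

def purge_payer(payer_id):
    """Remove everything archived for a payer whose arrangement has ended."""
    payer_dir = ARCHIVE_ROOT / payer_id
    if payer_dir.exists():
        shutil.rmtree(payer_dir)
```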

Some Challenges to Expect When Working With Healthcare Claims Data

We hope the lessons that we shared in this two-part series will help other ACOs that are starting along the path of using claims data to improve care. To conclude, we’d like to mention some of the challenges that these organizations should expect to face, as well as share some words of advice:

  • Be prepared for the fact that no standard file layout exists among payers. This will take some time to mature in the industry. The good news is that, after 12 to 18 months, we were able to standardize our process. The types of data elements each payer sends also vary; for example, some provide restatements of claims and others don’t.
  • Beneficiary assignment happens once a year. Plan resources accordingly so that you can quickly turn around information and get it in the hands of the care teams. Furthermore, payers will change, remove, or add beneficiaries every plan year over a one-to-two-month period. When we work with payers—particularly for the first time—they send us a list of patients we’re accountable for. The next month, it’s not unusual for us to receive another file with more, fewer, or different patients, because the payer is still trying to figure it out. This happens every year, but the process does run more smoothly with time (see the reconciliation sketch after this list).
  • Avoid the trap of getting caught up in the accuracy of a claim amount. The data warehouse is not designed to function as a claims management system; rather, it is designed to provide a signal as to which patients we should focus on to improve quality and cost. Although we need a good system and solid architecture, we don’t need every field and data element to be completely accurate. For our purposes, we must always consider whether the cost of achieving that accuracy is justified. Generally, it is more valuable to dedicate resources to improving the quality of patient care than to worry about getting the dollars accurate down to the cent.
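To show what that month-over-month reconciliation can look like, here is a rough sketch that diffs two beneficiary files and reports who was added or removed. The column name and file structure are assumptions about the payer layout, not a specification.

```python
# Rough sketch of reconciling month-over-month beneficiary files from a payer:
# report who was added and who dropped off so care teams can be updated
# quickly. The beneficiary_id column name is an assumed example.

import csv

def load_ids(path, id_column="beneficiary_id"):
    with open(path, newline="") as f:
        return {row[id_column] for row in csv.DictReader(f)}

def reconcile(previous_file, current_file):
    previous, current = load_ids(previous_file), load_ids(current_file)
    return {
        "added": sorted(current - previous),
        "removed": sorted(previous - current),
        "unchanged_count": len(previous & current),
    }
```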

Our data warehousing initiative has improved our organization’s ability to manage the transition to accountable care. The initiative has been an exciting journey, though we’ve faced some challenges. Always having our goal in mind and keeping the whole team aligned with that goal has kept us from falling down the rabbit hole. Celebrating success with the team has also proven essential. Our teams are working fast and furiously to incorporate new information and get it out to clinicians so that the data can have an impact on care. Few things are more important than recognizing the team for that work.

Presentation Slides

Would you like to use or share these concepts? Download this presentation highlighting the key points.

