The complexity and consequences of data-informed decision making in healthcare demand the highest-quality data, a relationship that COVID-19 has amplified. Decision-making challenges stemming from pandemic-driven urgency, the variety of data, and limited resources have made it more critical than ever that organizations build a data quality coalition and strategy to ensure systemwide data is fit for purpose. Having the people, processes, and technology necessary to define, evaluate, and monitor data quality allows for a quick, effective, and sustained response at organizational scale. The coalition keeps all resources working together on the task at hand within a well-defined structure.
Taylor Larsen joined Health Catalyst in December 2014 as a Data Architect. Prior to coming to Health Catalyst, he worked for the Colorado Department of Health Care Policy and Financing as a Budget and Data Analyst. Taylor has a Master's degree in Economics from the University of Colorado.
Healthcare providers knew that COVID-19 would threaten the lives of their patients, but few understood the greater ripple effects across their business and industry as a whole. For providers, two significant COVID-19-induced challenges arose: analytic strain and resource limitations. These challenges highlighted the critical importance of data quality.
Healthcare leaders can improve data quality throughout their organizations by understanding the data quality lessons learned from COVID-19. Five guidelines from these lessons will help organizations prepare for the next pandemic or significant analytic use case:
Assess data quality throughout the pipeline.
Do not leave analysts to firefight.
Look outside the four walls of the organization.
Data context and purpose matter.
Use a singular vision to scale data quality.
Healthcare organizations increasingly understand the value of data quality, but many lack a systematic process for establishing and maintaining that quality. As COVID-19 response and recovery further underscore the need for timely, actionable data, organizations must take a more proactive approach to data quality.
A structured process engages technical and subject matter expertise to define, evaluate, and monitor data quality throughout the pipeline. Health systems can follow a simple, four-level framework to measure and monitor data quality, ensuring that data is fit to drive high-quality, data-informed decisions:
Think of data as a product.
Address structural data quality first.
Define content-level data quality with subject matter experts.
Create a coalition for multidisciplinary support.
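To illustrate the distinction between structural and content-level checks in the framework above, here is a minimal sketch using pandas. The column names, sample values, and the "length of stay cannot be negative" rule are all illustrative assumptions, not part of the original framework; real rules would come from subject matter experts.

```python
import pandas as pd

# Hypothetical patient encounter extract; column names are illustrative only.
encounters = pd.DataFrame({
    "encounter_id": [101, 102, 102, 104],
    "admit_date": ["2021-01-03", "2021-01-05", "2021-01-05", None],
    "length_of_stay": [3, 2, 2, -1],
})

# Structural checks: uniqueness of keys and completeness of required fields.
structural_issues = {
    "duplicate_ids": int(encounters["encounter_id"].duplicated().sum()),
    "missing_admit_date": int(encounters["admit_date"].isna().sum()),
}

# Content-level check, defined with subject matter experts:
# a length of stay can never be negative.
content_issues = {
    "negative_los": int((encounters["length_of_stay"] < 0).sum()),
}

print(structural_issues)  # {'duplicate_ids': 1, 'missing_admit_date': 1}
print(content_issues)     # {'negative_los': 1}
```

Running structural checks first mirrors the framework's ordering: content-level rules are only meaningful once the data's shape and completeness are trustworthy.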
As health systems continue to adopt machine learning to impact significant outcomes (e.g., reducing readmissions, preventing hospital-acquired infections, and reducing length of stay), they must also leverage evidence-based medicine. Evidence adds critical insight to machine learning models, ensuring that models incorporate all necessary variables in their risk prediction, and builds credibility among clinicians.
Evidence-based medicine brings three essential elements to healthcare machine learning:
Boosts machine learning model credibility.
Engages data experts around healthcare projects.
Saves time and money and increases ROI.
In today’s improvement-driven healthcare environment, organizations must ensure that improvement measures help them reach desired outcomes and focus on the opportunities with optimal ROI. With data science-based analysis, health systems leverage machine learning to determine if improvement measures align with specific outcomes and avoid the risk and cost of carrying out interventions that are unlikely to support their goals.
There are four essential reasons that insights from data science help health systems implement and sustain improvement:
Measures aligned with desired outcomes drive improvement.
Improvement teams focus on processes they can impact.
Outcome-specific interventions might impact other outcomes.
Identifies opportunities with optimal ROI.
Under value-based healthcare and the 2012 Hospital Readmissions Reduction Program, healthcare organizations are more motivated than ever to reduce their incidence of preventable readmissions.
Health systems can reduce the risk of hospital readmissions by developing readmission risk scores tailored specifically to their populations. A risk model that meets the following five requirements will have significant predictive value and is most likely to achieve systemwide adoption:
Identifies at-risk patients early.
Separates patients relevant to the disease-specific identification method and intervention strategy from all other in-hospital patients.
Uses organization-specific data to train a disease-specific model.
Exceeds performance of existing models.
Is developed in collaboration with domain experts.
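The requirements above can be sketched in code. This is a hedged illustration only: the features, synthetic labels, and scikit-learn logistic regression are assumptions standing in for an organization's real, disease-specific data and whatever model class it selects with its domain experts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-in for organization-specific data; in practice the
# features (e.g., prior admissions, comorbidities) are chosen with
# clinical domain experts.
n = 1000
X = rng.normal(size=(n, 3))
# Simulated readmission labels with a known relationship to the features.
y = (X @ np.array([1.2, -0.8, 0.5]) + rng.normal(scale=0.5, size=n)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a disease-specific model on the organization's own data.
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on held-out patients; in practice this AUC would be compared
# against an existing published model to confirm the new model exceeds it.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.2f}")
```

Training on local data rather than reusing a published model is what lets the score reflect the organization's own population, which is the central claim of the five requirements.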
Machine learning is a term that crops up often in healthcare lately, but it's important to understand what really constitutes learning in this context. What some call machine learning is actually just conventional programming; true learning is derived during the process of building a predictive model. This article delves into the nuts and bolts of a healthcare machine learning model and describes the training process a model undergoes to impact outcomes for patients.
The key ingredient is data, and the key deliverable is completing the feedback loop so that those responsible for managing care have actionable information at their disposal.
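Completing the feedback loop means translating model output into something a care team can act on. The sketch below assumes a hypothetical list of risk scores and an intervention cutoff; the patient identifiers, scores, and threshold are illustrative only, and a real cutoff would be set with clinical input.

```python
# Hypothetical model output: (patient identifier, predicted readmission risk).
predictions = [
    ("patient_a", 0.82),
    ("patient_b", 0.15),
    ("patient_c", 0.61),
]

# Assumed intervention cutoff; in practice chosen with clinical input.
THRESHOLD = 0.5

# Close the feedback loop: rank at-risk patients into a worklist
# that care managers can act on.
worklist = sorted(
    (p for p in predictions if p[1] >= THRESHOLD),
    key=lambda p: p[1],
    reverse=True,
)
print(worklist)  # [('patient_a', 0.82), ('patient_c', 0.61)]
```

The point of the sketch is that the deliverable is not the score itself but the prioritized, actionable list it produces.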
Machine learning is beyond conceptual; it’s incorporated into a growing list of predictive models for various disease classifications.