Three Ways Evidence-Based Medicine Improves Machine Learning
Health systems that leverage evidence-based medicine and machine learning models are positioned not only to improve outcomes but also to strengthen systemwide engagement and buy-in. This article describes an evidence-based approach to machine learning: how each element (machine learning and evidence-based medicine) works independently and together, and how the combined approach helps health systems improve outcomes as well as systemwide engagement.
Evidence-Based Medicine and Machine Learning: Key to Better Outcomes and Engagement
Understanding an evidence-based approach to machine learning starts with an independent understanding of machine learning and evidence-based medicine:
Machine Learning Predicts Events and Helps Identify Interventions
Machine learning drives healthcare improvement by using data, algorithms, and models to predict an event and simulate interventions. It's more of a prioritization and automation assistant than a decision maker. Across clinical, operational, and financial groups, machine learning can help staff with limited time target their efforts, provide a window into how a health system will perform in the future, anticipate outcomes, accelerate the identification of over- and under-performing subgroups, and automate routine classification tasks.
With its predictive and proactive capabilities, machine learning has the potential to save lives and transform healthcare in significant areas, including the following and beyond:
- Reducing readmissions.
- Preventing hospital-acquired infections (HAIs).
- Reducing length of stay.
- Predicting chronic disease.
Evidence-Based Medicine Drives Best Practices
As evidence-based medicine pioneer David Sackett, MD, explained in his 1996 article in The BMJ, evidence-based medicine is the practice of using the best available published evidence to make decisions about the care of individual patients. Historically, best practice guidelines were more opinion-, or eminence-, based: influential clinicians decided on the right action, and their authority could steer practices away from the evidence and toward subjective opinion. Evidence-based practices, in contrast, follow objective evidence.
Clinical expertise, experience, and opinion remain an essential component of evidence-based medicine, as the expert clinician interprets the evidence and decides the best way to use it. True evidence-based practices must start with evidence (versus an eminence-first approach that puts opinion before the critical guidance of the evidence). Major clinical decision-making guideline efforts today stress evidence as a starting point for the best treatment decisions.
Evidence-Based Machine Learning: Working Together to Close the Decision-Making Gap
The goal with evidence-based machine learning is to produce algorithms that reflect the evidence and the mass of data driving it. For health systems and clinicians committed to patient safety, evidence-based machine learning is a critical tool.
When data scientists must explain a predictive variable, or a clinician must make recommendations based on the model's prediction, an evidence-based approach is essential. Without one, a machine learning model might exclude variables that clinical evidence supports as risk factors for a condition. That exclusion might not hurt the model's predictive performance, but it can decrease the model's clinical value, potentially leading a clinician to recommend an inappropriate intervention and putting patients at risk of harm.
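One common safeguard is to keep evidence-supported risk factors in the feature set even when automated selection would drop them. The sketch below is a minimal, hypothetical illustration of that idea; the feature names and the evidence list are illustrative assumptions, not drawn from any specific model in this article.

```python
# Hypothetical sketch: merge a data-driven feature ranking with an
# evidence-mandated feature list. Names below are illustrative only.

# Risk factors supported by the published literature (assumed example set).
EVIDENCE_BASED_FEATURES = {"age", "sex", "lactate", "wbc_count"}

def select_features(data_driven_ranking, top_k,
                    evidence_features=EVIDENCE_BASED_FEATURES):
    """Keep the top-k features from a data-driven importance ranking,
    then add back any evidence-supported risk factor the ranking dropped."""
    selected = set(data_driven_ranking[:top_k])
    # Evidence-supported variables stay in the model even if their
    # historical importance score was low, preserving clinical value.
    return sorted(selected | evidence_features)

ranking = ["lactate", "heart_rate", "resp_rate", "temp",
           "wbc_count", "sex", "age"]
print(select_features(ranking, top_k=3))
# -> ['age', 'heart_rate', 'lactate', 'resp_rate', 'sex', 'wbc_count']
```

Here "sex," "age," and "wbc_count" survive even though the purely data-driven ranking placed them below the cutoff, so a clinician reviewing the model still sees the risk factors the literature supports.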
Leaving Evidence-Based Analysis to the Experts
Clinical time constraints have prompted most health systems to outsource the evidence-based analysis behind their best practices. These outsourced evidence-based content experts have the research acumen to find and follow the best available published evidence.
Even with the best intentions to do evidence-based work themselves, health systems often rely only on a prominent guideline in the clinical area in question. Because guidelines are only updated every five years at best, these organizations risk using outdated evidence and clinical guidance. Limited time and resources prohibit most clinicians from doing a deep dive into the published literature (the evidence).
Machine Learning Without Evidence-Based Medicine: Less Accurate, Less Credible
Despite the clear link between evidence-based medicine and best practices, few health systems leverage evidence-based medicine in machine learning. By skipping the evidence-based step, organizations face significant consequences:
- Machine learning models alone aren’t sufficient in healthcare. For example, evidence indicates that male patients with confounding risk factors have a significantly higher risk of sepsis. A machine learning model for sepsis risk prediction that excludes gender could miss flagging a patient at high risk for sepsis, leading a clinician to miss the opportunity for early recognition and treatment.
- To get clinician buy-in, machine learning must be evidence-based (health system data alone won’t suffice because it’s not evidence until it’s analyzed and published). Clinical champions in the machine learning work are also critical to clinical buy-in, as they can then help sell the model to others.
Three Ways Evidence-Based Machine Learning Backs Up Expert Opinion and Algorithms
Reviewing the published evidence base for predicting the risk of developing a certain condition (e.g., sepsis, CLABSI, CAUTI, or pressure ulcers) can identify more risk factors for a machine learning model than expert opinion alone. Without evidence, data scientists feed an algorithm as much data as possible to identify historical relationships between variables, or features, and outcomes (they include all the features, and the model determines what is predictive). In this scenario, the algorithm helps analyst teams select the features that make the model most predictive. Using clinical interpretation, algorithms, and evidence together increases both the predictive accuracy and the interpretability of a machine learning model; it also improves the model’s clinical value by allowing teams to transform more clinical patient data (risk factors) in intuitive ways before adding it to the model.
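The purely data-driven path described above can be sketched in a few lines: score every candidate feature against the outcome and let the ranking decide what stays in the model, with no clinical evidence involved. The data, feature names, and scoring metric below are illustrative assumptions.

```python
# Minimal sketch of purely data-driven feature screening (illustrative data
# and names; no clinical evidence involved in the ranking).

def univariate_scores(rows, outcome_key, feature_keys):
    """Rank features by the absolute difference in mean value between
    positive and negative outcome groups (a crude, scale-dependent filter)."""
    scores = {}
    for key in feature_keys:
        pos = [r[key] for r in rows if r[outcome_key] == 1]
        neg = [r[key] for r in rows if r[outcome_key] == 0]
        scores[key] = abs(sum(pos) / len(pos) - sum(neg) / len(neg))
    return sorted(scores, key=scores.get, reverse=True)

patients = [
    {"lactate": 4.1, "heart_rate": 112, "age": 71, "sepsis": 1},
    {"lactate": 1.0, "heart_rate": 80,  "age": 45, "sepsis": 0},
    {"lactate": 3.8, "heart_rate": 105, "age": 68, "sepsis": 1},
    {"lactate": 1.2, "heart_rate": 76,  "age": 52, "sepsis": 0},
]
ranking = univariate_scores(patients, "sepsis", ["lactate", "heart_rate", "age"])
print(ranking)  # -> ['heart_rate', 'age', 'lactate']
```

Note that this ranking reflects only historical signal (and here, raw measurement scale), not clinical importance: lactate, a well-known sepsis marker, lands last. That is exactly the gap that layering evidence over the algorithm closes.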
1. Evidence Boosts Machine Learning Model Credibility
Incorporating evidence also added credibility to a sepsis risk prediction machine learning model. The literature review allowed the vendor to produce an evidence-supported list of the model’s predictive variables, explaining why those variables were selected over others and providing documentation and rationale for the clinicians who will use the model. Clinicians can see the structure and discipline behind the model’s development in the familiar currency of medical evidence. With credibility, clinicians are more likely to adopt the machine learning model, and adoption is a critical first step in implementing machine learning in healthcare.
2. Evidence Engages Data Experts Around Healthcare Projects
Many data analysts and scientists don’t have healthcare backgrounds. An evidence-based approach to machine learning modeling engages and informs these data experts by educating them on the important aspects of a clinical area in which they’re working, helping them better collaborate with clinical teams.
3. Evidence Saves Time and Money, Increases ROI
From an efficiency standpoint, using a machine learning model to identify patients at higher risk of sepsis, CLABSI, and similar conditions saves staff time and money compared to tracking these patients manually. Additionally, building and iterating on features takes up the bulk of the technical work in establishing a machine learning model, and a curated set of evidence-based variables produces a more accurate model in less time. Evidence also helps with provisioning data: as a known entity, it gives the process a head start compared with scoping a project on expert opinion alone.
Evidence-Based Machine Learning in Action
Health Catalyst is using evidence-based research to identify features for its predictive models (such as in the Patient Safety Monitor™ Suite: Surveillance Module). The vendor follows the literature to identify clinical best practices and ranks candidate features according to how they have performed in other use cases. Health Catalyst data scientists base initial machine learning model development on evidence to avoid suggesting a health system do something that’s either not intuitive or won’t result in a change for the patient. Data scientists then customize the model based on each client’s data.
In evidence-based feature selection, analyst teams base criteria for predictive variables on definitions they identify in the evidence. In other words, they’re using evidence to support the feature engineering and the features they select to create a well-supported model that healthcare users recognize and believe in.
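Evidence-based feature engineering of this kind can be as simple as encoding published clinical definitions as features. The sketch below uses the widely published SIRS criteria thresholds as an example; the function, field names, and code structure are illustrative assumptions, not Health Catalyst's implementation.

```python
# Hypothetical sketch of evidence-based feature engineering: the cut-offs
# follow the published SIRS criteria (temperature, heart rate, respiratory
# rate, WBC count); field names and structure are illustrative only.

def sirs_features(vitals):
    """Turn raw vitals into binary risk-factor features using
    evidence-defined SIRS thresholds."""
    return {
        "abnormal_temp": vitals["temp_c"] > 38.0 or vitals["temp_c"] < 36.0,
        "tachycardia": vitals["heart_rate"] > 90,
        "tachypnea": vitals["resp_rate"] > 20,
        "abnormal_wbc": (vitals["wbc_k_per_ul"] > 12.0
                         or vitals["wbc_k_per_ul"] < 4.0),
    }

patient = {"temp_c": 38.6, "heart_rate": 104,
           "resp_rate": 22, "wbc_k_per_ul": 13.5}
features = sirs_features(patient)
print(sum(features.values()))  # number of SIRS criteria met -> 4
```

Because each feature maps directly to a definition clinicians already know, the resulting model inputs are intuitive to review and defend, which is the recognition and belief the paragraph above describes.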
Evidence Drives Effective Healthcare Data Science
Healthcare must prioritize creating meaningful, evidence-based machine learning models that clinicians can trust. Health systems that use an evidence-based process in their machine learning modeling will have better output in several areas, including outcomes, clinician engagement, operations, and more. To do data science well, organizations need clinical expertise, data science expertise, and evidence-based medicine expertise.
Would you like to learn more about this topic? Here are some articles we suggest:
- Machine Learning in Healthcare: What C-Suite Executives Must Know to Use it Effectively in Their Organizations
- 5 Reasons the Practice of Evidence-Based Medicine Is a Hot Topic
- Evidence-Based Care Standardization Reduces Pneumonia Mortality Rates and LOS
- The Dangers of Commoditized Machine Learning in Healthcare: 5 Key Differentiators that Lead to Success
- Defining and Choosing the Right Outcome Variable for Your Healthcare Machine Learning Model