This article is based on a 2020 Healthcare Analytics Summit (HAS 20 Virtual) presentation by Jaclyn Bernard, Innovations Team Manager, Texas Children’s Hospital, titled, “Beginning a Predictive Analytics Journey: Choosing the Right Use Cases for Success.”
Research shows that the healthcare industry will spend over $34 billion on artificial intelligence (AI) by 2025. With this increase in AI investment, analytics enters a new level of prominence in the healthcare landscape. Organizations must now optimize their predictive analytics—the most common use of AI in healthcare—and prepare for the next phase of the AI journey.
Although predictive analytics in healthcare can add significant value to an organization by increasing visibility into the future, the benefits of predictive analytics only go as far as their use cases allow. Simply put, predictive analytics—and every use of AI beyond predictive models—can produce meaningful results and prove a worthwhile investment if data science teams choose the right scenarios (use cases) for predictive analytics (or AI) to succeed. Selecting a poor use case can result in incorrect predictions, underperformance, and a lack of leadership support for predictive models and the broader AI fields in the future.
Predictive analytics, an early step in leveraging AI, uses historical data to forecast clinical, operational, and financial needs in different areas of an organization, such as staffing, resources, patient outcomes, and high-risk patient groups. With predictive analytics providing data-driven visibility around needs, health systems can better allocate resources and prepare for different scenarios (e.g., an ICU nearing full capacity), decreasing guesswork and scrambling when unforeseen situations occur. Accurate predictive modeling also lays the foundation for advanced AI beyond predictive analytics, including retrospective comparisons and prescriptive optimization.
For any type of AI to be successful, health systems must have a strong analytics foundation. Before committing to a use case and starting the predictive analytics journey, organizations should have supporting technology in place and follow a brainstorming process:
The robust technology that can effectively support predictive analytics in healthcare includes three core elements:
By following a two-step brainstorming process, improvement teams can start exploring possible use cases to identify those most likely to succeed and those they can start eliminating:
Laying the data and analytics foundation and becoming familiar with the brainstorming process are crucial to any predictive analytics project, as well as to more advanced AI use cases. Mastering the two-step process early therefore makes future AI projects go more smoothly.
After health systems have laid the technological foundation for predictive analytics and brainstormed possible use cases, they are ready to take the next step of the AI journey. Data science teams are now prepared to evaluate the use case candidates for predictive models (or other AI uses) with the following four-step framework:
Health systems can think of the first framework step as the “question” step. At this point, data science teams meet with other organizational partners to discuss challenges within the organization that predictive analytics could potentially solve. It is important that both teams first agree on the problem. The teams can then move forward and address data (e.g., What data is available?), resource availability (e.g., What current resources are available for the project?), and the business/clinical value (e.g., Will it impact a hospital metric that supports organizational goals?).
For example, if a health system is not meeting the Healthcare Effectiveness Data and Information Set (HEDIS) measures for human papillomavirus (HPV) vaccines, reaching HPV vaccine target rates could be a good use case for a predictive analytics model. In this case, the model could forecast which teenagers are less likely to receive their immunization. Health systems can collect sufficient information about HPV vaccine rates by accessing past vaccine adherence data and patient demographic data. The health system may have a subject matter expert to provide information about why some teenagers fail to get the vaccine and effective approaches to increase vaccination rates.
Lastly, the project has a high clinical and quality impact because it will help the health system meet its HEDIS immunization rate goals. With agreement around the problem (low HPV vaccine rates), ample data, identified resources, and a clear business/clinical value, a health system will likely decide this is a use case they should pursue with predictive analytics. As this use case shows, accurate estimations about which adolescents aren’t receiving their vaccine allow a health system to grow their outreach efforts to appropriate groups, increasing the likelihood that teenagers will receive the HPV vaccine.
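As an illustration of the kind of model such a use case might call for, the sketch below fits a small logistic regression from scratch to flag adolescents less likely to receive the vaccine. The features, labels, and data values are entirely synthetic and hypothetical, not drawn from the article or any real dataset; a production team would use a proper ML library and real clinical data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a simple logistic regression by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_proba(w, b, xi):
    """Predicted probability of on-schedule vaccination."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# Hypothetical binary features per adolescent (invented for illustration):
# [age 13+, had recent well visit, prior vaccines on time]
X = [
    [1, 1, 1], [1, 1, 0], [0, 1, 1], [1, 0, 0],
    [0, 0, 0], [1, 0, 1], [0, 1, 0], [0, 0, 1],
]
# Synthetic label: 1 = received HPV vaccine on schedule
y = [1, 1, 1, 0, 0, 1, 0, 0]

w, b = train_logistic(X, y)
# Adolescents predicted unlikely to vaccinate become outreach candidates.
at_risk = [i for i, xi in enumerate(X) if predict_proba(w, b, xi) < 0.5]
```

The low-probability group (`at_risk`) is what would feed the outreach effort the article describes.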
By the project kickoff, the data science team has decided that a specific use case qualifies for predictive analytics. After the team has committed to a machine learning (ML) project, it needs to understand the nuances of data quality, quantity, and accessibility related to the use case.
For example, in the HPV example above, a health system might have clinical and utilization information already available in the hospital, but the team is also interested in additional social determinants of health (SDOH) data. While the hospital didn’t necessarily collect SDOH information about each patient, public data sets are available that teams can use to identify potential areas where the likelihood of receiving the vaccine is lower. This information can guide a health system’s outreach efforts to increase immunization rates.
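One common way to combine hospital records with public, area-level SDOH data is to join on a geographic key such as ZIP code. The sketch below shows the idea with plain dictionaries; all field names, ZIP codes, and values are invented for illustration and do not reflect a real schema or real census figures.

```python
# Hypothetical patient records already held by the hospital (values invented).
patients = [
    {"id": 1, "zip": "77030", "vaccinated": False},
    {"id": 2, "zip": "77004", "vaccinated": True},
]

# Hypothetical area-level SDOH table, e.g., derived from a public data set
# and keyed by ZIP code (numbers are made up for this sketch).
sdoh_by_zip = {
    "77030": {"median_income": 54000, "pct_uninsured": 0.18},
    "77004": {"median_income": 61000, "pct_uninsured": 0.12},
}

def enrich(patient):
    """Attach area-level SDOH fields to a patient record; None if no match."""
    area = sdoh_by_zip.get(patient["zip"], {})
    return {
        **patient,
        "median_income": area.get("median_income"),
        "pct_uninsured": area.get("pct_uninsured"),
    }

enriched = [enrich(p) for p in patients]
```

The enriched records give the model area-level context (e.g., insurance coverage) the hospital never collected directly, which is exactly the gap public SDOH sources can fill.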
In step 3, teams begin building the predictive model. It is important to remember that the model development step is an iterative process. Throughout the predictive model development, many data science practitioners will discover they need additional data. If a team decides to find more data, it needs to ask the same questions from the project kickoff step (e.g., What data is available?). Skipping the “question” step at this point could lead to adding irrelevant data that could skew results.
As they develop the predictive model, data science teams may also ask, “Do we need to look at existing data in a new way?” Sometimes teams have the appropriate data but need to reframe it. Looking at data differently can change how a model predicts future outcomes. For example, instead of reviewing diagnosis and medication data individually, a team could review the data in groups.
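Grouping individual codes into broader categories can be sketched as a simple mapping step. The grouping below is hypothetical: the diagnosis codes shown are illustrative ICD-10-style examples, and a real project would use a clinically validated grouper.

```python
# Hypothetical mapping from raw diagnosis codes to broader clinical
# categories (codes and groupings chosen for illustration only).
CODE_GROUPS = {
    "J45.20": "respiratory",
    "J06.9": "respiratory",
    "E11.9": "endocrine",
    "F41.1": "behavioral",
}

def group_counts(patient_codes):
    """Collapse a patient's raw code list into per-category counts,
    so the model sees one 'respiratory' feature instead of dozens of codes."""
    counts = {}
    for code in patient_codes:
        group = CODE_GROUPS.get(code, "other")
        counts[group] = counts.get(group, 0) + 1
    return counts

features = group_counts(["J45.20", "J06.9", "E11.9"])
# → {"respiratory": 2, "endocrine": 1}
```

Instead of dozens of sparse per-code columns, the model now receives a handful of denser category counts, which is one concrete way to “look at existing data in a new way.”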
The last step of the four-step framework is to operationalize the predictive model. In this step, the data scientists and collaborative partners reap the benefits of the first stages of asking questions, organizing information, and building a predictive model. Teams shift their focus from the data going into the model to the data coming out of the model. Using the predictive model results, data science teams collaborate with operational and clinical partners to discuss and understand the new insights, how to use those insights to design interventions, and which data they need to support workflows.
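Operationalizing often means turning raw model scores into something a clinical team can act on, such as a ranked outreach worklist. The sketch below is a minimal, hypothetical example of that handoff; the IDs, scores, and cutoff are invented.

```python
def build_worklist(scored_patients, top_n=2):
    """Rank patients by predicted risk (highest first) and keep the top N
    for outreach. scored_patients: list of (patient_id, predicted_risk)."""
    ranked = sorted(scored_patients, key=lambda pair: pair[1], reverse=True)
    return [patient_id for patient_id, _ in ranked[:top_n]]

# Hypothetical model output: (patient_id, predicted risk of missing the vaccine)
worklist = build_worklist([(101, 0.82), (102, 0.35), (103, 0.67)])
# → [101, 103]
```

This is the point where the focus shifts, as the article notes, from the data going into the model to the data coming out of it: the worklist, not the model, is what drives the intervention workflow.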
To increase the speed and efficiency of AI in healthcare from predictive analytics to prescriptive optimization, organizations should begin by identifying the best use cases to pursue. With the right use case assigned to a predictive analytics project, improvement teams can transform care, prepare for possible outcomes, intervene early, and deliver results that garner support from high-level leaders. When data science teams start the AI journey with the best use cases, they empower themselves to apply these skills to AI pursuits beyond predictive analytics that unleash even more insight.