A Health Catalyst Overview: Learn How a Data-First Strategy Can Drive Outcomes Improvement


Jared Crapo:                 Thanks, Chris, and thank you for joining us today. In the next hour, Sam and I would like to share with you some of our ideas about how we can use a data-first strategy to advance clinical practice and outcomes improvement. We'll talk about some big ideas and some practical recommendations that we hope you can use right away. I'd like to begin by telling you two stories.

Ignaz Semmelweis was a young physician who was appointed chief resident of the first obstetrical clinic of Vienna General Hospital in 1846. The first obstetric clinic trained physicians, and the second obstetric clinic trained midwives. The maternal mortality rate in the second clinic was about 3%. The maternal mortality rate in the first clinic averaged 10%, and often spiked as high as 30%. Most of the women died of a disease they called childbed fever, which today we would call sepsis. Semmelweis couldn't figure out why the mortality rate was so much higher in the first clinic, until one day his friend and colleague Jakob Kolletschka was cut with a student's scalpel while performing a post-mortem examination, and a few days later Jakob died of a disease very similar to childbed fever.

Semmelweis instituted a hand washing policy using chlorinated lime because he found that this solution worked best to remove the putrid smell of infected autopsy tissue. After instituting the hand washing policy, the mortality rate dropped by nearly 90%. Semmelweis had no scientific basis for this chlorine hand wash, but the data clearly showed that his intervention worked. Everybody else thought he was crazy. He got fired from his job. He was committed to an insane asylum and that’s where he died.

In the late 1850s, Louis Pasteur developed and proved his germ theory, which provided a scientific explanation for Semmelweis' observations. Building on Pasteur's germ theory, Joseph Lister, the guy Listerine is named after, discovered that carbolic acid reduced infection rates for surgical patients, and physicians began to experiment with antiseptic surgical procedures.

Here's a picture of an apparatus that one of those surgeons would've used during a surgical procedure. It warmed carbolic acid and turned it into an aerosol, and they thought this aerosol would kill the germs in the air, which would prevent infection. It turns out there's a much greater likelihood of infection from contact than from the air, and the aerosol that settled around the surgical site was actually providing most of the benefit. But as late as 1882, at the annual meeting of the American Surgical Association, anti-Listerism was still the posture of the majority of its members.

Fast forward another 20 years to the early 1900s. A few days before King Edward VII was scheduled to be crowned king of England, he came down with appendicitis. The king asked for Joseph Lister's advice about how to proceed, and an antiseptic surgical procedure was performed per Lister's recommendations. This procedure for the king marks approximately the point when antiseptic surgical procedures became common practice. So it took 54 years from the time Semmelweis had data showing that a chlorinated lime hand wash reduced infections until the time it became common practice.

So now the second story. Harald zur Hausen was a German physician who in 1976 published a theory that HPV caused cervical cancer. At that time, that was a pretty outlandish theory; not many people believed him. A few years later, he was able to identify two strains of HPV in cervical cancers, and by 2006 a commercially available vaccine had been developed. Last year, 60% of adolescent children had received that HPV vaccine, and in between, Dr. zur Hausen won the Nobel Prize in Medicine. Now, half the women diagnosed with cervical cancer are between the ages of 35 and 55, and the earliest cohort of vaccinated adolescents is still only 25 years old, so we have a ways to go before the full impact of this vaccine is felt. But it's in widespread clinical practice, and it only took 42 years. We shaved 12 years off the timeline from the century before.

So we have a poll question: how long does it take your institution to turn high-quality medical evidence into common practice? Now, Lister and zur Hausen had to make their own discoveries and validate them themselves. Today, 36,000 randomized controlled trials are published each year. So, starting from published results, how long does it take your institution to turn that evidence into common practice?

Chris Keller:                 Thanks, Jared. We've got the poll question up. Please go ahead and fill that out. Option one, less than one year. Option two, two to three years. Option three, four to five years. Option four, six to eight years. Option five, nine or more years. We'll leave that up for just one more moment while you answer the poll. I also want to remind you that the presentation slides will be provided within 24 hours after the webinar today. You can ask your questions in the questions panel within the GoToWebinar control panel.

All right, we’re going to go ahead and close that poll and launch the results. Jared, here are the results. We have 7% who said less than one year, 31% said two to three years, 33% said four to five, 11% said six to eight, 18% said nine or more years.

Jared Crapo:                 Excellent. So nearly two thirds of you said that it takes between two and five years for your institution to turn evidence into common practice. There have been a few studies in the literature that have examined this question. The New England Journal of Medicine published a paper that says it takes 17 years for validated findings to reach broad clinical practice nationally, so not just your institution, but the majority of institutions in the country. Another interesting data point: at Kaiser Permanente, they launch about one implementation of a new evidence-based practice per month, and the mean time from publication of evidence to the launch of implementation at Kaiser is 14 months. Unfortunately, they didn't publish how long it took to become widespread. But those are some interesting data points related to this question.

Today, nearly six million Americans suffer from Alzheimer's, and deaths are up 123% since 2000. We spent more than a quarter of a trillion dollars on Alzheimer's care in 2018, and it's the only top-10 cause of death for which there is no known way to prevent, cure, or slow the progression of the disease.

So let’s go back to our HPV example. It took 42 years to go from discovery to common practice. Let’s say we could shave another 12 years off that timeline, and it takes us 30 years. If we made the discovery this year, that would mean it’d be 2048 before we had a treatment in common practice. By 2050, it’s projected that we’ll spend 1.1 trillion dollars per year in the United States to care for Alzheimer’s patients.

I believe that the use of data to advance clinical practice will have a bigger impact on healthcare than the discovery of antibiotics. We are at the beginning of a very exciting time in healthcare where data is going to provide the fuel for future advancements in health.

Today we'd like to talk about a data-first strategy that you can use to help advance the practice of medicine in your institution. We'll talk about some techniques and strategies to build institutional analytical skills, and then we'll share an example of how those things came together at one of our customers to deliver real results. So let's start with the data-first strategy. Ignaz Semmelweis kept all his data in his personal notebook. Today, with the explosion of data that we've experienced, there is no way a personal notebook can keep track of everything we need. In fact, some research done at the University of Alberta determined that only 8% of the data they needed for population health could be found in their electronic health record, which means more than 90% of the data they needed had to come from elsewhere. With this explosion of data in the human data ecosystem, we need a strategy to help us manage and apply that data to the advancement of healthcare.

And so I'd like to share with you my six rights of a data-first strategy. The first one is that you need the right data. You need a strategy and a platform that can utilize all the available data that you have, not just from your EHR, but from socioeconomics, social determinants of health, and environmental data. Here are a couple of maybe interesting factoids: the number of chronic conditions that you have is a great predictor of emergency room usage. It turns out that whether you have a swimming pool is also a great predictor of emergency room usage. So that's an example of how we could combine other environmental data, data we might not think of, that could be valuable. In addition to collecting the data that we have, we need to generate better, higher-quality data. For example, many institutions have no idea what their actual cost to provide a procedure is, so maybe we need to generate better activity-based costing data. Maybe we need to do a better job of capturing patient-reported outcomes.
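As a minimal sketch of how that kind of combination might work, assuming entirely hypothetical extracts and column names, EHR-derived chronic-condition counts and a property-records feed could feed a single emergency-room-usage model together:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical extracts: chronic-condition counts from the EHR,
# and a county property feed with a has_pool flag.
ehr = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                    "chronic_condition_count": [0, 2, 5, 1]})
property_feed = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                              "has_pool": [0, 1, 0, 1]})
er_history = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                           "used_er_last_year": [0, 1, 1, 0]})

# Join the EHR data with the environmental data into one feature set.
features = (ehr.merge(property_feed, on="patient_id")
               .merge(er_history, on="patient_id"))

# Both signals become predictors in a simple ER-usage model.
model = LogisticRegression()
model.fit(features[["chronic_condition_count", "has_pool"]],
          features["used_er_last_year"])
print(model.predict_proba([[3, 1]]))  # a 3-condition, pool-owning patient
```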

We also need the right governance. We need mechanisms to define and then share definitions for metrics, populations, patient and physician identity, and terminology. We also need a strategy to decentralize stewardship over the data. The more data you have, the more decentralized that stewardship needs to become, so that those who are closest to the data have responsibility for it and can choose how to share it with others who might benefit from it. And finally, institutions need to develop a culture of data-driven decision making and prioritization.

You need to have data available at the right time. Amazon believes that speed is a competitive advantage, and they've leveraged that speed to great effect. If speed is a competitive advantage, you need a framework or a rubric to evaluate the cost of being slow, and we have to be willing to examine the entire process. Sometimes when we think about speed, we only think about how long it takes us to process the data. We also need to think about how long it takes us to generate or enter the data, and how that process can be improved. How long does it take us to move data? How long does it take humans to analyze that data and determine what it means?

We need to have the right skills so that we can make sense of data. There are eight core skills, which we're going to talk about more shortly, and there are also three orders of complexity that our analysis might take, which we'll talk more about later. But the right skills are important in order for us to make use of the data that we have.

We also need to make that data, or that analysis, available at the right place. It needs to be available to decision makers at the point where they're making decisions. If that's a clinician, that's the point where you're prescribing or ordering tests. If you work in accounts receivable, it sure would be nice to know a particular patient's propensity to pay before you call them on the telephone. We need to make data available in a variety of form factors and modalities, and we need to make it available to everybody in our organization.

Finally, we need to have the right kinds of applications that can interact with that data. Most applications deployed today are process-first applications: they were designed to automate and electrify a particular process. We need to decouple the data from those applications, which then allows us to innovate with new tools, workflows, and capabilities. Applications need to be able both to consume data and to produce new data that can be equally shared among all the applications in an ecosystem.
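As a toy sketch of what that decoupling means, assuming nothing about any particular vendor's design, picture a shared data layer that applications read from and write to, so one application's output becomes another application's input:

```python
# A shared data layer owned by the platform, not by any one application.
class SharedDataStore:
    def __init__(self):
        self._tables = {}

    def write(self, table, rows):
        self._tables.setdefault(table, []).extend(rows)

    def read(self, table):
        return list(self._tables.get(table, []))

store = SharedDataStore()

# A scheduling application produces encounter data ...
store.write("encounters", [{"patient_id": 1, "type": "hip_replacement"},
                           {"patient_id": 2, "type": "knee_replacement"}])

# ... and a separate analytics application consumes it and produces
# new data that any other application in the ecosystem can read.
counts = {}
for row in store.read("encounters"):
    counts[row["type"]] = counts.get(row["type"], 0) + 1
store.write("encounter_counts", [counts])
print(store.read("encounter_counts"))
```

The point is only the shape: because neither application owns the data, a new tool or workflow can be plugged in without touching the ones already there.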

So Health Catalyst has a data operating system platform that has been designed to support a data first strategy. It combines modern software engineering practices with lessons learned from many years of health care data warehousing work. And as you can see, there’s a core analytics platform that’s good at moving data, at ingesting data, at co-locating data. There are also a core set of what we call fabric data services. These are software APIs that allow new capabilities to be plugged into the platform and allow higher level services to access and utilize those core building blocks. And then on top of this platform, Health Catalyst has developed a number of our own applications. Our clients have also built their own tools on top of this platform. And we’re at the beginning stages of allowing third parties to build and deploy and sell their own applications that work on top of this data operating system.

We don’t have time to go into a lot of detail about the data operating system today, but if you’re interested in learning more, we’ll have a way that you can reach out to us at the end of the webinar to get some more information about the data operating system.

So now that we've talked strategy, Sam's going to tell us a little bit about a couple of practical techniques that we can use to help execute on that strategy.

Sam Turmen:                Perfect. Thank you so much. So, a few practical examples. This may seem a little odd, but I wanted to draw a comparison to something outside of healthcare. So, Ford or Chevrolet: what's your preference? If not Ford or Chevrolet, why don't we think about Apple or Android, Costco or Sam's Club? If you don't have a good example, just think of something you really love, for example, Chick-fil-A. Keep that in mind as we walk through this example.

When I think Ford or Chevrolet, I immediately think of the bias that my father and my favorite uncle had for opposite brands. And why is there that bias? My father, in 1972, owned a Ford that was horrible, and he swore he would never, ever drive another Ford vehicle. Hence, we are now a Chevy family, right? My favorite uncle in '73 had a Chevrolet that was horrible, and hence he drives a Ford. That bias was driven by huge variation in product, performance, and customer experience back in the early '70s. Fast forward to 2018: I happen to be in the market for a truck myself, and I really don't have a preference between a Ford and a Chevrolet. I'm going to choose what economically makes sense and what has the feature that's most compelling to me.

So why is there that big difference between 1972 and 2018? I'd say it's because there are common structures that are reducing variation. So here's my dream truck, a 2018 Chevy Silverado. Their latest mantra is "When nothing less than the most dependable will do." Well, what is driving that dependability for these brands today? I would say it's that there's limited to no variation in the base platform: the same chassis, the same powertrain, the same transmission, the same body panels. Overall, a common design exists among all half-ton pickup trucks.

What does this do for us? It supports safety, low variable cost, easy maintenance, and dependability for the customer. Well, not everyone wants the exact same truck, so there is customization available, right? We can customize this base platform with any tools we see fit. We can do that through the manufacturer, with the base model, the luxury model, the most luxurious of the luxury models, and on top of that we can apply anything we'd like from the aftermarket or a third-party dealer: new tires, rims, light kits, you name it, to make this tool as efficient and as specialized as I need it for my needs.

So it is, I would say, with our data structures. Let's take claims, for example. With claims, nothing less than the most dependable will do, right? I think anyone on this call who has had the privilege of working with claims data will probably say that it is not the most joyous thing to delve into. However, what can we do to make it a little bit better? We can start to limit that variation to create dependability. What creates dependability? Limiting the variation within that data structure, within the coding, within the mapping of the pipelines, within the governance of our claims data. And what will that do for us? The same things it does for the [inaudible 00:21:45]: it's going to promote safety, low variation, easy maintenance, and dependability. Are there customizations available on top of this? Absolutely. Through the manufacturer, in our case Health Catalyst, we have a number of applications such as a measures builder, which we'll talk through here in a moment, a population builder, activity-based costing tools, et cetera.

But we also have available a number of third-party tools, so we offer an open API on our operating system. Any third-party tool you already have and are leveraging, we can point at our data structures. We also have in use today a number of EMPI tools, such as VisionWare's MultiVue tool, natural language processing tools through nDepth offered by the Regenstrief [inaudible 00:22:34], and a number of other Clinical Architecture technology services built directly as customizations on top of these common baseline structures of our data.

So is there just one baseline structure, for the half-ton Silverado? Absolutely not. There's one for the mid-sized Colorado, the same kind of common structures for a three-quarter-ton pickup, as well as a one-ton. Also think of the many cars, vans, and other truck lines that Chevrolet or Ford might offer. So it is with data and with Health Catalyst: we have our claims infrastructure, we have population- and registry-based structures that we can customize and build on top of, and we have admissions, orders, labs, and costing, to name a few, all governed base structures for common use cases that we can then add some variation on top of.

So once we have these base platforms in place in a common architecture, how do I go about making some of those customizations, and how can I do that easily? For creating customizations on common data structures, I'm going to show two quick demonstrations: one through screenshots, and one where we'll actually jump into the tool. The first is our measures builder application. Across healthcare we have the burden of reporting on certain measures, be it to CMS or to individual contracts, and of course none of them have the same definition or population attached. We don't necessarily have an app to fix the regulation. However, we do have the ability to customize which measures we choose, see them in a streamlined process, and pull them off of the disparate spreadsheets and desktops where they're being managed now into one central repository. And once we've done that, we can start to build populations around those measures for specific outcomes in downstream applications.

So jumping quickly into the measures builder tool, let me show a few screenshots. Typically we find with our customers that most of their measures are kept literally in spreadsheets, so we want to upload those into a central repository. We're going to choose our CSV file, make sure we parse those measures, and convert them, and once they're in and finalized, we can start to look through the value sets of each measure: what are their descriptions, code sets, and codes? We can look by specific contract: what is the estimated upside to meeting these measures, and what are the associated measures? And maybe more importantly, I can search across all my measures for diabetes, choose a handful of them here on the left-hand side, and quickly run a comparison to see where we have overlap, where we can get the most juice for the squeeze from specific interventions while making sure those measures are being met. That's how I can free up some of the time of my office manager and analysts, so that they can automate the process of managing all of these different disparate measures and start spending more time on real outcome improvement work, potentially even working toward curing Alzheimer's, or whatever other outcome there might be.
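As a rough sketch of that overlap comparison, assuming a hypothetical measures.csv with one row per measure name and per code in its value set:

```python
import csv

# Load measures exported from the spreadsheets into one repository:
# hypothetical columns are measure_name and code.
measures = {}
with open("measures.csv", newline="") as f:
    for row in csv.DictReader(f):
        measures.setdefault(row["measure_name"], set()).add(row["code"])

# Search across all measures for diabetes, then compare each pair and
# report the codes they share -- the overlap where one intervention
# can satisfy several measures at once.
selected = [name for name in measures if "diabetes" in name.lower()]
for i, a in enumerate(selected):
    for b in selected[i + 1:]:
        shared = measures[a] & measures[b]
        if shared:
            print(f"{a} / {b}: {len(shared)} shared codes")
```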

Another example I want to share is our population builder tool. So great, we've identified a lot of measures; how do I now build, on top of that common base structure, the population of patients that I need? How do I start to refine those cohorts? How do I edit them, save them, replicate them, and move them to a downstream application? I'd like to show a quick demonstration of how we can do that with our population builder tool. First and foremost, as we look at this tool, I'm looking at a population of 1.85 million patients. This use case, which we'll use now, we'll also piggyback on a little bit later: I'm looking for those with total hip and total knee joint replacements. I'm going to click and drag over my encounter detail and, looking by MS-DRG, I know that for this measure I'm looking specifically for MS-DRG 469 and MS-DRG 470.

So I click and move those over, apply them, and choose to execute. I'm going to see my 1.85 million patients go down to 2,032. That may be specific enough for this use case, but maybe we need to get a little more detailed with the demographics. Even though hips and knees are Medicare's highest reimbursement and highest expenditure, maybe I want to make sure those demographics are for my geriatric population, so I'm going to look from age 65 to 110 and execute and apply those filters. Now I'm down to a little over 1,200 patients. And maybe for this next use case I want to look at some lab data: who has had the proper workup before this procedure? So I'm going to select everyone who's had a CBC, everyone who's had a urinalysis, and let's look for everyone who's had an arterial blood gas, which I understand is extremely painful and I hope I never have to have one of those, and let's execute that query.

Okay. That took another 20 or so patients away; we're down to 1,215. Again, is this detailed enough? Maybe it is or maybe it's not. To get even further down into refining this population, in this example I'm going to use a diagnosis. As we all know, we have a bit of an opioid epidemic here in the U.S., so I'm going to look for anyone who's ever had a diagnosis of substance abuse. While it's searching through all those different codes, what I'm going to do is highlight those specific patients who've had a past diagnosis of substance abuse. As it executes, I see that it comes down to 12 patients. Maybe these patients need a little extra care; maybe they'd be good candidates for a care campaign post-surgery, so I can give them a little extra attention, make sure we're managing their pain efficiently, and keep them away from taking too many opioids. I can see specifically who these 12 patients are and export them to a CSV file. And once I've used them in a downstream application for a care campaign, I can come back and rerun this but simply exclude those 12 patients, and I'm back to my list of 1,203 patients as well.

So what we've done here: I, as a non-clinical user, have been able to come through the way a provider of care might and narrow down to a population of 12 very specific patients that I want to use for an outcome improvement effort around my hip and knee applications. I don't need to know SQL. I don't need to send a ticket to my analyst to create a report just to find out, "I wonder if we could look at just the diagnosis for substance abuse and see who those patients are." However, I do have the ability to see the SQL code here if I need to use it elsewhere or copy it to use somewhere else. I can save this registry, I can publish it so I can use it in other applications, and of course I can come through and give it its own name as we go forward as well.
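For anyone curious what that generated query amounts to, here is a minimal pandas sketch of the same narrowing logic, with hypothetical file and column names standing in for the real data model:

```python
import pandas as pd

# Hypothetical extracts standing in for the platform's base structures.
encounters = pd.read_csv("encounters.csv")  # patient_id, ms_drg, age
labs = pd.read_csv("labs.csv")              # patient_id, lab_type
diagnoses = pd.read_csv("diagnoses.csv")    # patient_id, dx_category

# Step 1: total hip and knee replacements (MS-DRG 469 and 470).
cohort = encounters[encounters["ms_drg"].isin([469, 470])]

# Step 2: the geriatric population, ages 65 to 110.
cohort = cohort[cohort["age"].between(65, 110)]

# Step 3: proper pre-op workup -- a CBC, a urinalysis, and an
# arterial blood gas all on file.
for lab in ["CBC", "urinalysis", "arterial blood gas"]:
    has_lab = labs.loc[labs["lab_type"] == lab, "patient_id"]
    cohort = cohort[cohort["patient_id"].isin(has_lab)]

# Step 4: flag patients with a prior substance-abuse diagnosis, the
# ones who may need extra pain-management attention post-surgery.
abuse = diagnoses.loc[diagnoses["dx_category"] == "substance abuse",
                      "patient_id"]
flagged = cohort[cohort["patient_id"].isin(abuse)]
flagged.to_csv("flagged_patients.csv", index=False)
```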

So that's one example of how we can take the base structure of data and customize it for a specific use case. A little later in our presentation today, I'm going to show how we actually use this population for a very specific outcome improvement. [inaudible 00:30:21] back to you.

Jared Crapo:                 Thanks, Sam. So there are two examples: how we can share and manage metrics more effectively, and how we can easily define and share cohorts of patients. Two great examples that can support a data-first strategy.

Next, we're going to talk about building institutional analytics skills. Once you have a good platform, you need the right skills to leverage that platform. Here is a list of four job titles in an analyst job family, starting with data scientist at the top and going down to report writer at the bottom. If you look at many healthcare institutions, we've observed that there are lots of report writers and analysts and not very many data scientists. An ideal mix of these resources is probably something different than where we are today; we've observed this in lots of different institutions that we've talked to. So how do we measure that gap, and then how do we close it? But before we talk about that, I'd like you to think about your institution's analytic situation. So we have another poll question.

Chris Keller:                 Very good. Thanks, Jared. Thanks, Sam. Poll question is up. Please go ahead and take a few moments to answer this. And we had a little fun with the options that we presented in this poll question, so hear these out. Option one to the question, how would you describe your institution’s analytic situation? Large report queue supplemented with occasional ad-hoc analysis. Option two, broad access to modern, self-service visualizations. Option three, IT thinks they are doing a good job, but clinicians and the finance team don’t feel their needs are being met. Option four, analysts collaborate with clinicians and provide real time insights, data scientists regularly create and update machine learning models. Option five, data scientists and analytics engineers spend lots of time on Snapchat because there isn’t enough for them to do. I think you might have biased the polls there, Jared.

Jared Crapo:                 Sorry, man.

Chris Keller:                 Okay. We'll leave that up for just another moment here. Again, as a reminder, the slides will be sent out within 24 hours after the webinar today. You can ask your questions within the questions panel in the GoToWebinar control panel. Okay, we'll go ahead and close our poll and show the results. Here you go, Jared and Sam. 40% said large report queue supplemented with periodic analysis, 13% said option two, 29% option three, 13% said option four, and 6% said option five.

Jared Crapo:                 Man, I need to go work at one of those six percent facilities. There are a couple of encouraging things here. 13% of you feel like your analysts collaborate well with clinicians and you have an effective data science program in place; that's fantastic. One of the observations here: almost a third of you would describe your situation as finance and clinical users not feeling like they get satisfactory analysis. And it's also very common that analytics means reports. There are a lot of institutions where, if you have a question, they'll build you a report, and it's fairly static, some PDF that comes back, with occasional ad-hoc analysis. So super interesting. Thanks for your responses.

So let's press into this a little bit further and talk about eight core analytic skills. The first of those is data movement. How do we move data from one place to another? We may need to transform it in order to effect that movement. That's a super important skill in data analysis. The second skill is data visualization. How can I surface insights from that data in a way that's easier for humans to digest? The third core analytic skill is query. How can I construct a query that assembles data from multiple disparate data sets into a uniform result set? Fourth, how do I model and design data structures that can store data in a way that is representative of what that data means in the real world? That data modeling exercise, designing the data structures that store data, is a super important analytics skill.

To be an effective analyst, you need domain expertise in the area in which you're providing analysis, and that's a super important skill. Generally in healthcare we do a pretty good job in the area of domain expertise. Analysis itself is a core skill. Some people do it really, really well: they're able to understand what the data is telling them and then describe to others what that means. That's so critical that we named the whole category after it. Then machine learning. Humans are really good at understanding data, but when you get to vast quantities of data, it's hard for humans to keep it all in our brains. It turns out machines are really good at that, and there's a distinct skill of teaching machines how to look for correlations and insights in data: how do we build and train those models so that machines can look for insights instead of just humans? And finally, how do we apply our data processing skills and capabilities to process improvement in other areas of our business? How do we work with clinicians to improve clinical outcomes? How do we guide our finance team and our executives to make better decisions using the analysis that we've created?
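To make the query skill a little more concrete, here is a small sketch, with made-up column names, of assembling two disparate sources, an EHR extract and a claims extract, into one uniform result set:

```python
import pandas as pd

# Two disparate sources that describe the same events with
# different schemas (hypothetical column names).
ehr_encounters = pd.DataFrame({"mrn": ["A1", "A2"],
                               "admit_dt": ["2018-01-03", "2018-02-11"],
                               "drg": [470, 469]})
claims = pd.DataFrame({"member_id": ["A2", "A3"],
                       "service_date": ["2018-02-11", "2018-03-20"],
                       "ms_drg_code": [469, 470]})

# Map each source onto one uniform schema, union them, and drop the
# rows that appear in both sources.
uniform = pd.concat([
    ehr_encounters.rename(columns={"mrn": "patient_id",
                                   "admit_dt": "date",
                                   "drg": "ms_drg"}),
    claims.rename(columns={"member_id": "patient_id",
                           "service_date": "date",
                           "ms_drg_code": "ms_drg"}),
], ignore_index=True).drop_duplicates(subset=["patient_id", "date", "ms_drg"])
print(uniform)
```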

So those are eight core analytic skills that are super important to build out inside your organization. In addition, there are three orders of complexity in analytics. You may have heard these terms before, and I'm going to use a little metaphor from mathematics to help articulate the difference between these three orders of complexity. The first order of complexity, or what some would call the zero order, is descriptive. That helps us answer the question: what happened in the past, and what's happening now? Our mathematical metaphor here is algebra, the quadratic formula that we can use to solve quadratic equations. So the first order of complexity is roughly equivalent to algebra.
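For reference, the formula on that slide is the standard quadratic formula, which gives the solutions of $ax^2 + bx + c = 0$:

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$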

The second order of complexity in analytics is called predictive analytics, and that helps us answer the question: what's likely to happen in the future? That's a lot more complex than just telling us what's happening now. Here's an example of a statistical function that computes the correlation between two data sets; we might use a function like this to help us figure out what's likely to happen in the future. The highest order of complexity in analytics is prescriptive: what interventions should I apply that will have the biggest impact on my desired outcome? That's like multivariate calculus. Someone smarter than me told me that's what that is; I have no idea what it means, but I can parrot the words. But those three types of analytics, descriptive, predictive, and prescriptive, vary in their complexity.
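The statistical function on that slide is presumably something like the Pearson correlation coefficient between two data sets $x$ and $y$:

$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}$$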

So Health Catalyst has provided for many of our clients a technical assessment that does three things. It looks at the demand for analytics in the institution, which is what these multicolored bars represent, and we measure it in each of the orders of complexity: what's the demand for descriptive work, for predictive work, and for prescriptive work? Then we interview analysts, in this example there were 41 of them, and we plot their current skills on the chart over the demand for those skills. As you can see in this example, in descriptive analytics we have some capacity: our skills exceed the demand for that type of work. In predictive and prescriptive, we have more demand than we have skills, so we have a skills gap.

So this kind of assessment can be super helpful for you to figure out what you need to do to expand your skills and in which areas you need to focus. So let’s quickly talk about some strategies that you can use to close that skills gap between wherever you are and what the demand for analysis is in your institution.

The first strategy is to consolidate analytic expertise. Go look for all the people in your institution that have analyst in their job title, and I'll bet you'll find they're scattered all over the place. If you can consolidate many of those people into a single group, not necessarily all of them, you can more effectively prioritize and efficiently respond to requests for analysis, and by consolidating you also gain a great way to raise the level of skills and expertise of those analysts. The second strategy is mentorship and education: how do we elevate the skill level of our analysts so that we can better meet the demand for analysis that our organization has? And finally, the third option is outsourcing: bringing in experts from outside who can help us quickly expand our capability to meet that demand.

So let's compare and contrast some of the benefits of each of these options. If you choose a mentoring and education option, you need to get the skills mix right, but you also need to organically expand and grow those skills, and that requires some investment in mentoring and in training, both in terms of time and of money. It generally takes some time for those investments to pay off, so if this is the value delivered by our analytics organization, you'll see there's kind of a slow ramp as our investments mature and begin to pay off. Contrast that with an outsourcing approach, where by bringing in outside experts we may have to invest less, in terms of both dollars and time, to accelerate our delivery of value.

These options aren’t mutually exclusive. You might consider mixing those. And Health Catalyst has customers that fall on both ends of the spectrum. We have some customers where we are the recipient of that investment from clients in mentoring and education. And our job is to teach. We have other clients who have said, “We would like to outsource our entire analytic function to Health Catalyst.” And so you can mix and match these and find the right balance between these two strategies to meet your appetite for how much you want to spend and how long you’re willing to wait to advance your expertise.

So those are some techniques and some tools that you can use, and a framework to think about building your institution’s analytic skills. So once we have the right strategy in place, we have the right skills in place, how do we bring that all together to improve clinical practice? And Sam’s going to walk us through a use case from one of our customers and show you how they were able to use these tools and techniques, and the results they were able to accomplish.

Sam Turmen:                Absolutely. Thanks so much. Very quickly, let's look at how we've done with Thibodaux Regional Medical Center, a small community hospital in the Mississippi Delta. Though they're small, they have won numerous J.D. Power and Associates awards. In this case, they are focusing on the IHI Triple Aim: better health, better value, and better population health. And the research shows that over seven million Americans living in the U.S. right now have some type of hardware in either their hip or their knee. These total joint replacements are the most common surgeries for Medicare beneficiaries, to the point where Medicare reports over seven billion dollars annually for this type of hospitalization alone, we're performing over 400,000 of these cases a year, and there is substantial variation in the cost: Medicare shares that the cost ranges from $16,500 to $33,000 per case.

In the case of Thibodaux Regional Medical Center, when they started working with Health Catalyst, hip and knee emerged as one of the top two clinical areas driving variation in their care process. They were able to identify sizable variation in readmissions, length of stay, cost of care, and complication rates, to the point where they developed a care team they called the Care Transformation Orthopedic Team, which set multiple outcome goals. The goals were focused around education, standardizing their care process, redesigning order sets and workflows, and deploying an advanced application, the Health Catalyst joint replacement hip and knee tool, that I'll show you here in a moment. They were able to leverage what we commonly call the three systems at Health Catalyst: having the right analytics, having the right best practices (what they should be doing), and being able to drive adoption with the providers of care. And the results kind of speak for themselves there in the bottom right-hand corner.

So, a 76.5% relative reduction in complication rates; a 38.5% relative reduction in length of stay [inaudible 00:45:46] for their hip patients, and 23.3% for their knees. They were able to improve patient education and early mobilization and decrease use of opioids, which contributed to their shortened length of stay. And in under two years they were able to save up to $815,000 in costs. All this they were able to accomplish while maintaining a very high patient satisfaction rate.

So let me show you the tool they used to actually track their performance here. Their team identified the outcomes they wanted to improve: mortality rate, variable costs, length of stay, and 30-day readmissions. In my outcomes tab here, I can see how we're doing across our 30-, 60-, and 90-day readmission rates. I can look at my discharge versus variable costs, where we were discharged to, and how we're performing with our discharge [inaudible 00:46:46] and our lengths of stay, but probably most important is this pre-op and post-op tab. So pre-operatively, how are we doing as our patients are coming into our care for their hips and knees? They found at Thibodaux that a lot of the patients were showing up and weren't necessarily in the best shape to be good candidates for the surgery. So what are some of their general risks? What are their diabetic scores? Are they heavy tobacco users? What are the obesity rates for their patients? How are we doing overall on whom we're electing to operate? And then how are we doing preparing those patients?

So they started a joint class attendance, who’s actually coming to that joint class to set proper expectations? What do they need as they come into our care? Who do they need to take them home? What can they expect? And set those expectations going forward. How often are we creating and completing a pre-op phone call? A simple intervention, but one that the data showed was not being completed as often as we would hope. And then post operatively, how are they doing as they’re leaving our care? How are we managing their pain? How are we doing by provider? What’s the average pain score and what’s the average morphine equivalent? How are we doing on the activity? How quickly are we getting them up out of bed for their first physical therapy? How many hours until their first occupational therapy? How are we doing with our pressure ulcer rates? Et cetera.

And then we can start to compare these cohorts and see where there might be some variation. So I'm looking just at my hip procedure patients and going to my cohort comparison. Looking from my blue cohort over to my orange, let me change my hips to knees and see how we're comparing across the board here. Or maybe another use case: let's look at our oldest and most at-risk patients, 75 and older, copy that cohort over here, and see how they compare to those 55 to 65. We see that our 75-and-older patients are being discharged to skilled nursing facilities a lot more often, while others go home, which is to be expected. But then we can go back through these different tabs and start to look for variation by provider, by MS-DRG, et cetera, and take it down to a patient line-item level. So we've applied these filters. Let's look at one patient and see a snapshot of their pre-op, intra-op, and post-op information, some of their demographics, and how they've done as they've come into our care.

So earlier today we identified a population based off of a common structure. We went through our population builder and narrowed down our population. That population can then be applied downstream in this type of application to see how we're doing at driving specific outcomes, leveraging the best practices, analytics, and adoption methodologies offered through Health Catalyst. I'll turn it back over to you.

Jared Crapo:                 Thanks, Sam. So let's review. We talked about the six rights of a data-first strategy: we need the right data and the right governance, we need to make data available at the right time, we need the right institutional skills, and we need to make analysis available at the right place and in the right applications. We talked about some strategies you can use to build institutional analytics skills so that you get the ideal skill mix to meet the demand for analysis in your organization. And then finally, we showed how this has all been applied at Thibodaux Regional Medical Center, a small community hospital in the Mississippi River Delta.

Thank you for being with us today. We've got some time for questions. I hope you found something in our discussion today interesting, and I hope you had a little aha moment about something you could do in your institution to use data to help deliver outstanding care for our patients.

Chris Keller:                 Very good. Thank you, Jared and Sam. Great presentation. Before we move to questions and answers, let me first ask one last poll question. Let me launch that now. If you are interested in having someone from Health Catalyst reach out to you to schedule a personal demonstration of our solutions and services specific to your institution, please answer this poll question. Okay, we'll leave this up for just a few moments while we go ahead and begin answering audience questions. A reminder that you can ask those questions via the questions panel in the GoToWebinar control panel; it's an individual pane, so open it up and submit your questions. We've got a handful here that we're going to begin with. Let me open those up.

Okay. I'd like to ask the first question here from the audience. Sam and Jared, what you've shared today is very interesting, well compiled, slick, well put together. What are the biggest complexities in getting going, taking this great set of material and putting it into practice?

Jared Crapo:                 I think there are a couple of really easy things that you can do to get started. First, do an assessment, even if it's a cursory one, of what your current capabilities are, both from a technical perspective and from a skills perspective. And if you're in IT, be a little critical of yourself about what your current capabilities are, because that will give you a first idea of where you can do better. There are a couple of other things that are typically really easy to get started with. One of those we demonstrated: go try to get a list of all the measures that you're computing, in infection control, in quality, in finance. Just try to understand how many of those you have and see if there's any opportunity to simplify.

And then there's another technical thing you can do to quickly get started, and that is to take stock of all the different data sources you have in your current analytic environment and see if there's anything you can do to expand them, so that you have additional data available. You're trying to pull things out of spreadsheets and into an analytic environment. And I'll bet, with a little bit of looking, you'll be able to find something you can do right away in all of those areas.

Chris Keller:                 Excellent. Thank you, Jared. Another question here. The Thibodaux example has some really awesome outcomes, some really awesome results. What do you think was most critical to getting those outcomes and results? And are those results comparable to what other organizations across the country could achieve?

Jared Crapo:                 So at Thibodaux, I think the critical thing they did is they built this care transformation team focused on hip and knee replacements. That team is what looked at the data, supported of course by analysis, and looked at the evidence to determine: what could we work on in our institution to deliver better care? They ended up doing a lot of interventions, and we summarized two years' worth of work into two minutes. But the key was the establishment of that team, which focused on this area for a sustained period. This is not a SWAT team that worked on it for six months and then went on to the next project; it's a permanent team. And if you establish a permanent team to focus on improvement in an area, you can get these kinds of results over time.

Sam Turmen:                If I could just add, that team is multidisciplinary in nature, which means it also has providers of care in it, and that drives adoption, right? If we include folks from multiple different disciplines, and even influential providers, it's no longer an IT project being dropped off on my desk; it becomes my project and our project, people gain some ownership, and that really drives adoption as well.

Chris Keller:                 Okay. So I'm hearing you say dedication to focus, a dedicated team, multidisciplinary in nature. Excellent. Thank you there. We have a good question from Mike Miller about whether we have the ability to display control charts to sustain the improvements and signal special causes of variation.

Jared Crapo:                 The short answer is yes. Special causes of variation are a little bit trickier because it depends on how you're going to document those. Usually that's buried inside a text note inside an EMR, which makes it hard to surface on the chart. But yes, we can create those kinds of control charts.

Chris Keller:                 Excellent. Rapid fire here. We've got a couple of questions from Heidi Bell. Thank you, Heidi. She's asking about the technology used to display the data, whether it's a specific vendor that we partner with or whether some of it is customized. Can you talk about the tools used to display the data?

Jared Crapo:                 Sure. Health Catalyst uses a wide range of tools to visualize data. In the demos that you saw today, if you're familiar with QlikView, you probably noticed that we used QlikView. We have many clients who prefer Tableau, and our visualizations are also available and fully supported in Tableau. We also have some tools that use a native web user interface, and we showed one of those: the population builder tool you saw today was a native web interface. So we have a variety of visualization tools that we support on our platform.

Chris Keller:                 Excellent. A question from Justin Potts talking specifics here. How customizable are those visualizations that you showed, Sam? And a second question also from Justin talks about the data that can be ingested into the platform, whether it’s specific to a certain EMR data source or whether it’s … what the range of possibilities are.

Jared Crapo:                 Hey, Sam, do you want to take those two?

Sam Turmen:                Yeah. I'll take the first part, and then I'll have to have you repeat the second, because I was thinking about my response to the first when you said it. Sorry. First question: yes, customization is absolutely available. Part of contracting with Health Catalyst is that we can help you, what we would call fingerprinting, put your fingerprints on these applications to customize them, and also show you how to do that yourself so you can be self-sufficient if you have the skills and ability to do that. And then either Jared can take the second question, or just repeat it for me real quick so I can [inaudible 00:57:23].

Chris Keller:                 You bet. So here's the repeat, Sam. Justin is also asking about options when it comes to data ingestion: do we have specific data sources that we can and cannot ingest data from?

Sam Turmen:                Oh, you got it. So we have a library of probably 210-plus data sources that we've ingested to date. We really don't have any data source that we've ever said no to, and we do have some proprietary tools that help us ingest data very quickly and efficiently. So there are none that are completely hands-off. There are some we may not have done yet, but that doesn't mean we're not interested or willing to map those into our operating system.

Chris Keller:                 Excellent. Thanks, Sam. Here's a fun question about math, Jared. I think someone's calling you on the carpet and asking you to actually execute that set of formulas there. So that's a challenge you have to-

Jared Crapo:                 No can do, man.

Chris Keller:                 You’ll have to take that on after the webinar.

Jared Crapo:                 Yeah

Chris Keller:                 Another question here about the assessments. The technical assessment seems really interesting, but how do you identify gaps in skills without offending people or causing political or organizational problems when you confront those skill gaps? How do you actually take that on organizationally?

Jared Crapo:                 Yeah. So this is the hard part of any assessment, right? How do you conduct the assessment in such a way that we don’t make people feel bad because they’re not what we think they might need to be? And then also once you have an assessment, how do you implement change in such a way that you keep it positive and keep everybody engaged? And the way you solve those problems varies wildly by institution. We’ve had a lot of practice on the assessment part of conducting our interviews in such a way that we’re able to identify the skill gaps as best we can without having everyone feel like they’re terrible at their job. But the way your institution chooses to deploy change, if you decide you want to close that skill gap, that varies wildly from institution to institution. Some institutions just lay people off. A lot of institutions want to take a less drastic or dramatic approach. So that varies wildly.

Chris Keller:                 Okay. Sounds like a complex problem to solve, but I appreciate the options there. We have reached almost the top of the hour. [inaudible 00:59:51], we didn't have time to answer your question, but we will reply to you afterwards. Thank you for that question. Actually, there's one other question that came in. Let's ask that, and then we'll cut to the end. This person asks, and thank you, Eric: what is a realistic timetable for implementation of useful analytical feedback that can be utilized in practice? Last question, then we'll go ahead and end.

Jared Crapo:                 So, Health Catalyst has customers that have gone from contract signature to multidisciplinary care improvement teams using data to drive their intervention decision-making in eight months. There's opportunity to compress that even further, depending on what domain you're working in. But if it takes you longer than a year to go from contracting to useful analytical feedback, then I think you're probably doing something wrong, or your vendor is. So that would be the target. Faster is better. I think it's doable in six months to ingest data, analyze it, and begin to surface useful insights to improvement teams.

Chris Keller:                 Okay, great. I appreciate that, Jared and Sam. I'll call this webinar a golden webinar. We sure appreciate the good information you shared with us today, and we hope the best for everyone who's attended. Thank you for joining us today. Shortly after this webinar, you will receive an email with links to the recording of this webinar, the presentation slides, and the poll question summary results. Also, please look for the transcript, which we'll include as soon as it's available.

On behalf of Jared Crapo, Sam Turmen, and the rest of us here at Health Catalyst, thank you for joining us today. This webinar is now concluded.