A Health Catalyst Overview: Learn How a Data First Strategy Can Drive Increased Outcomes Improvements

Jared Crapo:                 Hey. Thanks, everybody, for spending some time with us today. We’re glad to have you with us. Today I’d like to share with you a few things that we’ve learned as we’ve worked with our clients, and some of the lessons that we think can help you to build your own data first strategy, which will lead to better outcome improvements.

Let’s start with a population health story. This story has probably played out countless thousands of times throughout the healthcare delivery system around the world. Often, we notice something that’s interesting, like an increase in cancer-related surgeries and inpatient admissions, and we’re not quite sure why it’s happening. And even if we understand why it’s happening, we’re not sure what we can do about it. We generally need more data, and better access to that data, in order to answer these kinds of questions and to deliver better care for our patients.

If you think about a healthcare community, it’s not just an acute facility with the data that it has. There are usually many other venues of care, like specialty clinics, skilled nursing facilities, and other providers. And all of those providers also have their own data that they’ve captured about the patient. In recent years, patients have also started to collect and manage their own data.

So how are we going to bring all that data together to help us solve our care delivery problems, so that we can provide the best care we possibly can for our patients? And usually we end up asking questions like these: Do we have the right data? And if we have that data, do we have permission to access it? And then, maybe most importantly, is that data available to the right person when they need to make decisions for the care of patients?

Sarah Stokes:                Okay, sorry. We’re here at our first poll question. Thanks, Jared. And here we wanted to know, what is the biggest barrier for your organization to provide more data driven insights to clinicians? Your options today are data sharing agreements, technical hurdles, too many competing priorities, or we do this well. That is also an option. I’ll give you just a moment here to respond to these.

Great. We are seeing your votes coming in. We’ll give you one more moment here. After you have voted, we do encourage you to be brainstorming those questions and submit them when you have a free moment, once you’ve voted.

Okay. Just one more moment. The votes are still coming in. Okay, it looks like they’re slowing down here, so we’re going to go ahead and close that poll and share the results with you.

It looks like 25% reported, “data sharing agreements.” 32% reported, “technical hurdles.” 34% reported, “too many competing priorities.” And 9% said, “We do this well.” What do you think about that, Jared?

Jared Crapo:                 So, not super surprising. I think there are very few institutions that really feel like they’ve tackled all of these problems. These are hard problems to solve. Today, I hope that we can share a few insights so that you might learn something new about what you can do in your institution to try and overcome some of these hurdles. At Health Catalyst we believe that when you elevate data as a strategic asset, it enables significantly better decision making and promotes massive improvement in health, cost, and experience outcomes.

I think most people probably agree with that statement. So how are we going to get that done? That just doesn’t happen by accident. We’ve got to be intentional and deliberate in order to accomplish this goal. It turns out that if you have plans, processes, and principles that you proactively apply to ensure that a community’s data is managed, you can maximize the value of that data to the community, and we call that data governance. That’s an important part. This is how you can be intentional about accomplishing that data-driven goal. So we’re going to talk about some of these aspects today.

The first thing that will help us along in that discussion will be to talk about the data lifecycle. How do we capture data? How do we integrate data from multiple sources together? How do we ensure that the right people have access to that data, so that they can gain some new insight and therefore act differently as a result? Those actions then generate new data, which we capture, integrate, grant access to, and so on. This data lifecycle will help us think about some practical tools we can use to achieve our lofty goal.

There are a lot of challenges typically seen in this data lifecycle. There are data quality issues: incomplete, inaccurate, or delayed data. We have data utilization challenges, where we might have great data available, but people don’t know how to interpret it or how to apply it to their work. Or we have fiefdoms, where HR doesn’t want to share their data with finance, who doesn’t want to share their data with clinicians. And so we’re not able to fully utilize that data to deliver better care for our patients.

And finally, often we don’t have great data literacy tools. We don’t have the right mix of resources, who can help us analyze and interpret that data. And we don’t have the right knowledge, skills, and attitudes in order to apply that data to the decisions we make. As a result, it’s very common to see that data is not driving better decision making, and that data infrastructure is seen as an expense, not an asset.

What can we do about that? We’re going to talk about the four E’s of a data governance framework. We need to elevate the status of data as a strategic asset in our organization. We need to define, establish, and operate an effective data governance organization structure. Once we have plans in place, we need to execute improvements in that data lifecycle that we talked about earlier, and apply those governance principles to our actions in that data lifecycle. And then once we’ve done that, how can we pervasively extend that throughout our organization? So this is what we’re going to talk about for the next few minutes.

In the old movie, The Music Man, there’s a traveling salesman who comes to River City, Iowa, and he wants to start a boys band. He arrives in town and he doesn’t know anybody, except he has one friend in town. He asks his friend, “Hey. What’s new? What’s everyone talking about in town?” His friend tells him, “Well, we just got a pool table.” And so Harold Hill, this traveling salesman, makes his pitch about the boys band have everything to do with the trouble the kids are getting into because of this pool table. Because that was the talk of the town, he tied his sales pitch to what everyone was talking about.

When we’re trying to get engagement in our institution, we need to follow a similar pattern. We need to identify something that is important to our organization and figure out how we can tie data-driven improvements to help solve that problem. Right? Harold Hill said, “A boys band will keep your kids from spending too much time in the pool hall because they’ll have something more wholesome to do.” Similarly, in our organizations, we need to find problems that data can help solve, because showing others how data can address the problems our organization faces is what drives engagement. If we can show that data can help us advance our organization’s strategic mission, that will help us elevate both the visibility of, and engagement with, this part of our governance strategy.

I believe that after the people who work at your organization, your data is your most strategic asset. So how can we help organizations realize and think about data as an asset? There are three things that I think you can do. Dr. Brent James from Intermountain Healthcare has projected that up to one third of the things that we do in healthcare are wasteful. And whether or not that’s true, I don’t think anyone would argue that there’s no waste in healthcare. And so if we can use data to help us identify wasteful activities and wasteful spending in our organizations, that can help elevate data as a strategic asset. And if we can demonstrate with pilot projects that, indeed, there are big opportunities there, and that we can extract value and drive improvement from those opportunities, that will help people in our organization think of data as an asset that can help us solve our problems instead of as a cost sink.

When we establish a data governance organization, there are a lot of strategies we could take for how to organize it. W. Edwards Deming, a pioneer of improvement and one of many who’ve contributed to the theory of organizational design and continuous improvement, said that we should organize everything around front-line, value-added work processes. And the reason he said that is because that’s the least wasteful way to organize. If we organized structures around IT systems, if we tried to organize data governance around our EMR deployment, or around our human resources system, or around our finance and accounting systems, we’d spend a lot of time trying to translate those to processes that actually add value. And so if we can reduce that translation by organizing around value-added work processes, that gives us more direct feedback both to and from those work processes.

And so some questions to think about. Do you know what your key processes are? And I’ll share a few thoughts that we have about how you can identify what those key processes are. Do you know how much they cost to deliver? Do you know which ones cost the most? Do you know how much variation is in those processes? Do you know which of those processes are currently effectively supported with data to make them better?

So, at Health Catalyst, we have a framework that helps us think about value-producing healthcare delivery processes, and there are a lot of them. It’s super hard to smash it onto one slide. But the way this slide works and the way you can think about it: The columns in this slide indicate broad areas of clinical care, like cardiovascular care, women’s and children’s, surgery, general medicine, oncology, mental health. The horizontal bars in this chart are support services that span multiple clinical conditions; for example, laboratory services span a broad range of clinical conditions. But laboratory services produce value regardless of the clinical condition for which those tests are ordered.

And so this chart helps you think about value producing processes and can be a starting point for how you think about how you should organize data governance. It’s much more effective to organize data governance around cardiovascular care or around oncology than it is to organize around, “Well, this is our human resources system.” Right? Because the people who understand the domain of oncology will understand what’s valuable, and what’s not valuable there, and what we should be measuring, and what good performance is, and what poor performance is on those things that we decide to measure. So, this can be a starting point for you to think about how to organize around value producing processes.

Once you’ve identified those processes, you have to have a framework to drive improvements in those processes. At Health Catalyst, we think there are three key components to improving a process. The first is we need to be able to answer a question, “What should we be doing?” And that’s best practices from the evidence, from the literature, and from experts in the field. Once we know what we should be doing, we need a way to measure what we are doing, and that’s what the analytics system does. It helps us understand or provides instrumentation on our current performance.

And once we understand the gap between what we are doing now and what we should be doing, we’re probably going to want to effect some change, and adoption is the mechanism by which we drive change within our organization. If you apply all three of those elements in a consistent fashion, you can drive sustained outcomes improvement in any process. Within the analytics part of that three-part framework is where the data lifecycle that we spoke about lives. Each process improvement initiative is going to be supported by this data lifecycle, where we capture some data relevant to that process, we integrate it with other applicable data sources, and we decide who the right people are who should have access to this information, so that they can have better insights to then improve that particular process.

That data lifecycle is embedded inside of our process improvement efforts. When we have a whole bunch of these improvement efforts, we need to have a structured and robust method, which is data governance, to manage that data lifecycle across all of our initiatives, and they could span that big chart of clinical conditions and horizontal support services. We could potentially have a hundred of these improvement processes running in our organization at any one time, and we need a data governance structure robust enough that we can manage all of them efficiently.

We talked about how we can elevate data as a strategic asset in our organization, some things to think about, how we can establish an effective data governance structure. Now let’s talk about how we actually execute against that strategy and against that structure.

A study done by the Alberta Health Organization asked, of the data they think they need to provide an effective population health and precision medicine program, how much of that data is available in their EHR. They believe only 8% of that data is currently in their EHR, which provides a very low-resolution picture. If we were to make an analogy to a photograph, it would be a very pixelated, low-resolution photograph of the patient if we only had 8% of the detail.

As an industry, we are just at the beginning of the digitization of healthcare delivery. We’ve done a pretty good job in the last 10 or 15 years with a major increase in EHR adoption, and something like 97% of hospitals in the US now have an electronic health record deployed, which is a big step forward from where we were 10 or 15 years ago. We’ve made a lot of progress in this area.

There are many data domains that can support population health and precision medicine where we are just at the very infant stages of developing our data assets. The cost of genomic sequencing has dropped dramatically in the last 20 years, and we can now run genetic tests for a few hundred dollars. We are beginning to capture biometric data at scale. We’re beginning to incorporate social determinants of health into our digital ecosystem of human health data. But we’ve got a lot of progress to make in this area.

For you to have an effective data-first strategy, you need to have a long-term data acquisition plan where you’re intentional and thoughtful about what kinds of data do we need to go find, acquire, or capture that will help us to deliver better care for our patients. Sometimes, it’s easy to see maybe one year into the future and say, “Well, we just have a new affiliation with this other hospital, and it would sure be great if we could have their EHR data as part of our system,” or to go back to our population health example from the beginning, “We have some affiliated practices in our community that send their patients to our hospital for acute care or for diagnostic imaging or for laboratory services. It would sure be great if we had some data from them,” but you also need to think longer term than that so that you don’t fail to see the forest for the trees that you’re focused on immediately. Be intentional about these longer-term data acquisition plans so that you can support the other elements of the data governance framework.

Laura and Vi from Gartner defined what they call a health data convergence hub. That’s a piece of technology that can automate the ingestion of data, provide traceability, data lineage, manage identity, security, compliance, and then add maybe algorithmic processing, and then deliver that data to somebody so that they can take action on it.

This is kind of a useful definition to think about a data-integration strategy; however, the thing that Gartner’s definition does not fully surface is that you might need more than one approach to that data integration. Let me explain what I mean.

When we think about the kinds of data that we need integrated and how we’re going to use it, we might divide it into two types. One would be data in the moment. A patient presents in the emergency department. I need a quick way to look at their clinical history. I’m going to order some lab tests, maybe some diagnostic imaging, and perhaps order some medications. That’s a very in-the-moment view of an integrated dataset, very transactional in nature. Yes, I might look at the history of clinical results for that patient to inform my thinking, but it’s a very in-the-moment view of that data, and that’s one type of data integration that’s super important to be able to do. It’s great to be able to query a community health record and see a history of patient activities at other institutions outside of our own, but the purpose of that is to support this admission in the ED or this surgical procedure in the hospital. Our use of that data is very transitory, because once the patient walks out of the hospital, our major use of that data in the moment passes.

Now, that doesn’t mean the data isn’t useful in other settings. It’s great to have historical and analytical insights using a similar dataset to help us understand things like risk prediction models and cost estimates for certain types of care processes, to identify gaps in care (things that haven’t happened that should happen), and to create patient registries. Historically, these two types of data have been kept very separate within our institutions. To really take a big step forward in a data-first strategy, we need to figure out how those two frameworks can feed each other, so that our analytical insights over time can drive better decision making in the moment, and the data that we capture and generate in the moment can improve our analytical insights.

Now, this is a pretty sophisticated data-integration approach, which is going to require pretty sophisticated technology to accomplish. You might think about it as needing some sort of data operating system that can help you accomplish those strategic goals. Do we have the right data, from whichever source it came? Whether it was inside our institution or outside our institution, we need a platform that can accommodate that. It needs to be able to process, accommodate, and analyze that data quickly, so that our data in the moment can immediately inform our analytic models. We need to provide the right insights from that platform. We need to take that data and turn it into insight and wisdom to help us understand the better choices we should make, not just look in the rear-view mirror at what we have done previously. And we need to deliver those insights at the right place.

Most hospital institutions have three or four hundred IT systems that they use to help solve various problems in their organization. Many of those can benefit from better data and better insights. We need a platform that can deliver those insights to all the places where they can be useful, via whatever the method of delivery might be, and there’s a wide range of modalities through which we could deliver those insights. It might be dashboards. It might be mobile applications. It might be embedded inside workflow tools like the EHR or the collection system that people use. It might be to make that data readily accessible in Excel so that analysts can do their own additional analysis with it. You need a platform that can support this broad range of data initiatives and requirements.

Once we have the data integrated and captured, we come to maybe one of the most challenging components of this infrastructure, which is how we grant appropriate permission for people to access this data so that they can use it. This is a really challenging polarity to manage. On one side, we have data protection, and on the other side, we have data sharing, and if we lean too far to one side or the other, we see symptoms that we haven’t dialed that polarity in right.

In Indiana Jones and the Last Crusade, Indy has to pass a series of tests to reach the Holy Grail. In one of them, he comes to what looks like a very deep chasm with a small door on the other side, and he has to figure out how he’s going to cross that chasm to get to the door. It turns out that there is a very narrow path across the chasm, but he can’t see it when he starts. If you remember, once he finds his way across, he throws sand back onto the path so that he can see that invisible bridge when he returns.

It’s really challenging to get the right mix between data protection and data sharing, and here’s some evidence that you’ve found the right balance. One of those evidences is that you have a streamlined approval process for people to request and gain access to data. If we have a five-step process within IT to grant anybody access to certain data, and it takes a couple of months for that process to be approved, that’s probably an indication that we’ve erred too far on the side of data protection.

Similarly, if we’re not aggressive enough in managing our control and access to that data, we might have a data breach problem, or we might have an employee who can look up the medical records of a celebrity, which never happens, but we’ve got to find the right balance between data protection and data sharing. We need to have a consistent and robust auditing system in place. I believe that you need to err on the side of granting access to data with a “trust but verify” type of approach. Clinicians broadly have a desire to do what’s best for the patient. Let’s leverage that desire to give them the best data that we can give to help them accomplish that goal, and then audit consistently and regularly review those audits to make sure that our teams are using that data appropriately.
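As a hypothetical illustration of what that “trust but verify” auditing might look like in practice, here is a minimal sketch in Python. The access-log shape, the care-team relationship set, and the function name are all assumptions for the sake of the example, not part of any specific product or EHR:

```python
# Minimal "trust but verify" audit sketch (hypothetical data model):
# grant broad access up front, then flag any record access that lacks
# a documented treatment relationship, for a human reviewer to examine.

from collections import defaultdict

def flag_accesses(access_log, care_team):
    """Return accesses lacking a treatment relationship, grouped by user.

    access_log: iterable of (user_id, patient_id) tuples from the audit trail
    care_team:  set of (user_id, patient_id) pairs with a known relationship
    """
    flagged = defaultdict(list)
    for user, patient in access_log:
        if (user, patient) not in care_team:
            # Flag for review, don't block: the access already happened.
            flagged[user].append(patient)
    return dict(flagged)

log = [("dr_a", "p1"), ("dr_a", "p9"), ("dr_b", "p2")]
relationships = {("dr_a", "p1"), ("dr_b", "p2")}
print(flag_accesses(log, relationships))  # → {'dr_a': ['p9']}
```

Real audit programs layer on context (break-the-glass reasons, VIP-record watch lists, time windows), but the core loop is the same: review every access against a relationship, and escalate the exceptions.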

That’s the evidence that you’ve struck the right balance between the two poles of this really challenging polarity. These are thorny issues to solve. They’re hard to solve within an institution, and they’re even harder to solve when you have a community of institutions trying to collaborate to provide better care for their patients by sharing data. You have to worry about sensitive results. You have to worry about mental health results. You have to worry about state laws. There are many layers to this problem. But there is a path forward, even if initially it seems like there’s no way for Indy to get across. It’s hard to find, and it takes concerted effort to strike the right balance for your community.

Once we have access to the data comes the best part, how we’re going to deliver those insights to somebody so that they can then take a different action that’s better than what they maybe would’ve otherwise taken without that data.

We need to get the right information to the right audience at the right grain. If I’m looking at a patient in my office who’s a chronic diabetic, it may or may not be helpful to present a bunch of data about other diabetic patients ranging in age from 15 to 64. If this patient’s 50, I might need a more focused dataset. It might need to be patient-grain, it might need to be community-grain, but I gotta have the right granularity of data available in that moment. I gotta have it when I’m making the decision. I need to have it in a way that I can understand and interpret, so that my decision making is influenced and informed by that data and I take a better action than I would’ve taken otherwise. We need to systematically think about how we best deliver insight to everyone making decisions so that those decisions can be informed by the data.

That’s a lot of stuff to work on in that execute step, how do we capture the right kinds of data? We have a strategic data acquisition program. Do we have the right technology platform to help us integrate that data, control access to it both from policy and with technology, and then how do we deliver that data at the right place at the right time so that we can take better actions on it.

Looks like we have a question. “Seems like this data access polarity would be an ideal candidate for using all these analytical approaches. It needs measurement, iteration, fine-tuning. Anyone tried this?”

I agree. This polarity can be informed by data, and this question came from Eric. That’s a great question and a great way to think about it. We have analytic tools that can help us inform our policy decisions about data access. We should use that not just when we create the policies, but as we implement, deploy, and fine-tune those policies over time. That’s a great, great suggestion.

Finally, we’re going to talk about the last of the four E’s, which is once we have a program in place, how do we extend it broadly throughout our institution? In order to extend an initial pilot program broadly, you have to create a data-centric culture in your organization. There’s three dimensions to that.

The first one is we need to have a culture of data literacy. We need to have the skills, the tools, the techniques to be fluent in the language of data. We also need to improve our data quality. Our analysis can only ever be as good as the quality of the data that feeds that analysis, and we need a systematic, broadly deployed approach to improve data quality. Finally, we need to have a culture of using data to drive decision making, and you’ve got to make progress in all three of those dimensions to create a data-centric culture.

Sarah Stokes:                Okay, we’re on to another poll question here. How would you rate the maturity of your organization’s data-centric culture? Your first option is limited: data is a luxury, and only the biggest decisions are data-driven. Second is developing: you have a strategy in place and early adopters have demonstrated success. Your next option is established: broad but uneven adoption within your organization, where some pockets of excellence and some areas of concern have arisen. And lastly, pervasive: almost everyone has timely access to the data related to their work and the skills to apply that data to improve the work they do.

We’re seeing your votes coming in, so just keep submitting those. Again, we want to encourage you to continue to ask questions. We’ve been excited about what we’ve seen coming through, and we are getting closer and closer to that Q&A session, so we encourage you to get those in.

Jared Crapo:                 Sarah, while you’re still working on the poll, we have a question from Ravi. “How can a single-man practice convince the big system?” And I assume, Ravi, what you mean is: this all sounds great, and it also sounds insurmountable. How can I have even a small amount of influence on a big, huge health system so that we can try and make some sort of change?

And I think you just need to go back to the very first of the four E’s in the data governance. How do I find some issue that people care about that we think can be supported or improved with data? And I’ll bet if you think about what is important to a small practice and also important to a big institution, that you can find some issue that you can latch onto, so that you can have a forum to share some of these ideas. So that’s how I’d suggest that you start.

But it is super challenging, right? Super challenging.

Sarah Stokes:                Okay, we’re going to go ahead and close that poll and share the results.

Okay, so it looks like 26% said, “Limited.” 44% said, “Developing.” 28% said, “Established.” And only 2% said, “Pervasive.”

Do you think that kind of aligns with what you would expect?

Jared Crapo:                 Yeah. We’ve talked about some of our ideas, and you probably already have many of these things. You have plans and strategies in place within your institutions, and you’re starting to see success. That’s what the poll results say. Almost half of you feel like you have a strategy and are moving down that road. And that’s fantastic.

Hopefully you will learn a couple of things that may help you be able to go faster or to expand that within your organization. ‘Cause what we’re all trying to get is to “Pervasive,” where most decisions are made with timely access to data. That’s the goal.

So that’s super encouraging. I mean, we’ve made a lot of progress.

Okay. Let’s briefly talk about how we can advance a data-driven culture in our organization. We need to think about data literacy. Do we have the right mix of data skills in our organization?

One thing that we observe frequently in healthcare institutions is that they have a lot of report writers, a lot of people with “Analyst” in their title, who spend most of their time gathering data rather than interpreting it and understanding what it means.

And so, having the right mix of resources and the right tools so that our data professionals can practice at the top of their license, to borrow a term from clinical practice, is a great way to improve data literacy. We need to understand whether the people we have with data skills are contributing the highest-value skills and activities to the organization.

We also need to have a leadership literacy evaluation. Most score cards that are in use at the executive levels inside of institutions are not great tools for separating signal from noise. Yes, we might have had a blip in our case mix index that was unexpected. But is that noise, or is that a real signal? Is that evidence of a structural problem? Or is that just variation that maybe doesn’t need as urgent of our attention?
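As a hypothetical illustration of separating signal from noise on a scorecard metric, here is a minimal Shewhart-style control check in Python. The case mix index values and the three-sigma threshold are illustrative assumptions, not data from any real institution:

```python
# Sketch of a simple control-chart test: treat a new monthly value as a
# "signal" only if it falls more than n_sigma standard deviations from
# the historical mean; anything inside that band is ordinary variation.

from statistics import mean, stdev

def is_signal(history, new_value, n_sigma=3.0):
    """Return True if new_value is outside the n_sigma control band."""
    mu, sigma = mean(history), stdev(history)
    return abs(new_value - mu) > n_sigma * sigma

# Made-up monthly case mix index history.
cmi_history = [1.42, 1.45, 1.44, 1.43, 1.46, 1.44, 1.45, 1.43]

print(is_signal(cmi_history, 1.47))  # → False: a blip, likely just noise
print(is_signal(cmi_history, 1.60))  # → True: a shift worth investigating
```

The point is not the arithmetic but the discipline: a scorecard that applies an explicit rule like this tells leaders which blips deserve urgent attention and which are just common-cause variation.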

And so, we need to assess our data literacy skills both at a technical level and at a leadership level. We also need to think about our progression: as analytic complexity increases and our contextual understanding increases, we can do more sophisticated analysis of the data. We start at the bottom left, where we can only answer questions that we’ve previously anticipated, and it’s mostly rear-view-mirror looking: what’s previously happened?

And as we progress both in analytic complexity and our skill and our understanding, we can eventually get to a spot where we couple technical expertise with domain knowledge to really understand why certain things happened and what are the best paths, what are the best actions that we can take to yield future improvements.

And so, that’s how we want to progress our analytic literacy as we build a data-centric culture.

We also need to focus on data quality. A couple of things to help you think about data quality. A lot of data initiatives say, “Well, we’ve got to get all the data right before we can start. We’ve got dirty data, and we’ve got to get that cleaned up before we can get started.” That, in my view, is a recipe for failure, because you’ll spend years working on data quality and never yield any actual value from that effort.

So don’t try and fix it all at once. Focus your data quality efforts in the areas where you have improvement initiatives. And you don’t have to wait to start improvement until the data’s perfect. You can begin improvement while you strengthen your data quality and change your data capture processes. Those are all improvements, and we should think of them as such.

And the other thing is, you shouldn’t be afraid to stop. Sometimes we only need directional insight. We don’t have to have perfect data quality to get directional insight.

And Liza, I think, I see your question. I’ll come back to it in just a second.

Finally, to encourage a culture of data utilization, we need to identify and recruit change agents in our organization who can help us change. And I love this quote from Eric Hoffer. “In times of change, learners inherit the future, while the learned find themselves beautifully equipped to deal with a world that no longer exists.”

We need to identify those learners so that they can help us with utilization throughout the organization.

Many of you are familiar with this chart about the diffusion of innovation. We have early innovators who will jump on any new thing. The best change agents are early adopters, because they can help you recruit this big, meaty middle of the bell curve.

And there are some techniques that you can use to identify those leaders. They need to be both able and willing to be a leader. And it’s great if you can have an open selection process from all those who you’re hoping will support this initiative, for them to choose who would be a great leader for this process. Because not only does it help you get it started, but it helps you as you’re trying to build buy-in and involvement and engagement in that initiative over time.

So, super important to identify your early adopter leaders, because they will be the ones that will help you expand and extend your culture of data-driven decision making.

So, Liza had a question. How do you assess leadership data literacy? That’s a great question.

There are several ways you can do it. Health Catalyst actually has some tools available on our website that can help you. It’s a self-assessment, an analytic assessment that you can use, either broadly within your organization or individually, to assess data literacy and data capacity. And these tools that Health Catalyst developed are actually backed by research. It’s not just a survey. It’s real science about which questions actually matter, the ones that are indicative of data maturity within an organization.

So, Liza, if you want to send any one of us an email, we can point you directly to those tools that you could use to help you get started with data literacy.

So, data governance and the techniques, the four E’s that we’ve talked about, help us elevate data as a strategic asset so that we can make better decisions, which yields massive improvements.

So, we’ve talked a lot about strategy. How does it work in the real world? What can actually happen? How do you apply this to a specific population health initiative like we started our meeting with?

And Greg is going to walk us through how that might work. How could we actually use some of these techniques? And what could the results be of implementing an approach like this?

Greg Sill:                      Thanks, Jared.

And so, as we think about that curve Jared talked about, there’s data in the moment, data that we’re capturing today in real time. But there’s also this huge analytic knowledge base that we need to have.

And so, if we think about the technology behind this, how do we surface this information through the process? And how do we involve our community as well?

And so, as we think about the platform needs, there are certain key activities that need to happen within that data environment in order for this to work.

And so, if I start over on the right-hand side, this is how we link to that data in the moment: pulling in HL7, CCD, or other real-time data feeds, so that as the clinician is engaged with that patient, we’re sharing that information. But you’ll notice in all of these cases there’s a two-way arrow in the diagram: not only are we asking for the data about the patient that physician or clinician is seeing, but we’re also sharing back what information we have.
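To make the real-time feed idea concrete, here is a minimal sketch of pulling patient identity out of an HL7 v2 message. The message itself is fabricated, and a production feed would go through a proper interface engine rather than hand-rolled parsing; this only illustrates what "data in the moment" looks like on the wire.

```python
# Minimal sketch: extracting patient identity from a fabricated HL7 v2
# ADT message. Real integrations use an interface engine; this is only
# to illustrate the shape of a real-time feed.
raw = "\r".join([
    "MSH|^~\\&|EHR|CLINIC|ANALYTICS|HC|202409110830||ADT^A04|MSG0001|P|2.5",
    "PID|1||123456^^^MRN||DOE^JANE||19650401|F",
])

def parse_segments(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: list_of_fields}."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], fields)
    return segments

seg = parse_segments(raw)
mrn = seg["PID"][3].split("^")[0]        # PID-3: patient identifier
name = seg["PID"][5].replace("^", ", ")  # PID-5: family^given name
event = seg["MSH"][8]                    # MSH-9: message type (ADT^A04)
print(mrn, name, event)
```

The two-way arrow in the diagram means the analytics side would push enrichment (risk scores, care gaps) back along a similar channel.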

So, thinking about how those two interact with each other: from the analytic perspective, we may have gone out and computed the predictive risk of that patient, or looked at aspects of patients who have had a similar condition. What is the information that can help inform and build upon that particular process?

And then we share that information back. Whether it’s shared with the community or within our own network, that enrichment of the data really helps.

But I want to hit home the key point that, as Jared was saying, there are multiple things involved here. If we get too caught up in just the data, we’re missing a key piece: this process takes not only data but also people and processes as well.

And so, if we go back to our story, there are things that need to happen in order for us to be better at preventive care, in this case for cancer. We need to think about the at-risk cohorts of patients. We need to think about how we engage those physicians with the information we’re sharing with them. And we need to think about the process behind it: what are we essentially trying to accomplish and achieve with that particular example?

And so, very quickly, I just want to share a quick, little example of an application that we use here at Catalyst to do this exact process. This is our Community Care application. And so, I’m going to switch over here very quickly to the dashboard, to show you this application.

The process behind this application is that we’re looking at different gaps in care, or different quality measures, for which the patients we’ve identified throughout our organization need to be specifically addressed. And so, to take our example of a cancer screening, I’m going to pick a particular measure there. And overall as an organization we have some information about how we’ve been doing in the past. This is how we’re doing. And we can see that right now we’re not actually doing very well. We’re at 17.2% compliance.
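Under the hood, a compliance figure like that is just a numerator over a denominator for the attributed cohort. Here is an illustrative calculation; the patient records and field names are hypothetical, not the Community Care application's actual data model.

```python
# Illustrative only: computing a screening measure's compliance rate for
# an attributed cohort, like the percentage shown on the dashboard.
# Patient records and field names are hypothetical.
patients = [
    {"mrn": "1001", "crc_screening_current": True},
    {"mrn": "1002", "crc_screening_current": False},
    {"mrn": "1003", "crc_screening_current": False},
    {"mrn": "1004", "crc_screening_current": True},
    {"mrn": "1005", "crc_screening_current": False},
]

eligible = len(patients)                                       # denominator
compliant = sum(p["crc_screening_current"] for p in patients)  # numerator
rate = 100 * compliant / eligible
print(f"{compliant}/{eligible} compliant ({rate:.1f}%)")  # 2/5 compliant (40.0%)
```

The dashboard then lets you recompute the same ratio at each level of the drill-down: organization, location, department, provider.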

And we can use this to really drill down into thinking of this from a historical perspective. But let’s get down to the actual patient in the moment.

And so, I’m going to drill down through the organization here. First off drilling into my location, and then into the department within that location. And ultimately my goal is, now as a physician, I have information that’s meaningful to me.

And so, within this department we have three different physicians that I’ve drilled into here. This particular group is actually doing better than their peers. They’re at 43%. And I may pick an individual provider in this particular case that I want to look at.

And so, the first activity is I may want to switch over here to a call list. And so this physician has 12 patients that need this particular screening. So this would be the call list in which they would start with to engage with these patients and start to bring them in. So we’re starting to think about how to engage those patients.

The next step is, once I’ve engaged those patients, I’ll come over here to this patient list. And we want to get down to the individual data in the moment at this point. So not only does this patient need that colorectal cancer screening, they’re also due for a flu immunization. They’ve had certain other gaps in care that have already been fulfilled. But in this particular case we want to look specifically at the outreach and how we present and bring data to it.
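The call list and patient list amount to filtering and ranking patients by their open care gaps. A hypothetical sketch of that step, with invented data and field names (not the application's actual logic):

```python
# Hypothetical sketch of building a provider's outreach call list from
# open care gaps, like the lists shown in the demo. Data is invented.
care_gaps = {
    "1001": ["colorectal cancer screening", "flu immunization"],
    "1002": ["colorectal cancer screening"],
    "1003": [],  # no open gaps: not on the call list
}

call_list = sorted(
    ((mrn, gaps) for mrn, gaps in care_gaps.items() if gaps),
    key=lambda item: len(item[1]),
    reverse=True,  # patients with the most open gaps first
)

for mrn, gaps in call_list:
    print(mrn, "->", "; ".join(gaps))
```

Ordering by number of open gaps is just one plausible prioritization; a real workflow might weight by risk score or time overdue.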

And just to very quickly go back to our story here. One of our clients, USMM, up in Michigan, was able to utilize this to really improve their process with their community.

And so, as you can see here, some of the results they were able to achieve through this process: their clinical depression screenings reached the 90th percentile. And on some of their other metrics, such as flu immunization, blood pressure, and future fall risk screenings, they reached the 80th percentile. But more importantly, they were able to realize savings and really reach out to these patients to ultimately make a difference for them.

So, if we get back to that story, and the questions that Jared asked at the start: with those cancer screenings, our ultimate goal is to avoid these cancers in the first place. Are we doing the right things with cancer prevention? And are we pushing the right data at the right moment in order to do that?

So going back to the original three questions. Do we have the right data? Are we pulling in the data in the moment along with the historical data? Are we granting access to it? Are we pushing that data to the right people to make decisions on it? And is that information in a place where we can ultimately be successful?

And I hope that through what we’ve shared here today that you can kind of get an idea of how we use that information to really make an impact or difference to our communities.

Sarah Stokes:                Okay. Thank you to Jared and Greg. That concludes the majority of their presentation here. We do have a few wrap up questions for you. And this is the first one. While today’s topic was an educational webinar focused on optimizing and better utilizing your data, we often have attendees that would like to know more about Health Catalyst’s products and professional services. If you are one of these individuals and would like to learn more, please answer this poll question.

Okay. I’ll give you one more moment here. And then we’re going to jump into a couple of giveaways that we are offering.

Okay. We’re going to go ahead and close that.

Okay. Finally before we wrap things up and address any remaining questions, this is a great opportunity if you have any questions that you haven’t yet asked, to submit them so we make sure to get Jared or Greg to address them before we wrap things up.

But we have a few giveaways for complimentary passes to the upcoming Healthcare Analytics Summit. This is an annual event with more than a thousand provider and payer attendees. It’s occurring in just a few weeks from now, from September 11th to 13th in Salt Lake City, Utah. And it’s going to feature some of these brilliant keynote speakers that come from the healthcare industry and beyond.

And so then we have a couple of giveaways here. If you know that you’re able to attend and are interested in being considered for complimentary passes for a team of three to attend the Healthcare Analytics Summit, we ask that you please answer this poll question.

Okay. A few more moments here.

Okay. We’re going to go ahead and close that.

And then we’re moving on to our last poll question here, which is, lastly if you know that you’re able to attend that Healthcare Analytics Summit and you’re interested in being considered for an individual complimentary pass, please answer this poll question.

I’ll give you just one more moment here.

And last chance to get your questions in. Otherwise we will be wrapping up here, right on time.

Okay. We’re going to go ahead and close that.

Okay. This is your last chance to submit any questions that you have. Otherwise we will just go ahead and wrap things up.

Oh, looks like we did have one come in.

Jared Crapo:                 So, looks like more of a comment than a question. Great talk as usual. Data clearly a strategic asset, but now slugging it out in our region with data sharing agreements between competing orgs.

Yeah. That’s a really hard part of the data-sharing challenge. And as many of you may know, Health Catalyst recently acquired Medicity, which has both technology and policy expertise in these community types of data sharing initiatives. And we’re really looking forward to combining the capabilities of both of our companies to try and help solve these problems.

Greg Sill:                      And I kind of briefly touched on that as I was sharing that diagram. If you notice on that diagram, as we think about the integration of our analytic platform, which Health Catalyst has been very good at, with the community outreach, which Medicity has been very good at, it’s about being able to share that information and combine those two data streams: the real-time data stream along with the historical analytic data stream. And that’s how we view that same answer as well.