The Data Maze: Navigating the Complexities of Data Governance


Tom Burton:                 Excited to be with all of you today, and hopefully share some core principles around data governance. Data governance is a complex topic. And so, at our last Healthcare Analytics Summit, I actually built a game, an interactive game to help teach these principles. And this is now the webinar version of that session that I shared at the Healthcare Analytics Summit last fall. So, our learning objectives today, there's a lot of them. We've got a lot of content to get through, but we're going to focus on these core objectives. So first, how do you unleash data to really promote improvement across the entire improvement spectrum? We'll talk about sustaining and spreading those improvements. We'll talk through how investing in your analytics training and infrastructure can really lead to massive improvement. We'll spend some time on the five key stages of the data lifecycle.

Tom Burton:                 And this is where most of the data governance challenges come into play: in that lifecycle. We'll talk about the common challenges, which are data quality, data utilization, and data literacy. And then we'll show how a data governance framework can help you accelerate improvement in clinical, cost, and experience outcomes. So quite a few learning objectives. Hopefully we'll get through most of them. And it'll be useful information for your organizations. So why we're doing all this, we really believe that when you elevate data as a strategic asset, it really significantly enables better decision making. And that promotes massive improvement across the spectrums of both effort and value. So, let's first dive into the data life cycle. So, the data life cycle starts when you capture data about any process. Now, you may have multiple places where data is captured. And so, the second step is integrating data together.

Tom Burton:                 So, you may have financial data captured in one system, clinical in another, experience in some survey system. And so, the second step is integrating that data together. Next is giving access to that data to the right people who can actually use it to make better, more insightful decisions. Then you deliver insight, not just data, but insight. And then that hopefully leads to action. You act on that data, you act on that insight and you make a better decision, or you intervene in some way you otherwise wouldn't have done had you not had the insight delivered. That may then lead to capturing additional data or changing the way in which you capture data, and that leads to the life cycle continuing. So, there are several challenges that happen in this life cycle, and we kind of group it into three major categories.

Tom Burton:                 The first category of challenges is data quality. This is where when you capture data, it's incomplete or delayed or inaccurate. It could be that you may be trying to consolidate all that data into one place, and it could take way too long to try to consolidate all of the data into the EMR. And so that's a challenge. We also see a lot of problems with data, trying to consolidate data into a fixed or static data model. And so, there's some challenges around that in data quality in the integrate phase. The next challenges occur in the data utilization phase. So, a lot of times we see organizations where, because of fear of privacy laws or fear of inappropriate data utilization, access is limited. And so that becomes a problem: underutilization. We sometimes see fiefdoms: "This is our data."

Tom Burton:                 I often hear the finance team saying, well, you can’t have our financial data. It’s just for us, or clinical data or other data fiefdoms if you will. And then finally, sometimes we lack the ability to interpret the data correctly. We may draw inaccurate conclusions from the data where it may not be causal but just correlated. And so, we assume things that aren’t actually correct. So those are some of the challenges with appropriately and effectively utilizing data. And then finally the third big category of challenges is in data literacy. And this is where we may lack the skills, the knowledge, the attitudes to actually use the data to get insights and to take better action. We may have the wrong mix of resources; we may have a lot of report writers and not enough data scientists or analytics engineers.

Tom Burton:                 And then we also see a lack of interoperability where data is useful over here, but it’s not presented in the right spot in the workflow to really have it be actionable. So those are all in the category of data literacy challenges. And then at the core of this whole cycle is creating what we call a data-driven culture. Oftentimes data is not driving decision making, sometimes it’s politics, sometimes it’s just tradition. And oftentimes especially at the C-suite, the data infrastructure is seen as an expense and not as a strategic asset. And so, you’re trying to minimize the expense versus invest in the asset.

Tom Burton:                 So those are all kind of elements of a data-driven culture. All right, we’re going to dive into now what we define as data governance. It has lots of definitions out there, but simply data governance refers to the people, the processes and the technology that are proactively applied to ensure that an organization’s data is managed in such a way to maximize the value of that data to the organization. Again, turning data from just a raw set of ones and zeros to an actual asset that the company leverages for better care and for lower cost care.

Tom Burton:                 So how do you set up data governance? We call it the four E's. So first you want to elevate data as a strategic asset. Second, you want to establish a data governance structure or an organizational structure to really leverage that data. Then you want to execute on and prioritize what are the key improvements we can make in the data life cycle. And then finally, you want to extend and sustain the improvements that you've made and extend those initial gains to go beyond the first departments you roll out improvements to. So, elevate, establish, execute and extend are the four E's. So, a little teaser here. There are a lot of complex concepts here, and a great way to learn them is interactively. We won't have time today to go over all the principles, but again, we built this interactive game to teach these principles.

Tom Burton:                 We called it the Data Maze, and it's kind of a fun way of learning these principles. And so, we'll ask later in the webinar if anyone's interested in actually getting 50 of your top executives at your organization together and playing the Data Maze game. We found it to be a really effective way to teach these principles. We're just going to cover the principles today. Now, we're going to break it kind of into two sections. So, we'll talk about some core principles of data governance and then some advanced principles of data governance. And we'll actually cover that with four poll questions. There are about 20 principles. We don't have time to go over all the principles, so we're going to do a little bit of choose your own adventure. And so, the poll questions will be really important in helping us to decide where we dive deep.

Tom Burton:                 I'll hit on all 20 principles briefly as we get into these different sections, the core principles and the advanced principles. But we'll only have time to cover maybe half of them today. All right. So, let's go over the first four core principles: elevating data as a strategic asset, delivering insights with a key process analysis and self-service, establishing around frontline processes, and choosing your priorities. So, I'll just spend 30 seconds on each of these and then we're going to have you vote on which ones we go deep on. So, the first is elevating data as a strategic asset. You want to actually tie into some of the core burning platform issues that are going on in your organization and show how improving the data governance within that could be a real positive thing. The next is delivering insight. As you identify opportunities, there's actually two ways that you can think about prioritizing.

Tom Burton:                 And the first is to do what we call a key process analysis where you're looking for variation in process. The second is to do kind of a pain assessment. Now, I'm not talking physical pain, I'm talking data pain where you actually look for pain points and evaluate that. And then you take the combination of the pain point assessment and the key process analysis and combine those to help prioritize where to focus your data governance efforts. The next principle is establishing everything around frontline business or clinical processes. And the idea there is don't organize around your EMR or your financial systems and think of data governance in that context. Rather think about, as Deming called them, frontline work processes. How does care actually get delivered? How do support processes actually happen? If you organize around that, you'll find data governance will be much more successful.

Tom Burton:                 And then the fourth principle is all around choosing priorities. You want to choose your governance priorities around those hot topics that are really challenging and around problems in the data lifecycle because there you'll have the biggest chance of actually getting some significant improvement that people will notice and say, "Wow, it's a big difference, and I'm glad we did that." So those are our first four. This is the choose your own adventure portion. We really want you to weigh in, we'll probably be able to cover maybe two of these. So, let's go to the poll question, which of these first four principles are you struggling with the most in your organization? Which of these do you want us to dive deep on and share a little bit more detailed information about?

Tom Burton:                 So again, the four questions are elevating data as a strategic asset, delivering insight by doing two kinds of opportunity analysis, a key process analysis and a pain point or self-service analysis. Third is thinking about establishing data governance around frontline work processes. And then fourth, how do you choose your priorities based on the pain points within the data life cycle? All right, let’s open that poll.

Sarah Stokes:                It is up and running, and we're getting a lot of votes. About 50% of the audience has already voted, they're still pouring in. We'll give you all just a few more moments to submit your responses. As Tom said, this is choose your own adventure. So, we do encourage you to participate in these polls. And as a reminder, you will have access to the slides. If you joined late, we are recording today's session, just so that everyone is aware. All right, I'm going to go ahead and close that poll. We've had 64% of the audience vote. So, I'm going to share the results. So, we have a fairly even spread here, but it looks like 23% voted for elevate data as a strategic asset, 28% voted for deliver insights with KPA and self-service, 31% voted for establish around frontline processes, and 17% voted for choose priorities. So, it looks like those middle two.

Tom Burton:                 Okay. We'll cover the middle two. And let's start with the number one poll pick of establishing around frontline processes. So, I'm going to go ahead and dive right into that area. I can't see my mouse.

Sarah Stokes:                Sometimes you just have to move it around kind of quickly. And there you go.

Tom Burton:                 All right, here we go. Choose your own adventure. When you establish data governance, it’s important to organize around frontline value-added work processes. This is a quote from Deming. He said, organize everything around frontline value-added work processes. And we think this especially applies to data governance.

Tom Burton:                 Many organizations, they organize around technical systems. They'll have a data governance committee around the EMR, they'll have a data governance committee maybe around the HR system. They may try to establish these subcommittees, but they are not doing it in the context of how the patient actually experiences the care. And so, it's actually quite important to think through what are your key processes? How is care experienced by the patients? What are the processes that support patient care? Which of those processes cost the most? Could they cost less? Which processes produce the right data at the right time, data you could actually use to get insight? And then how well is the financial, the clinical, the experience data integrated?

Tom Burton:                 So, if you think about the process the patient goes through or the support processes that go through different areas, ask these types of questions. And actually, what you want to do is organize around those value-added processes, not around a technical system. So that's kind of the first principle. So oftentimes what we think about is kind of three subgroups of data governance. The first we would call sometimes vertical or clinical program subgroups. And basically, what that would be is cardiovascular or women's and children's. Sometimes folks call them service lines. I like to call them clinical programs because it's more about organizing around the way that clinical care gets delivered. So, think about organizing data governance first around these clinical areas or clinical domain areas. The second way to think about organizing is around support services. And there are multiple types of support services such as care unit support services, ancillary support services, nonclinical services.

Tom Burton:                 These are the processes that support care throughout your system. And they have their own unique data cycles. And so, organizing around the support services is a second sub category. And then there’s a very small group of things that should be organized kind of globally centralized. And these would be functions like data access policies or analytic tool standardization. And that’s kind of the centralized group of things to organize around. So again, the core principal here is organized around the processes themselves not around the data systems. Now, data systems are absolutely going to be involved in those core processes, but if you organize first around the process and then say what data systems contribute to this understanding this process, you’ll be much better prepared to really solve data governance challenges. As you think about those large processes, either clinical processes or support processes, you can ask these questions.

Tom Burton:                 So, think about the data lifecycle. Do we have all the data we need to manage this process? Do we have the clinical, the financial, the experience data? So, think about integrating, do we have all that right data? What data is missing or what's inaccurate? Then think about, do the right people in the process have access to all the data they need to promote the best decisions? And then what insights could we present at the right time in the workflow to encourage better decision making? And then finally, are we measuring how well we act? Can we see what percent of the time are we achieving the best benefits possible for the patients or for that process? And so, Dr. Brent James often calls this achievable benefit not achieved. Rather than benchmarking against other mediocre processes throughout healthcare, what's the best possible outcome we could get?

Tom Burton:                 And how much of a gap is there between what's the best possible benefit and what we're actually achieving as a benefit? So, these are some great questions to think about as you think through the data life cycle and you think through how to organize data governance and establish it in a way that matches with the workflow of how care is delivered and how care is supported. Let's return. That was a little deep dive on the first one, establishing data governance around frontline work processes. I think our second most popular area was delivering insight with key process analysis and self-service. Let's dive into that area. So, there's two types of opportunity analysis you can do. One is a data-driven analysis where we look for variation, and the second is a pain point opportunity analysis where we look for what are the biggest problems in the data life cycle or what are the biggest pain points that you want to improve?

Tom Burton:                 So, let's dive into the different types of improvement. So, you have comprehensive deep outcomes improvement, which requires deliberately changing process. This requires a high amount of effort. We have fast-track improvements, which kind of don't require quite as much effort, maybe not an entire process redesign, but still some effort. And then we have organic improvement. This may just be getting data to people that don't have data today, and they make their own decisions. It's very light touch, but it could have a significant impact. So, think about the amount of effort it's going to require to achieve the improvement. That's kind of the first spectrum to think about. The next is to then plot this on what we call a value and effort matrix. So, you may have an improvement opportunity which is of high clinical value, but it's going to require a lot of effort.

Tom Burton:                 You may have some opportunities that have high value, but very light effort required. Maybe it's a financial improvement, something like replacing a high cost drug in an order set with a generic drug. And suddenly the cost almost overnight goes down when you change the default order set. That can be an example, lots of dollar savings without much effort. You may have something that improves the patient experience. You may have some that have a combination where it's both a clinical and a financial value. Or, awesome, if you can get all three of the triple aim, or sometimes we call it the quadruple aim where you have a better clinician experience, a better patient experience, financial savings and clinical improvement. What you want to avoid are those down in the bottom right hand quadrant where it takes a lot of effort and there's not much value. So, think of both the value spectrum and the effort spectrum.

Tom Burton:                 Think of the different improvement types of the opportunities you’re looking at. clinical, financial, and experience. And maybe then plot out the different opportunities on a chart like this. So, this is basically showing the two types of improvements. The high efforts really are going to require process redesign. This is where you’re going to use your key process analysis to identify opportunities. On the other side are your self-service dashboards where just providing access to data in a visual way that allows people to answer their own questions could get you some of those light effort, but still valuable improvements. Let’s talk a little more about a common pain point assessment. So, think about where is their hunger for data? It may be somewhere where people feel like it takes weeks to get access to data. They send their report requesting, and maybe they bake cookies for someone to get their request bumped up in the queue.

Tom Burton:                 Where do we have a lot of wasted time in reconciling reports from two different departments? Where do we have lack of data literacy? Maybe the same effort is going on in five similar departments, but there are five different analysts doing essentially the same analysis just with a different department as the core filter. A lot of times we see spreadmarts or this sneaker ETL where they're manually moving data from one system to another, putting it together in an Excel spreadsheet and producing a weekly report. We see armies of 'analysts' doing this kind of work. So that's a pain point. So, go through your major processes and look for these kinds of pain points. In order to get kind of organic improvement going, you've got to have some common things in place.

Tom Burton:                 So first you've got to have good data policies, broad access to data. You have to have a common tool set, and that tool set probably shouldn't just be Excel. You probably need some data infrastructure tools that are a little bit more leverageable. You need training and support. A lot of folks would be glad to use better tools, but they just haven't had the training and support. Communication, you've got to let people know what data is available. And underlying all of this, it really takes analytic leadership. So, here's an example of a light touch improvement that had a high value in all three categories. I was working down with Texas Children's Hospital, and we were working on asthma action plans. And in the middle of this process, a physician leader says, "Wait a second, go back a screen." And we were looking at a particular measure around how many kids were getting chest X-rays in the emergency room.

Tom Burton:                 And it was way higher than what he thought. And so, he first thought, "Oh, your data has got to be wrong." And so, we looked at the data, we drilled in a little bit and actually the data was right. He's like, "This is crazy, it shouldn't be 60% of the kids getting chest X-rays, it should be less than 5%." Well, we dug into it deeper, and it looked like there was an order set being highly leveraged by the residents in the emergency department. And it was an old order set that really was out of date. And so, quickly, within a few hours, we changed that order set. And then we did some training with all of the residents. And overnight, it went from 60% all the way down to less than 5%, which had some really positive outcomes. So, first of all, better care for the patients.

Tom Burton:                 Second, the kids weren't getting exposed to the X-rays that they didn't really need. And it eliminated a whole bunch of unnecessary costs. So, this would be an example of a very high value output or outcome with a fairly light effort. I mean, it only took literally a matter of hours to track down the problem and to solve the issue. So that's an example of leveraging a self-service dashboard: this doc was looking at it, found an opportunity to make a very small change, and had a great impact. The second type of analysis you can do is what's called key process analysis. And you'll look at your major processes and look for variation. So, this is an example of looking at cost per case in vascular procedures. So, let's say I have a particular doctor who did 15 cases in the last quarter and averaged $60,000 per case.

Tom Burton:                 But if I look at the peers of this physician, it's $20,000 per case. So, what if I could get Dr. Jay's 15 cases from $60,000 a case down to $20,000? Well, that's a $40,000 difference per case, that's a $600,000 opportunity. Well, if I think about then other physicians that are above that average cost per case and I drill into a particular physician, maybe this physician has 25 cases. And so, it starts to add up. As you look at each of these, even if you just brought them down to the mean, maybe not even to the lowest, this could turn into a multimillion-dollar opportunity in a large health care system. So, the way that we think about this is first adjust for severity of illness. You don't want to be comparing apples to oranges, but adjust for severity of illness, and then recognize there could be multiple causes for variation.
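
To make that cost-per-case arithmetic concrete, here is a minimal sketch of the opportunity-sizing calculation. The physician names, case counts, and the $20,000 peer target are hypothetical numbers echoing the example above, not real benchmark data.

```python
# A minimal sketch of the opportunity-sizing arithmetic described above.
# Physician names, case counts, and costs are hypothetical illustrations.
cases = [
    {"physician": "Dr. Jay", "n_cases": 15, "avg_cost": 60_000},
    {"physician": "Dr. Kay", "n_cases": 25, "avg_cost": 35_000},
    {"physician": "Dr. El", "n_cases": 40, "avg_cost": 20_000},
]

peer_target = 20_000  # severity-adjusted peer cost per case

total_opportunity = 0
for c in cases:
    gap_per_case = max(c["avg_cost"] - peer_target, 0)
    opportunity = gap_per_case * c["n_cases"]  # e.g. $40,000 x 15 = $600,000
    total_opportunity += opportunity
    print(f"{c['physician']}: ${opportunity:,.0f} potential savings")

print(f"Total opportunity: ${total_opportunity:,.0f}")
```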

Tom Burton:                 One cause could be clinical variation, physicians actually doing it differently for the same type of patient. The second cause could be data system variation. And this is where data governance comes into play. You need to make a commitment to eliminate both causes of variation, both unwarranted clinical variation as well as unwarranted data system variation. So, one way to analyze this is to think about another… I'm sharing lots of four-box matrices today. So, one is to think about the size or the magnitude of the process. You could think about how much money are we spending on this process overall? And then on the Y axis, you could put variability. How differently do we do this from physician to physician, from unit to unit, from clinic to clinic? And you can come up with these four quadrants. So, the top right quadrant are very large processes that have significant variability.

Tom Burton:                 That's your high priority area to go after. Your second quadrant are processes where maybe you're more consistent, but you could be consistently high cost or consistently only mediocre quality. So, look for those large processes where you're consistent, but maybe you're not consistently excellent. You can actually lay this out in a bubble chart form and see which processes are the largest processes with the most variation. And you can then evaluate which of those processes you want to go after. So, this would be a much more data-driven approach to prioritization. You combine that with pain points where you may not have good access to data or basic self-service. And that can help you really drive improvements. We've now talked about two of those four areas, and I touched lightly on the other two. We don't have time to go into those other two.
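
One rough way to operationalize that bubble chart is to score each process on total spend (magnitude) and on the spread in cost across physicians or units (variability), then bucket the processes into the quadrants. The process names, dollar figures, and cutoffs below are illustrative assumptions, not recommended thresholds.

```python
# A rough sketch of scoring processes by magnitude and variability
# for key process analysis. All numbers and cutoffs are made up.
from statistics import mean, pstdev

# Quarterly cost per physician for each process (hypothetical data)
costs_by_process = {
    "Vascular procedures": [900_000, 875_000, 500_000, 300_000],
    "Appendectomy": [90_000, 95_000, 88_000, 92_000],
    "Heart failure admissions": [700_000, 1_050_000, 550_000, 900_000],
}

SPEND_CUTOFF = 1_000_000   # "large" process threshold (illustrative)
VARIATION_CUTOFF = 0.25    # "high variability" threshold (illustrative)

for process, costs in costs_by_process.items():
    magnitude = sum(costs)                     # total spend across physicians
    variability = pstdev(costs) / mean(costs)  # coefficient of variation
    if magnitude > SPEND_CUTOFF and variability > VARIATION_CUTOFF:
        quadrant = "high priority: large and highly variable"
    elif magnitude > SPEND_CUTOFF:
        quadrant = "large but consistent: check if consistently excellent"
    else:
        quadrant = "lower priority"
    print(f"{process}: spend=${magnitude:,.0f}, variation={variability:.0%} -> {quadrant}")
```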

Tom Burton:                 But now let's talk about the second set, this set of five additional principles around improvement and establishing data governance. Let me briefly describe each of these and then we'll vote on which ones we want to go deep on. So, the first is how do we improve data quality? Thinking about improving that quality at the source is really the key there. And we'll talk about some of the common data quality challenges. The second principle is getting beyond your EMR. There's a lot of very meaningful data that's outside the EMR that should be leveraged as you think about improvement. And so how do you prioritize those other data sources? How should you think about going beyond your EMR? The third principle is strategies around integrating data. And there are actually some different strategies. And we'll talk about a reusable, scalable, and flexible strategy that can get you closer to value quicker.

Tom Burton:                 A lot of these data integration projects take a lot of time and effort, and there are some kind of best practices as far as getting value quickly, but having it be flexible and reusable. The fourth principle is getting to appropriate access. And it’s managing what we call polarity between granting broad access and auditing that access. So, we’ll talk in more detail about the challenges of kind of doing both good things like we want to protect the privacy of the data, but we also want to give access to the data. So how do you do that effectively?

Tom Burton:                 And then the final for this section is the five rights of data delivery. So, in healthcare, we often talk about the five rights of medication administration. Nurses have the five rights, the right drug at the right dose, et cetera. We'll talk about what the five rights of data delivery are and how important it is to get the right information to the right people at the right place in the workflow. So those are our five categories. Let's go ahead and open the poll question, and we'll see which of these is your organization struggling with the most? And of these five topics, we can probably choose two to go a little bit deeper on.

Sarah Stokes:                All right. Again, the votes are pouring in. We have about 30% already. And I want to just give a shout out to Tom for rolling through the technical glitch that we had there.

Tom Burton:                 No problem.

Sarah Stokes:                You know this content; you have it down pat. All right, I’m going to go ahead.

Tom Burton:                 Well, it’s pretty close. I might have to go into three of them.

Sarah Stokes:                All right, we're going to go ahead and close that poll and share the results. As you said, it's pretty close in a few areas. 25% voted for improve data quality, 21% voted for utilize data beyond the EMR, 30% voted for build data integration plan. Only 6% voted for grant appropriate access, and 18% voted for the five rights of data delivery. So, it looks like the top two would be build the data integration plan and then improve data quality.

Tom Burton:                 Okay, great. It's interesting that grant appropriate access is the lowest, but I see organizations struggle with it all the time. So, we won't have time to go into the strategy there. But let's dive in first to integrating data and how to think through a flexible plan. The first concept here is the concept of a data hub. And Gartner, I love this quote from Laura Craft and Vi Shaffer at Gartner. Kind of the two keys of this quote are first automating the ingestion of data. If you're manually integrating data, it's a real problem. You can be wasting most of your analytics engineers' time if they're having to manually integrate the data. So that's the first core concept there in integration is to automate as much as possible. And then the next concept here in this quote is getting the delivery of those insights into the right modality.

Tom Burton:                 And so, Gartner calls this a health data convergence hub. Our version of this is the data operating system. And so, it's really a strategy of moving away from the EMR as the center of the universe and moving to having data as the center of the universe. If you think about how an EMR might move data, you're getting less and less data over time and the data kind of gets stuck into the silos. And it's a very static process, the EMR often doesn't have good interoperability tools to bring in other data. And so, you waste a lot of time trying to force fit data from hundreds of different sources into your EMR. A much better strategy is to extract data from lots of different places including lots of different EMRs.

Tom Burton:                 We have many of our customers who have acquired new hospitals, new clinics, and they're all on different EMRs. It is extremely expensive and not a great strategy to try to consolidate all those onto a single EMR. That is a very expensive proposition. And really you can accomplish much if not all of the benefits by simply integrating the data. And so, put data at the center, and then allow data to flow bi-directionally. So, you're extracting data from maybe five to 10 EMRs, financial systems, clinical departmental systems, patient experience systems. And also going beyond kind of the encounter data, we may be including data that's reported by patients, socioeconomic data, different data from sources we haven't really considered: genomic data, biometric data. We have lots of wearables today that could be very useful. So, think of data at the center, not the EMR at the center.

Tom Burton:                 The next concept: if you put data at the center, then you think about the data lifecycle at the center. What data are we capturing? How are we integrating that data? How are we granting access to that data, delivering insight, and then making better decisions, taking better actions? So here are kind of the seven core capabilities you want to consider as you think about data at the center of the universe. The first is reusable clinical and business logic. You don't want to build that into every Excel spreadsheet but build logic that can be reused over and over again. You want data streaming, so near real-time data that can flow from multiple EMRs, from multiple transactional systems and be integrated on the fly. Third principle is think about both structured and unstructured data and integrate that unstructured and structured data together. Fourth is creating what we call closed loop capabilities.

Tom Burton:                 So, once you analyze the super set of data, you want to push the insight back into the workflow where the decisions or the action is taking place. So that may be pushing it back into a financial system, pushing it back into an HR system, pushing it back into an EMR. But leverage the analytics at the core where data is all connected together and then push the insight. close the loop push that back into the workflow. You want to have a microservices architecture, you’re going to need to leverage APIs so that different transactional systems can talk to each other. Now, the EMRs are a little bit behind on this, they’re somewhat slow to adopt some of the new fire standards and some of the capabilities of interoperability that really should in our industry be much more standard, much more deliberate especially on the EMR’s part.

Tom Burton:                 But we're making progress, we're seeing a lot more of this come to the forefront. And some of our federal government initiatives pushing interoperability are going to help kind of force those EMRs to play ball with each other, to play ball with everyone in the industry. The sixth concept is machine learning. You want to have detailed connections into machine learning. Have in your analytics platform the ability to have native AI, native machine learning. And then seventh, agnostic data lakes. Having the ability to integrate data very quickly using some of the newest technologies is absolutely key. This can dramatically speed up integrating data together. So those are kind of the core concepts around a data convergence hub, as Gartner calls it.
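
As a hedged illustration of the closed-loop and API concepts, here is a minimal sketch of pushing an analytics-derived insight back into a workflow system through a FHIR REST endpoint. The base URL, token, patient ID, and the idea of writing a risk score back as an Observation are all placeholder assumptions for illustration, not a specific vendor's API.

```python
# Minimal sketch: write an analytics-derived risk score back into the
# workflow via a FHIR Observation. Endpoint, token, and IDs are placeholders.
import requests

FHIR_BASE = "https://example-emr.org/fhir"  # hypothetical FHIR server
TOKEN = "replace-with-a-real-token"         # obtained via the EMR's auth flow

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "Readmission risk score (analytics platform)"},
    "subject": {"reference": "Patient/12345"},  # hypothetical patient
    "valueQuantity": {"value": 0.82, "unit": "probability"},
}

resp = requests.post(
    f"{FHIR_BASE}/Observation",
    json=observation,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Insight written back into the workflow, id:", resp.json().get("id"))
```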

Tom Burton:                 I think this is probably the most important slide in this section, and this is balancing polarity between standard data models and ultra-late binding. Now, we were one of the first to kind of promote that we need to be further towards late binding. But you can also go to the extreme of late binding. So, the concept of binding means when do we tie a specific data element to a definition. So, if you're on the far extreme of an enterprise data model, that means you bind everything upfront. Many of the technology and EMR vendors have this approach. They have a standard model; they map everything into it. And if you want to change that model, it's going to take an act of Congress. You're going to have to get everyone that's using that standard model to agree to a change. So that's kind of the one end of the extreme.

Tom Burton:                 The other end of the extreme is kind of what we call the wild, wild west of visualization tools. And this is where some of the visualization tool vendors say you don't need a data model, just integrate everything into the visualization layer. Well, this is the other end of the extreme. And you could end up replicating logic over and over and over again because the same logic often needs to be used across multiple applications. So, what we feel is the right approach is a hybrid between those two extremes. Not going too far on either side, kind of like navigating a road and having guard rails on either side of that road so you don't fall off the cliffs. So, what we recommend is very quickly getting data into data lakes or what we call source marts.

Tom Burton:                 Not really messing with the atomic level detail but preserving that atomic level detail. Then for a subset, and I emphasize subset of the data where you have commonly agreed upon definitions, you create shared data marts. This is reusable content, shared definitions that can be leveraged by lots of different applications. And then build what we call subject area data marts around key clinical or operational domain areas. Now, these can either pull directly from the data lakes or from the shared data marts, and you want to be able to do both. When you have this hybrid approach, you can be much, much faster, but also much more scalable. The problem with ultra-late binding with the visualization vendors is it's not scalable and you start to get a spaghetti bowl of logic. The problem with the enterprise data model is it's very slow and you end up spending all your time arguing and fighting about the data model.
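
To make the layering a bit more tangible, here is a simplified pandas sketch of the idea: the source mart keeps the atomic extract untouched, one commonly agreed definition is bound early into a shared data mart, and subject area marts reuse that shared content instead of re-deriving it. The table and column names are illustrative, not a prescribed schema.

```python
# Simplified sketch of the hybrid binding approach described above.
# Table and column names are illustrative only.
import pandas as pd

# Source mart: atomic extract preserved as-is (everything else stays late-bound)
encounters = pd.DataFrame({
    "encounter_id": [1, 2, 3, 4],
    "dept": ["ED", "ED", "ICU", "ED"],
    "admit_flag": [1, 0, 1, 1],
    "cost": [4200, 900, 30500, 5100],
})

# Shared data mart: bind early only the definition everyone has agreed on,
# e.g. what counts as an "ED admission"
shared_ed_admissions = encounters.query("dept == 'ED' and admit_flag == 1")

# Subject area marts reuse the shared definition rather than re-deriving it
cardiology_mart = shared_ed_admissions.assign(domain="cardiovascular")
finance_mart = shared_ed_admissions.groupby("dept", as_index=False)["cost"].sum()

print(finance_mart)
```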

Tom Burton:                 So, the hybrid approach we found to be the best approach. Let’s go to, you’re going to have to remind me, Sarah, what was-

Sarah Stokes:                It was improve data quality.

Tom Burton:                 Improve data quality. All right, we're going to dive deep into this second section. And this is all about improving the data quality. So, one of the biggest principles here is fix the data quality problems at the source, not downstream. We see a lot of organizations investing way too much in what they call data cleansing. Data cleansing is very, very dangerous. So, if you cleanse data downstream, what you're actually doing is avoiding fixing the root cause of the problem at the source. So, let's say that you're working on heart failure, and ejection fraction is a real key metric, a key value, in order to know who is a heart failure patient and who's not a heart failure patient.

Tom Burton:                 Well, if you say, well, if we’ve got a data quality problem, maybe only 50% of the time is that field entered in the EMR. And you just say, well, we’re going to exclude those records that don’t have an ejection fraction, then what you’re really doing is you’re limiting the cohort of patients that you’re actually able to help. And so, go back to the source system, find out what’s the breakdown in the process to capture the data in the first place. So that’s the first core principle. Fix the data at the source. I saw one organization; they were spending all of their time really thinking about which records should be excluded. And their data analysts were spending all this time writing custom queries to exclude some physicians’ single patient from a report, so they didn’t look bad. Well, that doesn’t actually help you improve.

Tom Burton:                 The second principle is to encourage transparency. Don't hide data flaws, don't cleanse data so someone doesn't look bad. Make them look bad; have them actually see the problem with the data. If the data is wrong, then fix the data. Don't hide the data flaws. And then finally, focusing on people is not the right way to fix the problem. If you focus on, hey, this person is bad because their numbers look bad, they're going to start to fight you. So, a process is perfectly designed to produce its current data quality issues. So, focus on redesigning the process, not redesigning the people. And so, data for understanding about the process is so important versus data for judgment about the people. Those are kind of the key strategies for improving data quality. So, what you want to do is perform what we call a data quality assessment where you look at a key process and then evaluate these dimensions.

Tom Burton:                 Do we have complete data? Do we have any kind of latency that is slowing down decision making? Like oh we don’t get the data from the benchmark for a quarter. And so, it’s like kind of flying a plane with a rearview mirror. It’s not really helpful to know how you did three months ago versus how you did yesterday. Third principle, accuracy. How accurate is the data being captured? The next principle is look at the transactional system value. Sometimes an underlying process, it’s not really captured, the data is not captured in a natural way. Does it save time in the process? Should we redesign the process of data capture? Maybe you’re capturing some data element in the workflow at the wrong time in the workflow. It’d be much easier to capture it on the previous day or it on a different screen.

Tom Burton:                 So, think through, where’s the value? Look at the ROI, a lot of times, where’s more ROI? So again, this idea of a system going out and acquiring a lot of different neighboring clinics or neighboring hospitals and then thinking that there’s a higher ROI in getting everybody on the same EMR versus just getting the data integrated and using the data for improvement. Where’s the bigger ROI? And I would argue 99% of the time it’s in integrating the data and using the data for insight versus standardizing on a single EMR. Data fidelity, looking at how accurate and atomic the day to detail is. If you’re looking at benchmarking data and it can’t tell you why you’re not ranked as high as somebody else, it’s really not that valuable. What you want to be able to do is get down to the atomic level detail of what is actually driving the process failures.

Tom Burton:                 And so, a lot of the benchmarking data today is not high quality. Not that it's not accurate, although there's an argument it is not accurate. A lot of people game those ranking systems. But it's not detailed enough, it's not atomic enough to actually make changes. And then look at interoperability. Look at how you can integrate data back and forth between transactional systems, between financial, clinical, operational systems. So, go through your data for a given process and evaluate these kinds of data quality categories. The next key principle here is looking at data stewards. A data steward, first of all, has to have fundamental knowledge of the process that the data is about. So that almost always disqualifies an IT person as the data steward. So, if you in your organization have IT folks as the ones granting and removing access, as the ones evaluating data quality, you're missing the boat.

Tom Burton:                 It really should be a clinical or operational person that’s asked to be a data steward. Now, they need to be technically savvy. They need to be able to evaluate how will we fix this without causing harm to patients? What is the root cause of the data quality problem? So, it’s usually the person who’s kind of the go-to person clinically for technical questions. So, it’s a combination of fundamental knowledge about the process as well as a little bit of technical skill. And their primary role is to make sure that data is being used appropriately for improvement.

Tom Burton:                 Again, this is a clinical or a business decision not a technical decision. So, these are the major areas where we think about data stewards. We have clinical processes, and we have support processes. You can think about these clinical domains, behavioral health, cardiovascular community care, et cetera. And you can think about support services such as different care unit, different ancillary support areas. Or nonclinical things like patient experience or financial or laundry. All of these health processes, and they have domain experts embedded in them. Those are the types of people you’ll want to choose as data stewards.

Tom Burton:                 So, the next big question is when is the right time to improve the data quality? And here's where we have a lot of organizations make the mistake of, well, we can't give data access to anybody until it's perfect. The actual right time to fix data quality is not all at once, not before you give access to anybody, but actually fix it while you're working on the clinical or operational improvement. Then you have folks motivated to improve the data quality because they're looking for data, they're looking for clinical or operational variation. They're trying to fix that, and it's very natural at the same time to fix the data quality problems. Don't wait till all the data is perfect, fix it as you work on clinical improvement or operational improvement. And then recognize that data doesn't have to be perfect to use it for improvement.

Tom Burton:                 It just needs to be directionally correct; it needs to give you insight into what to do better. Let's return back to our core principles. I do want to just highlight this last category, I think it was the third or fourth one; I want to just highlight these five rights. And I won't go through the whole thing, but I think I want to just show the importance of a couple of concepts in this last section. Then we'll jump to the final section. So, the principle here is the five rights of information delivery. You want to get the right information to the right audience at the right granularity, at the right time in the workflow, that's that closed loop concept, in the right modality or visualization so that it produces the right action to improve. We don't have time to go into the detail, I have slides on each of those detailed areas.

Tom Burton:                 But this is a really core concept, and oftentimes we get multiple of these wrong. We don’t present the right information or it’s not at the right granularity or it’s disconnected from the workflow. And so, getting these five rights of information delivery is absolutely critical. So, let’s bounce back a couple of slides, and we’re going to dive now into the detailed kind of-

Sarah Stokes:                Sorry, which slide did you want to go to? Core principles or you want to just keep going through?

Tom Burton:                 Let's go back up there. We'll go back to this. I think for time's sake, we've got about 30 more minutes, is that right?

Sarah Stokes:                Correct.

Tom Burton:                 That's all that we have time for on the core principles, we're going to now jump to the advanced principles. And I'm going to briefly explain some of these advanced principles. And then we'll do poll question number three. So, for the advanced principles, first is a governance framework. And so how do you use a chief analytics officer to coordinate improvement governance and data governance? And these are two very different things. And often people just say governance and they confuse the two. So, we'll talk about the governance framework and why that really matters. The next is capturing the right data, we talked about this a little bit already, but how do you capture the data that you need to manage your process in the most efficient way?

Tom Burton:                 The next is getting even more deliberate about what to prioritize first in your data integration journey? And then finally, how do you streamline granting access to people so they can have access to the data they need? So those are the four principles, kind of a governance framework, capturing the right data, prioritizing your data integration, and then granting access in a streamlined way. So, let’s go ahead and open the poll question. Those four principles again, choose your own adventure. Which two of those four would you like to go into a lot more detail on?

Sarah Stokes:                Oh, I think we have a clear leader in this one. We do encourage you to submit those votes, they are driving Tom’s discussion today. And I do want to remind everyone this is a 90-minute webinar today. Obviously, Tom has a ton of content here to cover, and he’s even cutting it short at 90 minutes. So, I just wanted to remind you of that as we near the top of the hour. All right. And the votes are slowing down. I’m going to go ahead and close that poll and share our results.

Tom Burton:                 Okay. It looks like the clear winner is the governance framework. It has been very close between the others, so maybe I'll try to cover both capture the right data and integrate. Those are slightly shorter sections. So, if we have time, we'll do capture the right data first. And then if we have time, we'll do integrate. So, diving into-

Sarah Stokes:                Sorry, let me just close that poll quick.

Tom Burton:                 So, diving back into the governance framework, which was clearly the most popular in this section. So, the first key here is getting a chief analytics officer. This is not your CIO; this is a chief analytics officer. And their major role in the governance framework is to connect improvement governance and lead data governance. So, here’s the core concept. Data governance should never be done outside the context of improvement governance. Now, we often talk about the three core things required for improvement to actually happen. You need to have an understanding of what you should be doing or best practice. You need to know how you’re doing, that’s the analytics, that’s capturing the data required to really evaluate the process and look for opportunities.

Tom Burton:                 And then you've got to know how are we going to promote adoption of a better way of doing it or a better process. So, when you have best practice, adoption, and analytics all working together, that's where you see significant improvement. So, you should never work on data governance issues outside the context of trying to improve an actual process. That process might be reducing the cost of the care. It might be improving the quality of care; it might be improving the clinician or patient experience. Or it could be a combination of those things. If you organize data governance outside that context, you will likely stall. You'll get halfway into it and people will lose interest and you'll have nobody showing up to your data governance meetings.

Tom Burton:                 So, the real key is always to do data governance within the context of improvement governance. You can think about improvement governance as first and foremost, standardizing your framework around your improvement methodology. You may have five or six different improvement methods, maybe it's Six Sigma, maybe it's Lean, maybe it's IHI, PDSA-type stuff, Toyota production systems. We've seen them all. They all kind of stem back to Deming. One of the key jobs of your chief analytics officer is to help standardize what improvement methodology you are going to use because it dramatically impacts how you do data governance. It's not important which one you choose, but simply that you have a standard for your organization so that your technical staff, your analytic staff isn't constantly having to switch context between different improvement methodologies. So that's a first real key role of the chief analytics officer.

Tom Burton:                 The next key concept, and this is again, we actually created kind of like a Rosetta Stone to translate between the different methods. They all kind of come down to seven key questions that are shown in this methodology, our improvement methodology. We recommend you use whichever one you like, but make sure you answer the seven key questions, which are things like identifying the root cause of the problem, measuring whether the change is an actual improvement, things like that. And here are the questions that we think map to all of those. So, do we understand the problem? Do we know where we want to be clinically or operationally? Do we know what the root cause is? Do we know what we should change? Have we applied and measured the change? Is the change an improvement? Are we sustaining the gains?

Tom Burton:                 So, every one of the methodologies out there today has some form of these seven key questions. And so, if you can answer those questions, that’s going to massively help understand what data you’re trying to govern and what data matters the most for the improvement actually happen. So, the structure that we recommend is this structure. So, at the very top, you have an improvement executive leadership team. They are determining clinically, operationally what are the top priorities? Should we work on heart failure? Should we work on readmissions? Should we work on our ischemic heart disease, length of stay? What are the most important things to work on? And they’re using data from that key process analysis to prioritize the clinical or operational areas to go after. Now, once they’ve selected a key set of areas to go after, we would establish three teams within each of those domains.

Tom Burton:                 The first is a domain guidance team. This would be something like the women's and children's guidance team, or maybe the supply chain guidance team or the ICU guidance team. So, once you establish that particular domain area, that domain guidance team will pick priority areas within that domain to improve. Then you have a small work group that's going to innovate. They're going to come up with a better process. And the larger team that's going to implement that innovation. Now, this is all connected to a data governance committee. But again, it's in the context of improving a process. So as that data governance committee gets their marching orders from the key problems and challenges within the process they're trying to improve, that will turn into data quality improvements, that will turn into data access improvements and data literacy training opportunities. But it's always in the context of these work groups and improvement teams that are working on specific initiatives.

Tom Burton:                 So, one of the key challenges with the chief analytics officer, the first challenge is you don’t have one. The second challenge is they don’t really know what they should be doing. So, this is a job definition for a chief analytics officer. So, the purpose of the position is to develop and standardize the analytics infrastructure and really deliver insights that support improvement. So that’s the core of the job description. Now, the skills they’ll need, they need good data capture quality skills. They need to know how to integrate data. They need to be able to train on data literacy. They need to be able to tell a story with the data, and they need to really be able to lead. They’re going to chair the data governance committee. They’ve got to have knowledge about healthcare, data, health care operations, how business and finance works, budgeting, how all that sort of stuff works inside of health care.

Tom Burton:                 And they’ve got to be able to break through some of the barriers of maybe an overzealous chief security officer who thinks we should just lock all the data in a vault and not give it to anybody. So those are some of the key skills and knowledge. Their attitude is also important. They have to be a great leader, they need to really be an improvement champion that’s going to work closely with the chief clinical officers, the chief operating officers to really make a difference in the processes we’re trying to improve. So those are just some core concepts on how to set up governance. The key thing here is you need a chief analytics officer. You want to establish data governance inside the context of improvement governance. And then you really want to emphasize improvement as the goal not data quality or data literacy or data access as the goal.

Tom Burton:                 Those are all means to an end, which is improvement. All right, let’s go back to the other two. I took too long on that. We probably only have time for one more.

Sarah Stokes:                The next one was capture the right data, just 1% higher than the other.

Tom Burton:                 Oh, 1% higher. One of the keys here is looking at what data you should capture. And the idea is to go through these seven steps to figure out what data you need to best manage that clinical or operational process. So, the first is to lay out a conceptual map. This could be a value stream map; it could be a process flow map. But lay out, what's the ideal way this process should work? Then identify, how would I know that this process was working? So, you'll generate a list of metrics or measures that will help you understand what you should really be measuring. They could be outcome metrics, they could be process metrics, they could be balance metrics.

Tom Burton:                 Then you're going to say, to get those metrics, what data elements do I need? So, you may not have all the data elements you need, but list them all out. And then look at what do I have today versus what I'd love to have. So, some of it may be captured in your EMR, some of it may be in another data system. So, this is going to drive your integration strategy. Some of them may not be captured in any tool today. Maybe it's captured on paper, but it doesn't go anywhere. Maybe it's not captured at all. This is where many of our clients use a tool that we include called IDEA, the Instant Data Entry Application, where if you don't have time or money to make a change to the EMR, within about 15 minutes you can build your own application, capture data securely, and integrate that data into your data life cycle.

Tom Burton:                 So, you figure out, here’s what data we have, here’s what data we’d like to have to really manage this process well. You figure out what additional data you need to capture. Then you design the technical infrastructure. So what data marts are we going to build? What subject area data marts? How are these measures going to be integrated into the workflow? You figure all that out, then you build it and integrate it. You create the visualizations, the dashboards, the predictive models, the closed loop alerts, and you put it into place. And you test and update that data lifecycle. And then this repeats. You get better data. You get more insights. And you may repeat this process as you improve that clinical or operational area. So those are kind of the core steps. And one of the biggest challenges is capturing the right data.
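
As a rough illustration of the middle steps Tom just walked through (list the metrics, list the data elements each metric needs, compare against what is captured today), here is a minimal sketch of that gap analysis. The metric names, data elements, and source systems are hypothetical examples, not a prescribed model or Health Catalyst tooling.

```python
# Minimal sketch (hypothetical names throughout): the gap list is what drives the
# integration strategy and any new data capture (e.g., a lightweight data entry app).

metric_requirements = {
    "time_to_antibiotics_minutes": ["sepsis_recognition_time", "first_antibiotic_admin_time"],
    "three_hour_bundle_compliance": [
        "sepsis_recognition_time", "lactate_result_time",
        "fluid_start_time", "first_antibiotic_admin_time",
    ],
}

# What we capture today and where it lives (EMR, lab system, paper, or not at all)
current_capture = {
    "sepsis_recognition_time": "EMR",
    "first_antibiotic_admin_time": "EMR",
    "lactate_result_time": "lab system",
    # fluid_start_time is charted on paper today, so it never reaches the analytics platform
}

for metric, needed in metric_requirements.items():
    missing = [element for element in needed if element not in current_capture]
    sources = sorted({current_capture[e] for e in needed if e in current_capture})
    print(f"{metric}: integrate from {sources}; still need to capture {missing or 'nothing'}")
```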

Tom Burton:                 You don’t want to go to either extreme again. If you capture too much, Dr. James used to refer to this as recreational data collection. Academic medical centers are notorious for going too far on this side. Well, I might want that data sometime, so I’m going to destroy the workflow of all of the frontline staff and make them capture way more data than they need to manage the process. You also don’t want to just go with what you currently have. That’s kind of flying blind; you could be missing key data elements that would help you better understand variation and how to improve. So the ideal is somewhere in the middle, not too much, not too little. So again, what data will help you manage the process? And then the person that’s actually determining how and where and who’s going to capture that data is the data steward.

Tom Burton:                 They can ideally define the right point in the workflow to capture that data. Another key principle here is activity-based design. So, think of the activities that you’re capturing. Think about dynamically adjusting what’s captured based on the answers to prior questions. So, you may use analytics to help you determine additional data that may be captured on an ad hoc basis. And some of our friends at Intermountain have done a great job of actually creating a tool to dynamically adjust what data is captured in real time based on the workflow itself. None of the current EMRs have this capability at the same level as what Intermountain has developed. They’ve done some really interesting things there. So that’s a call out to Intermountain for being super innovative. Let’s go back to our advanced principles.
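
A minimal sketch of the activity-based, dynamic capture idea described here: the fields asked next depend on answers already given, so frontline staff only see the questions that matter for this patient. The screening questions and field names below are made-up examples, not Intermountain's actual tool.

```python
# Hypothetical example of dynamic data capture driven by prior answers.

def next_questions(answers: dict) -> list:
    """Return the follow-up fields to capture, given what has been answered so far."""
    questions = []
    if answers.get("fall_risk_screen") == "high":
        # Only high-risk patients get the detailed follow-up questions
        questions += ["mobility_aid_used", "falls_in_last_90_days"]
    if answers.get("falls_in_last_90_days", 0) > 0:
        questions += ["injury_sustained", "location_of_last_fall"]
    return questions

print(next_questions({"fall_risk_screen": "low"}))                                # []
print(next_questions({"fall_risk_screen": "high"}))                               # two follow-ups
print(next_questions({"fall_risk_screen": "high", "falls_in_last_90_days": 2}))   # four follow-ups
```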

Tom Burton:                 I think we don’t have time to go into the prioritization, so we’ll hold that for a different time. Let’s dive into our last poll question. Let me briefly describe the last four categories or principles. So, absolutely key to delivering insight is having data literacy training. It’s probably the most important thing and a thing we see many of our clients struggling with. They have lots of data, but they don’t know what to do with it. They don’t have data-literate analysts; they don’t have data-literate leadership. And so, it’s a combination of giving people the skills, the knowledge, and maybe adjusting their attitudes so that they are extremely literate with data and can make better decisions with data. So that’s the first principle. The next principle is adopting what we call a hub and spoke approach. So, this is the question of what should we centralize and what should we have distributed as far as analytics go.

Tom Burton:                 And a lot of folks struggle; they think everything should be centralized, or it’s kind of the wild, wild west and everybody’s doing their own analytic thing. So, there’s a balance there, what is the right balance? What things should be centralized, what should be distributed? The third principle is all around what kind of measures you would put in place to make sure that you’re utilizing data as a strategic asset. And then the final principle is how do you assess and prioritize data governance as it relates to those three big challenge areas: data literacy, data quality, data utilization? So again, we’re going to open up the poll here. Choose your own adventure. We probably have time to go deep on one, maybe two. So, we’ll see. What are the top priority areas? Again, the first one is data literacy and its importance at both the leadership and the analyst level. Then the hub and spoke approach, outcome and data utilization metrics and why they’re important. And then finally, assessing and prioritizing data governance issues.

Sarah Stokes:                All right, we’ve already got 44% of the audience voting. Again, if you have any questions, now’s a great time to submit them. This is our last major poll question before we hit the Q&A. And we were chuckling here, it says adopt a hug and spoke structural strategy. I like adopt hug, it’s a good theme for the day.

Tom Burton:                 It is, it’s all about getting people feeling comfortable with their data.

Sarah Stokes:                All right, we’ve got about 53% of our audience, but we’re going to go ahead and close that poll and share the results. All right, we have a clear leader again here, 34% voted for outcome and data utilization measures. The second most would be that data literacy training at 28%. And then we had assess and prioritize data governance at 22%. And last place was that hug, hub and spoke.

Tom Burton:                 All right. We’re going to try to get through both outcome and data utilization measures and data literacy. Let’s dive into data utilization measures first. So, the principle here is you want to measure cost, quality, experience, and outcomes in conjunction with utilization. So, data utilization is a great surrogate for, are we using data in our everyday decisions? And if you have low utilization of data or low utilization of analytic applications, that could be an indicator that people are just kind of doing it the way they’ve always done it. So, one way to think about this is in these three buckets, and I mentioned these earlier, but what are the outcome metrics? These are the actual results. What are the process metrics, and what are the balance metrics? And you want to think about all three. So, at a high level, the outcome metrics would be things like the cost of the care, the clinical quality of the care, the real results you’re aiming to improve.

Tom Burton:                 This could be a mortality rate; it could be a readmission rate. It could be how often we’re intervening early in the process in the ambulatory setting to prevent complications or prevent a disease like diabetes from getting out of control. So, think about those; those are the most important metrics. Then think about the process metrics. These are the intermediate measures that contribute to a successful outcome measure. So, for example, for sepsis, it may be measuring the three-hour bundle and the sub-components of the three-hour bundle. How fast are we getting the antibiotics on board? How fast are we getting the fluids on board? How fast is the lactate being resulted? So those process measures can really help you pinpoint the root cause of the challenges in a particular improvement initiative.
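
To make the process-metric idea concrete, here is a minimal sketch that computes the sepsis bundle sub-component timings from event timestamps. The timestamps are invented, and the 180-minute window is used only to illustrate the three-hour bundle, not as clinical guidance.

```python
# Illustrative process metrics for one hypothetical sepsis case.

from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

def minutes_between(start: str, end: str) -> float:
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 60

case = {
    "sepsis_recognized": "2024-03-01 10:05",
    "antibiotics_given": "2024-03-01 10:50",
    "fluids_started": "2024-03-01 11:10",
    "lactate_resulted": "2024-03-01 11:40",
}

timings = {
    "minutes_to_antibiotics": minutes_between(case["sepsis_recognized"], case["antibiotics_given"]),
    "minutes_to_fluids": minutes_between(case["sepsis_recognized"], case["fluids_started"]),
    "minutes_to_lactate": minutes_between(case["sepsis_recognized"], case["lactate_resulted"]),
}

# A case counts as bundle-compliant here only if every sub-component lands inside the 3-hour window.
bundle_met = all(minutes <= 180 for minutes in timings.values())
print(timings, "three_hour_bundle_met:", bundle_met)
```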

Tom Burton:                 Then the balance metrics are things you don’t want to hurt as you improve something else. So, we may not want to negatively impact patient satisfaction. So, if you’re trying to increase the throughput in a particular department or reduce the length of stay, maybe you also want to make sure that patient satisfaction isn’t suffering, that people don’t feel like cattle getting herded through a process. So, balance metrics help you make sure that when you improve something you’re not hurting something else. There are different ways of delivering data to make sure that it gets utilized. So, one method is executive dashboards. These are used for strategic decision making. These roll up to a senior leadership level, and maybe they’re being used on a regular basis for deciding where to invest, deciding where to cut, and deciding where the problem areas are.

Tom Burton:                 Accelerators, we call them analytic accelerators, help to pinpoint root causes of variation. They’re more discovery, exploratory tools. And then closed loop software, this is where we push information or insight right back into the workflow. You’re 15 times more likely to leverage an analytic if it’s embedded in a workflow versus if it’s just something reviewed in a monthly improvement meeting. So, think about whether you’re using the data in the right way to promote strategic decisions, discovery or root cause analysis, as well as in-the-moment care decisions. I love what Sister Krauss said, she was a great leader in health care. And she said, no margin, no mission. So, one of the other key factors as you think about funding improvement work is not just focusing on the clinical challenges but also finding a way to fund continued improvement initiatives.

Tom Burton:                 And so, as you think about spreading your improvement initiatives over the spectrum like we talked about earlier, you don’t want to have everything high effort. You don’t want to have everything light effort and not go after some of the challenging problems. You also don’t want to just look at clinical or financial as the only improvement; you actually want to have a good mix. So, some of the light touch, high dollar value improvements can help fund the more challenging clinical improvements, which are going to cost more from an analyst and an analytics engineering perspective but may have a huge clinical impact. So, make sure you’re evaluating all three types of value and you have a good spread across all these areas. Now, one of the things to consider is this waste framework. Dr. James and Greg Polson at Intermountain came up with this framework to think about the different categories of waste.

Tom Burton:                 About 5% of the waste is inefficiency, meaning the cost per unit of care is higher than it needs to be. That represents about 5% of our trillion-dollar waste problem in healthcare. The second category is variation. It’s really within-case utilization, and it represents about 50%. And then case-rate utilization, or improving health, population health as it’s sometimes referred to, represents about 45% of the waste. Now, here’s the interesting, hard thing. You could make the right clinical decision for the patient, and if you’re primarily fee for service, it would actually hurt your bottom line. So, you can’t do improvement initiatives inside a vacuum. You’ve got to be partnered with the financial team thinking about these different ways to improve. So, an example, inappropriate cases. Let’s say I’m in fee for service and I’m working on lower back pain, and we determine that a lot of patients who are getting spine surgery really don’t need it.

Tom Burton:                 They could be doing physical therapy and actually having better clinical outcomes because of complication rates, et cetera. So, we say, great, we’re going to improve this. And we go and improve it, and now we’ve cut our volume of very profitable, high margin spine surgeries by 20%. Well, the CFO is going to be furious. What we should’ve done is think through this chart: what type of improvement are we making? And if we’re making certain types of improvement, we need to partner with finance and the population health team and go renegotiate contracts with payers. So, these are the categories of things we might standardize and improve. Who should get the care? These are utilization knowledge assets such as diagnostic algorithms or indications for intervention, indications for referral. Then, what care should they get? These are shown in dark blue. This is kind of adjusting the order set, maybe shifting from the expensive drug to the generic drug and changing an order set like that Texas Children’s example.

Tom Burton:                 And then the last two categories are how can we deliver the care efficiently, or how can the whole process be run efficiently? Well, they map to this kind of a chart that shows, if you are in fee for service, the only ones you can work on and really have a positive impact are shown in green here, and it’s mostly in that efficiency category. If you shift to value-based care and you’re fully at risk, then it doesn’t matter which category the improvement happens in. You could improve anything and it’s going to have a positive impact on the bottom line. So, a very important consideration. Work with your CFO, make sure you’ve thought through what are the hard cost savings, what are the shared savings, how are we going to renegotiate with payers. I think we have time if I cruise very quickly through the last principle, which is data literacy training.
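
A rough, made-up model of the point Tom is making about payment models: the same clinical improvement, cutting unnecessary spine surgeries by 20%, hurts the bottom line under fee for service but helps it when the system is fully at risk. Every volume and dollar figure below is invented purely for illustration.

```python
# Hypothetical financial impact of one clinical improvement under two payment models.

surgeries_per_year = 200
avoided_cases = int(surgeries_per_year * 0.20)   # the 20% of cases shifted to physical therapy
margin_per_surgery = 15_000                      # assumed contribution margin per surgery (fee for service)
cost_per_surgery = 40_000                        # assumed fully loaded cost of each avoided case

# Fee for service: every avoided surgery is lost margin for the delivery system
fee_for_service_impact = -avoided_cases * margin_per_surgery

# Fully at risk (capitated): the system keeps the cost it no longer incurs
fully_at_risk_impact = avoided_cases * cost_per_surgery

print(f"Avoided cases: {avoided_cases}")
print(f"Fee-for-service bottom-line impact: ${fee_for_service_impact:,}")
print(f"Fully-at-risk bottom-line impact:   ${fully_at_risk_impact:,}")
```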

Tom Burton:                 So, the main idea here is you’ve got to get the right mix of resources for your improvement initiatives. And that may be data scientists, analysts, analytics engineers, data stewards, report writers, super users. And these are scattered throughout your organization. And you want to elevate their skills, knowledge, and attitudes, and the leadership of the organization also needs to understand the difference between signal and noise, understand how to read charts and understand variation. So really what you want to do is move up this analytics complexity framework from reactive to descriptive to prescriptive, and it requires a different set of skillsets. If you’re looking at your analytics team, what’s required to do reactive versus descriptive versus prescriptive is significantly more. And what you may find is, first of all, you may have skills where you’re more than capable of doing kind of reactive analytics.

Tom Burton:                 These would be report writer type activities. And so, you’ve got capacity to do that, but you’ve got demand that’s beyond what you’re currently doing, and you’ve got a skill gap in the higher two categories. And so, what you’ll want to do is design a training program that will help move them to be able to do some of the higher-level analytics. Again, the concept here is help people understand you’re not trying to punish the outliers with data; you’re trying to understand the process and improve. I’m going very quickly because we’re almost out of time. You can think about control charts. Many organizations don’t really understand the upper and lower control limits, how to read a control chart, how to not overreact to noise, and how to see signal in the data.

Tom Burton:                 And there’s a number of folks that might overreact to noise that’s within the control limits and they would say, “Man, something is wrong, we’re seeing this go up or down,” when it’s really still within the control limits. So, these are some rules that I believe all senior leaders at an organization should know about how to read control charts. And if your organization doesn’t know these rules and your senior leaders don’t know these rules, if they’re using things like this year to date versus last year to date to compare and they’re overreacting, you have a data literacy problem. And you need to help them understand the difference between special cause, noise versus signal within the data.

Tom Burton:                 All right, so think about the competency categories. There’s knowledge, there’s skills, there’s attitude. And think about the key roles. You have analytic roles, technical roles, financial roles, domain and senior leadership roles. And what you’re really trying to do is upgrade them, from a report writer to an analytics engineer or a data scientist, from an EMR upgrader to a data capture designer for better data quality. Financially, you may need to shift from simple RVUs to activity-based costing, and so forth. Well, we’re out of time. I don’t have time to tell that story. I’m going to just cruise through a bunch of slides that we don’t have time to cover. This is a teaser; there’s more to learn here, but for time’s sake, we’ll skip to the end. But there’s a lot that can be done in data literacy training.

Tom Burton:                 To just go back to our conclusion, and then we’ll open up the last few minutes for questions. We talked about the data lifecycle: capturing, integrating, granting access, delivering insight, and then using that for action. We talked about the four E’s: elevating data as a strategic asset, establishing the structure, executing and improving, and sustaining the gains. We talked about how data governance really needs to sit inside the realm of improvement governance, that you don’t ever want to do it outside that context. We talked about thinking through the questions around the data life cycle to help identify data governance initiatives and data quality challenges.

Tom Burton:                 And then finally we talked about how important it is to have a chief analytics officer, and the three problems of data literacy, data quality, and data utilization. We talked about fixing data at the source, applying data beyond the EMR. We talked about thinking through scalable, flexible structures. Not the enterprise data model, not the wild, wild west of the visualization vendors, but a balanced approach, a hybrid approach in the middle. Thinking through how to prioritize. We didn’t get to talk about strategies for granting broad access and streamlining those access processes. We talked about delivering insights, and briefly talked about the five rights of data delivery.

Tom Burton:                 Hopefully you got some takeaways: you understand the data life cycle a little better, you understand how important data quality, data utilization, and data literacy are, and you thought through that data governance framework we shared. So, back to the final slide of the why. If you can really elevate data as a strategic asset, it can significantly improve your decision making as an organization and lead to massive improvement. That is the real goal of data governance: to enable massive improvement in the health of our patients, the cost of the care we’re delivering, and the experience for both patients and clinicians. All right. I will stop there, and I think we have one final poll question, then we’ll open it up for questions from the audience.

Sarah Stokes:                We sure do. So, in this closing poll question, we’d like to know if you would like to be considered for a free onsite educational session with Tom Burton and 50 of your top leaders, going through everything that Tom discussed today and including all those categories that we kind of skipped over. Do you have any more thoughts to share on that, Tom?

Tom Burton:                 Yeah. So, this is our interactive version. The web version just hit the principles; the interactive session would be playing the game and using the game as an analogy for really deeply understanding these principles. And we would require you to get your top 50 leaders together, because that’s the only way you’re really going to get change in your organization, and then playing this game, which will feel a little awkward to them. But I guarantee you it will significantly enhance their understanding of how data governance and improvement governance fit together, and of the challenges they’ve got to overcome to really help lead the organization.

Sarah Stokes:                All right. We’re going to leave that open for just a minute while we dive into the Q&A. Do you have a couple of minutes to stay over to answer these questions?

Tom Burton:                 I sure do, absolutely.

Sarah Stokes:                Great, so we will just dive in here starting with this question from Karen, who asked, how do you identify the best personnel to build the DOS and data lake?

Tom Burton:                 Yeah, that’s a great question, Karen. Thank you. Really, we recommend doing an assessment of the skills of your current technical team. They need to have a combination of SQL skills, data modeling skills, and telling a story with data. A lot of this can be automated, and usually it’s not automated when an organization starts working on this. So, there are actually assessments that can be done to say, where are the skill gaps? That would be my first step, doing kind of an analytics engineering assessment and looking for the key skills. And our Health Catalyst University has some assessment tools that are useful in making sure you’ve got the right team identified that can really help build out that analytical infrastructure. Great question.

Sarah Stokes:                All right. This next one comes from RT asking, a lot of your recommendations apply to organizations with a clinical team leading improvement governance. What would you recommend for a healthcare analytics vendor who doesn’t yet have a clinical team?

Tom Burton:                 Find the clinical team. This is always best done if you’ve got clinicians or operational leaders at the center. If you try to lead with a technical or a cost approach, it typically falls short of its potential. Back to Simon Sinek, people connect with the why. If you can share the why of data governance, that we’re doing this to improve care for patients, they’re much more likely to work on data quality than if you’re coming from a purely technical angle: “Hey, 38% of the time this field is left blank, we really should improve that.” If you don’t connect them to the why, they’re going to say, “Don’t bother me, I’ve got clinical work to do.” So that is really key. I appreciate the question because it just highlights how important involving operational and clinical leaders in any kind of data governance process is. Good question.

Sarah Stokes:                All right. Next question comes from Christina who asks, when it comes to literacy, how do you ID deficiencies in current staff and leaders in a gentle but helpful way?

Tom Burton:                 Yes. Gentle and helpful is the key. I like to do it with these kinds of interactive games where they have their own personal aha experience. I had two of the C-suite leaders of an organization go through one of my games. And they came through it and they were the ones that said, “You know what, we’re not doing this. This process here is broken.” Now, I could have just said, “Guys, we’ve been working with you for six months and this, and this, and this are broken, and you’ve got to fix it.” And I could have been super blunt. But the discovery of them understanding on their own and having that aha moment, man, they’ve gone from a client producing maybe five or six improvements a year to 28 improvements last quarter. And it’s because the leaders came to that conclusion on their own.

Tom Burton:                 I find the interactive game is the best way to do that. Another great way, if you can’t do the interactive game, is to just ask questions, and they’re kind of leading-the-witness questions a little bit. But if you do it in a spirit of, I truly want to understand how we’re going to improve this, or what our options are to make this thing better, and you ask those questions, questions are a great way to lead people gently to their own conclusions. Again, you want them to have their own aha moment, not to just say, “Hey, you really stink at this, and you should do this because it’s a better way of doing it.” Help them understand. Maybe present three options and ask them to evaluate which one they think is the best option.

Sarah Stokes:                All right, great. Next question comes from Dee who says, with pharmacies being a huge source of data, how do you see pharmacists playing a role, I guess, in this governance process?

Tom Burton:                 Absolutely. So again, if you organize around the process, then you’re going to include the data from any part of that process. So, take a process like sepsis. Sepsis has an element where certain drugs are ordered. And so, there’s a pharmacy part of the process, and there’s a pharmacy part of the workflow. And so, whenever you’re looking at the process, looking at problems with the process, you want to engage people with frontline knowledge. So that may be: you know what, we’re understaffed at a particular time of the week, and that’s why we’re failing to get the antibiotics on board quickly enough; during those busy times we’re understaffed in the pharmacy. Well, engaging the pharmacy in those discussions is going to surface those insights a lot faster. So, think about it from a process standpoint. As you lay out a process, you ask, who are the key owners of the steps in the process? That’s when you pull in the right frontline stakeholders to help redesign and improve the process, which then leads to, what data do we need to understand it? So that’s the key.

Sarah Stokes:                All right. This next question, we don’t have a name, but they ask, basically, how do you sell governance and its importance to outside entities such as partners, for example, in population health use cases?

Tom Burton:                 Yeah, that’s a good question. This is a tough one. I don’t know that I have a perfect answer for this, because it is hard when you are dealing with affiliated physicians or a much broader audience that needs to be engaged and you don’t ‘control’ the audience and their participation. I’ve found that the best way to do it is to connect to what they believe in. So, find out what jobs they are trying to get done and why they care about them. And then, can we resolve some of the pains that they are currently experiencing? Can we show them how participating, connecting, and engaging on those topics could help them relieve some of their current pains and help them get their jobs done faster? And so that’s the best way. It’s not perfect, but try to connect with the why and understand the jobs they’re trying to get done and the pains they’re trying to deal with. If you can help relieve some of those pains, they’re much more likely to engage. Hopefully that was helpful.

Sarah Stokes:                All right, we have one final question here from Simon, who says there are plenty of added values and proven successes in using analytics. Why is healthcare still slow in embracing data science? So just an opinion question to end your session on.

Tom Burton:                 That’s a great question. So, healthcare tends to lag other industries. I really believe it’s because healthcare is so complex and we’re very disconnected. I showed that one slide where you can see our payment methodologies are so disconnected from our improvement. You have payment going from an employer to a payer to a care delivery system and sometimes to the physicians, and the patients don’t even understand the cost of what they’re ‘buying’. We’re very complex and disconnected, and so that makes it much more difficult. Analytics absolutely can help, but there are multiple transformations we have to go through before those analytics become valuable.

Tom Burton:                 We have to go through a fee for service to fee for value transformation. We have to go from RCC methods to activity-based costing, a cost transformation. We have to shift from thinking the EMR is the center of the universe to putting data at the center. Those are all pretty significant, disruptive changes we need to go through. And because there are so many of them in healthcare, I think that’s why analytics has taken off so much faster in other industries. It’s the complexities of healthcare that make it more challenging, but it’s worth doing. So, hang in there, keep at it, because it’s worth doing. It can really make a difference.

Sarah Stokes:                All right. That’s going to wrap up the Q&A.