Getting to the Wrong Answer Faster with Your Analytics: Shifting to a Better Use of AI in Healthcare


Jason Jones:                  The title may seem a bit odd, Getting to the Wrong Answer Faster with Your Analytics. Usually, we don’t want to get to the wrong answer faster. But I’m hoping that you find this engaging and come away understanding what kinds of things might count as a wrong answer. If we skip to the learning objectives, the first is to see how it is that data can help us draw both the right and the wrong conclusions. My hope, to borrow an idea I heard from Maureen Bisognano at the Institute for Healthcare Improvement, is that you can walk away with two things that you can do next Tuesday to help your own organizations arrive at the right answers as quickly as possible. We’ll go through a couple of cases of how it is that we can drive better analytics.

Jason Jones:                  The first thing is, and I hope that you can use your imagination for this, put yourself in the circumstance where you’re moving to a new place. Let’s say a state or a small country. Your goal is to buy a house. The problem is that you’ve heard some reports that some of the regions in the state may have more cancers than others. You definitely don’t want to buy a house in an area that has more cancer. You wouldn’t want to put your family or yourself at that kind of risk. The good thing, if there is a good thing about cancer, is that it’s a reportable disease. You were able to download for yourself all of the data for every cancer that has been reported in the state that you’re going to be moving to.

Jason Jones:                  On the right hand side, what you can see is the latitude and longitude and the region for each one of those reported cancers. For instance, the first four cancers reported there each happened in region A01. Then there are two cancers from A02, two cancers from A03, three cancers from A04, and so on and so forth. The list goes on and on. In total, there have been 400 cancers that have been reported. Again, your goal is to figure out, can you use these data to help you decide where you should buy or not buy.

Jason Jones:                  As we go through this example, what’s important is that for this learning objective, for this experience, we’re never going to add any more data than you’re looking at on the screen right now. There are more rows as I mentioned. We’re only looking at the top 19 and there were 400 cancers. But you’re never going to have to worry about, “Oh, are there genetic differences or other underlying population differences or different population densities or anything like that?” The goal here is not to trick you, not to ask you to do some kind of advanced analytics. It’s just to go through how it is that we might best represent the data for you to drive a conclusion about where it is that you should or shouldn’t buy a house. Okay?

Jason Jones:                  Again, this is one way of representing all of the data that we have. We know that, as human beings, we can see things much more quickly when they’re visualized as a picture than we can as raw numbers. Here, we’re looking at exactly the same data as before. You may recognize, for instance, if you look in the lower left hand corner, that’s the A1 region that we saw at the top of the table before. You can see it goes out to J10. We have 400 cancers in a hundred different regions of the state that you’re moving to. Hopefully, you would agree that you could more easily digest the information in this picture than looking at the table that we were presented with a moment before.

Jason Jones:                  Sarah, I’m wondering, does anything just pop out to you already where you’d say, “Oh, I don’t like the looks of that region?” Or, “I do like the looks of this one?”

Sarah Stokes:                There’s a couple of blocks. Right? Maybe down by that B3, B2 area. There’s a cluster there. Or C7 also looks a little nasty.

Jason Jones:                  Okay. Those are areas where you’d say, “Oh, I’m not sure I want to buy something there?”

Sarah Stokes:                Right.

Jason Jones:                  Is there anything that just pops out where you say, “Oh, there’s an open space. It doesn’t look like there’s any cancer there?”

Sarah Stokes:                Maybe H7 looks okay. A9 also looks okay.

Jason Jones:                  Great. Okay. Hopefully, if you’re able to see this on the webinar… It’s going to be a little bit tough if you’re only on the audio part. But hopefully, you can start to draw some of your own conclusions from this. Of course, we can ask the computer to do a bit more for us. For instance, we can say, “Let’s add some grids and then some colors to this.” What we’re looking at right now, again, we haven’t changed the data at all. But what we’ve added here is we’ve highlighted those regions that have more than two times the typical number of cancers. Anything where you see an orange triangle represents having at least double the four cancers that we observe on average. The red one is the region that has the single highest cancer rate.

Jason Jones:                  We can also have the computer highlight for us where it is that there are no cancers at all. Now, the green circles show us, and Sarah, you were spot on, that H7 region, for instance, has no cancers at all. We can see there’s another one, A9, where there are no cancers. We can use the computer and visualization techniques to help us better be able to see what’s going on with the data. That can really help us draw conclusions.
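
For anyone who wants to try this counting-and-flagging step on their own extract, here is a minimal sketch in Python. The region labels, the random stand-in data, and the thresholds simply mirror the description above; in practice you would load your own download of the reported cancers instead.

    import numpy as np
    import pandas as pd

    # Stand-in for the real download: 400 reported cancers assigned to a 10 x 10 grid of
    # regions labelled A1..J10. In practice you would read your own extract instead, e.g.
    # cancers = pd.read_csv("reported_cancers.csv").
    rng = np.random.default_rng(42)
    all_regions = [f"{row}{col}" for row in "ABCDEFGHIJ" for col in range(1, 11)]
    cancers = pd.DataFrame({"region": rng.choice(all_regions, size=400)})

    # Count cancers per region; regions with zero cases are kept by reindexing.
    counts = cancers["region"].value_counts().reindex(all_regions, fill_value=0)
    mean_count = counts.mean()               # 400 cancers / 100 regions = 4 on average

    high = counts[counts >= 2 * mean_count]  # the orange triangles: at least double the average
    worst = counts.idxmax()                  # the red marker: the single highest region
    empty = counts[counts == 0]              # the green circles: no cancers at all

    print("at least 2x average:", high.sort_values(ascending=False).to_dict())
    print("highest region:", worst, int(counts[worst]))
    print("zero-cancer regions:", list(empty.index))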

Jason Jones:                  Sarah, when you look at this, would you change your answers once the computer has helped you identify where the cancers are?

Sarah Stokes:                I mean, I think the good areas are still good. But I guess there’s a little more info on the “bad areas.” Right? The red, I definitely wouldn’t want to be in the red.

Jason Jones:                  Mm-hmm (affirmative). Yeah. It might have been a little hard to see that. I mean, you actually did pick up on it, but those cancers are almost riding on top of each other. It’s almost like there’s a big apartment building there or something, or condos or something. It might have been harder to spot for some people. But again, the computer can help us visualize. Just before going to the next slide, if you can get your computer or if you have a smartphone or something handy, we’re going to ask you as the first poll question to go ahead. At the bottom, you can see that there’s a URL. If you can go to that URL, you will see the picture that’s on the right hand side of your screen. I’ll read it for you. You don’t need the HTTPS probably, but the buyingahouse, all one word. Buyingahouse.azurewebsites.net.

Jason Jones:                  Again, you’ll be presented with the screen that you see here. If you’re on a smartphone, you can just tap where you want to buy or not buy a house. If you’re on a computer, you can click. It’ll ask you to confirm your responses. We’ll just give you a couple minutes to put in your information. Just to test it, I myself am going to go into the app here quickly and see how long it takes me so I don’t rush you too much.

Sarah Stokes:                Great. I guess while we have some downtime, I’ll remind everyone, if you joined late, we are recording today’s session. You will have access to the recording and the slides after the fact. We’ll get those posted by about midday tomorrow and send you an email letting you know those are available. We are holding a Q&A at the conclusion of today’s session. If questions come to mind, be sure to submit those. Jason will get to them at the end of today’s presentation.

Jason Jones:                  Great. We’re getting some responses, but not quite as many as I would hope. Again, if you have a moment to go in and go to the URL… We’re not tracking. I should have said we’re not tracking anything about you. It’s an open website trying to see what kinds of patterns we see with the audience here. It looks like we’re starting to see a pattern. We’ll go forward now. But again, you’re welcome to enter your responses at any time. We’ll go forward and click on the results, if I can make the mouse work. Where’s the mouse?

Sarah Stokes:                I can see it here.

Jason Jones:                  Okay. I just can’t see it on my screen. I’ll go ahead and click.

Sarah Stokes:                Nope. Nope.

Jason Jones:                  Nope.

Sarah Stokes:                Let’s see. Let me try. Sometimes you have to wiggle it. There we go. You just got to get a little quick with it.

Jason Jones:                  Okay. There we go. We’re looking at the same screen that you might have seen. But now, what we have the opportunity to do and you can see those yourself if you went through the app is we can see, based on your responses, where did people buy? That’s shaded by region. We had the most responses in this H7. The next most responses for where to buy a house, in this A9. I wish we could ask all of you why you elected to choose what you chose. I’ll ask Sarah as a proxy for all of you. Sarah, why might you be inclined to buy in H7 as opposed to A9?

Sarah Stokes:                I would say probably because of the proximity to that A10.

Jason Jones:                  Yeah. It’s a little scary. What if cancer really is infectious and some of those people start to lurk over?

Sarah Stokes:                You’re definitely closer then. Yep. Mm-hmm (affirmative).

Jason Jones:                  Okay. Then if we go to the not buying a house, where people are not buying a house. It’s interesting. B2 is now jumping off the map. I’ll just have to tell you historically that if you don’t highlight those red dots, which we can do when we’re in a more intimate setting, and ask people to step through different phases of what information we show, people tend to be more inclined to pick up on the C7 and the D10 I think largely because it’s just easier to see that there are a lot of points there.

Sarah Stokes:                A little more spread between the points.

Jason Jones:                  Yeah. Based on this, it seems as though most of you are agreeing with Sarah as your representative about where it is that you would and wouldn’t buy. We’ll get out of this and go back to the presentation. Thank you so much for being willing to place your votes there. There is another way to look at these data. Before, what we were doing was looking at the geographic distribution on a grid. But the other way to look at it is with a bar chart, which you’re probably pretty familiar with. What we’re showing is how many sectors had how many counts of cancer. You can see, and we’ll make it a little bit easier to see in a moment, that there were two sectors with no cancers at all, and it looks like about five sectors that had one cancer, and so on and so forth. Let’s make it a little bit easier to see.

Jason Jones:                  Now we’re highlighting in green on the left hand side those two sectors that you all decided you wanted to buy a house in that had no cancers at all. On the far right hand side, we can now see that that one sector actually had 10 cancers. The two orange sectors each had nine. Now we know how many sectors were out there. Sarah, again, as the representative of the people here, when you look at this bar chart, does it look like any famous distribution that you learned about in your favorite class which was introductory statistics in college?

Sarah Stokes:                Oh, yeah. My favorite class. Definitely looks a little bit like a slanted bell curve.

Jason Jones:                  A bell curve. Yeah. Hopefully, that’s what it looks like to those of you who can see the picture. Again, thankfully, we have a computer to draw this in the first place. But we can also use the computer to smooth out the distribution. This is what it looks like with the computer smoothing out the distribution. Sarah, I think most people would agree with you that this does look like a bell curve. To your point, it’s slanted because we’re dealing with count data. We can’t have minus one or minus two cancers, whereas we could theoretically have a hundred cancers. That would definitely be a region to avoid.

Jason Jones:                  If we know that this is looking a little bit like a bell-shaped curve, and for those of you who really, really, really loved that intro stats class, you may remember there was something called a Poisson distribution, which is specific for count data. We can actually have the computer say, “Well, what would we have expected to see? What would we have expected the distribution of cancers to look like if it was completely random?” Meaning if it had absolutely nothing to do with geography. That’s the purple line that we’ve just overlaid on top of the blue line in the bar chart.
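
The purple overlay described above can be reproduced straight from the Poisson formula: with 400 cancers spread across 100 regions, the mean is 4 per region, so the expected number of regions with exactly k cancers is 100 times the Poisson probability of k at that mean. A minimal sketch:

    from scipy.stats import poisson

    n_regions, n_cancers = 100, 400
    mean_per_region = n_cancers / n_regions          # 4.0 cancers per region on average

    # Expected number of regions with exactly k cancers if location had nothing
    # to do with cancer risk (i.e., the "completely random" purple line).
    for k in range(0, 13):
        expected = n_regions * poisson.pmf(k, mean_per_region)
        print(f"{k:2d} cancers: expect about {expected:4.1f} regions")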

Jason Jones:                  Sarah, how close does the purple line look to the blue line to you?

Sarah Stokes:                It’s pretty close.

Jason Jones:                  It is, isn’t it?

Sarah Stokes:                Yeah.

Jason Jones:                  I mean, there are definitely some gaps there. It looks like the theoretical distribution actually expected only one region with zero cancers, and we observed two. There are some slight differences; we can see some gaps between the blue and the purple line. But that blue line, which is what we observed, looks really close to what we would have expected to see if there was no association between geography and cancer risk. It turns out that we can run an actual statistical test to help us understand that. You may or may not remember P values. The test that we ran… Whoa. Somehow we’ve missed the result of our test in the deck. I’ll just have to tell you that the P value when we run the test is 0.99. You may remember that if the P value were less than 0.05, we would say, “There’s less than a one in 20 chance that something this extreme would have happened by random chance alone.” But 0.99 is about as close to one as we can possibly get, which tells us we can probably safely conclude that what we’re observing is just due to random chance.
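
The slide with the test result went missing, and the talk doesn’t name the exact test behind the 0.99, so treat the following as just one reasonable way to get such a p-value: compare how spread out the observed per-region counts are against thousands of simulated maps in which the 400 cancers land completely at random. The observed counts here are a random stand-in; with real data you would plug in the actual counts per region.

    import numpy as np

    rng = np.random.default_rng(0)
    n_regions, n_cancers, n_sims = 100, 400, 10_000
    expected = n_cancers / n_regions                     # 4 cancers per region on average

    # Stand-in for the real data: in practice, use the observed counts per region.
    observed = np.bincount(rng.integers(0, n_regions, size=n_cancers), minlength=n_regions)

    def spread(counts):
        """Chi-square-style statistic: how far the counts stray from the expected 4."""
        return np.sum((counts - expected) ** 2 / expected)

    obs_stat = spread(observed)

    # Simulate maps where geography truly has nothing to do with cancer risk.
    sim_stats = np.empty(n_sims)
    for i in range(n_sims):
        sim = np.bincount(rng.integers(0, n_regions, size=n_cancers), minlength=n_regions)
        sim_stats[i] = spread(sim)

    # p-value: how often pure chance produces at least this much geographic clustering.
    p_value = np.mean(sim_stats >= obs_stat)
    print(f"observed spread = {obs_stat:.1f}, Monte Carlo p-value = {p_value:.2f}")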

Jason Jones:                  Sarah, again, as a representative of the people here, how might you change your answer about where it is that you might buy a house or not buy a house?

Sarah Stokes:                I mean, I guess then I’m not really limited because it doesn’t matter. I could pick anywhere?

Jason Jones:                  Yeah. You could say, “Well, I’ll pick the one that has a nice view.” Or, “I’ll pick the one that’s close to a good school district or in a good school district,” or something like that. Or whatever. I like the neighbors. You can make it on whatever you’d like, but cancer risk does not need to weigh heavily on you. What have we demonstrated here? What we’ve demonstrated, I hope, is that we can do wonderful things with data and computers. We can visualize information to help us spot things that we could not have seen before. Human beings are really good at spotting things visually. That’s how we work. But what we need to be careful about is that just like you can lie on the grassy field in this wonderful new state that you’ve moved to and see a dragon in the clouds, in fact, there isn’t a dragon in the clouds or an elephant or whatever it is that you see when you look up at the clouds on a cloudy day.

Jason Jones:                  We can make exactly those same mistakes in analyzing our data. The good thing is that we can anticipate the types of errors that people are likely to make. The rest of this discussion is going to be, how can we anticipate those errors and then bring AI to help us prevent ourselves from the kinds of routine mistakes that we might otherwise make? Now, we’ll go forward. I’d be really grateful, Sarah, if you could, as the representative of the people, ask the people.

Sarah Stokes:                Yeah, absolutely. Okay. Our first poll question here, we’d like to know, on a scale of one to five, how useful did you find this exercise? Your first option is one, not useful. Maybe you thought it was a silly example. Or it was information that you already knew. Then we have a two, then we have three, which is ambivalent. Might help someone somewhere but not me right now. Then we have a four and then five, extremely useful. You learned something that you can leverage today. We’ll give you just a moment there. The votes came in pretty quick. Everybody’s on the ball today.

Jason Jones:                  Wow. Thank you so much for responding to these questions. We’re going to get to another one in a moment that truly is going to direct the course of this session. But your responses here will direct the course of how it is that we try to explain things going forward. Thank you.

Sarah Stokes:                Okay. Close that poll. You’re good to move on.

Jason Jones:                  Okay. Oh, how do we get the results?

Sarah Stokes:                Oh, you want me to share those?

Jason Jones:                  Yes, please.

Sarah Stokes:                Oh, okay. Then I will go ahead and do that. There you go. People thought it was pretty useful. What is that? 69% said four or five.

Jason Jones:                  Okay. As a net promoter score, we’re doing okay here. Would love to hear from people who put in a one, two, or a three if you want to type in. Is there anything that we could have done to make this more useful for you? Please do let us know if you’re willing to contribute your thoughts as we go forward. Okay. Back to our presentation. Now, what we’ll look at are some of the ways that we can actually make a difference. We’re going to start with a couple of examples here. Whoops, let me click back here. Okay.

Jason Jones:                  We’re going to go through two examples. I wish we were all in a room together. I could ask, how many of you have ever seen or watched Scared Straight? It was a TV show that actually started in 1978 as a documentary. The basic idea was that you take some teenage delinquents, and you put them in prison for three hours. The whole idea, which was a wonderful idea, was that… I mean, you sort of wish people didn’t commit crimes in the first place. But if someone is going down the path of delinquency early in their life, then is there some way that we can help them change course? That was the idea.

Jason Jones:                  We put people in prison for three hours and basically scared them. The idea was, well, we can scare them into not wanting to go further down this path. The only thing is, and I think some of you may think that’s a great idea or a bad idea, but people really did think it would be a good idea. The only bummer is that we’ve now done nine rigorous studies on this. It turns out that two of those studies showed that there was no benefit at all. Seven of them showed that there was actually a net negative impact. It actually increased delinquency. We’ll see the rates in a moment.

Jason Jones:                  I’ll just point out at the bottom, if it’s a little hard to see, you will get these slides. I just want to be clear about giving credit where it’s due. I did not come up with this. There’s a wonderful book called Doing Good Better. The link to it is at the bottom. Then there’s also a blog at the bottom. You’ll get these slides. You can go to those links if you would like to. So what is the market buying, then? You can probably search for this yourself on the internet if you didn’t know. But I looked last week, and when I searched for Scared Straight TV, it turns out that you can watch this show today. More than 40 years later, if you search for this on the web, you can go watch the show. It’s still running.

Jason Jones:                  How do we think about this? I mean, this means that society is willing to pay via commercials and other things to continue watching this type of show. I won’t comment on the image that’s selected, which I’m sure was not random. Here we have the other side, which is that Cochrane Review that I mentioned. Again, the link for the Cochrane Review, if you’re interested, you can go see. I’ll read because I think the text will be too small. But highlighted there is that the intervention, meaning Scared Straight, increases the odds of offending by between 60% and 70%. Scared Straight was a perhaps reasonable hypothesis of how we might help people avoid a life of delinquency and prison. Not only did it not help, it has actually been demonstrated to hurt terribly. There aren’t that many things that we can do where we can improve something or make something worse by 60% or 70%. It’s a pretty massive effect. Yet, more than 40 years later, it is still going strong.

Jason Jones:                  But I don’t want to end on a negative note. I’d like to end on a positive note. This is one that is going to turn out to be positive and show, if you’re in the health and healthcare world, how important your contributions can be. The next one is about books and worms. What is that about? There’s a wonderful organization whose goal was to try to improve academic performance for kids in Kenya. One of the things this organization noticed when they went to look at the classrooms is that many classrooms had only one book for the entire room. How many of us would send our kids or be willing to send other people’s kids to a class where the entire class had to share one book? We’d say that’s nuts. How are the kids supposed to learn?

Jason Jones:                  The first thing they did was provide more books. As they did so, they were careful to study the impact. What they found was that providing more books didn’t help at all. In fact, in some sense, it hurt. There was an odd finding which people have theorized why it was found. But there was an odd finding where giving more books to the class, the class as a whole actually did slightly worse. But the highest performing students, the historically higher performing students did a little better. We can hypothesize why that might be the case. One of the reasons that they were pretty confident the books didn’t help is that for many of the kids, the books were in English and English was their third language.

Jason Jones:                  Great. You give them a book in a language that they struggle with. Then it’s no surprise that they’re not able to learn. Then they said, “Okay, not a problem. The other thing we’ve noticed is that the teachers are kind of hampered. They don’t have great blackboards if we’re trying to encourage group learning, which we may want to do. If English is a third language, then let’s provide flip charts to these classrooms.” Again, they studied providing flip charts, that didn’t help. Similarly, they thought, “Well, maybe we should provide more teachers.” That didn’t help either. Each of these was carefully studied. We’re really grateful that those studies were done.

Jason Jones:                  The organization got a little bit dispirited, as you might imagine. They went in with the best of intentions. Each intervention seemed so obvious, and each one didn’t work. If these things don’t work, what the heck do you do? It turned out… Again, I’m relaying a story that you can read about in this wonderful book. But it turned out that one of the leaders in the organization knew someone in the World Health Organization. That individual said, “Well, one of the main drivers for kids missing school is that they get worms and then they can’t go to school. Why don’t you do deworming?” Which I think most of us would acknowledge is pretty far away from… You’d have to be a pretty creative person to think of deworming as something that can assist test scores and academic performance. But what they found was that not only did deworming help, it decreased absenteeism in the short term by 25%, which is really impressive.

Jason Jones:                  But careful study 10 years later showed a net 20% increase in income. Just astounding the positive impact that we can have, but importantly, how do we know? We know because someone took the time to study what was working and not working carefully. They didn’t just assume that the best of their intentions coupled with good implementation and solid theory would automatically yield better results. They actually measured it. We can do wonderful things.

Jason Jones:                  Okay. Sarah.

Sarah Stokes:                All right. We are to our second poll question here. In this poll, we’d like to know when was the last time you suspect your organization drew the wrong conclusion in a preventable way? Bad data or methods. Your options are, number one, within the last week, number two, within the last month, number three, within the last quarter, number four, within the last year, and number five, it’s never happened. It’s never happened in your organization. We’ll give you just a moment here to submit your responses there. The votes are coming in pretty quickly.

Jason Jones:                  Wonderful. Thank you so much, again, for participating.

Sarah Stokes:                We’ll give you just one more quick moment. Again, I’ll remind you all that we are recording today’s session. You will have access to the slides after the fact. Okay. We’re going to go ahead and close that poll and share the results. Okay. It looks like 31% said within the last week, and that is your majority. Then 25% reported within the last month. Another 25% within the last quarter. 17% within the last year, and 2% said it’s never happened. The vast majority, over 75%, are within the last quarter or sooner.

Jason Jones:                  Wonderful. Thank you for providing that. The only way that I’ve ever been able to avoid mistakes is by erasing my memory. Some of you, I think, are more effective than me. Okay. Thank you for that. We have a hypothesis about why it is that, especially in health and healthcare, and more broadly within data and analytics, we might have found ourselves where we are today. There are many contributors. But I’d like to address one of the common images that’s out there. I want to be clear. We’ll get to some awesome things about this image as well. But this is the Gartner Analytic Ascendancy model. Again, I think it’s done a lot of good. But there are also some challenges that we encounter fairly routinely when people interpret the path, as well as the value proposition, literally.

Jason Jones:                  The first thing is that we routinely find that these basic descriptive analytics are not a solved problem in our world, especially in health and healthcare. We have all kinds of challenges collecting basic data and making sure that the right data are standing up for analysis. Until we solve that problem, we can’t easily move through the rest of the ascendancy. We need to remember that and focus due attention on the descriptive statistics.

Jason Jones:                  The second is that we have the diagnostic and predictive analytics in the wrong order. It is an order of magnitude harder to demonstrate why something happened than to predict what will happen. Again, for any of you who can remember a basic statistics or research methods class, you may remember the phrase correlation does not mean causation. In predictive analytics, all you need to know is correlation. Being able to establish causation, why something happened, is really, really, really hard. Ten times harder than prediction. We find with many of our customers that by not understanding that, we focus, again, on solving the wrong problems.

Jason Jones:                  Then the last piece is about prescriptive analytics. The headline there is, how can we make it happen? But the part that gets left off is we find that’s actually usually not an analytic problem at all. That’s a change management problem. That’s where we need to be able to support leaders and workers and others so that we can see the change that the computer or something else is suggesting that we make. It’s not actually an analytic problem. I will point out, there are many good things about this diagram. But the thing that has been particularly wonderful for me is I have learned to use the word optimization instead of evaluation.

Jason Jones:                  I used to say, “Hey, let’s set up an evaluation or a study to figure out how things are working.” What I found is that most people don’t think of the word evaluation favorably, any more than they would some kind of diagnostic exam that causes you pain or other kinds of misery. No one, or at least very few of us, really loves doing evaluations. But just about everybody wants to optimize. It turns out that we can take many of the same steps for an optimization problem that we could for an evaluation one. So one recommendation, if you use the word evaluation and wonder why people won’t get excited about it with you, is to simply swap it out and switch to using the word optimization.

Jason Jones:                  Okay. Our next poll question, Sarah.

Sarah Stokes:                All right. In this poll question, we’d like to ask, what is your chief complaint? Your first option is low analytic throughput. Second option is low data literacy. Third option is lack agreement on definitions. Fourth option is don’t believe results. Your fifth option is pilot-itis, too many or too long. You want to provide any additional details on any of those?

Jason Jones:                  Yeah. This is going to direct the rest of our time together. We’ll go in rank order here. Also, get ready with any questions or comments that you have. We’ll try to be responsive as we go through. Please take a moment to vote what’s most important to you. That will determine the order of the rest of our time. Thank you.

Sarah Stokes:                Okay, great. The votes are coming in a little slower on this one. I think it’s making people think about… Yeah, I see a comment here. Need multiple choice here.

Jason Jones:                  I know. That’s somehow true. Chief complaint for those of you who aren’t familiar is what you often come into the emergency department with. Like my arm is broken. Or I have the sniffles. Or I’m having a heart attack. For those of you who are physicians or nurses or other providers out there especially in the hospital, how often do we wish that patients would agree to only have one problem at a time? Wouldn’t that make the world so much simpler?

Sarah Stokes:                All right. We’re going to go ahead and close that poll. Then we’ll share the results. 18% voted for low analytic throughput. 34% voted for low data literacy, that’s our highest here. 31% voted for lack agreement on definitions. 8% voted for don’t believe results. 10% voted for that pilot-itis. It looks like we’re starting with data literacy, then moving on to agreement on definitions, and low analytic throughput.

Jason Jones:                  Wonderful. The content here is going to be a little on the light side. That’s not an accident; it’s meant to encourage some thought and some dialogue. I had to wiggle the mouse there. I’m going to click on low data literacy. It’s going to take a moment. I’m just going to describe what we’re looking at. But the idea here is that we often say, “Well, the problem in our organization is that we can’t provide people with more sophisticated analyses. They won’t be able to understand them. Please just give me my standard business intelligence report that looks beautiful.”

Jason Jones:                  What we’re looking at here is a report. You may or may not love the aesthetics, but it’s a report that’s incorporating a lot about what we know and love from the field of AI. Let me describe what we’re looking at first. The first thing is we have a measure here, some kind of key performance measure, for seven different geographies plus how the system performs overall. On the X axis, what we’re looking at is their performance on the measure, which ranges from about 63% up to the mid-to-high 80s. Each of these points represents a geography’s performance. We can see g1 is the best performer at something like, we’ll call it, 85%. G7 is performing around 63%, 64%. That’s what those triangles represent. Then we have the different geographies compared to the system overall, which is at around, call it, 77% or so.

Jason Jones:                  Then a couple of other things to orient you. Green is the direction of good. We would like the geographies to be over towards the right hand side. Blue, the solid blue line that you’re looking at here, is a goal that has been set within the organization. Hopefully, that basic background is fairly easily recognizable to you: performance by geography at a snapshot in time, we know what good looks like, and we even have a goal that we have set for ourselves. Now, let’s go a little deeper into some of the additional information. The first thing is that these lines you see spreading out represent confidence intervals. If you’re not sure what a confidence interval is, for the purposes of this brief discussion, think of it as the range where we could comfortably say the triangle actually belongs.

Jason Jones:                  The first thing that you’ll notice probably is that some geographies have much, much broader confidence intervals than others. This g7 has a much broader confidence interval than the system as a whole. Or even if we look at these top two geographies here. What that’s telling us is that the underlying sample size is much, much smaller for this geography down here than it is for this geography up here. Again, the same approach could apply to geographies. It could apply to providers. It could apply to whatever, hospitals, or medical office buildings, or whatever. We know that some geographies, some hospitals, some medical office buildings, or whatever, have larger populations than others. This is automatically taking that into account for you so you get a sense of what the variation is around the measurement.
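
Those interval widths fall directly out of each geography’s sample size. Here is a minimal sketch using a hand-coded Wilson interval for a proportion; the geography names and the numerator/denominator counts below are invented purely for illustration.

    import math

    def wilson_interval(successes: int, n: int, z: float = 1.96):
        """95% Wilson confidence interval for a proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # Illustrative only: (numerator, denominator) for each geography's performance measure.
    geographies = {"g1": (850, 1000), "g2": (840, 1000), "g7": (32, 50)}

    # The smaller the denominator, the wider the interval around the triangle.
    for name, (num, den) in geographies.items():
        lo, hi = wilson_interval(num, den)
        print(f"{name}: {num/den:.1%}  (95% CI {lo:.1%} to {hi:.1%}, n={den})")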

Jason Jones:                  The next thing that you might notice is that the colors are a little different as we look at the different geographies. If you’re colorblind, there’s some letters off to the side that are in gray that are telling you how these geographies cluster together. Even though geography one is numerically the best geography in the system, it turns out that it’s statistically tied with geography two. However, geography three is meaningfully worse as a performer than either one or two, and is tied with geography number four. Then geography five is a little bit worse. Geography six is a little bit worse than that. You can see the colors changing and also the letters changing. Finally, geography seven is worse.
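
The letter groupings come from pairwise comparisons: two geographies share a letter when the gap between them could plausibly be chance. The webinar doesn’t say which method sits behind the report, so here is a hedged sketch using a simple two-proportion z-test on invented counts; a production version would also adjust for multiple comparisons.

    from math import sqrt
    from scipy.stats import norm

    def two_proportion_pvalue(x1, n1, x2, n2):
        """Two-sided z-test for whether two proportions differ."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return 2 * (1 - norm.cdf(abs(z)))

    # Illustrative counts: geography 1 vs 2 look statistically tied,
    # geography 1 vs 3 look genuinely different.
    print("g1 vs g2 p-value:", round(two_proportion_pvalue(850, 1000, 840, 1000), 3))
    print("g1 vs g3 p-value:", round(two_proportion_pvalue(850, 1000, 790, 1000), 3))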

Jason Jones:                  We don’t have to ask ourselves, “Geez, are one and two really different from each other? Or are three and four really different from each other versus five?” The computer can tell us that using fairly reliable and rigorous methods. The last thing that you might notice is that we have these gray diamonds out here. What the gray diamonds are telling us is not what our current performance is, which is represented by the triangles. The gray diamonds are using an automated forecasting or projection technique by the computer to forecast. If history is a guide, and we consider the complete history of these geographies, where do we expect them to be a year from now?

Jason Jones:                  What we can see now is that although geographies one and two are tied with each other, they actually appear to be heading in different directions. Although we might have been equally concerned about geography three and geography four, we’re leveraging AI to tell us a little bit more: in fact, if we’re concerned, we should be a little bit more concerned about geography number three. Geography number four is projected to actually improve. Not just improve, but improve beyond the target. This is helping us draw much more information from the same data that we would ordinarily have. What that additional information provides is the opportunity for us to get on the same page.
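
The gray diamonds are an automated projection, and the talk doesn’t disclose the forecasting model behind them, so here is a deliberately simple stand-in: fit a straight-line trend to each geography’s monthly history and extend it twelve months. The monthly values below are simulated to mimic two geographies that are tied today but drifting apart.

    import numpy as np

    rng = np.random.default_rng(1)
    months = np.arange(24)                      # two years of monthly history

    # Simulated monthly performance for two geographies that are "tied" today
    # but trending in different directions.
    history = {
        "g1": 0.80 + 0.002 * months + rng.normal(0, 0.01, 24),
        "g2": 0.88 - 0.002 * months + rng.normal(0, 0.01, 24),
    }

    for name, series in history.items():
        slope, intercept = np.polyfit(months, series, deg=1)   # straight-line trend
        one_year_out = intercept + slope * (months[-1] + 12)   # project 12 months ahead
        print(f"{name}: now {series[-1]:.1%}, projected in a year {one_year_out:.1%}")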

Jason Jones:                  I wish we had a poll question here, but we wanted to limit the number of poll questions, because the question I hope you would answer favorably is whether most of us, even non-analytically minded people, can actually understand and interpret this picture. Perhaps more importantly, as I said before, it helps us all get on the same page, which vastly facilitates the discussion, and the conversations become more meaningful. We may come in with a perception that we have low data literacy, which is often interpreted as low analytic capability among the people consuming the information. But we’ve found that there actually are ways to present the information and leverage the computer at what it does best, prediction, forecasting, things like that, to help even the least analytically sophisticated users understand meaningful concepts in their organization.

Jason Jones:                  That is an example of how it is that we can increase data literacy. Not by running away from statistics and machine learning and standard reporting, but actually by embracing machine learning and statistics in our standard reports. Okay. Next topic. Can you help me out, Sarah? I can’t remember what was next on the list.

Sarah Stokes:                Yes. The next was the lack of agreement on definitions.

Jason Jones:                  Okay. This is a great one. My poster child here is arguing for long periods of time over what would count as a hospital readmission. I’m not often grateful for CMS or Medicare, but boy was I grateful when they adopted a definition and told the rest of us how to measure readmission, because at least we could stop arguing over the definition and start acting upon what we were seeing in the data. But there is another approach. Again, the graphic here is really small. That’s only partially an accident or a mistake.

Jason Jones:                  The real thing that I’m hoping you can see is that we have a line running down the middle here. Any points, or confidence limits, that sit on the right hand side of the line are saying that older is better. Anything on the left hand side is saying that fresher is better. Now, what might old and fresh mean? Sarah, if I asked you, “You need some blood. I can give you old blood, or I can give you fresh blood,” which would you like?

Sarah Stokes:                I definitely think I’d like fresh blood personally.

Jason Jones:                  You’d like fresh blood? Yes. I think most grocery stores only stock fresh blood. No. Believe it or not, and some of you who are in the hospital may know this, this is an actual issue. You frequently will have patients who need blood or perhaps someone yourself who has needed blood. The challenge for people who manage blood banks is, well, what do you pull? Because you can imagine if you always pull the freshest blood, well, that means the next person who comes in, unless you get a new blood donation is then going to get older blood. Pretty soon, you’re going to have some pretty old blood. But we don’t really feel good about giving people old blood. Right? Most of our patients would say, “No, please,” like you Sarah, “Please give me the fresh blood.”

Jason Jones:                  Now I’ll ask you, and Sarah is sitting far away from this as well. If you look at this picture, would you say that older is better or fresher is better? Or it doesn’t matter?

Sarah Stokes:                I mean, it looks like there’s more dots to the right. Older is statistically better, it seems.

Jason Jones:                  Could be. Now, I’ll just add a little bit. Wherever you can see a confidence limit here, if it overlaps this line, then it means there’s no statistically detectable difference.

Sarah Stokes:                Okay. Well, then there’s probably not much of a difference.

Jason Jones:                  Yeah, but you’re right. We have one point where clearly older is better. But the rest are pretty much hovering around. There’s no difference. I don’t know, Sarah, if you would change your answer. You may still say, “I don’t care. Give me fresh.” But as somebody in the hospital who’s going to actually pull the blood off, at least what you can say now is by looking at a whole bunch of different subgroups, which is really not the focus of this discussion but just so you know, we’re looking at different blood types. We’re looking at sicker people versus healthier people. Down here, we’re saying, “Well, gosh, what outcome should we choose? I mean, should we choose death within a day? Should we choose death within six months? Should we look at fevers? How would we know if older blood was worse?” Right?

Jason Jones:                  There are lots of patient subpopulations. There are lots of different ways that we could count something as having been bad. What this picture is doing is it’s saying, “You know what? Rather than just saying we’re going to choose one patient population, and one perfect way to measure whether or not older or fresher was better, we’re going to look at all the ways. If all of the ways are telling us the same answer, then we’ll feel really good.” Right? Then our lack of definitional agreement doesn’t matter so much because frankly, it doesn’t matter which definition you choose. You can use a definition that includes sicker people or healthier people. You can choose immediate death or longer death. We won’t choose taste.

Jason Jones:                  But that’s the important thing. This is called sensitivity analysis. What does this boil down to? If you came in and your chief complaint, rather than “I broke my arm” or “I’m having a heart attack,” was “we cannot move as an organization because we can’t agree on the definitions,” then what we would suggest is that a single version of truth is actually illusory. We should stop chasing it. Because in health and healthcare, and you all on the phone will likely know better than me, there is no perfect definition of readmission. There’s no perfect definition of a good outcome. There are as many definitions of a good outcome as there are people. There’s no such thing as a perfect treatment either. People have different preferences. I don’t know that we’ll find someone who says, “Yeah, give me the old blood.” But for sure, we have differences among people who want to be screened or not screened for certain diseases based on what they would do with that information. We have differences in whether people would like to pursue aggressive or less aggressive treatments based on their goals and preferences.

Jason Jones:                  We know that we’re not going to have perfect definitions in health and healthcare because our members and patients have differences of opinion about what would count as best for them. Instead, what we suggest, rather than pursuing the single version of truth is to pursue a convergence of evidence. Then approach differences with curiosity as an opportunity to understand. In this case, it was pretty clear cut, it doesn’t really matter if you use older or fresher blood. I mean, none of this was like three-year-old blood. People thought it was safe, but they just wanted to check.

Jason Jones:                  There’s a specific technique that if you’re not an analyst, you can ask your analysts to perhaps test out and embrace which is called sensitivity analysis. If you are an analytic professional yourself and you’re familiar with it, sensitivity analysis that is, then perhaps your leaders will be more supportive of you getting to use what you’ve learned in the past. If you’re an analytic professional and don’t know about sensitivity analysis, then perhaps it’s something that you can learn more about, and again, bring to your organization as a way to burst through that chief complaint of, “We can’t agree on the perfect definition for whatever it is that you’re trying to improve.”
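
In code, a sensitivity analysis is little more than a loop over the reasonable definitions. The sketch below uses simulated, hypothetical transfusion data; the point is the structure: declare several defensible subpopulations and outcome definitions up front, compute the same comparison for every combination, and then check whether the answers agree.

    import numpy as np
    import pandas as pd

    # Simulated stand-in for a transfusion dataset: blood age, patient acuity, outcomes.
    rng = np.random.default_rng(7)
    n = 5000
    df = pd.DataFrame({
        "older_blood": rng.integers(0, 2, n).astype(bool),
        "high_acuity": rng.integers(0, 2, n).astype(bool),
        "death_30d":   rng.random(n) < 0.05,
        "death_180d":  rng.random(n) < 0.10,
        "fever":       rng.random(n) < 0.15,
    })

    # Several defensible definitions of "who" and of "what counts as bad".
    subgroups = {
        "everyone":  pd.Series(True, index=df.index),
        "sicker":    df["high_acuity"],
        "healthier": ~df["high_acuity"],
    }
    outcomes = ["death_30d", "death_180d", "fever"]

    rows = []
    for sub_name, mask in subgroups.items():
        for outcome in outcomes:
            sub = df[mask]
            old_rate = sub.loc[sub["older_blood"], outcome].mean()
            fresh_rate = sub.loc[~sub["older_blood"], outcome].mean()
            rows.append({"subgroup": sub_name, "outcome": outcome,
                         "older": old_rate, "fresher": fresh_rate,
                         "difference": old_rate - fresh_rate})

    # If the differences hover around zero across every definition, the
    # definitional argument stops mattering: the evidence converges.
    print(pd.DataFrame(rows).round(3))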

Jason Jones:                  Okay. We’ll go back now. Sarah, if someone raises a hand and says, “I have a major issue with what you’ve just said,” please do let me know.

Sarah Stokes:                Chris did ask, what is the definition of fresh and what is old in this example?

Jason Jones:                  Okay. Chris, that’s a wonderful question. You probably can’t see it, but I have actually included the link to the New England Journal of Medicine article at the bottom. You can read the entire study, which was a pretty cool study. There have been several others. If you’re in the blood bank world, I’ll tell you that I once shared this with a blood bank runner, and they said, “Well, none of that blood was really that unfresh. It was all pretty good.” I’m not an expert in that area, but take a look at the article and see what you think. Again, the link to it is in the bottom left hand corner should you download the slides. Anything else leaping out from the audience at this point?

Sarah Stokes:                Not right now.

Jason Jones:                  Okay. No one’s suggesting I get a transfusion of super old blood. Okay. What was number three on the list?

Sarah Stokes:                Number three was the low analytic throughput. When you kicked that off, we had a question from Chris who had asked, what is low analytics throughput? Maybe you can set the stage before you dive in.

Jason Jones:                  Right. That reminds me of one of my favorite examples, in the ED, where a per diem or daily worker called a code stroke in a hospital that had just introduced what code stroke meant. The entire hospital fell upon this person, who thought he was just going to send the patient off for an MRI. Okay. Low analytic throughput, what is it? It’s something you might complain about if you find yourself in a situation where you ask your analytic team for an answer to a question, and they say, “Thank you very much. We’ve stuck it in the queue. We’ll get back to you somewhere between six months and 600 years.” That would count as low analytic throughput, which from your perspective is, “I needed the answer faster than you’re able to provide it to me.”

Jason Jones:                  That is when you would complain about low analytic throughput. I’m glad that this wasn’t the number one problem. As an analytic professional, I’ll maybe speak for some of us when I say it certainly feels like we can never produce the information fast enough. We wish our throughput could be faster. This is a practical example of what we have found can be really helpful in increasing your analytic throughput. It’s going to sound a little weird. Let me start by saying often what we are asked to produce is a predictive model. Remember that Gartner chart. That was like level three in your analytic ascendancy. It’s not as valuable as being able to do basic descriptive statistics or reporting.

Jason Jones:                  But actually, we have found that if we can really nail those basic descriptive statistics and reporting, and especially if we can visualize information over time, it can accelerate our analytic throughput by a factor of about 10, especially for predictive models. This has a massive impact because of what often happens when we’re asked to do an analysis. When we go in to do the analysis on a really important problem, it turns out that we’re among the first people to really dig into the data. We often don’t have the clinical or operational expertise to know what we’re looking at. We spend the vast majority of our time, by which I mean something like 80% to 90% of a new analytic project, hunting, gathering, and cleaning data.

Jason Jones:                  How can we accelerate that? Well, one is to be able to start any analytic project with a basic visualization of, what are we seeing in the data? Here, we would suggest using a statistical process control chart because it does a couple of things for us. First off, we can’t build the chart until we have clearly identified a population. If we’re trying to look at hospital discharges, what’s going to be our definition of a discharge? That may seem trivial, but do we want to include someone who only went through observation and wasn’t admitted as an inpatient, as a discharge? Do we want to include all ages? Or do we want to exclude pediatric cases?

Jason Jones:                  These are all reasonable questions that people could ask. If we’ve been looking at the data and have a sense of what’s our discharge count by month, then we’re in a much better spot to start the analytic process. The other is it helps you identify an outcome. If the thing that we were trying to improve… Since we’re talking about hospital discharges, the common ones are trying to improve on length of stay, or readmission, or mortality, or something like that. Then it would have also forced us to define that outcome. We would have had to define, what counts as a readmission? What counts as a long length of stay? What counts as a mortality?

Jason Jones:                  We would have already started with those population and outcome definitions. That can immediately shave off months and months of back and forth, or even prevent us from accidentally providing the wrong answer because we thought we knew what people meant by a discharge or a readmission when we really didn’t. The other thing that it does is it immediately helps us understand the kind of variability that we’re seeing over time. If you’re not familiar with the statistical process control chart, we have a central line here that’s telling us we have just below 450 discharges a month. But we do see some volatility. Here, January was a low month, and statistically significantly so. We’re also seeing, as we go over to the right hand side of this chart, a series of recent points where the discharges seem to be quite a bit higher. Now what we have the opportunity to ask is, is this real, or is this a problem with the data?

Jason Jones:                  If it’s a problem with the data, then we need to go back and fix something. If it’s real, it may tell us we need to rethink how it is that we’re going to do our analysis. Are the discharges in the recent time period really similar to those in the past? How should we think about that as we go forward? It helps us see things together with the clinical, operational, and other experts that we’re working with, and that’s what really accelerates the analytic process dramatically. Again, something like a tenfold increase.
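
For analysts who want to start a project this way, here is a minimal sketch of a count-based, c-chart-style control chart for monthly discharges. The data are simulated to stand in for the real feed, including a recent step up in volume.

    import numpy as np

    # Simulated monthly discharge counts: stable around ~450, then a step up near the end.
    rng = np.random.default_rng(3)
    discharges = np.concatenate([rng.poisson(445, 20), rng.poisson(520, 4)])

    # Baseline the center line on the earlier, stable period; for a count (c-chart),
    # the standard deviation is approximately the square root of the center.
    center = discharges[:20].mean()
    sigma = np.sqrt(center)
    upper, lower = center + 3 * sigma, center - 3 * sigma

    for month, count in enumerate(discharges, start=1):
        flag = ""
        if count > upper or count < lower:
            flag = "  <-- outside control limits: real change or data problem?"
        print(f"month {month:2d}: {count:4d}{flag}")

    print(f"center {center:.0f}, control limits {lower:.0f} to {upper:.0f}")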

Jason Jones:                  Since we’ve already talked about computer-generated forecasts, we can take this one step further and have the computer add a projection. Visually, we can all look at this. Sarah, you know nothing about this hospital or this case. But what’s the computer telling you about the trend going forward? Are we going to forever be increasing our discharge rate? Is it plateauing? What are you seeing?

Sarah Stokes:                It looks like it might be leveling out a little bit but at a higher level than current.

Jason Jones:                  Absolutely, yes. That’s a great interpretation. Sarah’s a brilliant person, but I don’t think she would count herself as either a professional forecaster or a professional statistician.

Sarah Stokes:                Definitely not.

Jason Jones:                  And yet, as an expert in her own field, she may actually be able to give us a lot of advice about how we can present this more effectively, either verbally or visually, because that is her profession and not mine. But someone can look at this and pretty quickly draw reasonable and consistent conclusions. If, in that room of experts, somebody knew, or came to know, that the recent trend is not real and there’s a problem with the data, we could fix it. If we agreed that this was the “new normal,” which is what this forecast is suggesting, then we would be able to think about how it is that we want to deal with it. If someone said, “No. Actually, I think this trend is going to continue to increase; we have not yet reached the plateau,” then we would think about things differently again.

Jason Jones:                  I’ll just say in this particular example, this bump that we’re seeing happens to be because a nearby hospital closed. The conclusion was that actually we had reached a new normal unless a new hospital opens up. We need to think about that carefully. We’re going to be getting different kinds of patients as a result of that hospital closing than we’ve gotten in the past because it was a specific type of a hospital that closed.

Sarah Stokes:                You did have a question from Jordan. He wanted a reminder of the type of graph this is that you’re looking at here.

Jason Jones:                  Oh, wonderful. This is called a statistical process control, or SPC, chart. We’ve augmented it a little bit with some of our AI techniques to highlight where we’re seeing important differences, and these orange dots indicate possibly important trends. Then we’ve augmented it further by adding a machine learning forecast to the tail of it to help us understand not only where we are and where we have been, but, if history is a guide, where we might be going. Do we believe that trajectory?
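
The forecast tail can come from any reasonable time-series method; the talk doesn’t say which one is used here, so the sketch below hand-rolls Holt’s linear trend method as an illustrative stand-in and extends the same kind of simulated monthly series six months ahead.

    import numpy as np

    def holt_forecast(y, alpha=0.5, beta=0.3, horizon=6):
        """Holt's linear trend method: smooth a level and a trend, then extrapolate."""
        level, trend = y[0], y[1] - y[0]
        for value in y[1:]:
            prev_level = level
            level = alpha * value + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return [level + (h + 1) * trend for h in range(horizon)]

    # Simulated monthly discharges with a recent step up, as in the SPC sketch above.
    rng = np.random.default_rng(3)
    discharges = np.concatenate([rng.poisson(445, 20), rng.poisson(520, 4)]).astype(float)

    for h, value in enumerate(holt_forecast(discharges), start=1):
        print(f"{h} months out: about {value:.0f} discharges")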

Jason Jones:                  Sarah, I think I’m noticing that we’re at time.

Sarah Stokes:                We are. One minute out.

Jason Jones:                  I think I should turn the time over to you unless people would like to stay later. But at least you should be able to close out and take people to your closing comments. Correct?

Sarah Stokes:                Yeah. If you can just scroll through quickly till our slide about the Healthcare Analytics Summit. I will go ahead and launch a poll question here. Before we move into the Q&A or Jason wraps up these last few slides, we have a few giveaways for complimentary Healthcare Analytics Summit registrations. This is an annual event with more than a thousand provider and payer attendees, occurring this year September 10th to 12th in Salt Lake City, Utah. It’s going to feature brilliant keynote speakers from the healthcare industry and beyond. This is just a glimpse at some of the speakers that we will have this year.

Sarah Stokes:                Then in this first poll question, if you know that you’re able to attend and are interested in being considered for complimentary passes for a team of three to attend the Healthcare Analytics Summit, please answer this poll question. We’re just going to give you a few moments here. These are going to be quick, so you’re going to have to act fast. All right. We’re going to do a three, two, one, closing that poll. Then moving on to our second. I’ll go ahead and launch this one. In this poll, if you know that you’re able to attend and are interested in being considered for a complimentary individual pass to attend the Healthcare Analytics Summit, please answer this poll question. Maybe you can’t get your whole team but you’re still interested in attending, please let us know here. Be sure that you’re considered. Okay, we’ll give you just one more second there. Then we’re going to go ahead and close that one. Then I have one more poll question, which we’ll fly through. Okay.

Sarah Stokes:                Well, today’s webinar was focused on the use of AI in healthcare, and some of you may want to learn more about the work that Health Catalyst is doing in this space. Or maybe you’d like to learn more about our other products or professional services. If you are one of those individuals who would like to learn more, please answer this poll question. We’ll give you just a moment here to respond. You’ve started getting lots of praise in the comments section, Jason. I think we’ll have some good questions lined up for you. But we’ll give you all just one more minute here before we turn the time back to Jason.

Sarah Stokes:                Okay. I’m going to do another quick countdown here. Give you your chance to respond to this poll. We’re going to do a three, two, one, and we’re closing that poll. Okay. Now, you’re welcome to just navigate back to where you were in the slides or-

Jason Jones:                  What about Ryan?

Sarah Stokes:                About Ryan?

Jason Jones:                  Ryan’s session.

Sarah Stokes:                Oh, yeah. I’ll also cover that at the end. Yeah. Jason’s just mentioning we do have a webinar coming up on August 28 with Ryan Smith from our Internal Health Catalyst team. He’s Senior Vice President and Executive Advisor. He’ll be presenting a webinar for us on August 28. Again, we do encourage you to join us for that, but I’ll remind you about that again when we get ready to close.

Jason Jones:                  Okay. How are we doing? Are people still on?

Sarah Stokes:                You still have 180 folks on the line.

Jason Jones:                  Okay. Maybe we could ask what people would like to discuss, or maybe there are some questions that we can take.

Sarah Stokes:                Yeah, we can pull those up.

Jason Jones:                  I’ll go back. Oops, I have to remember to click. I’ll go back to where we were. Any points of clarification or other substantive comments?

Sarah Stokes:                So far, just a lot of praise. We’ve had a few questions. One person wants to know if you’re willing to share your email address. I can send that privately if you want.

Jason Jones:                  Yes, of course. I’m sorry. I thought I had included that somewhere, but I probably forgot.

Sarah Stokes:                Okay. I mean, we have a few questions if you want to get into that. Do you want to do those first or do you want to keep going through your section?

Jason Jones:                  Okay. I should know by now by process of elimination what the last one was.

Sarah Stokes:                You still have pilot-itis, which was the next one, and then “they don’t believe the results.” Someone actually just asked for the definition of pilot-itis. That kind of ties in.

Jason Jones:                  Pilot-itis is a term I have to credit to my time at Kaiser Permanente, where I had the great fortune to work; one of the leaders there used to use it regularly. Pilot-itis is where you find yourself in your organization perpetually running pilots. I’ve known organizations that have had well over a hundred pilots going in a specific area. What that’s telling you is that there’s definitely demand for some kind of improvement, or there’s some kind of excitement out there. But if you have a hundred or more of these things going on at the same time, all trying to solve the same basic problem, it’s going to be almost impossible to learn anything. What we routinely find is that these pilots can go on for way too long.

Jason Jones:                  You initially thought, “Oh, we’ll run a pilot for a couple months, and then we’ll get the answer, and we’ll make a decision.” Then you wake up five years later and find that you’re still running a pilot with inadequate resources, and people are stressed. Pilot-itis is having too many pilots going on at the same time, or pilots that go on for too long. Okay. Then the other thing, and the reason there’s not a link for it, is that what we’ve been discussing this whole time is how you can avoid pilot-itis. If we go back to that books and worms example, the idea behind the study was, “Let’s give kids books and it’ll help them out.” Because a good study was conducted, the researchers were able to determine, “Actually, that’s not going to help,” which allowed them to fairly rapidly draw the helpful conclusion that deworming actually was going to help.

Jason Jones:                  Similarly, if we use the techniques that we’ve already talked about for low analytic throughput, such as statistical process control charts, then we can more quickly understand whether or not things are working and whether we should turn them on or off, or support them with more gusto. Data literacy, again, is not running away from statistics and machine learning techniques, but actually embracing them to help us understand whether things are working, and in what direction they’re going, sooner rather than later. Addressing the lack of agreement on definitions, again, can help us with pilot-itis because often what we find is that pilots start because people disagree about the perfect patient or member subpopulation. They disagree about the outcome to measure. They disagree about certain aspects of the processes. We can use techniques like sensitivity analysis to embrace many different reasonable definitions and then ask and answer the question: what seems to be working? Are we getting a consistent answer? Do we have an opportunity to learn from outliers?
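As a concrete illustration of that sensitivity-analysis idea, here is a minimal sketch in Python that runs the same comparison under several reasonable cohort definitions and checks whether the answer is consistent. The data frame, column names, and definitions are illustrative assumptions, not the webinar's actual data or definitions.

```python
# Minimal sketch of a sensitivity analysis: run the same comparison under
# several reasonable cohort definitions and see whether the answer holds up.
import pandas as pd

def run_comparison(df):
    """Difference in outcome rate, intervention minus control."""
    rates = df.groupby("intervention")["outcome"].mean()
    return rates.get(1, float("nan")) - rates.get(0, float("nan"))

# Each entry is a "reasonable definition" someone might argue for (illustrative).
definitions = {
    "all_patients":        lambda df: df,
    "adults_only":         lambda df: df[df["age"] >= 18],
    "exclude_readmits_7d": lambda df: df[df["days_to_readmit"].fillna(999) > 7],
}

def sensitivity_analysis(df):
    return {name: run_comparison(rule(df)) for name, rule in definitions.items()}

# Example usage with toy data:
toy = pd.DataFrame({
    "intervention":    [1, 1, 0, 0, 1, 0],
    "outcome":         [0, 1, 1, 1, 0, 1],
    "age":             [70, 45, 30, 16, 52, 64],
    "days_to_readmit": [None, 12, 5, 20, None, 3],
})
print(sensitivity_analysis(toy))
```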

Jason Jones:                  I guess the thing that we haven’t talked about yet is not believing the results. I was surprised a little that this wasn’t more common. Before I go to it, any further clarification or questions that we should address before moving on?

Sarah Stokes:                I don’t think so. Nothing that’s tied specifically to any of these topics. We just have some more broad questions that we can get to later.

Jason Jones:                  How should we know when we should stop?

Sarah Stokes:                That depends on your availability personally.

Jason Jones:                  Okay. I’ll cover this last one. Then please, if you have questions or comments, type them in; I’ll be happy to respond to them. This is, I hope, your time. Okay. What happens if people don’t believe the results? The most common thing we run into in healthcare is a sort of three-step process for being willing to use data. Step one, when you present someone with a conclusion, the response is, “Well, your data are wrong.” We talked about that a little bit already; maybe the data are wrong. But once we get the data correct, the next answer that I’m sure many of us have provided or received is, “Okay, your data are correct, but my patients are sicker.” I believe the conclusion. If I were that geography seven, I might believe that, in fact, our performance is not as good as the other geographies. But that’s because our patients are sicker.

Jason Jones:                  I once had the wonderful response that our readmission rate was higher, not because patients were sicker, but because patients were smarter. You might ask yourself, Sarah, any idea why smarter patients might increase their readmission risk?

Sarah Stokes:                Maybe they are the WebMD-ers out there.

Jason Jones:                  Could be.

Sarah Stokes:                They think they know what they have even though WebMD says everything’s cancer. Right? That’s my guess.

Jason Jones:                  You’re spot on. The thinking was our patients are smarter. Therefore, they don’t follow what we ask them to do. Then they end up getting readmitted. That’s why our readmission rate is too high. There are all kinds of reasons. But basically, what this is getting to, and it’s perhaps the most difficult chief complaint to address because it’s often not symptomatic, is that we find there is confounding going on. You may not know what confounding is, so I’ll describe it briefly. But in short, it’s the “maybe my patients are sicker” problem.

Jason Jones:                  What we’re really interested in is the relationship between A and B. This could be a treatment. It could be some kind of exposure to a pathogen or something else. But we’re interested in this relationship between A and B. For instance, what hospital are you at, and what’s your readmission risk? The problem is that there’s some other thing, this thing C, that influences which hospital you’re at, or is associated with both the hospital and the outcome. In the example we gave of the patients being too smart, the smart patients were disproportionately going to one of the hospitals, and those smart patients were not doing what we said, which increased their risk of readmission. That’s the definition of a confounder.

Jason Jones:                  The cool thing is if we can surface ideas about what might be confounding the relationship, then we can deal with these things either through study design, or through analytic techniques. If we only have retrospective data and we can’t change our study going forward, then we can deal with these statistically head on and be able to address these confounders so that we can ask ourselves, is it really the IQ or other measure of smartness of your patients? Or is there really an issue here?
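To make the analytic route concrete, here is a minimal sketch in Python of adjusting for a measured confounder by including it in a regression model alongside the exposure. The simulated data, variable names, and the statsmodels logistic regression are illustrative assumptions, not the webinar's actual analysis; the point is only that the crude hospital effect shrinks toward zero once the confounder is in the model.

```python
# Minimal sketch of adjusting for a measured confounder (e.g. "patient
# smartness") when estimating the hospital effect on readmission.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
smartness = rng.normal(size=n)
# Smarter patients disproportionately choose hospital B (the confounding path).
hospital_b = (rng.normal(size=n) + smartness > 0).astype(int)
# Readmission depends on smartness, not on the hospital itself.
p = 1 / (1 + np.exp(-(-1.0 + 0.8 * smartness)))
readmit = rng.binomial(1, p)
df = pd.DataFrame({"readmit": readmit, "hospital_b": hospital_b, "smartness": smartness})

crude = smf.logit("readmit ~ hospital_b", data=df).fit(disp=False)
adjusted = smf.logit("readmit ~ hospital_b + smartness", data=df).fit(disp=False)
print("Crude hospital effect:   ", round(crude.params["hospital_b"], 2))
print("Adjusted hospital effect:", round(adjusted.params["hospital_b"], 2))
```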

Jason Jones:                  We’ve already seen one of the ways we can deal with it, which is a sensitivity analysis. For those of you who read medical journals, and again the image here is relatively small, you’ll be well familiar with what you almost always see in any medical journal as table one. In this case, we’re again looking at that blood bank example. Here, we have our short-term storage and our older blood, and we can see we had just over two and a half thousand patients in each of the arms. It’s looking directly at whether we had any underlying differences in the populations. Was one population older? Was one population more likely to be male? Did they have different primary diagnoses?

Jason Jones:                  We might imagine that if someone’s in there for trauma, their need for blood is going to be different than if they’re in there for a gastrointestinal condition, or a kidney condition, or something else. These are all reasonable ways that someone, you included, Sarah, could say, “I don’t believe you that it’s okay for me to get older blood, because I’m different. I am sicker or less sick, younger or older, female versus male. I have this problem or that other problem for being in the hospital in the first place.” When we look at this, what we’re seeing is that there are no meaningful differences between the populations. But I’ll ask those of you on the webinar: how often in your routine analytics are you actually presenting something like a table one, so that when someone comes back to you and says, “I don’t believe your answer,” you’ve thought through what reasonable confounders there might be and actually done the analysis to show whether or not there are differences?

Jason Jones:                  If there are differences, how are you dealing with them? There are techniques that we could cover for how to deal with them. This is an addressable problem where we can apply well-known methods. Moreover, if you have the opportunity, if you’re blessed with somebody who will tell you they don’t believe your answer, what a wonderful chance to say, “Well, why? What is it about your patients, your members, your unit, whatever, that you think is different?” If you hadn’t thought about it beforehand, go back and add it. Now you’ve engaged someone, which is what gets us, hopefully, to that third stage of data and analysis: “Okay, I believe that the answer is reasonable. Now we’re going to do something about it.”
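For analysts who want a starting point, here is a minimal sketch in Python of building a simple table one that compares baseline characteristics between two arms. The toy data and column names are illustrative assumptions, not the actual blood bank study variables.

```python
# Minimal sketch of a "table one": compare baseline characteristics between
# two arms before believing an outcome comparison.
import pandas as pd

def table_one(df, arm_col, continuous, categorical):
    rows = []
    for col in continuous:
        means = df.groupby(arm_col)[col].mean()
        rows.append({"characteristic": f"{col} (mean)", **means.round(1).to_dict()})
    for col in categorical:
        pcts = df.groupby(arm_col)[col].mean() * 100
        rows.append({"characteristic": f"{col} (%)", **pcts.round(1).to_dict()})
    return pd.DataFrame(rows).set_index("characteristic")

# Example usage with toy data (illustrative, not the study data):
toy = pd.DataFrame({
    "arm":    ["short_storage"] * 4 + ["older_blood"] * 4,
    "age":    [64, 71, 58, 66, 63, 70, 60, 67],
    "male":   [1, 0, 1, 1, 0, 1, 1, 0],
    "trauma": [0, 1, 0, 0, 0, 1, 0, 0],
})
print(table_one(toy, "arm", continuous=["age"], categorical=["male", "trauma"]))
```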

Jason Jones:                  Again, the idea here as we go back to each of these chief complaints is that for each of them, there’s something specific that we can do to address the chief complaint. For some of you, my hope is these are things that you can do or at least ask or make progress on next Tuesday. We don’t have to reinvent wheels. We do have ways that we can address these chief complaints.

Jason Jones:                  I will stop there. I’m sorry that I went over. I mistimed how long it would take, but I am happy to address any questions or relay thoughts to the broader audience that you may want to share.

Sarah Stokes:                Okay. Yeah. You had tons of praise coming in. I wouldn’t worry about going too long. But here’s a question from Ed, who says, “I know a single version of the truth is illusory. That said, we have a challenge of standardizing certain key processes in the EHR, which requires a single version of the truth to ensure data integrity. Thoughts?”

Jason Jones:                  Mm-hmm (affirmative). Gosh. Ed, my initial thought, as someone who routinely works with EHR data, is that I sometimes long for standards, at least in some places. My favorite example is the problem list, which is where physicians will often enter what problems a patient or a member has; unfortunately, there isn’t really strong guidance about how to use that list. Because people use it in so many different ways, it’s very difficult to aggregate data from a problem list and make heads or tails of it. But I’m guessing that maybe some of the things you’re referring to are around documentation. If you could give an example of a regulation or requirement that has caused a problem for you, I would be grateful.

Sarah Stokes:                Okay. Next question comes from Max who says, “Change management. What’s your recommendation or strategy for incorporating data in change management?”

Jason Jones:                  Oh, that’s a great question. Ed, if you type something in, I just wanted to be clear. I was asking if you could clarify so that I could come back with a hopefully more informed and helpful comment. Please do respond if you’re still there and have willingness. Okay. Yeah. Data, I think, can be wonderful for change management for a couple of reasons. One is in the absence of data, we are left with the loudest or strongest voice in the room syndrome. That is not always the most helpful way. It’s usually not the most helpful way for an organization to move forward. Simply being able to bring data to the conversation. I have had the great fortune of witnessing both at the bedside as well as in the boardroom how the presentation of data can shift the discussion from, I guess, heated and impassioned debates about largely irrelevant information to refocusing the discussion where it needs to be. Which is, what’s the problem we’re trying to solve? How do we think we might solve it?

Jason Jones:                  But beyond that, a lot of what we’ve been talking about here are some of the techniques to leverage data to facilitate change management. It’s using data specifically to engage people. Rather than analysts going off into the basement and coming up with great predictive models or great reports or something like that, showing data to your clinical and operational and other leaders in the organization is a great way to engage them in the process. It’s also a great way to help set a common level in the organization so that we are making visible what people might otherwise be concluding internally.

Jason Jones:                  If we go back to this data literacy example, without showing these diamonds, people have to make up their own minds about where the different geographies are headed. It may turn out that someone with great knowledge of geography four knows that, although the trend indicates they’re going to improve, that’s not going to happen for a specific reason: they’ve in fact hit a plateau, or resources are being removed, or something else. Even if the report is wrong, it’s given that person the opportunity to disagree with it openly, so that the entire organization understands what’s going on, or at least can discuss what’s going on. So often, what we find in organizations is that without being able to make visible what’s going on in people’s minds, we get 6, 12, 18 months down the road and only then uncover why the change we hoped to see hasn’t happened. Being able to present the information consistently, for all to share, consume, and give their opinions on, can greatly facilitate the change management process. I hope that helps.

Sarah Stokes:                Okay. We’ll do a few more questions here. This one comes from Dongsul who asks, “In a predictive model, what would you suggest to handle confounders? Do you include them in the model development regardless? How would you handle effect modifiers?”

Jason Jones:                  Okay. Confounding and effect modification are highly related to each other. There are a couple of ways I would address it. First off, we don’t care about historical confounders in a prediction problem. If we go to the example that we were looking at here… It’s not advancing the way that I expected. Let me go back to the chief complaint. Okay. “Don’t believe the results.” Let’s say something like IQ really was driving the readmission rate and we had measured IQ. We do care about IQ as an effect modifier when trying to conclude whether the hospital or the IQ is related to readmission. But for the purposes of trying to understand whether we can predict that a patient is more likely to be readmitted, we actually don’t care if there’s confounding going on. We can include both the hospital and the IQ of the patient in our predictive model and still get a reasonable prediction of their likelihood of being readmitted.

Jason Jones:                  That’s one answer as it relates to prediction. The thing we want to be really, really careful about is this: now let’s say it’s not IQ anymore but some sort of intervention. Let’s say it was a post-discharge follow-up phone call, and we didn’t know it was happening. Now, what we’re seeing is that our ability to predict readmission is declining over time. It’s declining because we’re doing a good job with post-discharge follow-up phone calls. Why does that matter? It matters because, before we conclude that our predictive model is bad, we need to be able to account for how the interventions may be “messing up” our predictive model. I hope that has helped somewhat. It depends on what the nature of the confounder is and how it relates to the thing you’re trying to predict and the problem you’re trying to solve.
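To illustrate the prediction point, here is a minimal sketch in Python in which the confounded exposure and the confounder are simply both included as features, and the model is judged on discrimination rather than on a causal reading of its coefficients. The simulated data, variable names, and the scikit-learn model are illustrative assumptions.

```python
# Minimal sketch: for pure prediction, include both the exposure and the
# "confounder" as features and evaluate discrimination (AUC), without trying
# to interpret either coefficient causally.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
iq = rng.normal(size=n)
hospital_b = (rng.normal(size=n) + iq > 0).astype(int)   # confounded exposure
p = 1 / (1 + np.exp(-(-1.5 + 0.9 * iq)))                 # readmission risk driven by IQ
readmit = rng.binomial(1, p)

X = np.column_stack([hospital_b, iq])
X_train, X_test, y_train, y_test = train_test_split(X, readmit, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("Test AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```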

Sarah Stokes:                Okay. I think we’ll do these last two questions.

Jason Jones:                  Okay. Did Ed get back to us with an example? Okay.

Sarah Stokes:                Just no. He said, “This is behavioral health and challenge of getting buy-in to adhere to standardized process for establishing episode of care so we can accurately track length of stay, and by extension, evaluate treatment outcome from admission to follow-up. Some of it is lack of data literacy of the managers and others lack of buy-in.”

Jason Jones:                  Okay. Gosh. Ed, when we put the email address up there, maybe we can have more of a discussion. First off, thanks for focusing on mental health and wellness. That is a particularly challenging area as it relates to standard documentation. But what I would read from your comments, and Sarah, please tell me if you think I’m missing something, is that it feels as though one of two things is going on. Either we don’t have an accepted standard and the documentation is all over the map, which makes it difficult to conclude anything. Or the regulations are requiring us to document things that don’t matter, or aren’t measuring things the way that we want to, and therefore we don’t have the time or energy to measure the things that we should. Do you think I got it right, Sarah?

Sarah Stokes:                I do.

Jason Jones:                  Okay. That second point is actually really common in health and healthcare, where there’s a burden of documentation from regulatory requirements. It’s not that the regulation itself is obviously the worst way to measure something. It’s that there are so many of them, and so many rules about how you have to document, that you don’t have the energy to document the thing that we know matters. An example I’ve seen, not in mental health and wellness, is in surgery, where we have to spend so much time capturing risk factors and documenting things like surgical site infections that we don’t have the time and energy left to capture how the patient is actually doing. If they got their hip replaced, are they actually able to walk better 3, 6, 12 months later? Because we’re spending all of our time on risk adjustment factors for surgical site infection, per, maybe not regulations, but voluntary reporting that our organization would like to participate in.

Jason Jones:                  Ed, happy to follow up further. Mental health and wellness is a personal passion. Anything that I could do to help, I would leap at. Thank you.

Sarah Stokes:                Okay. I think we’ll just do one more. This question is from Vinod who asks, “How do you deal with latency in obtaining different types of data? Like claims data, clinical data, any tips there?”

Jason Jones:                  Yeah. You know what? That is a wonderful question. I will quickly go back to our SPC chart here, on analytic throughput. I can tell you that the very next SPC chart underneath this one was the readmission rate. What we saw is that, routinely, the last data point was always very low, precisely for the reason that was just mentioned: there was latency. Why did that happen? Well, because we know when the patient is discharged, but for readmission, we have to give them at least 30 days to be readmitted, and then at least another couple of weeks for the claim to come in. What I would suggest is that we, again, use these visualizations and charts to help us understand the nature of the latency. Then we can ask ourselves, “Is there anything we can do to reduce latency?”

Jason Jones:                  In the case of readmission, the answer may be no. We have to give someone a chance to get readmitted. But in other cases, there may be ways to reduce the data latency like a claims lag. We can ask ourselves if we’re willing to take the steps to do so. Thanks. That’s a great question.
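For those who want a starting point, here is a minimal sketch in Python of one common workaround: only reporting readmission rates for discharges whose follow-up window, plus an assumed claims lag, has fully elapsed, so the most recent data point is not artificially low. The 30-day window, the 14-day claims lag, and the column names are illustrative assumptions.

```python
# Minimal sketch of latency-aware reporting: restrict to discharges whose
# 30-day readmission window plus an assumed claims lag has fully elapsed.
import pandas as pd

def complete_readmission_rate(df, as_of, window_days=30, claims_lag_days=14):
    cutoff = pd.Timestamp(as_of) - pd.Timedelta(days=window_days + claims_lag_days)
    mature = df[df["discharge_date"] <= cutoff].copy()   # follow-up window complete
    mature["month"] = mature["discharge_date"].dt.to_period("M")
    return mature.groupby("month")["readmitted_30d"].mean()

# Example usage with toy data:
toy = pd.DataFrame({
    "discharge_date": pd.to_datetime(
        ["2019-05-03", "2019-05-20", "2019-06-02", "2019-06-25", "2019-07-10"]),
    "readmitted_30d": [0, 1, 0, 0, 1],
})
print(complete_readmission_rate(toy, as_of="2019-07-31"))
```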

Sarah Stokes:                All right. On that note, I think we’re going to wrap things up. Thank you, Jason, for the extra 25 bonus minutes of Q&A time. We do want to thank Jason for joining us today, and all of you for staying on the line an extra 25 minutes.