20) How To Drive Clinical Improvement Programs That Get Results

Session #20 – How to Drive Clinical Improvement Programs That Get Results

[Tyler Morgan]

Alright.  Good afternoon.  Hope you had a great lunch.  Thank you for being here.  This is session 20 – How to Drive Clinical Improvement Programs That Get Results.  I am very pleased to introduce Tom Burton, Co-Founder and Senior Vice President at Health Catalyst.  Now, along with Tom, we do have a team of analysts who will be running the polls and providing insights today.  Our analysts are Brian, Kevin, John, Mark, and Chris.  Wave a little, guys.  Alright.  They’re right on the corner.  This is a hands-on session.  We’re so glad you’re here.  So, without further ado, please join me in welcoming Tom Burton.


[Tom Burton]

Well thank you, Tyler.  I’m sure excited for this session.  This is a “you get to be involved” session.  So I’m really glad you’re here.  You’re in for a treat and we hope you have a lot of fun with this session.

So first, just a couple of slides to introduce things.  What is a clinical program?  How is it different?  And I’m going to be walking around throughout the session.  So, a clinical program is different from a clinical service line.  A service line might be focused on revenue or growth.  A clinical program is all about improvement.  And it’s a permanent team; it’s not organized around a project or an initiative, but it’s a permanent team that’s put in place to really improve care and sustain the gains once those improvements occur.  The goal, the aim, is to really create a continuous learning environment for that clinical domain, so that as new medical knowledge comes forth, as new evidence comes forth, they can integrate it into the care delivery.

You saw in the previous sessions this analogy of a flatbed truck.  And we like this analogy because it shows that it’s an interdisciplinary team.  The wheels represent the technical or support staff.  So we have a knowledge manager, who has both a clinical and a technical background.  This might be the person who was the go-to person on the EMR implementation.  That’s a great type of person for that role.  The data architect is someone who knows SQL and builds visualization-type reports and dashboards, and the application administrator would be someone who can actually make changes to the EMR.  As your clinicians drive the truck, they’re going to redesign certain aspects of that care delivery, which may necessitate changes to the EMR.  So you’ll want all three of these supportive technical roles working with the physician and nursing leaders who are really going to redesign the care delivery to achieve better outcomes.  And so, we hope that this session today really helps you understand some key principles that these clinical teams, these integrated clinical and technical teams, will need in their journey toward achieving and sustaining better outcomes.

So Duke University did a study a number of years ago that showed that different learning methods have different efficacy for retaining what you learn.  So the lecture, which is what I’ve been doing in these first three slides, you’re going to retain only about 5% of that.  So this is where the lecture portion of this session ends.  We’re going to be focusing on the top three areas, discussion groups, practice by doing, and teaching others, because those are the most effective methods for retaining information and learning content.  And so, we’re going to focus on interactive experiences, if you will.

We hope you have four kind of aha experiences during this session, around these four principles: first, choosing the right initiative; second, understanding variation; third, improving data quality; and fourth, choosing the right influencers.


Tom:       So we’re going to begin with our first exercise, choosing the right initiative, and we’ve pre-selected a few of you to participate as contestants in our experiences. So with our first contestant, Laura. Come on up, or come on down I guess is the correct vernacular for a game show.  We’re going to be playing ‘DEAL or NO DEAL’.  So I need my hand mic here for you. Laura, where are you from?

Laura:     BJC Healthcare in St. Louis, Missouri.

Tom:      Welcome.  Let’s give her a hand for being brave and for participating.


Now Laura, have you ever seen the game show ‘DEAL or NO DEAL’?

Laura:     I have.

Tom:       Okay.  Excellent.  For those of you who have not seen the game show, basically Howie Mandel is the host, and I don’t have the hair for that, but we’re going to try anyway.  So they bring out a bunch of briefcases, and in each briefcase is a dollar amount, ranging from one penny all the way up to a million dollars.  You’re not going to win the million dollars today, I’m sorry.  But here’s what we are going to do instead: the contestant picks briefcases to eliminate them, and then the banker calls and says, “Do you want to make a deal?  You can keep choosing briefcases or you can take my deal,” which is a certain amount of money, usually less than what’s still up on the board.  We don’t have time to do that full drama with the lights and everything, but we’re going to play a version of that.  We don’t have models, but what we do have are our Health Catalyst nerds.  So let’s welcome them out.

Alright.  You can see all of these great briefcases held by our Health Catalyst nerds.  Now, the thing that’s different here is these briefcases contain potential opportunity savings, as you work on a clinical program and actually reduce the cost of the care being delivered.  So we’re going to do a shortened version of this where I’m going to have you pick six briefcases to start.  So go ahead and choose.  You can ask the audience.  Audience, give her your input.  What should she choose?  Six briefcases.

[Audience choosing briefcases]

Laura:     Okay.  So, 9 is my favorite number.

Tom:       Alright.  9, come on down.

Laura:     And the double of 9 is 18.

Tom:       18, come on down.

Laura:     And then a 4.

Tom:       4.

Laura:     And 20 from my audience.

Tom:       20, come on down.  Alright.  This is like the donut lady.  You have two left.

Laura:     3, because it’s the square root of 9.

Tom:       3.

Laura:     And…help…[audience talking].  I heard 12 the most.

Tom:       Alright.  Now, so we’ve got the numbers 12, 9, 18, 3, 4, and 20.  Alright.  Now, from these six, let’s have you choose three.

Laura:     Well, it has to be 9 and 18 and 20, because it looks so good.

Tom:       Okay.  9, 18 and 20.  I’m going to have you guys have backup.  Alright.  Now, we’re down to these three, 3, 4, and 20.  Alright.  Now, why don’t you go ahead and we’re going to open the briefcases.  You have enough staff that’s on your clinical teams to work on three process improvements and you’ve chosen these three.  So let’s go ahead and open the briefcases and see your potential savings here.

So, mood disorders, you’re going to save $100,000.  Allergy and anaphylaxis, $1,500.  And sleep disorders, $10.

Laura:     It’s not good.  I can’t believe that.

Tom:       Alright.  So, up here we’ve tabulated the results.  You saved $100,000 and change, and you’ve left up there…

Laura:     A bunch…

Tom:       $3 million basically.

Laura:     Okay.

Tom:       So, what do you think?

Laura:     I want the 3, I want the million.

Tom:       Alright.  Well, I feel I’m going to get a call from the banker here with an opportunity.  Alright.  I’m going to pretend my phone is ringing.  It’s AT&T.  Oh there we go…Alright, we’ve got our banker here.  Yes?

[Banker on the phone]

Tom:       Alright.  I’ll ask her.  Alright.  Laura, here’s the deal.  You can take these fake big savings that you’ve just achieved, plus this incredible Health Catalyst baseball cap, say DEAL, and go sit back down.  Or we’ll let you put these back into the mix and choose again, but this time we’ll organize them a little bit differently, maybe using some analytics.  So: DEAL, the hat, or NO DEAL, I want to go for that bigger number?

Laura:     Really hard to pack up the hat.  I’m going to say NO DEAL.

Tom:       NO DEAL.  Alright.  Now, what’s interesting is sometimes we actually pick initiatives like playing DEAL or NO DEAL.  We kind of just randomly choose what number we like, or you listen to some person here who said pick 20.  That was probably like a loud physician who had a grant.  How did that work out?  Sleep disorders, 10 bucks.

Laura:     Yeah, that didn’t work out.

Tom:       Yeah.  Not great.  Okay.  So this time we’re going to use some analytics.  So what we’re going to do is, rather than listening to the loudest physician or choosing randomly, we’re going to use some data to help us.  The first part is organizing them by the size of the process.  So I’m going to have our Health Catalyst nerds here reorganize themselves in a straight line based on the size of that particular care process.  This is similar to some of the Pareto charts that were shown by Dr. Macias, where we have a Pareto chart like this that shows that the top 7 to 10 processes often represent 50% of all the variable costs that your organization spends.  And if you go to maybe 20 or so, you get up to 80%.  So now they’re organized with large processes over here – it looks like 9 was potentially a good one – and small processes over there, like 20.

Laura:     And 18 all the way down.

Tom:       And 18, way down there.  So this is kind of the first clue.  Now there’s a second clue.  Size of the process is very important.  Variation is the second thing we’re going to look at.  And I’m going to walk you through a little example here.

So if you had a particular physician, and let’s say they did 15 cases of this particular severity in a given year and their average cost was $60,000, but the average cost for all the other physicians was $20,000, we could estimate that if they could just get back to what the others were doing on average, that would be a $40,000 per case savings, times those 15 cases, so that’s about $600,000 of savings.  And if we look at another physician who maybe did a little bit better, let’s say $35,000 a case across 25 cases, that’s still a large opportunity.  You add all of those up, getting everyone just to the average in that severity score, and that’s a pretty significant potential savings.  So we look at variation as another lens to see how much opportunity exists.
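The variation math Tom walks through here can be sketched in a few lines.  The case counts and per-case costs below are the illustrative figures from the talk, not real data:

```python
# Estimate opportunity savings by bringing each physician's cost per
# case down to the peer average, as described in the talk.
peer_avg_cost = 20_000  # average cost per case across the other physicians

physicians = [
    {"name": "Physician A", "cases": 15, "avg_cost": 60_000},
    {"name": "Physician B", "cases": 25, "avg_cost": 35_000},
]

def opportunity(p, benchmark):
    """Savings if this physician's per-case cost fell to the benchmark."""
    return max(p["avg_cost"] - benchmark, 0) * p["cases"]

for p in physicians:
    print(p["name"], opportunity(p, peer_avg_cost))
# Physician A: (60,000 - 20,000) * 15 = 600,000, matching the talk.
```

Physicians already at or below the benchmark contribute zero, so the total across physicians is the savings from getting everyone just to the average for that severity level.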

So now what we’re going to do is we’re going to have our analysts here, our nerds, organize themselves into four quadrants.  So, go ahead and move.  Now, these quadrants are going to be large processes with a lot of variation, and that will be up here in the top right.  Small processes that have a lot of variation over in the top left.  Large processes that are consistent here on the bottom right, and smaller processes that are consistent on the bottom left.  And we’ve got caution tape to kind of mark that off.  So again, the large processes with a lot of variation are in the top right quadrant; your instincts happen to match the analytics.  So, which three briefcases do you want to pick now that you have some analytics backing you up?

Laura:     9, 14 and 10.

Tom:       Alright.  Come on down.  Let’s see how you did this time.  Alright.  Let’s open up the briefcases.  $1 million, $600,000, $500,000.  Impressive.  Let’s give her a hand.


Tom:       Your grand total just by working on these three processes, $2.1 million.

Laura:     Much better.

Tom:       Oh, I’m getting a call from our banker.

Laura:     Oh.  He is more on time this time.

Tom:       Yeah.  He is more likely to make a deal.

Laura:     Yeah.

Tom:       Alright.

[Banker on the phone]

Tom:       I’ll translate from Charlie Brown.  Okay.  Great.  Thanks.  I’ll ask her.  Alright.  Here’s what he says.  He’s upped his offer.  Two baseball caps, and you take home these fake savings.  Or – and he’s offering something I’ve never heard him offer before, because it’s you and because it’s today – he’s offering for Health Catalyst to come and do a real care process analysis for your organization for free, plus your name will be entered into a drawing, plus he’s giving you this T-shirt from the summit.

Laura:     Wow!

Tom:       So, DEAL, the two hats, or better DEAL, the shirt, the actual key process analysis, and being entered into a drawing?  What will it be?  DEAL or better DEAL?

Laura:     Better deal.

Tom:       Alright.


Tom:       Thank you so much.

Laura:     Thank you.

Tom:       Nikki, our assistant, will find you the exact right size on the T-shirt and we sure appreciate you participating.

[Tom Burton]

So, the key principle here, look at both variation and size of process as you’re prioritizing.  Here’s an actual key process analysis that we did with one of our clients.  So you can see large processes with significant variation.  We actually ended up working on sepsis first with this organization and they were able to save about $1.5 million a year in their sepsis care and they also reduced their mortality rate by about 22%.  So, very significant effort and again, it would have been about the same effort to work on a small process as this large variable process.
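The prioritization principle can be sketched as a simple two-by-two classification.  The process names, cost figures, and cutoffs below are invented for illustration; a real key process analysis would derive them from your own cost and variation data:

```python
# Bucket each care process by size (annual variable cost) and variation
# (coefficient of variation of cost per case), then work first on the
# large, highly variable processes. All numbers here are made up.

processes = {
    "sepsis":          {"annual_cost": 12_000_000, "cost_cv": 0.9},
    "pneumonia":       {"annual_cost":  8_000_000, "cost_cv": 0.3},
    "sleep disorders": {"annual_cost":    400_000, "cost_cv": 0.8},
    "allergy":         {"annual_cost":    300_000, "cost_cv": 0.2},
}

def quadrant(p, size_cutoff=1_000_000, variation_cutoff=0.5):
    """Classify a process into one of the four prioritization quadrants."""
    size = "large" if p["annual_cost"] >= size_cutoff else "small"
    var = "variable" if p["cost_cv"] >= variation_cutoff else "consistent"
    return f"{size}/{var}"

# List processes largest-first with their quadrant; "large/variable"
# entries (like sepsis here) are the ones to tackle first.
for name, p in sorted(processes.items(), key=lambda kv: -kv[1]["annual_cost"]):
    print(name, quadrant(p))
```

The payoff of the two-by-two view is the point Tom makes on stage: improving a large, variable process takes about the same team effort as improving a small one, but the savings differ by orders of magnitude.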

The Popsicle Stick Bomb Exercise

Tom:       Alright.  Very good.  We’re going to move into our next game.  And this one is about understanding variation.  And we have Tarah here who is going to explain this game, and hopefully this is extremely interactive, so all of you are going to be able to participate this time.  So Tarah, explain the game a little bit.

Tarah:     Alright.  So this is the competitive part of the competition.  You will be getting popsicle sticks on your table.  Do not pick those up yet until I tell you you can pick them up.  That’s step one.

Tom:       Table 11, no picking them up yet.

Tarah:     Don’t put them in your mouth.  Sometimes the colors like to leak.  So no chewing on your popsicle sticks.

Tom:       These are not sanitized.

Tarah:     What you will be building is a popsicle stick bomb.  You will build it as fast as you can.  And once you build it, raise it above your head and look at the number that will be on the screen.  Remember that number, that’s an important number for you.  Once you have finished building your popsicle bomb, then you can help somebody else at your table work on it, but only then.  Colors don’t matter.  Are we ready?  Any questions?

Tom:       Are we ready?  Any questions?  Good?  Alright.  Here we go.  On your marks, get set, go!

[Sound effects]

Tom:       Now hold it.  You should be able to hold it up and it should stay together.

Tarah:     It will stay together on itself.

Tom:       If you want, you can have one of our nerds check your work.  It should be able to hold up like Tarah is holding now.  Alright.  We have someone who’s done.  Remember what number you are.

Tarah:     Remember the number.  Nice.

Tom:       Alright.  Remember that number, that’s how many seconds it took you to finish the project.  We’re going to cut it off here in about 10 seconds.

Alright.  Go ahead and stop.  Okay.  Now comes the important part.  We need you to enter into your app how long it took you to complete it.  And we’re going to look at the histogram of how we did.

So open your app, this is session 20, and answer poll question #1, and you should see the times: 0 to 5 seconds, 5 to 10 seconds, etc.  Pick your time.  And if you didn’t finish, put a minute or greater.  So go ahead.  Enter your time.  Has everybody had a chance to do that?  If you’re having any difficulty with the app, just raise your phone and one of our Health Catalyst folks will come and help you get that entered.  It’s important we get everybody in here.  We want a full count of how we all did on this very important exercise.

Alright.  Brian, how are we looking for results?  Let’s take a look at how you guys did.

Brian:     So let’s take a look at the distribution here.  On the Y axis is the count, that’s the number of people, and on the X axis is the time bucket that you entered.  So if you finished within 15 to 20 seconds, it will show at that bucket’s midpoint, around 18.  So it looks, Tom, like we’re pretty heavily weighted toward the end.  The last bucket is greater than 60 seconds, and it looks like most of you are in that bucket.

Tom:       Alright.  Well, that’s okay.  We’re going to have a second chance at this.

[Tom Burton]

But let’s talk a little first about the principle of variation; Dr. Macias and Dr. Burton both covered this in their sessions.  If we can switch back to the PowerPoint.

So this principle of setting a minimum standard would be like us saying if you all don’t get it done in 45 seconds this next round, you’re bad clinicians.  And we’re going to punish you.  So that would be like setting that minimum standard and it would have some effect.  Some of you would say, I don’t want to be called out as a bad clinician, so I’m going to improve.  But those of you that finished in 30 seconds, you’re like, hey, I’m above the minimum.  I’m not going to worry about it.  I did fine.

Now, what we’re going to do instead is focus on what the best practice is: is there a better way of doing this?  And hopefully even those of you who got it done in 30 and 20 seconds can improve on that time.  And so the way we’re going to do that is we’re going to try to standardize the process.  Tarah is going to take us through the best way of building this popsicle bomb.

Popsicle Stick Round 2 Exercise

Tom:       Okay.  So she’s going to walk us through.

Tarah:     Alright.  You may carefully disassemble your popsicle sticks.  Don’t explode them yet.  Notice I said yet.  We’ll still have fun later.  Okay.  I am the world-renowned expert in popsicle bomb building.  I’ve been studying it for four whole weeks now.  So I pretty much have a master’s degree in that.  The first thing you’ll want to do is make a V with your popsicle sticks.  Are we okay?  Okay.  The one that goes down the middle goes behind, underneath your V.  The next popsicle stick goes underneath the middle one and above the outside two.  And the last one goes above the middle one and underneath the outside two.  You can do it with one hand, really quick.  None of you have an excuse.  And use this force only for good.  It’s dangerous.

Tom:       Yeah.  Let us show you how you could use this with your kids.  Go ahead and demonstrate the… Yeah, it flies apart.  Alright.  We also have some instructions here online.  If you need them, you can see the best practice laid out very nicely there.  So, disassemble your sticks.  We’re going to time it again.  This time we’ve identified our best practice, we documented it, we shared it with all of you, we did a demonstration, and we also have reference content.  In case you forget, you can look back at this detailed diagram of how to do it right.

So are you all ready?  Any other questions of our popsicle bomb expert before we take round 2?

Tarah:     I’m a good teacher and humble.

Tom:       Excellent.  Alright.  On your marks, get set – remember to look at your time when you finish; we’re just testing if you’re listening – alright, go!

[Sound effects]

Tom:       Wow, we’ve already got some folks done.  That’s excellent.  Less than 15 seconds.  Very good.  I think almost everyone is done.  That is impressive.  We’ll let our music play out because I love the jazzy music.  Alright.  Thank you very much.  Alright.  Go ahead and enter your time for this round 2 in your app.  Make sure you get that second time in there, and we’re going to see how we did on that same histogram for round two.  Has everybody entered their time?  Brian, what do we see this next round here, the second round?

Brian:     Second round is looking much better, Tom.

Tom:       Look at that.

Brian:     Much faster times on that distribution.

Tom:       That is very good.  We definitely tightened the curve and we moved it towards better outcomes.  Excellent.  Awesome.  So the key principle here – understand the variation, understand the best practice, teach the best practice, and then measure your results.  Okay.  Very good.

The Water Stopper Exercise

Tom:           We are ready to move into our third exercise and I have four brave volunteers.  So Dr. Lee, Carol, Amanda, and Greg.  Come on down.  You’re the next contestants.


Alright.  I’m going to have two of you over here, you’ll be team 1, and the other two over here, team 2.  You’re going to want the punchy one for this.  Right over here on this side.  So let me explain this exercise a little bit.  This represents the data quality problems that we sometimes face.  So Tyler has built some contraptions, and this bucket here represents the problems that happen with data quality.  It’s a bucket, and it has a bunch of holes in it.  Each team is being supplied with a whole bunch of stoppers of different sizes, and there’s the same number of stoppers as holes in these buckets.  Now, we’re going to turn the water on from this top bucket, and each team will compete to see which has the best teamwork in stopping the water from flowing to the bottom bucket.  Alright.  Are our teams ready?  Stand by.  We’re going to try to film one of these so you guys can all see it.  The challenge of it is keeping bad data from getting out into the organization.  Alright.  Are you ready?  On your marks, get set.  Go!

Alright.  Team 1 is doing well here.  Excellent teamwork.  They are plugging the holes faster than I thought they would, but I still see a bunch of bad data coming.  I don’t trust this data.  It looks bad to me.  Alright.  The leaks are slowing.  We still got some.  It looks like you stuck some of them in the wrong hole there.  Try again.  It’s neck and neck.  Okay.  Has it stopped?  Alright.  We got a winner right here.  Congratulations!  Excellent.

Alright.  So, how did your team do so well?

Audience:    We plugged the big hole that is right underneath first.

Tom:           You plugged the big hole first.  Dr. Lee, what did you think?

Dr. Lee:       That’s the key problem.

Tom:           That was the key point.  The big hole first.  So look for the big problems first.  Let’s see what this team did over here.  And Tyler, why don’t you pull the third one out.  I’m going to show you.  Yeah, it still looks like it’s leaking here.  Yeah.  So you had some problems reconciling two reports, it looks like.  Has that ever happened in your organization?

Audience:    Oh never.

Tom:           Never?

Audience:    Never.  It always agrees.

Tom:           Okay.  Alright.  Well, let’s give these four contestants a big round of applause.  We have T-shirts for you.  Nikki, if you can find the right sizes of T-shirts for these four contestants?  They will also, because they’re such good sports, be entered into our drawing.

[Tom Burton]

Now, I’m going to show you maybe an alternative way of doing this.  Alright.  So Tyler, why don’t you time me.  Now, there was something to the big stopper.  Let’s try it again.  On your mark, get set, go!

Alright.  I’m done.  Now, there’s an analogy here and a wet hand.  So the analogy is we want to fix the data quality problems at the source.  I’ve seen a lot of teams spend a lot of time cleaning up data downstream, and it’s problematic.  There are three places we can clean data.  One is during the data capture process, another is during the data provisioning process, that might be kind of this middle layer, another is actually in the report itself at the distribution point or the analysis point.  The top one is where we want to do it.  We want to fix the data problems at the source.  Let me give you an example of this.

We were working on reducing elective inductions, inappropriate elective inductions, and we were finding all sorts of problems with the data.  It was actually entered into the EMR in 12 different places.  Sometimes it was 39 weeks 1 day – this is gestational age – or 39W 1D, or 39.1; there were literally like 16 different ways it was getting entered.  So originally the analysts were spending a lot of time building complex algorithms that would merge all this data and make sense of it.  But then someone would accidentally enter something else and it would break their algorithm.  And so they spent a lot of time refactoring that algorithm.  So there are problems with trying to fix it downstream like that.  What they ended up doing was they turned off 11 of the 12 places where you could enter it.  They made it a default that it was going to be 39 weeks, 1 day, spelled out; they made it very specific; they trained everybody on the right way of capturing that data at a specific point in the workflow; and suddenly all their downstream processes improved.
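As a hypothetical sketch of the downstream cleanup the analysts were originally maintaining: a normalizer has to recognize every variant of a gestational-age entry, and it silently fails whenever a new variant appears, which is exactly why fixing capture at the source won.  The parsing rules below are invented to match the variants named in the story:

```python
import re

# Normalize the many ways gestational age was entered into a single
# (weeks, days) pair. Any variant the rules don't cover returns None,
# i.e. the record silently drops out of downstream reports.

def parse_gestational_age(text):
    """Return (weeks, days) or None if the entry is unrecognizable."""
    text = text.strip().lower()
    # "39 weeks 1 day" or "39w 1d"
    m = re.match(r"(\d+)\s*w(?:eeks?)?\s+(\d+)\s*d(?:ays?)?$", text)
    if m:
        return int(m.group(1)), int(m.group(2))
    # "39.1" meaning 39 weeks, 1 day
    m = re.match(r"(\d+)\.(\d)$", text)
    if m:
        return int(m.group(1)), int(m.group(2))
    return None  # a new free-text variant breaks the pipeline here

for entry in ["39 weeks 1 day", "39W 1D", "39.1", "thirty-nine"]:
    print(entry, "->", parse_gestational_age(entry))
```

Turning off 11 of the 12 entry points and standardizing the capture format makes this whole layer unnecessary, which is the point of the story.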

We have a question here.


Audience:    What happens to all the data that got through before the hole was plugged?  Okay.  It’s going to be resident in the data warehouse.  Is it going to be pulled into the reports?  If so, you’ve got spurious data in your reports.

[Tom Burton]

A great point.  So you have two options.  One option is to just say, you know what, that was before we started really tracking this well.  We’re going to recognize that everything pre-2013 is suspect because we weren’t really organized as a clinical team yet.  And so you just say, that’s tough.  And that may be the best answer.  The other is to go back and correct the source system – you know, re-enter that data in the right place, at the right point of capture, for that historic data.  And we’ve seen teams do both.  So at Allina, with Penny Wheeler, we worked on that same elective induction case, and they actually went back; they didn’t correct their 10 years of history, but they did correct one year, so this-year-versus-last-year reports would be accurate.  Those are the two ways I would suggest doing it.  I would still avoid doing it in the data provisioning or in the report stage.

This slide – three things when you’re thinking about data quality, the accuracy of the data, the timeliness of the data, and the completeness of the data.  And let me give you a couple examples.

So accuracy: operating room timestamps, sometimes they are meaningless.  They get filled out at the end of the shift, and it looks like the patient was discharged from the room and the setup for the next patient was all done in 16 seconds.  Well, you know that’s not real.  Okay.  So the accuracy is important.

Timeliness: billing data, for example, happens to have kind of a 30-day delay on some of it.  Or end of shift – I wrote it all on my arm and then at the end of the shift I entered it in.  So that could be a timeliness situation where, if I queried at a certain point in time, I don’t have a complete picture.

And then completeness of the data: sometimes data is just missing.  Ejection fraction, when we were working on heart failure, was only entered like 30% of the time.  Well, that’s key to the cohort definition – do they have an ejection fraction of 40 or less – and if it’s only entered 30% of the time, you’re not going to get the complete cohort.
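A completeness check like the ejection-fraction example is only a few lines; the patient records below are made up for illustration:

```python
# What fraction of heart-failure records have the field populated at all?
patients = [
    {"id": 1, "ejection_fraction": 35},
    {"id": 2, "ejection_fraction": None},
    {"id": 3, "ejection_fraction": None},
    {"id": 4, "ejection_fraction": 55},
    {"id": 5, "ejection_fraction": None},
]

def completeness(records, field):
    """Fraction of records where the field is filled in."""
    filled = sum(1 for r in records if r.get(field) is not None)
    return filled / len(records)

rate = completeness(patients, "ejection_fraction")
print(f"{rate:.0%} of records have an ejection fraction")  # prints "40% ..."
```

If only around 30% of records carry an ejection fraction, any cohort defined on it (for instance, EF of 40 or less) will miss most of the true population, no matter how good the rest of the pipeline is.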

So, three things to think about with data quality.

We’ve talked about most of these reasons for not fixing it in other places but trying to fix it at the source if you possibly can.  The more that you use data for punishment and for ranking – ranking and spanking, I like to call it – the more scrutiny and pushback you’re going to get on data accuracy and data quality.  The more it’s about learning, the better off you are, and the more you’ll be able to use data to improve even if it’s not perfect.

Paul Revere’s Ride Exercise

Tom:           Alright.  We’re now going to move to our last exercise.  I have two final contestants.  Richard and Ally, would you come on down.  I think you know who you are.  Come on up.

Now, many of you are aware of the famous rider, Paul Revere.  And today we’re going to have Richard play Paul Revere.  So Richard, come right over here.  Now, what some of you may not know is there were other riders that same night.  William Dawes was one of the other riders that night, and Ally, we’re going to have you play the role of William Dawes.  Now, they took different routes.  So, Paul Revere, this half of the room is going to represent your route.  Each of these tables represents a village that you need to contact.  Okay?  And William Dawes, this is your half of the room; it’s a little bit larger, just like the area that Dawes covered.  And what we’re going to have you do – well, first you’re going to need some trusty steeds.  So let’s bring our horses out in style.  These horses will guide you, and you are going to go to each table and tap just one person.  You don’t have time to talk to everybody, but you can warn one person in each village and hopefully they’ll warn the others.  So, Paul Revere, you’re going to take this half of the room.  William Dawes, you’re going to take this half of the room.  You’re going to have just a few seconds.  Your horses will guide you.  And so, on your marks, get set, go!

Make sure you get to every village if you can.  It looks like William Dawes has lost the horse.  And come back up here when you’re done.  Alright.  Very good.  Thank you.  Well done.


Alright.  Now we’re going to see how they actually did.  So those of you that were tapped, go ahead and open your envelope.  If you weren’t tapped, don’t open yours.  There should be a number of cards inside that say, “The Redcoats are coming.”  So go ahead and distribute those cards.  Keep one for yourself, because you were of course notified, and then distribute the rest to the other people at your table, if you have enough.  This represents the influence that that person had on the other members of their community.  And so let’s see how the Dawes side of the room did – would you hold up your red cards?  Okay.  So we have pretty good coverage there.  Okay.  So let’s look at the Paul Revere side.  Okay.  Wow!  How did you do that, Paul Revere?

Richard:      I cheated.

Tom:           How did you cheat?

Richard:      I was influenced by my horse.

Tom:           Now, what did your horse say?

Richard:      My horse said “tap the people with the red envelope.”

Tom:           Okay.  Alright.  Thank you very much.  You both receive a Health Catalyst T-shirt and you’ll be entered in our drawing here at the end.


[Tom Burton]

So this is actually what happened.  Thank you very much.  Paul Revere did know exactly who to go to.  And in Everett Rogers’ book, Diffusion of Innovations, he explains this.  He says that Revere knew who the influencers were in each town.  So he was very targeted in whose door he knocked on; they were the leaders in that town.  They were able to arouse the rest of the town, get them out of bed, and get them ready to meet the British.  Dawes, on the other hand, just randomly knocked on doors, and a lot of people just went back to bed.

So, this can be shown in this bell curve as five major groups: innovators, that would be like Paul Revere; early adopters, those are the influencers who can influence the majority; then you have the early and late majority; and then the laggards.  And so, the key is to pick the right people to be the innovators and the early adopters.

So when we think about this in a clinical setting, in the clinical program teams, you want to pick, for the guidance team, early adopters as the leaders, and innovators as members of that team as well.  The small workgroup, the ones that are going to redesign care delivery, are the innovators.  And then the most important are the broader teams that have representation from each of your units, or each of your clinics or facilities, where the care improvement needs to be rolled out.  These should be the early adopters.  Now, how do you pick them?  Well, one way to do it is to ask.  You can ask, you know, “If you had a tough case, who would you go to?”  That’s a great question for finding early adopters.  You can also ask things like, “Who would be the top three physicians you think will come up with a new treatment or a new way of deciding on a treatment?”  It is very important that you get their fingerprints on the process and buy-in from the groups as you pick those who will be leading this effort.
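The nomination approach Tom describes amounts to a simple tally: collect answers to "If you had a tough case, who would you go to?" and treat the most-nominated clinicians as candidate early adopters.  The names and responses below are invented for illustration:

```python
from collections import Counter

# Each list is one clinician's answer to "who would you go to?"
survey_responses = [
    ["Dr. Patel", "Dr. Kim"],
    ["Dr. Kim"],
    ["Dr. Kim", "Dr. Ortiz"],
    ["Dr. Patel"],
    ["Dr. Kim", "Dr. Patel", "Dr. Ortiz"],
]

# Tally nominations across all responses; the top names are the
# candidate early adopters for the broader rollout teams.
nominations = Counter(name for resp in survey_responses for name in resp)
for name, count in nominations.most_common(3):
    print(name, count)
```

This is only a starting point: as the talk stresses, the final picks still need the groups' buy-in, not just a leaderboard.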

Teach Others Exercise

Tom:           So, we’re now into the final session, the final experience.  And what we’re going to have you do is take one minute.  I want you to break into groups of two at your table, and you’re going to go through the four exercises we did – the ‘Deal or No Deal’ exercise, the Popsicle Bomb, the Water Stopper, and Paul Revere’s Ride – and I’m going to give you one minute to explain each exercise and its purpose to your neighbor.  Then we’re going to switch and your neighbor is going to explain it back to you.  So go ahead and break into groups.  One minute.  Go!

Alright.  Go ahead and switch.  Go ahead and switch, if you haven’t already, and have your neighbor explain it to you now.

Alright, 10 more seconds.

Alright.  Stop everyone.  Okay.  Hopefully, by having to explain these principles to someone else, you’ll learn even more.  We’ve found this to be a really effective way of sharing knowledge – having people teach one another the principles and the concepts.

[Tom Burton]

So I hope this session was helpful.  What we’d like to do now is have you take our final survey for this session.  First, overall, what did you think of the experiences?  And then for each one, how effective was it in teaching the principles?  ‘Deal or No Deal’ – how effective was that in teaching prioritization by size and variation?  The Popsicle Bomb – variation and standardizing processes?  The Water Stopper exercise – data quality and fixing data-capture problems at the source?  And finally, Paul Revere’s Ride – choosing the right influencers?

And then the last question is – are these exercises you might want to run in your own organization?  We’d be happy to help provide some of the materials we used here today so your organization can do that.  Just indicate your interest.

Alright.  While you’re finishing that up, we do have a drawing for all of our brave volunteers who were willing to get wet, ride horses, and participate in the game show.  So I’ve got some names here in the hat, and I’m going to draw our winner.

Laura, come on down.  Congratulations.



Thank you.

[Tom Burton]

Thank you very much for participating.

[Tyler Morgan]

Alright.  Thank you very much.  Join me in thanking Tom for providing such an interactive, fun session.  Thank you, Tom.

We’ll now move to the question and answer portion of the session.  We’ll start by turning the time over to Chris, who is going to share with us the insights the analyst team has gained from your responses during the session.  And Chris, before you start, I’d like to remind everyone that you can use the ‘Submit a Question’ button in your app to submit any questions you’d like as well.  So go ahead, Chris.


[Chris]

Thanks, Tyler.  Tom, I’ll share two insights.  First, we found that when you talked about fixing data at the source, that really resonated with the audience.  And second, we were monitoring the time it took people by role, and it appears that those in finance roles took more than twice as long as clinicians to build the popsicle bomb the second time.

[Tom Burton]

That is a great insight.  Excellent.  And look at that.  The clinicians are the fastest.  You know, that’s just, I love that.  Alright.  Thank you very much.

[Tyler Morgan]

Alright.  Well let’s go right into our questions then.  So we’ve got a few here in the app.  The first question is, how do we deal with data problems if they come from an outside source?

[Tom Burton]

That’s a great question.  Those are much tougher.  We can work with those outside sources and make them aware.  Many times they don’t know that their data has problems – they’re just not aware.  And so, that would be the first line of defense.  The second would be letting people know – footnoting that there’s a problem with this data and that we’re aware of it.  Again, I would really try to avoid correcting it downstream.  If you’re hiding data problems, they’ll never get fixed.  So again, I would try to work with that outside source and make them aware that their data is problematic.  Great question.

[Tyler Morgan]

Now let’s take a question from the audience.  Somebody has raised their hand.  Do you have a question?


[Laura]

Assuming that we can’t get our sources to fix everything all at once, do you have suggestions about which things to start with, and maybe some things that we would still clean?

[Tom Burton]

So the way I would do that is by the AIM statement.  When you’re picking something to improve with these clinical program teams, you’ll find things that are most important and other things that aren’t important relative to that AIM statement.  That really narrows your focus:  I don’t have to fix everything at once, but I do want to fix the ejection fraction in this heart failure data; or, for elective inductions, I want to know the gestational age, and I want it really accurate, because it really matters for my outcome or process metrics.  So that’s where I would start.  I would prioritize the things that are going to impact what you’re trying to improve.  Great question, Laura.

[Tyler Morgan]

Alright.  We’ve got a question from the app.  The question is, is it harder to find early adopters or innovators?

[Tom Burton]

It’s more important to find early adopters.  You can find innovators who come up with great new ideas, but they don’t have as much influence as the early adopters.  I think it’s not too hard to find either, if you look, but it is more important to find the early adopters.  They are going to be the ones with the biggest influence in changing the entire population of physicians or clinicians you’re trying to work with.


[Audience Member]

What thoughts or experience do you have around the Pareto principle?  In terms of figuring out not just what the opportunity is, but what’s the cost of making that change, and also what change will that process tolerate?  What’s the limit?

[Tom Burton]

Great question.  So one of the other breakout sessions – if you didn’t get a chance to go to Bobbi Brown’s session on ROI, that’s a great one to watch – because you do need to weigh the cost, and we call those balance metrics.  For example, when we worked on sepsis, we put together a code sepsis team.  Well, there was a cost associated with doing more for those sepsis patients, and the question was whether that cost outweighed the benefit of reducing the length of stay for sepsis patients.  So you definitely want to measure the cost of the intervention as well as the positive outcome that comes from the intervention.  Great point and great question.  I would really recommend watching Bobbi Brown’s session on ROI and the things to think about when you’re calculating ROI.

[Tyler Morgan]

Okay.  We’ve got about 4 minutes left.  We’ve got time for just a couple more questions.  It looks like we do have a question right at the back first.


[Audience Member]

So I’ve often heard that including laggards or naysayers on your clinical improvement teams is a good way of bringing that voice in.  What are your thoughts on that, because you’re…

[Tom Burton]

Yeah, it depends.  It’s a risk.  They can also totally derail the process.  So, I would probably not recommend it unless they’re so loud that they’re going to obstruct the process – they’re going to capture Paul Revere on his ride and put him in prison.  So it really depends on the situation.  But ideally, you choose early adopters who are well respected.  Typically, the laggards are not going to be thought leaders.  They’re going to be seen as whiners – problem-creators, not problem-solvers.


[Mark]

Oftentimes with clinical care teams, the capability maturity of the team is actually a function of how well you’re going to be able to implement, and an individual who’s an early adopter may have trouble if their team isn’t structured right.  How do you address that?

[Tom Burton]

It’s a great question, Mark.  Thank you.  One of the keys is education.  Taking them through exercises like this as a team really helps.  And so, what we try to do is make sure that team goes through some training – some advanced quality training – together, as a team, and that brings those who are maybe not early adopters up to speed and gets them involved in the process.  And so, education is a real key to avoiding that situation of different levels of competency or ability.

[Tyler Morgan]

We have time for one last question.  Do we have one last question?  Okay.  I’ll take one here on the app.  The question is, how can we get Health Catalyst to do these demos at our institution?

[Tom Burton]

Just let one of our team members know.  Jeff, there at the back – he used to be a nerd, but now he’s put a jacket on because he couldn’t take it.  You can let Jeff know, and he’d be happy to help set that up, and we’re happy to come.  We really believe education is a key component of transforming our healthcare systems and really helping them improve.

Thank you so much for participating today.


Thank you.


How To Drive Clinical Improvement Programs That Get Results – HAS Session 20 from Health Catalyst