Changing Healthcare Using Data: A Case Study of One Small Health System's Odyssey To Achieve Material Improvements (Webinar)

Changing Healthcare Using Data: A Case Study of One Small Health System’s Odyssey To Achieve Material Improvements (Transcript)

[Bernie Monegain]

Hello and welcome to today’s Healthcare IT News webinar, Changing Healthcare Using Data: A Case Study of One Small Health System’s Odyssey to Achieve Material Improvements, sponsored by Health Catalyst.

Like many health systems, North Memorial Health Care in the northwest metro area of Minneapolis-St. Paul has spent the last few years battling for financial stability. The 648-bed, two-hospital system has struggled with rising costs, stiff regional pressure from an abundance of formidable competitors, and unpredictable reimbursement amid an uncertain political environment. During this webinar, you’ll learn how to shift to a data-driven decision-making culture and how to make data meaningful so providers can make better decisions.

Our speaker today will be J. Kevin Croston, President of Physician Organization and Chief Medical Officer at North Memorial Health Care. Dr. Croston earned his BS Degree at the University of South Dakota and his MD Degree at the University of South Dakota School of Medicine. He completed his surgical residency at Hennepin County Medical Center.  He has served as Medical Director of North Trauma Institute at North Memorial Medical Center since 1993, and was the Chairman of the Minnesota State Trauma Advisory Committee from 2005 to 2010.

And now, I’d like to hand it off to Dr. Croston to begin our presentation.

Dr. Croston…

[Dr. J. Kevin Croston]

Good afternoon. I’m going to try to go through our little odyssey. I’m going to talk specifically, from a physician’s perspective, about how these things went, how we migrated from being a hospital to a healthcare delivery system, and how our thinking morphed with the availability and capability of our data warehouse.

[Poll Question #1]

What is your primary area of focus?

First of all, it’s kind of nice to know who we’re talking to. So I’m going to ask you: what is your primary area of focus? The options are physician/clinical care provider, quality, information systems, finance, administrative executive, or other.

[Poll #1 Results]


For my objectives today, I’m going to talk to you about how to shift to data-driven decision-making using a Key Process Analysis tool and how to make your data meaningful so providers can make better decisions. We’ll talk about specific programmatic and functional issues related to that. And then I’m going to give some examples of our successes and also some of the challenges, and mention some things that we didn’t do so well, so you can learn from our mistakes. We’re going to specifically deal with pregnancy and the reduction of unnecessary pre-39-week inductions, cardiovascular care, revenue cycle processing and professional billing, and catheter-associated urinary tract infections (CAUTI), and we’ll show you how we’ve morphed through that process.

About North Memorial

So about North Memorial: as mentioned earlier, we’re a Minneapolis-based, two-hospital health system in the northwest suburbs of the Twin Cities. We have 15 clinics and about 300 providers. You can see that we provide the full continuum of services, we are a Level 1 Trauma Hospital, and we’re committed to developing clinical effectiveness guidelines to deliver the highest quality care at a lower cost. Part of that is just because our market is very competitive. We have very large health systems that are quite organized, and until recently we had been the classic hospital that owned a few clinics, which was a successful model in the 80s and 90s but not so successful lately as the consolidation of the healthcare market continues. We have about 90,000 emergency room visits a year, our annual inpatient admissions are about 33,000-34,000, and our total FTEs are around 4,200.

North Memorial Situation

As far as North Memorial’s situation, I mentioned we have tough regional competitors. We have a couple of very large health systems in town. Like everybody else facing a declining payment stream, we’ve wrestled over the last 10 years with our payer mix. We are located near an impoverished portion of Minneapolis, and as a trauma center we get a lot of uninsured patients, which created an issue for us a few years back that we’ve had to deal with. And frankly, we were the classic health system: we had a lot of data, we just didn’t know what to do with it. It wasn’t organized in a fashion that was very useful. So we were basically using product line reports that were pulled together 6 months after the year was over to make decisions about the following year that we were already 6 months into. It was a bumpy and unwieldy process.

Our medical staff was anxious about where we were heading. Our executives were anxious because of our financial status, and it created a fairly tense environment. There was a lot of conversation about us merging with somebody in town, not by us but by everybody else. We’re the smallest health system in Minneapolis, so I think that made us the likely target for conversation in any event, and we weren’t helping ourselves very much.

And we were also a hospital. As I mentioned, we were a hospital with clinics as opposed to a healthcare delivery system, and that made our decision-making very hospital-centric rather than enterprise-based. We’ve kind of morphed through that over the last year or two.

The opportunities were good, though. We had a strong quality improvement culture, based largely on our success in the trauma world where it’s fairly mandated, and we have a lot of engaged staff. It’s a very cultural place, to say the least. There is a lot of “us against the world” mentality here, and I think that helped us through some rough times. We had some administrative turnover, but we were fortunate enough to land with supportive leadership here that messaged to us that we needed to fix the place and do what we could do. So I had the leverage to do what I needed to do to make things move. And we recognized that we had to be a lot different than we were in order to survive. That drove a lot of these changes, and it also affected our conversations with our board members, as you can imagine.

Key Process Analysis (KPA)

The first tool we used, the first thing to bring up, is this Key Process Analysis. With Health Catalyst, who helped us develop our data warehouse, we would take the data, look at it, and start using it to drive our decision-making. So the first thing we tried to do was evaluate the areas where we could make the most improvement and have a continuing relationship going forward. We were trying to identify the things that were going to help us the most and move them along.

What bubbled up when we looked at clinical or care process modeling was length-of-stay: our length-of-stay hours were one of the factors we used to help guide it. You can look at this form; this is the structure of the KPA tool, and there are a lot of factors you can sort by. Length-of-stay showed that our mother-child product or service line was fairly strong and was easily our largest source of length-of-stay. So it made it easier for us to focus on the fact that this was the direction we needed to take and to try to standardize the care process modeling.

And I would say, going back to the prior slide, we recognized substantial changes needed to be made to reduce the variability in the way care was provided. We’re a health system with 300 employed providers and 1,500 affiliated providers. When you don’t own a group of physicians, a lot of times they’re a little less amenable to change, and a little less amenable to change on your terms. So we were fighting through that at the same time.

We tried to find a tool that would help us get there. The nice thing about this Key Process Analysis tool is that while the physicians were in the room and we were having a conversation, we were able to bring it up and discuss with them the findings, so they could see real time what we were looking at and it would kind of guide the conversation.

KPA Results

Another example of this: let me dig down a little deeper and look at cardiology. This is a classic drilldown into that particular feature. You can see the different sorting we did based on the highlighted areas. What this looked like was a provider-based model. So we’ve got the different providers and the associated APR-DRGs to show the severity index. The bigger the bubble, the greater the number of patients in it; and the more spread out, the greater the variability in the care that’s provided. What’s interesting in this tool is that if you look at congestive heart failure, for example, the lower-severity patients had a lot of variability in the way they were cared for. The higher-severity patients also had variability, but it was a startling finding that we were that all over the map in how we were caring for somebody who arrived at our facility with a diagnosis of congestive heart failure. That was a learning for our physicians, and it drove them to the table to at least have a conversation about how we were going to do this differently so that we could get a better outcome.

North Memorial Resources Consumed

So this tool basically looks at how we use our resources. The red line that arches upward is the total cumulative percent of resources used, and the percent of total resources consumed by each clinical work process runs along the bottom. The interesting finding was that 50% of the care delivered in our health system came through only 7 care process families. The encouraging thing for our medical staff and our people here was that we could reduce variation in the way care is provided by paying attention to just 7 care process families, not 100. That rallied everybody. And if you take it even further, you’ll see that 80% of the care provided within our health system falls within 18 care process families.

That was an encouraging sign for us. We valued our size because we thought it made us more nimble, and this review was our chance to act on that. I’m not certain that we were actually very nimble, but we felt we were; that was supposed to be our strength. This really brought it into focus. We had things we could achieve, and stay viable, without having to eat the elephant.
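The cumulative-resource view described above is essentially a Pareto analysis. As a minimal sketch, with made-up figures rather than North Memorial’s actual numbers, here is how you could find the smallest set of care process families that accounts for a given share of resources:

```python
# Sketch of the KPA-style Pareto question: how few care process families
# account for 50% (or 80%) of total resource consumption?
# The families and resource totals below are purely illustrative.

def families_covering(consumption, threshold):
    """Return the smallest set of care process families whose combined
    resource share reaches `threshold` (a fraction of the total)."""
    total = sum(consumption.values())
    covered, chosen = 0.0, []
    # Walk families from largest consumer to smallest (Pareto order).
    for family, cost in sorted(consumption.items(), key=lambda kv: -kv[1]):
        chosen.append(family)
        covered += cost
        if covered / total >= threshold:
            break
    return chosen

# Hypothetical resource totals (e.g., length-of-stay hours) per family.
usage = {"Pregnancy": 400, "Heart failure": 250, "Pneumonia": 150,
         "Sepsis": 100, "COPD": 60, "Stroke": 40}

top_half = families_covering(usage, 0.50)   # ["Pregnancy", "Heart failure"]
```

With these toy numbers, two families cover half the resources; at an 80% threshold, three do. The same sort-and-accumulate logic scaled to real data is what made “7 families cover 50% of care” visible.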

[Poll Question #2]

What percent of your quality improvement efforts are prioritized using a similar variation/resources analysis?

For our next polling question: what percent of your quality improvement efforts are prioritized using a similar variation or resource analysis? That was a big learning for us early on: the areas we thought we had trouble with, we probably did, but there were a lot of surprises about things we didn’t know we had trouble with, and that guided our prioritization process. Now, I’ll give you a few seconds to click on that.

[Polling Question # 2 Results]

Basically, a lot of you are unsure, and then the responses taper off as you move toward 75-100%. So somebody out there is data-driven; that’s good, congratulations. For the rest of you, some are heading in the right direction. And for those of you who are unsure, I’m guessing you don’t have the data or didn’t know, because it’s a fairly meaningful process.

So let’s move on.

How North Made Data Meaningful

So how did we make our data meaningful? We had lots of data, but it lived in a number of different sources. It was housed in HR, in Finance, in Accounting, in Epic; it was housed everywhere you can imagine. We wanted something that consolidated everything. Then once you get the data, you have to really act on it. So part of this process was learning how to put teams together that could actually act on data. Again, we went from the classic old-model, hospital-based decision-making to a new direction from my CEO: get out, get our doctors involved in every part of the process, and build a structure that encouraged physician participation. Because we’re in a different position than a large health system that owns all of its physicians, we needed to engage ours differently.

So the first thing we did was set up our Clinical Operations Leadership Team, which we called COLT. This was a group of senior physicians and physician leaders from our medical staff. We partnered them with our senior administrative staff, and we met weekly for a couple of hours to go through strategy and prioritization. Primarily, we put physicians in the midst of operational decision-making at an earlier phase. The biggest thing we wanted from them was buy-in to the fact that we were going to do this, and help with prioritization.

Next, we decided to start developing guidance teams, which were a blend of the old service line model but also more than that. We’ll go into more detail on that a little further along. We developed several guidance teams including Women & Newborn, Community Care or Primary Care, Cardiovascular, OPPE (we wanted to try to automate the OPPE function from medical affairs), and Infectious Disease.

Now, if you’re a health system trying to downsize or “rightsize,” it’s pretty tough to do this without thinking about what resources you have. One of the guarantees we made to my CEO was that we would do this without adding FTEs. I would tell you that if I had the FTE resources, I would have staffed up first and rightsized later, but I didn’t have that luxury. So we took people who were currently in jobs, did a fairly full-scale evaluation of what we had, and found the people we thought we could repurpose in ways that were meaningful and matched our new process. When we started to do this, some people quite frankly needed to move on to other opportunities, but others were able to be repurposed in ways that were meaningful.

Then we selected medical leadership to champion the vision of the process. Once we knew where we were heading, we took the obvious leaders (and there were a number of conversations around that), engaged the medical staff in that discussion, and added those medical leaders. They were at about a 0.1 or 0.2 FTE level, sort of a medical director of the guidance team.

The next process piece that was fairly substantial was the data warehouse and how the data was organized. That had been launched before we started this process. We had the data warehouse built by Health Catalyst, which actually allowed us to access the information. We put a data governance structure in place so that how the data moved and how we loaded the warehouse was decided by our IT and clinical folks. The organizational team structure was really built to support our process improvement outcomes. So everything was oriented around continuous process improvement. Every time we developed a new guidance team or looked at a process differently, it was built on the premise that we were going to start this and then expect continuous process improvement.

Then we wanted to make sure that both the hospital and the clinics were included consistently while maintaining their own autonomy. We had everybody at the table to make the decisions. We wanted them all to have access to the information, but we were really building an ambulatory strategy at the same time that we were trying to rightsize our hospital structure to get into a position where we could be successful.

The last thing we did, while buried in all this, was articulate the vision of where we were heading. We needed our staff, our medical leaders, our physicians, and so on, all to understand what success looked like and where we were heading. This proved more complicated than we thought, but it was a process we went through over time, and now I would say it’s much more robust than it used to be.

Pregnancy (OB) Team Structure

So if we start with the pregnancy (OB) team structure, this shows how we built these teams. Under a care process model, this was a core work group. We selected Dr. John Nielsen, a senior physician leader who had actually been a member of our Board of Trustees at one point, so he was a strong personality, to launch our first care process model. If you think back, we identified Women’s & Children’s, or pregnancy, as the biggest (number one) area in length-of-stay within our facilities, so we started to build the guidance team there. We took one of the clinical directors of the two hospitals, Linda Engdahl, who was a nurse, to be the clinical director lead. The director at the other hospital, Cathy Anderson, became the knowledge manager. Again, this was a bit of a learning experience, because they didn’t like the fact that one was in and one was out, but in reality they all had plenty to do and it wasn’t site-specific, per se, to start with. So everybody had a role. Then we tried to populate these boxes. We were able to put a couple of nurse experts on board, and a knowledge manager, somebody who went out, knew what was going on, and could pull the information and the right people together in the room at any one time. We had an outcomes analyst from our measurement and reporting section (at the time it was called decision support). And we had a data architect assigned to the project who would be in the room. In the early model, the data architect ran the meetings and was the one who conversed with the physicians and guided the process.

So we got all the people in the room: we had HR resources assigned that would get pulled in when needed, we had finance resources assigned that would get pulled in when needed, and we had IT resources, because once we built the process, we were going to need Epic and other tools to help drive the change, and we were going to have to change the workflow based on the outcomes we found.

Women & Children Analytics

So this is a guidance team dashboard that we developed over time. While the first project was reducing elective inductions before 39 weeks, this Women & Children’s group has moved on, and they’ve had a number of other projects that are also reflected in this dashboard. You can see how it’s structured. As CMO, this is my look at what that guidance team is up to and how they’re doing. This is automated and refreshes on a daily or weekly basis, so I can pick it up and understand where they’re going. If something drops off the grid, I’m aware of it. My goal is that once you make a change, you’re able to sustain it.

So I’ll go back to the pregnancy guidance team and talk a little bit about that. They went through a process. They looked at the way things were happening, and our outcomes and results were showing that we had enough pre-39-week issues to pay attention to. Now again, in this particular specialty, it was all affiliated physicians; there were no employed physicians. So we had the 5 or 6 groups that delivered babies in our two facilities get together and agree on order sets and processes and how they were going to call to schedule inductions prior to 39 weeks. We put a medical director in charge of this process, so that if somebody tried to do it early, we had recourse, and we changed the way it was recorded in Epic. One of the learnings from this group was that 39 weeks sounds like an easy, objective number. It should be something we easily write down every time a patient comes in. As it turns out, it was recorded in different places in Epic, it was recorded inconsistently in Epic, and it wasn’t a clean process. So in order to really evaluate what we wanted to do, we needed to clean up the data first.

One of my recollections from when Tom Burton from Health Catalyst first got here was that when you start these guidance teams, the mantra to your physicians is always, “we know the data is wrong, help us fix it.” So we first pulled the data, the doctors got their definitions down, and we used the nursing staff to help us with the process and what happens when people come in and schedule. We then established criteria that had to be met before anyone could schedule an elective induction before 39 weeks, and we started tracking it. I’m happy to say that within about 3 months, it dropped off significantly. We basically have no unnecessary elective deliveries any longer, and when one does pop up on the grid, it usually has more to do with the fact that we haven’t followed up and figured out what the real medical indication was. People rarely get through that process any longer without somebody intervening.
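The scheduling criteria described above can be sketched as a simple gating rule: an elective induction before 39 weeks 0 days is routed to the medical director unless it carries an approved medical indication. The indication list and parameters below are hypothetical illustrations, not North Memorial’s actual clinical criteria.

```python
# Sketch of a pre-39-week induction scheduling check.
# The approved-indication set here is illustrative only; real criteria
# would come from the OB groups' agreed evidence-based guidelines.

APPROVED_INDICATIONS = {"preeclampsia", "diabetes", "fetal growth restriction"}

def needs_review(gestational_age_weeks, indication=None):
    """True if a scheduled induction should be routed to the medical
    director for review before it can proceed."""
    if gestational_age_weeks >= 39:
        return False                 # at or past 39 weeks: no gate applies
    # Before 39 weeks, only documented approved indications pass through.
    return indication not in APPROVED_INDICATIONS
```

The point of the rule being this simple is that it only works once the gestational age is recorded consistently in one place, which is exactly the data cleanup the team had to do first.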

Pre-39-Week Elective Inductions

So this is a drill-down from that dashboard. The most important thing isn’t the detail here; it’s the improvement over time. The bottom bar graph shows, year over year, how we’ve improved on elective deliveries. But the biggest and most enlightening thing for our physicians was that we were going to be able to look at them specifically. We could look at the groups specifically, the providers specifically, and actually the patients specifically, all while they were sitting in the room, and have a conversation about it. So if you’re an OB-GYN physician and you crossed that grey line in our pre-39-week world, you’re going to have to, at some point, stand and face the music with your peers. That’s a different type of responsibility than when you could slide through before and nobody really knew one way or the other. It changed behavior dramatically here, and the results speak for themselves. It’s quite astounding.

Women and Newborn Pre-39-Week Elective Inductions

So this is our process. Initially, as I mentioned, we wanted to define the workflows and identify improvement opportunities. Again, you look at what you’ve got. You want to establish what you’re going to measure and then get a baseline, which is what this group did. Then you want to find the evidence-based standards for elective induction. We started by asking: what is the community standard, what have we got to do, and how are we going to hold people accountable to it? And then the goal, the objective, is to reduce the rate. You want to clearly articulate your goals and develop an aim statement that is something you can simply do.

Now, if you look at what happened, Health Catalyst came with tools to help us develop all these things. I’m not going to go into that too much other than to say that all of them were key in helping us present the information in a way that was meaningful for our physicians to react to. To date, the group has adopted evidence-based guidelines, standardized the workflows, and all rallied around it. These are big-personality physician groups, and they all managed to play on the same ball field for the first time ever, I think. We established elective delivery baseline measurements and were able to track the quality improvement gains and speak to them as well. And then we developed a permanent team. Once you make progress, what has typically happened in the past is that we would gradually lose attention, lose sight, and drift back into our old ways. Well, in this case, we’ve got a dashboard, we’ve got a team paying attention to it, and periodically they review the findings for all the things they’re not currently working on. It’s a nice process to keep people accountable.

So early-term deliveries were reduced from 1.2% to 0.3%. Our goal is to get to 0.0%. And truth be told, 0.3% is something they argue about vehemently. They’re pretty proud of their progress, and usually that 0.3% is one patient in a month who had something go through that nobody is happy about. It’s usually a discussion about whether or not it really was medically indicated, and physicians are never going to agree completely on anything. Still, it’s dropped fairly dramatically.

On top of that, we went to the payers after we’d completed this first project. We told them at the time that we were doing this. We knew we would be cutting our own throats, because right now we make more money when we admit all these babies to the Neonatal ICU when they’re delivered early. So we told them: the only ones that are going to benefit from this process is you, so we’d like you to help us with it. The first year, they offered up a $200,000 partner bonus payment simply as a result of our work, and I’m pretty sure they were able to measure their decreased claims. We were pretty tickled with it, because they came back supporting other programs down the road.

Major Learning:  Follow the Plan!

So the real learning from all this was to follow the plan. It was an interesting learning session. We needed to learn how to do this, and we needed to stick to the discipline of developing simple goals, identifying the baseline, identifying our goals, where we’re heading, and how we’re going to get there, and then sticking to the plan and moving forward. That was a fairly stunning success for us.

Cardiovascular Care: Challenges and Lessons Learned

Now I’m going to talk about a different challenge under cardiovascular care. We had just completed the goal of designing these care process models and these guidance teams. We wanted to take the lessons learned and then apply them to the next one and apply them to two more and as you can see, build the pyramid pretty quickly. So we took what we thought was a great success from OB and launched the Cardiovascular Care guidance team.

The new challenges there were interesting. First of all, we really struggled trying to replicate the first clinical program’s success, and there are a million factors in that. I’ll stick to a few of them as we move along. The biggest issue was that the cardiologists, a self-described data-driven bunch, jumped all over this and said, “Cardiovascular care is ours, we’ll do it.” And honestly, we hadn’t completely embraced the whole guidance team construct yet, which is that everybody who touches the cardiovascular patient is supposed to be included in the guidance team structure and be part of the conversation. So our cardiologists took it and said, “Well, this is a department-based thing, we’re going to do it ourselves going forward.” Truthfully, we had a lot going on here, and this probably dropped off our radar a little bit or we would have intervened, but we didn’t. They really didn’t understand the guidance teams and had already decided their quality was better than anybody else’s in town. Probably all of you, if you work with doctors, have groups that tell you they’re the greatest group in town, although measurably it’s pretty tough to define that or actually reproduce their findings. They’re frequently happier with themselves than we are with them.

We lacked organizational readiness, and we weren’t really ready to stomp on these guys, get them back in line, and get them into the discipline of the process. So it was very rough. The physician leaders changed weekly. Instead of having one resource committed to be the knowledge leader and the medical director of the process, it was whoever was free that week who showed up. It was also a smaller group, only cardiologists, and there are only a few of them doing it, so they thought it was no big deal if they didn’t talk to each other. We were really having a hard time, so we really didn’t make much progress. In fact, we spent about 6 weeks having a long conversation. In the end, they did what they wanted anyway, which was going to the arrhythmia world and talking to the electrophysiology physicians, which bore on only a fraction of the patients we wanted them to address. We really wanted to focus on congestive heart failure, but their response was, “Well, we can’t possibly own that, because everybody else admits those people, so how are we going to do it?” Which is my point exactly about going back to the beginning and understanding what a guidance team is for and how it’s structured: it’s not department-based but disease-specific, involving everybody who touches that disease. So we did agree with them that going forward, we would involve emergency physicians, family practice, internists, hospitalists, and so on.

The lessons learned were that we all needed to understand where we were headed and make sure that organizationally we were ready to stand our ground and inspire that change. And then we had to include everybody who would touch those patients in the development of the care model. Our cardiologists came across as pompous know-it-alls, only interested in marketing their achievements. But I know that despite the fact that they sometimes drive me crazy, they’re really well-intentioned, good physicians who want to do the right thing. We just didn’t show them how to do it. So after the challenge of this, we did sit down with them and tried to write the lessons learned in stone so they wouldn’t go away.

Know when we should and shouldn’t be involved. We need to let these groups do their thing, but you need to steer the sheep back into the herd, because they tend to wander off and get distracted by shiny objects or other things that come along and make an impression on them.

Another lesson (and I can’t reinforce this enough) is that you really have to get them to buy into the methodology. If they don’t, you won’t make progress. They have to believe, and they have to feel the power of having data warehouse information available at a meeting in order to understand where you’re heading. They have to know that through these tools we can sit at a meeting and, with the warehouse, answer their questions while they’re sitting there, as opposed to the old model where we would get a question, spend two weeks looking it up, and come back and get together again. That made the meetings valuable to them, and then they started showing up.

The final lesson learned was that the focus of the project did not line up with the opportunities identified by our KPA. They wandered off and did what they wanted to do rather than addressing what we found was the biggest problem with variability, because they looked at congestive heart failure as too big a task to undertake. I would tell you that my lesson with this group since that time has been this: we’ve taken them through the education process, and we’ve challenged them through their incentive plan, starting next year, to be the organizational leaders of this process, the knowledge managers. I’ve told them that their role is to go make sure that every primary care physician, emergency physician, and hospitalist thinks it’s their own idea to do a better job of managing congestive heart failure, using criteria that the cardiologists help them identify and establish. It’s a daunting task, but it’s a pretty good one for them to entertain. And if they don’t do it, they won’t get their incentive. So I think we’ve got them hypnotized with the shiny watch right now, and we’re happy about that.

Professional Billing Application

This next one, the professional billing application, is an interesting piece. It started out as a discovery project. We had just finished the pregnancy guidance team and started the launch of the cardiovascular guidance team, and then all of a sudden had an Epic upgrade. It was going to tie up our IT resources for about a month, which created a month-long gap in servicing our other projects. We just didn’t think we were going to have any capacity to do anything else. So we were sitting around and thought, let’s see if there’s anything we could do over the course of this month. We knew we were still having billing and coding issues, so we asked our decision support folks to dig into that and see where we were at. We knew we were having this ongoing billing issue, and we found out it was a largely manual process. What follows is a basic picture of the overall notes being generated; you can look at the dashboard that was put together. In this, we found something we thought we could fix. This was a process where we realized we were missing all kinds of billing opportunities, missing notes, and not billing for patient encounters that we should have. So with Health Catalyst and our finance group, we developed a type of finance guidance team and went through this whole process…

Professional Billing Application

We built this billing application, a drill-down effort of that group, which all of a sudden was able to identify where all these notes were. To give you a better visual: before, a coder would have to go into Epic, scroll around, look for patients, and try to find the ones that needed a bill generated from a note. It was an unbelievably cumbersome, extremely time-consuming process. Health Catalyst was able to help us automate this, so that opportunities for billing were identified and kicked out onto a screen. It saved these coders hours upon hours of time each week and identified where we were missing dollars.
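To make the drill-down concrete, here is a minimal sketch of the gap-detection logic an application like this automates: comparing documented encounters against posted charges and surfacing the unbilled ones as a coder worklist. The table shapes and field names (`encounter_id`, `provider`) are hypothetical stand-ins, not North Memorial's actual schema or Health Catalyst's implementation.

```python
# Illustrative sketch only: in-memory lists stand in for real
# enterprise data warehouse tables fed by Epic.

def find_unbilled_notes(notes, charges):
    """Return documented encounters that have no matching professional charge."""
    billed = {charge["encounter_id"] for charge in charges}
    return [note for note in notes if note["encounter_id"] not in billed]

# Hypothetical day's worth of data.
notes = [
    {"encounter_id": 101, "provider": "GI"},
    {"encounter_id": 102, "provider": "Hospitalist"},
    {"encounter_id": 103, "provider": "Trauma"},
]
charges = [{"encounter_id": 101, "amount": 240.00}]

# Encounters 102 and 103 have notes but no charge, so they land on the
# coder's worklist instead of the coder hunting for them screen by screen.
worklist = find_unbilled_notes(notes, charges)
```

The point of the sketch: instead of a coder paging through Epic patient by patient, a set difference like this is computed once over the warehouse and pushed to a screen, which is essentially the time savings being described.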

They started with a smaller group of clinicians here, mostly GI and hospitalists. You can see them on the slide: Intensivist, Oncology, Palliative, and Trauma.

Professional Billing Efforts

So again, in a 4-week period of time, we had a fairly dramatic outcome. I think we identified something like a $5 million opportunity to be delivered down the road. So it was a big possibility.

So let me walk through this again; this shows what you can do in a short period of time. First, ensure accurate and complete charge capture of professional services. The physicians here were concerned because a lot of them are productivity-based, and they were sure there were charges that weren't actually dropping or being accrued. Second, reduce the manual work by professional coders, because we had never spent enough time abstracting and coding. And third, deliver education to improve clinical documentation, because we want our doctors to get feedback. Part of what was missing was that our coders were spending so much time trying to find the notes to charge that they had no time left over to educate our providers on how to document more cohesively.

So again, looking at the Health Catalyst Solutions that they brought to the table, there were a number of tools used.  We developed a starter set value stream mapping process to identify workflow gaps and then built an intuitive application for professional coders to optimize their workflow.

And the results to date, as I mentioned, have been stunning: a 6% increase in billing for notes that had sufficient clinical data. We gave feedback to the doctors and providers so that they were able to improve those notes on an ongoing basis, which resulted in a potential $5.7 million of incremental charges over 3 years for previously unbilled services. That is an immediate uptick to our bottom line, along with almost a 25% improvement in professional coding efficiency. These coders now spend a quarter or less of the time they previously did trying to find notes to code, which freed up time to repurpose them for provider feedback and education. This was done in a 6-week period of time, versus the consulting firm we had brought in previously, which was really not able to deliver a data set or an application but simply came in and said, yeah, you've got a problem. So it was a pretty good outcome, and we solved it ourselves, which I think was really one of our best accomplishments.

Catheter-Associated Urinary Tract Infections (CAUTI)

Another example of a success is with our Infection Control Group. When looking at opportunities, we started with the things that CMS has targeted, and the first thing that popped up was Catheter-Associated Urinary Tract Infections, or CAUTIs. The CDC has identified the CAUTI as the most common type of healthcare-associated infection: it causes about half a million infections and 13,000 deaths every year, increases length-of-stay by 4 days, and increases healthcare costs nationwide by about $500 million per year.

So CMS obviously sees this and targets it, and as part of our reimbursement going forward, they are not going to pay for hospital-acquired complications. So part of what was happening is we were trying to monitor these folks and generate numbers.

CAUTI Application

Our infection prevention folks were running around monitoring this by hand, manually dragging the data out and trying to create numbers. I want to add that currently the CMS reimbursement rule targets only the ICU population, but nobody has any doubt that eventually it will apply hospital-wide.

On this screen you can see the type of surveillance application that was built for us. It shows the number of catheter days and the number of infections. And the goal was: if we could automate the data, could we redirect our people's time toward an operational or process improvement that would actually speak to the problem?

This slide is kind of bumpy, so maybe we'll go to the next one to summarize.

CAUTI Surveillance

Looking at our objective panel again: starting with process, we wanted a scalable solution for CAUTIs, and we wanted to meet our CMS regulatory requirements, because there is money tied to them and, on top of that, it is good for patients. We also wanted to leverage the National Healthcare Safety Network, using their definitions, calculations, and algorithms, so we would not have to reinvent the wheel to create the process.

And then we want to take those people that were spending all their time surveilling and arm them so that they can go out and have an intervention and help us make change.

If you look at the set of tools, Health Catalyst came on and this was a true snap-in event for us. They brought this pre-built application, snapped it into our toolbox, and generated automated data, and they did this in about 10 weeks. Then they identified specific tools that worked, started to identify workflow and process gaps, and automated the data capture.

The result to date is that we cut our CAUTI surveillance effort by 50 percent; all of a sudden we freed up 50% of somebody's time. We are basically on track to move from manual to electronic tracking when the NHSN-required catheter-days reporting comes online, which is pretty quick here. There was rapid time to value with a 10-week implementation, so it did not eat up our resources for 6 months; they were able to bring it in, snap it in, and we were able to start working with it. And our preventionists are now out offering interventions. So when the reason the catheter is in isn't documented appropriately, or the catheter isn't being removed like it should be, they are out there intervening instead of just observing, which I think is a big win for us.


So in summary, here are a few conclusions from our process. We weren't successful at everything we wanted, but we learned an awful lot along the way, we've grown as an organization, and we used it all to drive conclusions going forward.

So a few recommendations from me would be,

  • One of the lessons learned from the first couple of guidance teams was that we needed to spend more time upfront with the teams before they start down the quality improvement journey. The cardiologists were a great example. They did it the way they always had: they sat down, patted each other on the back, told each other they were great and had great quality numbers, and asked why they needed to do this, without ever looking at their own outcomes or their own information through a KPA tool. Once we taught them this, they became genuinely data-driven, not just self-described as they were before we started this process. They actually get it now, they are the go-to people for decision-making, and we have shown them how to use the tools and the discipline to move forward.
  • Don't ignore the warning signs. A lot of flags should have been going up in our heads: we had different physician leaders each week, and their presentations to COLT were weak. They felt like marketing and weren't very specific. And I remember specific physicians calling up asking questions about it.

They commented, "It seems like you're trying to solve world hunger here. Why don't you simply do the things we want you to do, or that you need to do, and make those your focus?" But we couldn't get their attention to do it.

  • The third bullet here is that you have to commit one leader to the team; one person has to take this on. You can't bring somebody in from the outside to try to prop them up. We tried that. We had one of our ER doctors, an excellent process person, step in and try to help; however, he was just met with resistance. So you've got to get all the players in the room, you've got to commit a leader to that group, and they've got to own this process and drive it.
  • And then you have to stick to the plan. Physicians live in a software-driven world; there are shiny objects flipping around in front of you every day, and you want all of them. There is a software solution for questions we haven't even thought of yet. So you have to stay disciplined and focused, because the process actually works if you let the data drive the way you work and think.
  • And then, once you have a success, communicate it and share it with everybody. Tell them why it was a success, and then hold on to those principles rather than moving on to the next thing. We should have taken the lessons learned from the OB exercise and held the cardiologists' feet to the fire until they did it the way we wanted them to, because it would have had a different outcome had we done that.
  • And then the last thing, I think, is the best news for our clinicians. We've always said that if we did good quality work, the finances would follow. Here we are, a fairly financially-impaired health system trying to right its ship, and in the midst of all this I'm asking them to be patient while I'm delivering a new clinical process. But the financial improvements do happen when you follow this process, when you follow the quality of care. If you reduce the variability in the way care is delivered, there's a major long-term upside for your healthcare organization, but you have to have the discipline to go down the pathway and let it work itself out. You have to have the patience to steer the strays back into the herd and keep them moving in the direction that you want, and the patience to pick off smaller things and have some small successes early on, which lead to bigger successes. Then the financial results follow, and they become more obvious as you move along. It's a fabulous process.

Our organization has completely turned around. Our physicians are clamoring to get on COLT and clamoring for resources for guidance teams. I am now sitting in the luxurious position of having everybody complain that we don't have enough resources for all of them to create the change they want to create and move things forward. And that is not typical physician behavior in my experience. So by hook or by crook, we got here today, and it was just a fabulous outcome.

Questions and Answers:

Dr. Croston:     Thank you for listening to me. I don't know if there's time for questions and answers. I think there is. I'll look forward to that.

Bernie:            Okay.  Thank you, Dr. Croston. Great presentation.  And I can tell you the audience was very engaged because there are quite a few questions. So we’ll get to them right away.

Dr. Croston:     Okay.

Bernie:            To what extent did the external QI measures like HEDIS and PQRS influence process planning?  Did you make any attempt to align these measures with those you determined to be important benchmarks for your specific process improvement goals?

Dr. Croston:     That's a great question, and yes, we did. It was something we had to get some discipline on, because there was so much need early on; we could see so many things we needed to fix that it was really hard to focus. The guidance teams were one process to drive that, but we really tried to target anything that was regulatory and anything that was reportable, and to automate it. So we used specific measures. I'm in Minnesota, so the Minnesota Community Measures. We didn't specifically go after HEDIS measures, although some of them crossed over into other measures. We did look at care gaps, and we built process around care gaps here. We went onto the value-based purchasing scale and built processes around that. So we focused on those things first. But you have to do that in parallel, because you want your clinicians engaged in the process, and it can't all be about meeting a number. They have to see that you're going to create a workflow change and an environment change, and then reduce variability in the way care is provided throughout your healthcare continuum. So yes, we did, but it's kind of a "yes, and…"

Bernie:            Okay. I have a two-­‐part question here. Actually it’s two questions.  It’s about physicians.  It is pretty remarkable that you could achieve these results without owning those practices. Can you speak a bit more about that?  And second part, how did you get affiliated doctors to buy into your philosophy to drive improvements?

Dr. Croston:     Yeah, those are both good questions. A couple of things. We at North don't think you have to own your physicians to get them to do what's right. We think you just have to put the data in front of them. The disconnect historically has been just the way healthcare was done: at a hospital, people would go to physicians and tell them what they needed, as opposed to engaging physicians early in the decision-making process, putting the information in front of them, and having them understand the opportunity for improvement. I would tell you that it doesn't sell to go to your group of physicians early on and say, "We're going to reduce the variation." They don't like that. They like it later, when they understand what you're talking about, but early on they don't like it.

What we did with our OB group was show them the opportunity for the upside. They wanted to be able to market that they were part of a health system that created such dramatic outcomes on the pre-39-week induction piece, and they rallied around the data. They were at our senior-level decision-making meetings, and they understood why it was appropriate for us to do it. They all pride themselves on being independent, and they need us to be around, because the other health systems aren't going to play with them the way that we do; so they're engaged in our financial success as well. They aren't willing to do just anything we ask, but they are willing to do things that they see value in and that they understand have a positive impact on patients.

So, you can say you want to reduce the variation, but it's probably better to say that later in the conversation, once they start to see the opportunity for improvement upfront. Because when you get the data out, those KPAs, all of a sudden they were like, "What do you mean by that variability? And who is that one?" And when they figured out which bubble they were, they didn't like it and they wanted to fix it, and when we gave them a platform to actually fix it, they rallied really strongly around that. I would say it's probably easier with your employed group, but unless an employed group of physicians is motivated to do this kind of work, financially they can be less engaged than the affiliated group. So I think you can do it. You've just got to get them into the process early, and they've got to buy into the decision-making that you're making.

Bernie:            Okay.  Can you explain the CAUTI measures website?

Dr. Croston:     Say that again. I’m sorry.

Unknown:       Can you explain the CAUTI measures?

Dr. Croston:     Oh, the CAUTI measures. I don't know if I can get you to the full website piece, but let me tell you from a million-foot level what it looks like. CAUTI measures are basically infections per number of catheter days, and the more catheter days you have, the more infections you get. That's just the basic premise; we're not built to have plastic stuff hanging out of us. So the real goal is to reduce that, and the only way to really reduce those catheter infections is by not having catheters in in the first place. We had to do a cultural shift on that. We had to tell our ER doctors to stop putting them in and to wait until the patients go upstairs. We had to tell our ICU doctors to build the question into their rounding notes and ask every day, "Do I really need this catheter?" We had to tell our nurses to stop being so patient-friendly that they leave the catheter in one more day because the patient didn't feel good and didn't want to get up. You can find all those numbers and types of measurement at the CDC NHSN website; it's all on there. And this is all the stuff that's coming down from CMS, so those numbers are out there. We're not going to get paid for the care of those patients when it relates to those healthcare-acquired infections, so we're fairly motivated to drop that number.
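[Editor's note: for readers who want the arithmetic, NHSN expresses the CAUTI rate per 1,000 urinary catheter days and also tracks a device utilization ratio (catheter days over patient days). A small sketch with invented census numbers:]

```python
# NHSN-style CAUTI metrics. The census figures below are made up for
# illustration; real reporting follows NHSN's exact case definitions.

def cauti_rate_per_1000(infections: int, catheter_days: int) -> float:
    """CAUTI rate = (infections / catheter days) * 1000."""
    return infections / catheter_days * 1000

def device_utilization_ratio(catheter_days: int, patient_days: int) -> float:
    """Fraction of patient days on which a urinary catheter was in place."""
    return catheter_days / patient_days

rate = cauti_rate_per_1000(infections=2, catheter_days=800)           # 2.5 per 1,000
dur = device_utilization_ratio(catheter_days=800, patient_days=1600)  # 0.50
```

[Both numbers improve the same way, through fewer catheter days, which is why the interventions described all target whether the catheter needs to be in at all.]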

Did that answer your question?

Bernie:            I think that was very helpful.  What role does health information management staff have in the data analysis and process improvement initiatives?

Dr. Croston:     Well, most of this pivoted off of them. We have two groups here: decision support, now called measurement reporting, and our IT group. When it came to the analytics, the decision support group led. Let me go back a step and say you can't really generate a workflow change without changing the way the information is recorded, understanding the definitions, and putting it in a consistent place; you can't really make improvements in workflow until you've got a consistent way of measuring it. So our guidance teams were run with our data architects, who were able to pull information and guide the process. The medical director ran the meetings, but the data architects were there, building information for the doctors in a meaningful way so the doctors could answer their questions. And the IT folks were at the meetings when we were making changes to the record in Epic. If we needed to record information in Epic, everybody knew where to look, and we could create an automated number for ourselves; IT had to do that build for us, so they were included as well. We didn't have them at every meeting early on while we were building the stuff, but once we got onto a track where we were going to change Epic, change the workflow, or ask Epic questions, we pulled them into the meetings, and they stayed until they had solved the problem.

Bernie:            Okay.  I am going to make this the last question because we’re running out of time.

Dr. Croston:     Sure.

Bernie:            When did you initiate your Epic system and how is it related to your automated medical record? Also, when were you able to achieve meaningful continuous quality improvement data?

Dr. Croston:     We launched Epic in 2006 on the outpatient side and in 2008 on the inpatient side. Health Catalyst came in here in maybe 2010, and we were able to start making meaningful continuous process improvement almost that same year, but they finished the warehouse toward the end of the year, so it was really 2011 when the process started to kick in, and most of these outcomes started happening in 2012. So we went from no warehouse, to warehouse, and then to process. We would identify a problem, work on it for a while, then wait 3 months, collect more data, and come back to it. It takes a while to generate measurable outcomes, but we really had meaningful results in 2012. The hardest thing is to keep a continuous process going, but that's why you build the dashboard and why the teams don't go away: you keep them in place and keep them working on the next thing. And you've got to keep repurposing people so that you don't have to keep adding resources as you build more guidance teams.

Bernie:            Okay.  Well thank you very much.  So thank you everyone for joining us today. If you’d like to view the archive of this webinar or share it with a colleague, please visit the on-­‐demand webinar section on the website.  Thanks again and have a great day!