Looking Back On Clinical Decision Support and Data Warehousing



[00:01]

[Dale Sanders]

Thanks Tyler. Thanks everyone. Thanks for joining us. I’ll do my best to make this a good use of your time today. This is kind of an experiment of sorts. It just occurred to me that it might be interesting to reflect back on what we thought, or at least what I thought, was important and topical at the time, reflect on how accurate or inaccurate it was, and ask whether we should be learning from that history and applying it going forward. The first version of this presentation and lecture that I gave was at Northwestern University’s Medical Informatics Program in 2006, and then I repeated it again in 2007, and then this was the final version at the University of Victoria with Denis Protti up in B.C. in 2008.

There are a lot of slides in here. When I committed to doing this look back, I forgot how long it was. These were actually two-hour lectures. So, we’ll have to breeze through some of these slides. I think there’s on the order of 80 or 90 slides. But it has been fun for me and I really look forward in the Q&A session to hearing your thoughts and reactions to these slides. So please start thinking in those terms.

My Background [01:31]

Okay. Now, this is not a slide obviously that I had back in 2008. Just for those of you who do not know me, I wanted to give a quick summary of my background. I started out as an Air Force Command and Control and Intelligence Officer back in 1983. To the left of that diagram, I have Chemistry and Biology undergraduate degrees. And you could say that the first half of my career was really spent around National Defense and National Intelligence, specifically in the nuclear weapons world. I made the jump to healthcare because I could see some parallels in decision-making, as I indicated in that second bullet at the top of the slide. Decision-making in these environments kind of all boils down to false positives and false negatives, and then optimizing an intervention and response to a given situation. And those parallels between the military and healthcare are pretty direct. So I’ve been in healthcare and been blessed to be associated with some great organizations, especially as that relates to IT and decision support, and now I’m with Health Catalyst, hopefully building products that will make all of this easier for all of you.

Acknowledgements & Thanks [02:48]

Okay. Now back to the slides. By the way, I’ve inserted a couple of slides that were not in the original presentation just to contrast and kind of give an update on a particular topic that reflects current thinking. So you’ll see a couple more slides like the previous one there.

At this time, back in 2008, I was acknowledging Robert Jenders and Matthew Sailors for some of the work that they had contributed to this field, especially around Clinical Decision Support and the Arden Syntax. And frankly, I’ve lost track of both of those colleagues and I would be interested to know if they are still out there or not.

Overview [03:30]

So, we’ll go over sort of the trends that I observed in patient information systems at that time. We’ll talk about some basic terms and concepts in data warehousing. That was a very hot topic in 2008. Not many organizations were doing that. I’ll go through a couple of case study examples based on our experience at Intermountain and Northwestern. And then we’ll dive into kind of the current state of thinking around clinical decision support at that time.

Information Systems: The Three Perspectives [04:03]

So, I broke information systems into three large categories – Analytic systems being kind of goal measurement, query and reporting tools, enterprise data warehouses, benchmarking data, aggregating and exposing data. Knowledge systems, which were about organizing, sharing, and linking information. So these were tools like document imaging, videoconferencing, collaboration tools. And then the transaction systems that were all about collecting data – EMRs, billing systems, GL. I do not think much of this has changed. I think I would still kind of break the world of information systems up into these categories. But the big difference now is of course that we are starting to blend all of these into a single user interface. So you go back to 2008, these were all pretty separate. There was a transaction system, there was a data warehouse, there might be, if you were lucky, a video conferencing system or a collaborative system or document imaging, but they were all pretty separate.

Well now what we are seeing and what we are doing and rightly so is we are blending all of these together into a single user interface, so that you can interact with the benefits and the functions of each of these formerly separate entities in IT.

Patient Information Systems Trends [05:30]

At that time, the trends that I thought were noteworthy: we were moving towards transportability and interoperability. It’s amazing that we are not any further along. I mean, just as an experiment, I tried to get a copy of my medical record from a healthcare provider recently, and that was over three months ago. Still not very portable, I do not think. Not at all. Real-time alerts and reminders, we were starting to see more of that. I think we would probably say today they haven’t been all that helpful. A lot of organizations have turned those off. But I think they are getting better. We were making progress, and I think we have made good progress, on data-driven treatment planning.

Disease management at the point-of-care, I would say that was certainly a trend that we were pursuing very much so at Intermountain. We are getting better at that now. We still have a long way to go. Payer-driven data collection and Pay for Performance were just emerging. Quality of care reporting relative to payers was just starting to come on the scene and become a real hot topic. And I thought transparency of cost was coming sooner than what we have seen. Certainly not a lot of progress in that regard.

Patient Information System Trends [06:56]

Health consumerism was in its infancy. I think we have seen some very significant progress in this regard and that is going to continue to increase. More demands for transparent information access. Security and privacy at that time – this was all pre-HIPAA, or at least very close to pre-HIPAA. Computerized patient records were starting to emerge. All sorts of federal investment at that time was just beginning to happen. I think a lot of us feel that that took off under the HITECH Act, but the reality is incentives and money for EHRs actually started out in the Bush Administration. And then the regional health information networks were starting to receive funding at that time as well. Pretty interesting. Again, this is 2006 to 2008. So we are 10 years into some of these trends.

Patient Care Data “Customers” [08:04]

I put patient care data at the center of this diagram, indicating that we had four general categories of what I considered customers of that data. So the financial customers were all about billing for the most part and cost accounting. Clinical customers were physicians and clinicians. Then third-party payers, insurance companies, relative to the financial side of things. And then the external reporting and accreditation/regulatory kinds of things.

Functional Framework: Electronic Health Record [08:45]

I took a shot at diagramming what I thought was sort of the functional framework of an electronic health record – and remember, this is 10 years ago – trying to figure out structurally how we should think about the progression and the evolution of an electronic health record. So over on the left are kind of the core functions – registration, scheduling, accounts receivable, all those sorts of things. You were in an advantageous position as an organization if you could track benefit plans, co-pays, referrals, coordination of benefits, risk management, patient education. At that time, you would have been a significant differentiator if you had encounter documentation, charge capture, diagnostic coding, e-prescribing, allergy alerts, drug-drug interactions, medical history. And then you were really leading edge if you had messaging and real-time collaboration between the care team members, the patient, and the patient’s family; if you had a patient portal, self-scheduling, self-registration, account management, results and history, pharmacy refills, e-prescribing, credit card payment. Those were leading-edge things. Kind of interesting – we progressed in some of those pretty well. Not so much in others, especially around self-scheduling and self-registration. And of course, there is still a lot of debate about whether we should give patients access to their own results without first being filtered by their clinician. And then you were off the edge in terms of capability if you could offer meaningful, maintainable point-of-care decision support.

Underneath all of this was analytics, I thought at that time – business intelligence, pay for performance metrics, and workflow – and then regional and external entities that, at that time, we had to start trying to figure out how we were going to share this information with.

So I think, you know, it was a reasonably accurate depiction of how things have evolved, looking back on it. I do not know that we have made as much progress as fast as we should have, but not too far off base with these thoughts from about 10 years ago.

The Future EHR User Interface [11:22]

I proposed at that time what I thought a future EHR user interface would look like, not necessarily graphically but functionally. So there would be patient-specific data, much like our current EHRs – you know, tell me about this patient in this encounter. We would have disease management data displayed in that user interface. Tell me about managing patients like this. How am I doing, what should I be doing. Then there were treatment options for patients like that – what are my options for treating this patient based on data, what are the most common tests and medications ordered for patients like this. I also believed, at that time, that we should be exposing folks to the cost of care, so we had those discussions and raised that awareness. And then at that time I thought it was important that we share clinical outcomes data in that same user interface, just to get an idea of how satisfied patients were with these treatments and in this context. And at that time, I categorized outcomes as patient satisfaction. I think we now have a different view of that. But for the most part, I think this is still where we need to head with this blend of analytics, algorithms, and transaction-based data all supporting clinical quality and cost control and making it easier for physicians to do the right thing informed by data. I think this still holds up.

Closed Loop Analytics [13:02]

This is the first time I actually started scribbling like I do more commonly nowadays. I scribbled a user interface here that was largely based around the work that we did at Intermountain.

And by the way, I would like to go back to that previous slide.

The Future EHR User Interface [13:21]

This is not anything that uniquely came out of my brain. I just patterned this functional user interface around the success that I saw at Intermountain with some of our clinical decision support modules. And the most advanced modules we had at that time in the health system satisfied almost all of the criteria that I am describing here. So I was just borrowing from patterns that preceded me and the (13:51) that I wrote on at Intermountain and saying, you know what, in general, for commercial EMRs, we need to follow these patterns that clinicians at Intermountain clearly preferred and valued in the health system.

Closed Loop Analytics [14:07]

So taking all of that, then I sketched this out and I said, you know, the future needs to look something like this, with patient-specific information on the left about this encounter, then comparative analytics about patients like me with a cost of care component as well. And that was when I started calling this closed loop analytics. So that we are taking that transaction information about patients on the left and we are matching it against patterns of similar patients in the background and closing that all back to the point of care. I still think this is a valid way to describe in general what we are trying to do.

I might also note that this looks like what amounts to the Triple Aim. I did not call it the Triple Aim. Credit to Don Berwick for labeling it as such. But this is essentially the Triple Aim. It is patient-specific information, disease and population management, and economics of care all in the same user interface.

‘Closing the Loops’ on Clinical Outcomes to Optimize Quality [15:14]

This is a slide that I inserted to show kind of current thinking. Along with my colleagues up in Canada – Corinne Eggert, Ken Moselle, and Denis Protti – we have produced this model that looks a little more complicated than it is, but we are now describing clinical decision support in three simple loops: the decision support that you need to support population health, the decision support that you need to understand, optimize, and apply protocols for subsets of that population, and then the decision support that you need to improve a very specific patient’s life in loop A.

And one of the things that I want to mention here is, every once in a while, I worry that we believe that somehow population health is going to trickle down. It is like going back to trickle-down economics – that somehow our focus on population health is going to trickle down to patients. But I would argue the complete opposite. The strength of Intermountain, for example, has always been the application of really personal care in loop A. Population health starts one patient at a time and it rolls up. Population health does not roll down; it rolls up, one patient at a time. And every now and again, I think that we are a little bit lost on population health and we are overlooking the importance of very personalized care at loop A and what we can do with technology for both patients and clinicians to make that loop more data-driven and more effective.

Enterprise Data Warehousing [17:06]

So, at this point, I went into data warehousing. Again, this was a very early topic at that time. Not many people in the industry were doing it.

Multiple, Collaborative Organizations [17:17]

This was the diagram that I used to describe what we were trying to achieve with data warehouses. At Intermountain, we were not so much about multiple, collaborative organizations sharing data – we were pretty much a single organization. But we actually had over 20 hospitals and somewhere in the neighborhood of 120 clinics at that time. And we were having a challenge organizing and putting all that data into a single perspective on patient care and looking across the entire system to identify how we were treating patients of particular types.

So, we have this history in healthcare of having very disparate information systems all supporting these different functions – billing and accounts receivable, claims processing, patient perception of outcomes, results and outcomes, encounters, orders, procedures, diagnoses. All of those in the past were very disparate information systems, sometimes glued together with HL7. With the advent of the monolithic systems like Epic and Cerner, the disparate vendors are not quite the same problem. The data integration challenges are not quite as bad as they were at Intermountain, and then again in the early days at Northwestern. But they are still challenging today, especially in the model that we see now, which I think this diagram accurately predicted, and that is more collaboration across organizations and the inescapable need, when you are trying to collaborate, to consolidate data across those multiple organizations. I mean, if you kind of step back and look at it, this is essentially an ACO. Different governance structures, different IT systems, same patients, trying to understand how you are treating those patients. There is really no other way to go about that understanding than to implement a data warehouse.

Now, the concepts around data warehousing and our practice of it – the patterns of data warehousing – have certainly evolved and changed, but the general concept is still there.

Sanders’ Hierarchy of Analytic Maturity [19:36]

I published at this time what I call the Hierarchy of Analytic Maturity from basic business reporting to what I called real-time analytic fusion, blending patient-specific data with general patient type data. Other physicians who saw patients like this, ordered these medications and tests, and what were the outcomes.

And one of the reasons I felt it was important to put this slide together is that I saw a lot of organizations grabbing for the brass ring of that bottom bullet when they still did not have a good handle on basic business reporting, compliance reporting, accreditation. And in fact, this was a reflection of our early journey at Intermountain when we built the data warehouse. We were a little bit enamored with things in these last two bullets while we were spending an inordinate amount of time, in a patchwork sort of way, addressing the needs of Joint Commission reporting, for example, or STS reporting. So we were spending too much time, inefficiently, at these lower levels because it was more interesting to work at the higher levels of maturity. So we actually paused our analytic strategy at Intermountain and said, look, we have got to clean up and make it more efficient to get this mundane, utilitarian sort of reporting out of the way, so that we can free up those resources and those skills to work on the higher value, more interesting analytics at the higher end of this hierarchy.

So that was kind of the background thinking I had with this hierarchy.

Healthcare Analytics Adoption Model [21:15]

That eventually merged a couple of years ago into this model, which Denis Protti, David Burton, and I published in, I believe, 2011 or 2012. And it is our attempt to create kind of a course curriculum, a progression towards analytic maturity and a way to measure your maturity on this scale. We borrowed purposely from the HIMSS EMR Adoption Model. And again, the message here is that these are kind of the steps that you have to go through, and should go through in order, to achieve level 8. And if you try to jump straight to level 8, it’s like a freshman physics student taking quantum mechanics in the first semester. It is going to be a real struggle, and in fact you are probably going to eventually have to go back, start over, and get the foundation and the fundamentals out of the way.

So, progress through this in a very recipe-like fashion. This is an evidence-based way to achieve level 8, if you deliberately go through this Analytic Adoption Model that we produced a few years ago.

Vertical and Horizontal Strategy [22:40]

At that time, I proposed a vertical and horizontal strategy around decision support and analytics. I divided it up here into step one, focusing on clinical excellence programs – pretty much along service lines, and this is, again, a reflection of what we did at Intermountain.

And then step two, focusing on the operational excellence needed for these services across all of these clinical programs. So, what can we do to make lab more efficient, pharmacy more efficient, radiology more efficient. And then how do we reflect better analytics and decision support in each of these clinical program areas, taking kind of a process improvement perspective. My thoughts on this have evolved. I am not so sure it is a good idea to break these clinical excellence programs down like this. I mean, I think you still can, but there is the danger of kind of siloed optimization if you do not think about these now in terms of population health from start to finish. These tend to be very focused on a snapshot of care delivery to a patient, as opposed to an entire patient’s life and episodes of care. So I think there is some danger in breaking the world up like this. There is still some validity to it, but it is not quite good enough to address population health, I do not believe.

Examples of Clinical Goals [24:23]

So these were some of the examples of clinical goals that we established – again, very data driven – at Intermountain. And then we took some of these and incorporated the same at Northwestern. So, decreasing the number of elective inductions by 50 percent. These were board-level goals. Keep the variable cost of deliveries without complications to 5.73 percent. I mean, incredibly specific, right? Diabetes, LDL management. Glucose management, rather, in ICU patients. Post-surgery radiation therapy protocol at 100 percent for breast cancer patients with positive nodes and the tumor sizes indicated there.

So these are examples of kind of clinical goals that we have raised and supported from an analytics and a clinical decision support perspective at Intermountain. And I would say that Intermountain was incredibly visionary in this regard. I still do not see this level of specificity and commitment at the board level to clinical process improvement and clinical outcomes across the country.

DOQ-IT/PQRI Examples [25:48]

At that time, DOQ-IT, which I think emerged around 2005 as I recall, and PQRI were coming out. Those were the early days of federal incentives for computerizing healthcare, and those were the early days of developing what we see now, which is kind of runaway process metrics. And I say that in a discouraging way because I think we have taken these process metrics – dictating to physicians how they should practice care instead of focusing on outcomes – to a whole new level of chaos, and I would lobby a bit now, as MACRA evolves, that we put emphasis on measuring outcomes instead of so many process measures. It is kind of interesting in Health Catalyst how much time we spend with clients on these process-of-care measures that may or may not have anything to do with outcomes. There are somewhere in the neighborhood of 2,000 quality measures now in the US inventory, and less than 7 percent of those have anything to do with clinical outcomes. The rest is measuring what clinicians do, worrying more about the means than the end result. So, we should all unionize in opposition to that trend.

Requiring More Than a Metric [27:17]

I borrowed this slide from the Advisory Board Company. I highly respected the Advisory Board Company at that time and still do. They had a good way of depicting that it is more than a metric. And so, they took DVT rate as an example to depict here. Patients at risk, what are you going to do, prophylaxis and things like that, what have you not done – walking through sort of the thinking, the clinical decision making associated with that and the measurement thoughts, but then how you overlay that with the data that you have. I mean, just very insightful on their part, because what I see a lot of times is, unfortunately, we pursue metrics without tying those metrics back to this kind of clinical thinking, number one. Number two, a lot of times we will pursue a process improvement initiative and not appreciate the fact that we do not have data, or the data we have is mushy and inaccurate.

So, you have to think about these two environments in the same context. And I would say that this still definitely applies to today’s world. I am particularly concerned in the patient safety arena. I see growing emphasis on patient safety, which I think is great. But the problem with that in terms of trying to measure it is the data associated with the root cause events that cause a patient safety event is still largely missing. We really do not have what I call that dark matter data around patient safety events. We just do not have the means and the tools to collect that data yet.

So, I worry a little bit that we are attempting to measure something with our patient safety emphasis that is ramping up at the federal level without a good strategy to address the data that is needed for it.

Rolling Out What’s Easily Accessible [29:21]

The Advisory Board Company again had great insight here, which was: let us focus on the data that we do have and what we can measure that will get us partially down the road of process improvement, and try to do something in the first 90 days to do so. Then over time we will try to figure out how to expand the data content that we need to improve the accuracy of measuring that DVT process – what do we need to do around instrumentation of the patient, what do we need to do within the EHR, what is the missing data – and have a formal data acquisition strategy to round out the precision of understanding as outlined in the clinical environment. So, very insightful on their part. Still completely applicable, but I see a lot of times this concept being overlooked and underappreciated in the industry right now.

Tackling One Problem at a Time

Focusing on a Few Key Metrics [30:23]

They also had great advice on tackling one problem at a time. And I like this diagram because it is kind of interesting. It reflects what I call conference room analytics, which is okay, but not quite good enough. In those three loops that I described earlier – populations, protocols, and patients – conference room analytics tends to dominate the environment right now, and we are kind of stuck in the protocols and population loops. But we have to move analytics and decision support down to that patient loop.

At this time, back in 2006 to 2008, there was really no hope for doing that. So conference room analytics, as indicated in this diagram, was totally appropriate. No problem. But the key message here, and it is still applicable today, is to focus on a few key metrics at a time, because it is very tough to bite off more than two or three of these metrics when you start plotting out a process or other clinical improvement initiative. So, it is really hard to move from the conference room to the improvement level if you try to take on too much. Over time, as you become accustomed to this culturally – how to spin up these teams, how to review the data, govern the data, improve its quality – you will be able to move through these process improvement initiatives at a faster clip, but initially do not try to do too much.

Fixing Problems at the Source

Data Quality Feedback Loops [32:04]

Another slide that I thought was particularly useful at that time, and that was fixing data problems at their source. I do not think there is much arguing against this now. But in those days there was a belief that you had to cleanse data before you loaded it into the data warehouse. Thankfully, I had made lots of mistakes in my career prior to coming into the healthcare arena, so I was onto this pattern and principle years ahead. It took a long time for us to appreciate this in healthcare, but I think for the most part we are over it now. You actually do not want to try to clean up data in the data warehouse. You have to identify quality problems through the data warehouse when you aggregate data – using the data tools in the EDW for that – but ultimately you have to go back to the source systems and fix these data quality problems where they reside. Otherwise, you will chase your tail all day long in the data warehouse.
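This feedback loop can be sketched in a few lines of code. The sketch below is a minimal illustration, not anything from the actual Intermountain EDW: the warehouse flags suspect records after aggregation and routes them back to the owning source system as a worklist for its data steward, rather than silently patching values inside the warehouse. All field names, source-system names, and validity thresholds here are hypothetical.

```python
# Minimal sketch of a data-quality feedback loop: flag problems in the
# warehouse, but route the fixes back to the source systems.
# Field names, system names, and thresholds are hypothetical.

def find_quality_issues(records):
    """Flag records that fail basic validity checks after aggregation."""
    issues = []
    for rec in records:
        # An implausible lab value suggests a source-system entry error.
        if rec.get("a1c") is not None and not (3.0 <= rec["a1c"] <= 20.0):
            issues.append({"source": rec["source_system"],
                           "record_id": rec["record_id"],
                           "problem": f"implausible A1c value {rec['a1c']}"})
        # A missing patient identifier makes the record unlinkable.
        if not rec.get("patient_id"):
            issues.append({"source": rec["source_system"],
                           "record_id": rec["record_id"],
                           "problem": "missing patient identifier"})
    return issues

def route_to_source_owners(issues):
    """Group issues by source system so each data steward gets a worklist."""
    worklists = {}
    for issue in issues:
        worklists.setdefault(issue["source"], []).append(issue)
    return worklists
```

The point of the design is that the warehouse never mutates the data it receives; it only detects and reports, so the correction happens once, at the source, instead of being re-applied on every load.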

Choosing the Perfect Pilot [33:10]

And then I think this is the slide from the Advisory Board, and that is choosing the perfect pilot. And by the way, these slides were all influenced by meetings that we had with the Advisory Board about what we had learned at Intermountain. So this is kind of a combined Advisory Board, as well as Intermountain perspective that you are seeing in these slides.

So, it is important to have data stewards. If there is a data mart in the data warehouse for physician referrals, someone has to own that. Supply chain – is the materials manager your data steward there? So assigning data stewardship to these sources of data is critically important.

Then choosing a clear ROI, looking for high-variability processes: is this environment even measurable, is it a known business problem, do we have the technical capability and feasibility to collect the data, and is there a responsive, motivated person who is comfortable with data and can take all of this and drive improvement into the organization? This recipe is still completely appropriate.

And in fact, I was talking to a class yesterday, Adam Dillard’s class, and he asked, “What are you seeing in the industry right now that is holding organizations back?” I would say you might be able to summarize it in this slide. I mean, there are a couple of other things, but you can aggregate data and you can pull data together – if you do not put together a framework like this, with data governance, data champions, being very deliberate about what you choose to focus on for improvement, you are going to miss the mark with analytics and decision support.

So this slide in general represents what I think is kind of still a missing behavior in the industry right now that is holding this back.

Structured vs. Unstructured Data [35:02]

Okay. So speaking of data and content and working with what you have and what you do not, this was my attempt at that time to portray kind of the computable analytic value of data through the representation of human experience and knowledge. So from a computer standpoint, it is easiest to understand structured and discrete data. But the reality is face-to-face, audio, and video are the best ways for us to understand and represent human knowledge and experience in a clinical setting. So we have this tension, in which we keep forcing more and more structured data collection on our clinicians. They are becoming our data samplers instead of our clinicians, when the real value to understanding human knowledge and experience resides in these forms of data down here in the lower right.

We have not made a whole lot of progress in this yet. I mean, you are starting to see telemedicine and that kind of thing, which is making this a little easier. But we do not do a whole lot. We do not record, in terms of audio, the physician-patient interaction. So it is one thing to represent a patient-physician interaction in a note, but I have always thought it might be handy to record that interaction – assuming that all parties, especially the patients, are comfortable with that – so that you could go back and fill in where the text note missed the nuances of the interaction. I have always thought that EMRs ought to have a video and audio recording capability that you can refer back to when necessary. Someday, using natural language processing, we would be able to process that narrative, that dialogue, that conversation that takes place between the patient and the physician, and turn it into something that looks more like the computable information in the upper left.

Case Study Example

Intermountain Healthcare

[37:16]

So let us run through a couple of case studies at Intermountain.

Case Study [37:20]

And again, I have got to be cautious with time. I think we are only one-third of the way through here. So we will have to breeze through some of these. One of the early case studies of analytic and decision support success at Intermountain was around our diabetes program, and it was driven by this recognition that most diabetic patients were receiving inadequate care. So, we integrated data from five different sources – lab, problem list, insurance claims, pharmacy, and hospital coding – and we ended up winning the National Exemplary Practice Award in 2002 because of that. Here are the results of this.
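The integration step described above can be sketched simply: merge records from several source feeds into one patient-centric view, keyed on a shared patient identifier. This is a hedged illustration of the general pattern, not Intermountain's actual schema; the feed names and fields below are hypothetical.

```python
# Sketch of merging multiple source feeds into a single patient registry.
# Feed names and fields are hypothetical illustrations of the pattern.

def build_registry(*feeds):
    """Merge records from (feed_name, records) pairs, keyed by patient_id."""
    registry = {}
    for feed_name, records in feeds:
        for rec in records:
            patient = registry.setdefault(
                rec["patient_id"], {"patient_id": rec["patient_id"]})
            # Namespace each field by its source feed to preserve provenance,
            # so downstream users can tell lab data from claims data.
            for field, value in rec.items():
                if field != "patient_id":
                    patient[f"{feed_name}.{field}"] = value
    return registry
```

In practice the hard part is the patient identifier itself (record linkage across systems), which this sketch assumes has already been resolved.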

Diabetes CPM: Key Indicators [38:00]

These are the key indicators that we focused on: A1c, to get that below 7; blood pressure management; LDL management; triglycerides; foot exams at least annually; a microalbumin test at least annually; and eye checks at least annually. Those were all the clinical goals that we set out to achieve. For the most part, they are process goals. I shocked someone the other day when I said it does not matter if you have diabetes; it matters if you have the consequences of diabetes, the comorbidities and all the downstream effects of diabetes. That is not to say we should not try to manage diabetes, but what really matters is whether patients are progressing toward the downstream effects of diabetes. So these are still process measures to some degree, but we all know that they do help stop that progression to the deleterious effects of diabetes.
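As a rough sketch of how an indicator bundle like this can be checked in code (the targets, field names, and sample values here are illustrative assumptions, not Intermountain's actual definitions or schema):

```python
# Hypothetical sketch of a diabetes indicator bundle as a compliance check.
# Thresholds and field names are invented for illustration only.
from datetime import date

def diabetes_indicators(p, today):
    one_year_ago = today.replace(year=today.year - 1)  # "at least annually"
    return {
        "a1c_below_7":   p["a1c"] < 7.0,
        "bp_controlled": p["systolic"] < 130 and p["diastolic"] < 80,
        "ldl_at_goal":   p["ldl"] < 100,
        "foot_exam":     p["last_foot_exam"] >= one_year_ago,
        "microalbumin":  p["last_microalbumin"] >= one_year_ago,
        "eye_exam":      p["last_eye_exam"] >= one_year_ago,
    }

patient = {"a1c": 6.4, "systolic": 124, "diastolic": 76, "ldl": 92,
           "last_foot_exam": date(2008, 3, 1),
           "last_microalbumin": date(2007, 11, 5),
           "last_eye_exam": date(2006, 2, 1)}

flags = diabetes_indicators(patient, today=date(2008, 6, 30))
print(sum(flags.values()), "of", len(flags), "indicators met")  # -> 5 of 6
```

Each flag is a process measure; a peer comparison chart like the one described below would simply aggregate these flags per physician, region, and system.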

Case Study: Diabetes Management [39:05]

So, when we started this program in June of 1999 – again, this is a screenshot from the actual Intermountain Healthcare diabetes program, with a different logo and everything at that time. You can see the trend for HbA1c greater than 9. This is the number of patients with an A1c above 9, and you can see a dramatic improvement over five years in the number of patients who were out of control.

Case Study: Diabetes Management

Likewise, you can see a dramatic increase in patients who were very much in control, and we were doing everything we could in this context to lower their risk even further by getting those A1c values under 7.

Diabetes Management Peer Comparison Chart [39:49]

We put all this together in a diabetes management peer comparison chart so that physicians could see how they were doing compared to themselves, to their region (at that time, I think we had six regions at Intermountain), and system-wide, across each of these different measures. So a physician could see how they were performing against the system and the region.

Case Study [40:21]

And we enabled this. We would push those out in different formats and things like that, but a physician could call that chart up relatively easily, even at the point of care, if they wanted to see it. So, a big success there around diabetes at that time, and I still think that model holds up pretty well. It is still something that we should all be pursuing in the industry today.

Another success story was around CV discharge meds, and it was basically about protocol adherence: appropriate discharge medicines following a CV event, ischemic heart disease and MI in particular.

In 1994, when we realized that this was a problem, we estimated, because we had no data, that about 15 percent of the time patients were receiving appropriate medicines at discharge. But after putting a data collection strategy in place and organizing a clinical process improvement team and a data governance team, just like the Advisory Board slide suggested, we went to 98 percent, and we had hard data to prove it.

Case Study: CV Discharge Meds [41:28]

And this is an example, again a screenshot from those old dashboards. This shows beta-blocker usage, one of the important meds at discharge for a CAD patient. Our goal was 90 percent, and you can see how our actual performance came in, far exceeding that goal, right up above 95 percent.

Case Study: CV Discharge Meds [41:59]

Same thing for Coumadin usage. For the most part, we would present this data in conference rooms to physicians, and we would also push it out to them in their own personal dashboards. So we were kind of stuck in conference room analytics, not quite at the point of care yet, but still having a great impact on patient care.

The Tangible Benefits [42:21]

And this slide summarizes the tangible benefits of adhering to those standard protocols. It shows adherence to these discharge medications before and after implementation, compared to the national benchmarks in 2000, which we were far exceeding. And then, this is the important part: we backed into these numbers, so you can attribute better adherence to actual lives saved in each of these clinical categories. That is how it translates into patient lives saved.

Case Study [43:03]

One more case study, I think, here from Intermountain. This one was around labor and delivery, specifically elective inductions. Again, we set out very specific goals at that time around elective inductions before 39 weeks' gestation.

Elective Inductions [43:17]

And you can see that it drops like a cliff when we start measuring. A lot of this is the Hawthorne effect, but that is an important point: the data is not doing anything in and of itself. The data is revealing the current state of affairs and making people more aware of the situation, and you can see the dramatic drop-off when we implemented the data collection and data analysis services around this.

Elective Inductions [43:48]

And then you can see here on this slide the cumulative savings. Now, the cool thing about Intermountain is that its executives were incredibly committed to clinical quality and patients' quality of life, even when those decisions impacted the bottom line negatively. They were living accountable care years before it was popular. And of course, they had a vested interest in that because they had an insurance plan at the time, but this applied to every insurance payer that we worked with then. We only insured about 40 percent of our patients at that time; sixty percent of our patients came from other payers that benefited from this too. So you could argue that, under the fee-for-service model we operated in at that time, those third-party payers benefited while we took a financial hit, and the credit goes to Intermountain leadership for doing that.

So far, so good…

Northwestern’s EDW [44:56]

Okay. So, at this time, I also started talking about Northwestern's EDW. I moved to Northwestern in around 2005, as I recall, and we wanted to replicate, and improve upon, at Northwestern what we did at Intermountain. It was a bit of a shock for me because Northwestern is an academic medical center that operated a lot differently and had different motives and goals than Intermountain did. And so, I had an adjustment to go through at that time. There were just different data motives and different things going on at an academic medical center, especially at that time, that did not cross over from Intermountain. So, I was a little off-base the first year or so that I was at Northwestern, trying to figure out what I was going to do with data warehousing and decision support in this context, because it was quite a bit different from Intermountain.

Data Loaded to Date [45:47]

So we built a data warehouse there at Northwestern, and this is just an indication of the number of data points we had. These numbers were from about 2007, I would say. We had, what is that, three billion records, 2.2 terabytes of data. We joked about the number of truckloads of data and the complete works of Shakespeare that that equated to.

Early Adopters and Value of the EDW [46:17]

We had some early adopters and early value from the EDW at that time. One was NUgene, which, and this is again kind of interesting, you would not have seen me doing anything with genomics at Intermountain, but at Northwestern it was very important. I mean, the theme there was really research: clinical trials, research, publications, and that kind of thing. That was a part of Intermountain but certainly not the dominant theme in the culture.

So genomics was an important part of this. We had some really interesting success with phenotyping very early on: neurosurgery outcomes, especially around movement disorders; a perinatal patient registry we created for clinical quality outcomes; and BMI relationships to complications in the Women and Newborns area, around deliveries. So you can see some of the early things we were doing in this diagram at Northwestern, and I would say all of this took place within the first year after we started developing the EDW there. We were starting to see results pretty quickly.

Specific Research Example [47:27]

There were some specific research examples with rapid turnaround. In the past, those poor researchers might never have had access to this kind of data, but number one, we were giving them access to data they never had before; and number two, we were doing that with a very fast turnaround.

So this is an example of kind of a complicated query and data set that we turned around to support a grant submission. One of the cool and early values of the data warehouse at Northwestern was the value that it had in obtaining grant funding. We estimated that in its first three years, the data warehouse at Northwestern directly contributed to about $15 million in grant money that we would otherwise not have seen. So again, a very different motive than Intermountain, but no less important.

Other Examples [48:26]

Here are some other examples from research studies. How many patients on an NSAID had low renal function? There is the answer to that. What percentage of patients over age 18 diagnosed with multiple myeloma in remission were prescribed bisphosphonates in the past 12 months? Eighteen percent. How many patients with one or more low ejection fraction measurements within the last 180 days have not been seen by a clinician? And there is the list of those patients. So you are looking at gaps in care, where in this case it is good for business in a fee-for-service model to bring those patients in for follow-up on those kinds of high-risk indicators. It is also good for patients and their lives. So those are some examples, and they are still very current. This is the sort of thing that should go on in the enterprise data warehouse every day, multiple times a day, in today's world.

Examples [49:35]

This is a reflection of some work that David Baker did. I was very blessed to work with David, a GIM doc, when I was there, and I learned a lot from him at that time. These are some quality measures that we were pursuing at the time, with David leading, showing adherence to these measures, primarily around cardiovascular disease, and the progression and improvements we made from 2007 to 2008. Some are dramatic, some are not so dramatic, and that was actually something that we realized: there may be an asymptote beyond which you just cannot keep improving. There may be factors that we just cannot address, and it could be that if you are already at a high level of clinical performance, you may not be able to do much better than that asymptote. Where that asymptote resides, I think, depends on the patient type, their condition, and especially their socioeconomic status and social determinants of health. You can invest all sorts of money trying to squeeze more and more improvement out of that asymptote, and you will finally reach a declining return on investment.

And I think that is going to be one of the most important things going forward in population health: what I call return on engagement, and knowing when you have probably hit the asymptote, because further emphasis on trying to improve a clinical outcome or condition beyond that asymptote makes no sense. Culturally and societally speaking, I think that is a challenge that is inevitably going to face us.

Changes in quality measures during the first 3 months of the study [51:30]

This is another study that David supported, showing screening mammography rates and cervical cancer screening. You will not see dramatic increases here, and again there was that awareness of, wow, it costs a lot to achieve less than impressive results. That is when I first started thinking about this notion of return on engagement, at Northwestern, as a consequence of this. I am absolutely convinced this notion of return on engagement is something we are going to face in this society.

Physician Performance (most recent 3 months)

Aspirin for Primary Prevention in Diabetes [52:11]

This is kind of interesting. We measured adherence to prophylactic aspirin for diabetes patients over time. And this is one where we actually embedded the analytics and the decision support back into Epic, so we closed the loop. We measured this on the back end and then closed the loop by creating best practice alerts and other changes to the user interface that would remind physicians treating patients with diabetes to prescribe that prophylactic aspirin. Some of you might recall that during the same timeframe, it became clear that if a diabetic patient did not have any other indications of cardiovascular risk, the risk of prophylactic aspirin was probably not worth the benefit. And so, there was a change to the protocol, which we implemented and reflected back into Epic and the analytics very quickly, because we had the data and we had the pathway back into Epic.

Anticoagulation for Heart Failure with Atrial Fibrillation [53:16]

The same kind of thing here with anticoagulation therapy for heart failure patients with atrial fibrillation. You can see dramatic improvements here just by producing a result.

Cervical Cancer Screening [53:31]

And again, kind of reflecting back, this all still applies to what we do today. There is still value in these kinds of things today. There is our cervical cancer screening. You can see dramatic improvements there.

Why Didn’t the Patient Follow the Protocol? [53:42]

One of the things we started to recognize at Northwestern was this notion of the patient's engagement in their own care. It gets back to that asymptote and how much you invest if a patient is either unwilling or incapable of participating in the protocol. There were 167 reasons in this particular case for not following the advice of a preventive service. We went back and looked: nine, it turned out, had actually received the service; the data was bad, we just did not see it. Two patients could not afford the medication, and 14 patients just refused, and of those, zero started.

So, this again gets back to: what can a physician really be held accountable for? If a patient is unwilling or incapable of participating in a protocol for some reason, I think that has to be risk adjusted. I just do not see how we can hold physicians accountable for that social factor. So the future here, I think, is risk stratification, or you could call it severity adjusting patients according to their socioeconomic status and their willingness to engage in their own care. I think that is an inevitable part of where we are headed, and I think we are underappreciating the importance of it right now. In fact, I think it is one of the reasons physicians are getting so burned out: they realize they can only do so much, and yet they are being held accountable as if they could do so much more.

Why Didn’t the Physician Follow the Protocol? [55:19]

So, we also studied why physicians did not follow protocols. You folks have probably all seen these kinds of things before. I will not go into the details; we will post these slides so you can look at them. But it all involves some review with peers: let us discuss why you did not follow the protocol, what are we doing, and do we need to adjust the protocol for some reason that we did not previously understand. Only six of those 147 cases indicated a change in management was required. There was some significant resistance on the part of a few physicians to follow what most everyone else thought was a good protocol.

So to me, everyone likes to say, oh, physicians are a tough culture to change and all that kind of thing. I do not believe it. Physicians are very data-driven. If you give them the data to do better work, they will, and 6 out of 147 who were a bit obstinate is not indicative of an entire group of people.

Clinical Decision Support Systems [56:26]

Okay. Clinical decision support systems. Now we transition. We were talking about analytics and data warehousing; now the question was, what is the status of all this getting to the point of care, where are we?

Clinical DSS Structure [56:39]

So at that time, my thinking was that we had three types of decision support. We had point-of-care, which at that time amounted to alerts and reminders. We had offline retrospective: what happened? You have seen that in those previous slides. And we had prospective, the predictive world: what is going to happen? That is how I saw decision support at the time, and I think it still holds up a little bit. As for alerts and reminders, we have found that they are not necessarily very effective, or at least the way we are implementing them right now is not effective. There are too many false positives, and it is driving physicians crazy.

Where Does it Appear? [57:25]

And where did clinical decision support appear at that time? It was the organization of data and the "checklist effect": how do you implement checklists and order sets and things like that. There were stand-alone expert systems at that time; (57:42) is an example of that. We had the enterprise data repositories, like data warehouses, and the mining of that data to support the retrospective view. And then we were trying to integrate decision support into the workflow through the EMR and CPOE. So this was how I saw things appearing at that time.

The Revolution in CDSS [58:02]

I also called this a revolution at the time, and it is curious to look back on that. Phase 1 really was not revolutionary, but I thought it was then. Phase 1 was focusing on the quality and safety of care; we were barely entering that phase back then. Phase 2, I predicted, would be what I called the economics of care: are we providing cost-effective care, not just good care, and could we be more cost-effective? And then, at that time, what I called the genomics of care, which I think today would equate to personalized medicine: how are we making all of this more tailored to the individual patient? So that is how I thought decision support would evolve. We have not done much of anything, I would argue, in any of these phases to a significant degree in the current EHR environment. Not to say that we cannot. I think everyone has been consumed with deploying EHRs; now we have got to figure out how to implement some version of these three phases of decision support to help physicians and patients.

Key Architectural Elements [59:13]

The key architectural elements at that time for decision support were an EMR and a central data repository, the data warehouse. You could also argue that all the other data systems, like lab, radiology, etc., were an important part of that data capture. You had to have a controlled, structured vocabulary so that you could get consistency across these systems. At that time we were struggling; we were trying to figure out how to represent knowledge, and I think a big lesson learned here is that those knowledge representation vocabularies have essentially fallen by the wayside because they are just too difficult. Instead, what we now have the capability to do is infer knowledge from data that we never had in the past, using algorithms that we never had in the past. So rather than trying to impose a knowledge representation framework like the Arden Syntax on the world, the data can essentially tell us what that knowledge looks like, and I think that is one of the coolest things about where we are right now.

Foundation and Rationale for Decision Support Models [60:20]

So the foundation and rationale at that time was about math: mathematical models, and how do you build a mathematical model around these decision support environments. We were applying a lot of Bayesian statistics to that, and that still happens now. We were also applying a lot of rule-based decision making, IF-THEN. And what we found, and this is actually something I learned in the military, is that those IF-THEN systems are very fragile, and they largely have not proven to be successful in healthcare, or for the most part in any complicated decision-making environment. Again, looking at where we are headed, machine learning is going to give us the flexibility that those brittle IF-THEN systems could not and do not, and a lot of those machine learning algorithms, of course, have Bayesian models embedded in them now. So IF-THEN is not a good way to handle the representation of human knowledge. It is too concrete, too binary. Bayesian and other fuzzy techniques are much better.
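The contrast between a brittle IF-THEN rule and a graded Bayesian score can be sketched in a few lines; the clinical threshold, prevalence, and likelihood numbers below are made up purely for illustration:

```python
# Hypothetical illustration: a brittle IF-THEN rule vs. a Bayesian update.
# The 9.0 threshold and all probabilities are invented for the sketch.

def if_then_alert(a1c: float) -> bool:
    # Brittle rule: fires at 9.01, says nothing at 8.99.
    return a1c > 9.0

def bayes_risk(prior: float, sensitivity: float, false_pos_rate: float) -> float:
    """Posterior probability of disease given a positive finding,
    via Bayes' theorem: P(D|+) = P(+|D) P(D) / P(+)."""
    p_pos = sensitivity * prior + false_pos_rate * (1 - prior)
    return sensitivity * prior / p_pos

# The rule gives a hard yes/no; the Bayesian score degrades gracefully
# and can be combined with other evidence.
print(if_then_alert(8.99))                    # -> False, just under the line
print(round(bayes_risk(0.10, 0.9, 0.2), 3))   # -> 0.333
```

The point of the sketch is the shape of the output: the rule is binary and shatters at its threshold, while the posterior is a continuous degree of belief that other evidence can update.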

Justification for CDSS:

Medical Errors [61:38]

This was a slide that I used to help justify clinical decision support. I was focusing on medical errors at the time and patient safety events, and this topic has now re-emerged. I quoted the IOM report. Per the most recent study, which I think Johns Hopkins sponsored and, as I recall, BMJ published, those numbers are probably more like 400,000 deaths per year, not 98,000. So we are probably contributing to about 400,000 deaths per year. If you read that BMJ article, healthcare is the third leading cause of death. So I expect a new and greater emphasis on patient safety going forward in the country, and on the decision support that we need to improve patient safety.

Now, the challenge there will be, as I mentioned earlier, how do we actually get the data, that dark matter data, into the environment? It does not exist right now. I do not know exactly how we are going to do that yet, because it is inherently difficult to measure some of the events and procedures that lead to a patient safety event. But this is going to be a big new area of decision support for us. We have not had much success in this regard over the last few years, although some. I will not criticize this too much; we have made some progress in this area.

Definitions: What is an error? [63:03]

This is a fairly common definition of a clinical error at that time, or of errors in general, not just clinical. An error of execution is the failure of an action to be completed as planned. An error of planning means it was the wrong plan to begin with. An adverse event is an injury caused by medical management rather than by the patient's condition: we did something. Preventable means an adverse event attributable to an error. And negligent is exactly what it describes. So that was my attempt to put a definition around what an error is in a clinical sense.

Errors in Medicine [63:51]

And then more numbers at that time about the error rates in hospitals, just justifying that we need to spend some time on this.

Errors in Medicine [63:59]

Again, more data at that time. I think this is pretty outdated. It would be interesting to bounce this against current numbers.

Clinical DSS: The Impact [64:13]

We had seen, up to that time, some impact where clinical decision support could help. In a study published in JAMA, CDSS improved practitioner performance in 64 percent of the 97 studies reviewed. That is not too bad, and I would like to think we can do better than that now, given the data and the tools that we have, and I am sure we can. But what we have to do is get it out of academic medicine. That is the problem with a lot of these decision support tools: they have never really made it out of the large organizations. That was certainly the case at Intermountain.

Case Studies:

Examples of CDSS Effectiveness [64:56]

And speaking of Intermountain, at LDS Hospital I was blessed to serve as the Director of Medical Informatics, following in the footsteps of people like Homer Warner and Al Pryor and Reed Gardner and Peter Haug. Just an incredibly lucky thing for me to be able to do that. And the antibiotic assistant there, I think, is a framework for decision support that we need to rinse and repeat across every patient type. If you have not read about it, I encourage you to do so, but I think it is the case study example of awesome decision support in a clinical setting that hits the Triple Aim. It is data driven. It retains the ability of the physician to decide on their own whether to follow the decision support advice or not, and we had great results as a consequence, as the numbers indicated here show.

So that was a big deal. I was talking about it 10 years ago. I still talk about it today. I still think it is the bellwether of what we need to do in other condition types.

Examples (continued):

Preventable ADEs [66:03]

CPOE implementation at that time was still very complicated and very controversial, but you can see the numbers associated with what we saw: all sorts of reductions in errors and other patient safety events.

Examples (continued) [66:25]

A couple of other examples of reductions. We reduced redundant test ordering, which resulted in saving money. Again, at that time, that was a little controversial because we were still in fee-for-service medicine and not many organizations were ready to cut their lab charges by 13 percent. And then preventive health reminders, at that time around HIV, and how to screen more effectively there.

Examples (continued)

[66:57]

This was a systematic review of 68 studies. Sixty-six percent of the 65 studies that measured it showed benefit to physician performance: 9 out of 15 for drug dosing, 1 out of 5 for diagnostic aids, and 14 out of 19 for preventive care, mostly care gaps. So, plenty of examples of where it can help.

Other CDSS Success Stories [67:18]

Other success stories that I have been associated with, especially at Intermountain, were bilirubin management in neonates, ventilator management around ARDS, Coumadin management, high glucose management in the ICU, the antibiotic assistant I mentioned earlier, and infectious disease monitoring. These were all very embedded in the EMR, by the way. This was not conference room analytics; this was point-of-care decision support.

Medical Artificial Intelligence

Just another term for decision support [67:48]

Okay. Now, into the really interesting stuff that I was talking about at that time, which we called medical artificial intelligence. It is a term that fell out of favor, and now it is coming back into favor again. At that time, I would tell people it is just another term for decision support.

Goals of AI [68:09]

So the goal of AI, as I would describe it, and again this is from an early discussion of the topic, was to create computer assistance that achieves human levels of reasoning. That was the bottom line.

Knowledge Representation Formalisms: Their Role [68:20]

We spent a lot of time during those years, both while I was in healthcare and before, trying to understand and express human knowledge and how that relates to what we can computerize.

So, human knowledge came in as expressed policies (institutional, national, local), formulated interventions in medical practice, and local variations on guidelines, and then all of that "intelligence" was provided to clinical expert systems. You will note that I used the term "expert system" here because, at that time, expert systems were typically associated with the Arden Syntax and with IF-THEN rules. So, I was still stuck in the IF-THEN mindset, which I have completely jettisoned by now. But the idea was moving from national guidelines all the way down to, okay, how do we implement this in an expert system or decision support module?

Forms of Knowledge Representation [69:20]

At that time, these were the forms of knowledge representation that were emerging and in use in healthcare: Bayesian/probabilistic methods; GLIF, the Guideline Interchange Format, which I do not even think is around anymore (I am not sure; I have not heard anybody talk about it in a long time); case-based reasoning; ontologies, which we were doing a lot of work with; decision tables; neural networks; Bayesian belief networks; the procedural stuff, again going back to the Arden Syntax; and production rules. Those were all different forms of knowledge representation.

Now, I think the interesting thing is we have seen that these attempts at knowledge representation are very challenging and I will show you some examples of why they are challenging.

Now, we are switching. And rather than trying to impart knowledge and put a box around what we think the world looks like, we are now collecting enough data and I think we now have algorithms and sort of ensembles of algorithms that we did not have before that will allow the algorithms to tell us how knowledge is being represented, rather than us imparting these frameworks. So, big shift in that regard and that is one of the reasons I am more encouraged and optimistic than I have ever been before about machine learning and AI in healthcare.

Roots of Medical AI [70:39]

So here is a slide that depicts the progression of it. Note, I purposely left that era in there: the late 1070s, which would have been the Theodoric of York timeframe. Then the 1970s: MYCIN at Stanford, focusing on rules-based decision support for infectious disease and antibiotic therapies; and PUFF, which was based on MYCIN, for pulmonary data interpretation. Again, very rules-based, and I think we have all learned that rules-based AI just does not work very well.

Roots of Medical AI [71:13]

APACHE was one of the early systems, and credit to folks like Vi Shaffer who were involved with APACHE. It was a great example of point-of-care, real-time decision support in the ICU for risk monitoring.

Computers Are Good At… [71:31]

I also would talk about what computers are good at. They are good at computational functions, add, subtract, and that kind of thing; symbolic reasoning; and pattern recognition. In particular, what we are seeing now, and we have known for a long time but are only now collecting enough data in healthcare to appreciate, is that pattern recognition is going to be the basis for everything that we do, both at Health Catalyst and, I think, across the industry. It is this ability to recognize patterns in the data that, frankly, you could never see with a declarative programming language, with a programmer sitting down trying to find those patterns themselves. Now we have ensembles of algorithms we can piece together that show us patterns in data that we have never had before. So that is the basis of the future: this pattern recognition (72:23) base.

The Arden Syntax [72:25]

I will not go into the Arden Syntax too much; if you want to study it, you can. But it was, at the time, kind of the leading way to represent knowledge in a clinical setting, and it was adopted by HL7. I do not think there is much going on with it anymore. It has become a bit stale, again because it is so hard to impose these frameworks on knowledge. It is just hard to do.

Arden Syntax: Assessment [72:49]

There were a few vendors at that time, I believe Siemens, McKesson, and Eclipsys, who were all taking a shot at using the Arden Syntax to improve their EHRs. It did not go very far.

Support for Arden Syntax [73:05]

And here are some of those supporters. At that time, there was a lot of work with it at Cedars-Sinai, and a lot of work with it at Intermountain as well. It has not gone anywhere.

Arden Syntax – History [73:17]

Oh, and Regenstrief. I forgot that Regenstrief was involved, and that was all published around then. It actually came out, I think, before 1989, so that date might be off. I might be wrong.

Arden Syntax – Rationale [73:35]

But again, it was an attempt to make medical knowledge available at the point of decision making and to make that knowledge transportable and verifiable. I will not go through these anymore, but the reality is that Arden did not work out, for all of the reasons I mentioned.

Pattern Recognition [73:52]

I talked about the benefits of pattern recognition. This is going to be the future of better AI and better decision support in healthcare. So if you have not studied pattern recognition, I would become familiar with the basic concepts. We are all going to be affected by it.

Wikipedia [74:06]

And this is kind of interesting; it still applies 10 years later. Pattern recognition basically means classifying data based on either a priori knowledge or statistical information extracted from patterns. So you are always combining a training set of data with real data. You run that through a sensor, look for features to extract, and then classify those features into patterns.
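To make the training-set-plus-classifier idea concrete, here is a minimal toy sketch in Python (not from the presentation; the vital-sign features, thresholds, and labels are entirely hypothetical). Labeled training data is reduced to feature vectors, a per-class centroid is learned, and new "real" data is classified by nearest centroid.

```python
# Toy sketch of the pipeline described above: a priori (labeled) training
# data -> feature extraction -> classification of new, real data.
# All field names and values here are hypothetical, for illustration only.

def extract_features(reading):
    # "Sensor" step: reduce a raw reading to a feature vector.
    return (reading["systolic"], reading["heart_rate"])

def train(labeled_readings):
    # Compute a centroid per class label from the training set.
    sums, counts = {}, {}
    for reading, label in labeled_readings:
        f = extract_features(reading)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in s)
            for label, s in sums.items()}

def classify(model, reading):
    # Assign the new reading to the nearest class centroid.
    f = extract_features(reading)
    return min(model, key=lambda label: sum((a - b) ** 2
               for a, b in zip(model[label], f)))

training = [
    ({"systolic": 118, "heart_rate": 70}, "stable"),
    ({"systolic": 122, "heart_rate": 74}, "stable"),
    ({"systolic": 165, "heart_rate": 110}, "at-risk"),
    ({"systolic": 158, "heart_rate": 104}, "at-risk"),
]
model = train(training)
print(classify(model, {"systolic": 160, "heart_rate": 108}))  # at-risk
```

The point is only the shape of the pipeline: without enough labeled training data (the problem noted below about Watson), the centroids, and therefore the classifications, are unreliable no matter how good the algorithm is.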

One of the things that Watson has struggled with, for example, IBM and Watson, is not having an adequate training set in healthcare, especially around clinical outcomes, to train the algorithms. So we all have to be aware of this, and we have to be aware of what vendors are trying to sell us. We have to ask: do we have the training set to bounce against the real data? Quite often we do not have that training data set yet in healthcare. It is going to take us a number of years to build up the data volumes we need to really get high value out of these algorithms. But we will get there. I am convinced of that.

Other AI Methods [75:15]

At that time, there were genetic algorithms, search algorithms, constraint-based problem solving, and frame-based reasoning. These were all things that were shaping our perception of clinical decision support. Some of these still hold up. Some have changed a lot. I think genetic algorithms have conceptually kind of died in favor of simpler-to-understand models, although the concepts still live on in other algorithms.

A lot of these others have kind of fallen by the wayside (75:47). There is not a lot going on with constraint-based or frame-based reasoning anymore either.

Frame Example [75:53]

Let me give you an example of these frame-based reasoning environments. This is the kind of tagging that you have to put around knowledge, and at that time, there was no way to tag and categorize knowledge other than to do it manually. So you can see that it is just not scalable.

So the frame here is, of course, a university. The slot is enrollment, the class is student, and the allowed cardinality is a minimum of 2 and a maximum of 30. You can just see how difficult it would be to scale this across a large body of knowledge. It is almost impossible to do, which is why machine learning and pattern recognition algorithms are going to tell us a lot of this without our having to go through these really complicated frames.

Now, there is some value intellectually in just going through this process, so that you understand the problem that you are trying to address. But beyond being a problem-solving technique, it is not a scalable technique for actually implementing decision support in any setting.
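To illustrate how much manual structure frame-based representation demands, here is a small Python sketch of the university frame described above. This is illustrative only; the class and method names are my own, not from any actual frame language.

```python
# A minimal sketch of frame-based knowledge representation, mirroring the
# university example above: every frame, slot, and cardinality constraint
# has to be declared by hand, which is why the approach does not scale.

class Slot:
    def __init__(self, name, value_class, min_cardinality, max_cardinality):
        self.name = name
        self.value_class = value_class
        self.min_cardinality = min_cardinality
        self.max_cardinality = max_cardinality
        self.values = []

    def add(self, value):
        # Enforce the class and cardinality constraints manually.
        if not isinstance(value, self.value_class):
            raise TypeError(f"{self.name} expects {self.value_class.__name__}")
        if len(self.values) >= self.max_cardinality:
            raise ValueError(f"{self.name} allows at most {self.max_cardinality}")
        self.values.append(value)

    def is_satisfied(self):
        return self.min_cardinality <= len(self.values) <= self.max_cardinality

class Student:
    def __init__(self, name):
        self.name = name

# Frame: university course; slot: enrollment; class: Student; cardinality 2..30.
enrollment = Slot("enrollment", Student, min_cardinality=2, max_cardinality=30)
enrollment.add(Student("Alice"))
print(enrollment.is_satisfied())  # False: only one student, minimum is 2
enrollment.add(Student("Bob"))
print(enrollment.is_satisfied())  # True: minimum of 2 reached
```

Every slot, value class, and cardinality constraint has to be declared and enforced by hand; multiply that across a large body of medical knowledge and the scaling problem becomes obvious.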

Arden Example [77:02]

Here is an example. This is a new slide that I added to this deck. By the way, friends, this came from a JAMIA article co-authored by a former colleague, Peter Haug, showing the Arden Syntax and the knowledge representation tagging that Arden requires on knowledge and data. It is just a very hard thing to scale.
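To give a feel for what an Arden Medical Logic Module encapsulates without reproducing actual Arden Syntax, here is a rough Python analogue of an MLM's evoke/logic/action slots. This is a hypothetical sketch, not Arden code; the potassium rule and its threshold are invented for illustration.

```python
# Not actual Arden Syntax: a Python approximation of the structure of an
# Arden Medical Logic Module (evoke / logic / action slots), to give a
# feel for how each rule packages one piece of decision logic.
# The low-potassium rule and the 3.0 threshold are hypothetical.

class MedicalLogicModule:
    def __init__(self, title, evoke, logic, action):
        self.title = title
        self.evoke = evoke    # when to wake the module up
        self.logic = logic    # decide whether to act on the data
        self.action = action  # what to do (e.g., produce alert text)

    def run(self, event, data):
        if self.evoke(event) and self.logic(data):
            return self.action(data)
        return None  # module stays silent

low_potassium = MedicalLogicModule(
    title="Hypothetical low-potassium alert",
    evoke=lambda event: event == "lab_result_stored",
    logic=lambda data: data.get("potassium", 99) < 3.0,
    action=lambda data: f"ALERT: potassium {data['potassium']} is low",
)

print(low_potassium.run("lab_result_stored", {"potassium": 2.6}))
# ALERT: potassium 2.6 is low
```

One rule like this is manageable; the scaling problem in the slide is that thousands of such modules, and all the data tagging they depend on, had to be authored and maintained by hand.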

In Summary [77:27]

So I finally made it to the end of these slides. Wow, 84 slides. The big summary here, friends, at that time, was that Enterprise Data Warehouses and Electronic Medical Records work hand-in-hand to address Clinical Decision Support. I think that still holds up today. I still think we have a lot to do at the patient level, in that loop that I mentioned earlier. But we are getting better and better at this as an industry. I am very encouraged.

At that time, I opined that Artificial Intelligence had yet to prove itself scalable beyond informatics research projects, and I think that is still largely true today. But I think we are on the verge of a renaissance in AI and machine learning in healthcare, because we now have more data than we had in the past. We are still missing really important data, like the social determinants of health that are so important to outcomes, and we are not collecting outcomes data from patients yet. Without those two data sets, it is going to be hard for us to achieve the real potential of AI and machine learning.

So I would just close today by saying we have to start collecting patient outcomes data and we have to start collecting social determinants of health data, according to that IOM study that came out a couple of years ago, if we are really going to derive benefit from machine learning and AI.

Thank You! [79:00]

That is it. Shall we go to questions and discussion?

Healthcare Analytics Summit 16 [79:02]

Again, this is a slide that I had many years ago. Tyler asked me to pop this slide up, which is advertising our Healthcare Analytics Summit in the fall, September 6th through the 8th in Salt Lake City. We have got some really interesting speakers. Don Berwick will be there. Eric Siegel will be there; he is a great predictive analytics guy. David Torchiana from Partners, an important colleague of ours, will be there. You can see all the names here.

I might call attention to one that I think is going to be particularly interesting (they all will be), and that is Anne Milgram, who is going to talk to us about the use of predictive analytics and machine learning in the criminal justice system. A lot of people do not know that we are using predictive analytics and machine learning to inform sentencing and also to inform whether we put criminals on parole and probation or not. So it is going to be very interesting.

Okay. That is it.

Tyler, do you have anything to say, friend? Or should we go to the questions? We do not have a whole lot of time left.

[Tyler Morgan]

Well, we do have a couple of poll questions. We have our giveaways for the Healthcare Analytics Summit. We would like to be able to do that right now before we get to the questions, if we can. This will take just a moment.

So this is just a couple polls that I will launch up here.

Are you interested in attending the Healthcare Analytics Summit in Salt Lake City? (Single Registration) [80:29]

We have got – the first is going to be the single registration for the summit. Please respond if you would like to attend on the 6th through the 8th. Now, because of high demand and limited space, these registrations must be redeemed by registering for the summit by August 15th, when they will expire. So I will leave this up for just a few moments more, and then we will put up the team-of-3 poll.

And Dale, I believe you have got access to look at the questions now, so you can start reviewing some of those, so we can jump into the Q&A?

[Dale Sanders]

Yes.

[Tyler Morgan]

That time. Wonderful.

[Dale Sanders]

Yes, there are not many questions. We should probably only give it about five minutes.

Questions and Answers

This is all material and ancient history. What about now and into the future?

 

Well, I do not know if you stuck around to the end, but I tried to touch on that a little bit. Again, the whole premise of this presentation today was to look at ancient history and bounce that against what we are seeing today. So that was kind of the whole intent here. I guess in that regard I hit the mark, but maybe not to your satisfaction.

 

Request for slides.

Yes, we will post the slides.

Question about the cost of the 400,000 patient deaths.

Yes, that is a complicated question to answer, because you would have to ask the lawyers how they would equate that to the total economic loss to the country from 400,000 unnecessary deaths.

 

Where is patient engagement in this adherence?

 

Well, yes, I think that is the problem. I think we have to start measuring a patient's willingness and capability when we take them into our care delivery environment. Part of the care management update process has to include profiling patients, to the degree that we can, on their ability to participate in their own care and their willingness to participate in their own care. We have patient activation measures; we kind of know how to do that. We have the 15 very simple social determinants of health data elements that the IOM (Bill Stead was on that study) produced a couple of years ago. So we know the basic data that we need to start collecting. We just do not collect it yet. But we have to start doing that. Then we have to start pulling that into our risk stratification algorithms and also our care management strategies.

 

You seem optimistic about machine learning and pattern recognition in other areas. Has it really demonstrated its (83:27) to curated or structured organization of data or have we just decided it is good enough?

 

Well, we are seeing things that make me more optimistic about this than in the past. Going back to the work we did at Intermountain, the three patterns that we consistently followed there were: patients like this, who were treated like this, had outcomes like this. We are developing algorithms right now that help us identify patient types without our imparting the definition of a patient type. We can identify patterns of patients in the data that we do not have to define a priori. So, "patients like this" – we are nailing that pattern right now.

 

"Patients that were treated like this," right? How are patients like this treated? That is more complicated, with lots more variability in that pattern, but we are starting to make progress on that too.

 

And then the last pattern, "who had outcomes like this" – that is the part that is frankly missing, because we are not measuring outcomes. But we are seeing very encouraging results in those first two patterns, patients like this who were treated like this, and then the next step will be "had outcomes like this."
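As a sketch of how "patients like this" can be discovered from the data rather than defined a priori, here is a tiny k-means clustering example in Python. The (age, HbA1c) features and values are hypothetical, and real cohort discovery would use many more features and more robust algorithms; the point is only that the groupings emerge from the data, with no patient-type definition supplied up front.

```python
# A sketch of "patients like this" discovered from data rather than defined
# a priori: a tiny k-means grouping of hypothetical (age, HbA1c) vectors.
import random

def kmeans(points, k, iterations=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random patients
    for _ in range(iterations):
        # Assign each patient to the nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [tuple(sum(vals) / len(c) for vals in zip(*c))
                     if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

patients = [(34, 5.1), (29, 5.4), (41, 5.0),   # one natural grouping
            (63, 8.9), (71, 9.4), (58, 8.2)]   # another
centroids, clusters = kmeans(patients, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

Nothing in the code says "younger, lower HbA1c" versus "older, higher HbA1c"; the two cohorts fall out of the distances alone, which is the contrast with the hand-authored rules and frames discussed earlier.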

 

For example, can Google types avoid adding non-fruits to its fruit category?

 

It is complicated, as healthcare is. I think we have simpler pattern recognition problems to address than what Google is trying to address. I was reading a book to my little girl the other night, 'Frosty the Snowman'. She wanted to read 'Frosty the Snowman' in July, and I was all in favor of that. In the book, the kids are bringing twigs to the snowman, and I said, "What are those twigs going to be, Swift? What are the sticks going to be?" And she said, "Those are going to be his arms." I thought for a second, and then I thought there is not a pattern recognition program in the world that would ever recognize that those twigs, in that context, are going to be arms. But that is not the kind of problem that we have to address in healthcare. I think our problems are surprisingly easy to identify, and they have significant impact.

 

Now, over time, our challenges will probably approach Google's. But right now I think they are within reach and addressable.

 

 

[Tyler Morgan]

Okay. Well Dale, we do have one last poll question we would like to ask everyone. We have got a couple minutes left for that.

How interested are you in someone from Health Catalyst contacting you about a demonstration of our solutions? [86:30]

And that is that we have had many requests in the past for more information about Health Catalyst, who we are and what we do. Our webinars are intended to be educational, but if you are interested in having someone from Health Catalyst reach out to you and schedule a demonstration of our solutions, please answer this poll question. I will leave that up there for you.

It looks like we have got another question that came in. I will leave this up, Dale, if you want to address that question.

[Dale Sanders]

Oh yeah. This is from Vince Vitali. Hi Vince, good to hear from you. Vince is always good about retrospectives too, by the way. "Amazing how much is still relevant. Why aren't we making much progress in so many areas? (Or maybe you are just that smart.)" I do not think it is the latter, but thanks, Vince. I do not know. To be honest with you, Vince, I really do not know. I think the economic model of healthcare has not necessarily demanded that we do this. I think the reality is it has been okay to be mediocre in a lot of these areas. The economics are changing. I think we did not really have much data back then; EHRs were still probably at a 25 to 30 percent adoption rate. We did not have the data-savvy consumer that we have today, who is going to demand that we become more data driven in healthcare.

And then I think another reason is, you know, frankly it is very hard to modify our current EHRs to support this kind of decision support. They are made to support general patient care. We had challenges when I came from Intermountain to Northwestern, where I went from HELP to Cerner and Epic. At HELP, we owned the entire EHR, everything about it, including the API. We could do whatever we wanted with it. When I came to Northwestern and tried to replicate that, it was pretty hard, because Cerner and Epic were developing tools that had to be generally applicable across all patient types in the entire market. That kind of works against the approach we had at Intermountain, which was very targeted: very specific patient types, and data to support very specific decision support initiatives. We were very specific; Cerner and Epic are very general, and understandably so. So I think we are going to see a shift, actually, where FHIR will help us become more specific, and I think you are going to start seeing decision support modules that are more targeted around patient types.

Okay. I guess we better…

[Tyler Morgan]

Yes. We are at time. Dale, thank you so much for this presentation. I thank everybody for hanging on with us.

[Dale Sanders]

Yes.

[Tyler Morgan]

I would like to remind everybody that shortly after this webinar, you will receive an email with links to the recording of the webinar and the presentation slides, as well as the Summit giveaway winners. Also, please look forward to the transcript notification we will send you once that is ready.

On behalf of Dale Sanders, as well as the rest of us here at Health Catalyst, thank you for joining us today. This webinar is now concluded.

[Dale Sanders]

Thanks everyone. Have a great day.

[END OF TRANSCRIPT]