Predictive Analytics: It’s the Intervention That Matters (Transcript)

[Tyler Morgan]
Good day all and welcome to Health Catalyst's Fall Webinar Series. We are excited to kick off the series with today's webinar, Predictive Analytics: It's The Intervention That Matters. My name is Tyler Morgan and I will be your moderator today. Throughout our presentation, we encourage you to interact with our presenters by typing in questions and comments using the questions pane in your control panel. We will be answering questions at the end of the presentation during our question and answer time. If we don't have time to address your questions during the webinar, we will follow up with you afterward. We are recording today's session, and after the event you will receive an email with a link to the recording, the presentation slides, and information on registering for the next webinars in the series. I am very happy to introduce our presenters today, Dale Sanders and David Crockett.

Dale Sanders has a diverse 30-year background in IT, with the most recent 17 years in healthcare IT. In addition to serving as Health Catalyst's senior vice president of strategy, Dale is a senior research fellow for the Advisory Board Companies and current senior technology advisor for the National Health System in the Cayman Islands. To learn more about Dale and his incredibly diverse background, you can look him up on his LinkedIn profile and follow him on Twitter; he will have a slide with that contact information available.

David Crockett is Health Catalyst's senior director of Research and Predictive Analytics. He brings nearly 20 years of translational research experience in pathology, laboratory and clinical diagnostics. His recent work includes patents on computer prediction models for the phenotypic effect of uncertain gene variants. Dr. Crockett has published more than 50 peer-reviewed journal articles in areas such as bioinformatics, biomarker discovery, immunology, molecular oncology, genomics and proteomics. He holds a BA in molecular biology from Brigham Young University, and a Ph.D. in biomedical informatics from the University of Utah, recognized as one of the top training programs for informatics in the world. Dr. Crockett's current focus is the ability to predict patient health outcomes and enable the next level of prescriptive analytics: the science of determining the most effective interventions to maintain health.

I will now turn the time over to Dale Sanders.  Dale…

[Dale Sanders]

Thanks Tyler and thank you everyone for again sharing your time with us. It’s a true honor and blessing on our part to be here with you today. I see a lot of dear friends and colleagues on the attendees list and we’ll do our best to share our time in a quality fashion with you today.

I just want to mention, at the beginning, I got an email yesterday from a good friend who is also a CMIO, and he said, "One thing that I think would be helpful, generally speaking, is helping executives and operational leaders get a really concrete idea of what true value analytics brings to a healthcare organization. These days the terms 'predictive analytics' and 'big data' are thrown around so frequently and so casually, that for many they have become devoid of actual meaning." And I fully agree and empathize with that. What we hope to do today is try to clarify that, try to separate some of the hype from reality. And the message that you'll hear from us on a pretty regular basis is that you don't really need to worry too much about predictive analytics yet. There are other things that you should be doing that are a lot less complicated and that will lay the foundation for predictive analytics. So focus on the simple, easy things first. What we're going to do today is talk about all those lessons learned that David and I have experienced in our careers, and the direction that David is going to take Health Catalyst, and kind of provide you with that background, so that when it does come time to make big decisions around predictive analytics, you'll be in a good position to do that.

Audience Poll

Benefits of predictive analytics to healthcare over the next three years?

So one of the things we like to do is take a quick poll at the beginning of the presentation to assess everyone's thoughts about the benefits of predictive analytics over the next three years. So I think Tyler is going to pop a poll up to attendees here. And we just wanna gauge, at the very beginning, what you think the impact of predictive analytics will be in three years.

Dale:  Tyler, do you have that?

Tyler:   That poll is up and we’ve got folks voting. We’ll leave this up for another 25 to 30 seconds…

Dale:  And then you’ll share those results in real time with the audience. Is that right?

Tyler:   Absolutely.

Dale:  Okay.  And while everyone is doing that: we put three years on here on purpose, because if we had asked about one year, I think the answer would be a lot different. And then I'll share with you my answer when we're all done here.

Tyler:   Okay.  We're gonna close the poll in about 5 seconds…Alright. Based on the poll, we had 80% of attendees vote, and we'll see the results presently…45% said they believe there would be major impact.  Are you able to see these results, Dale?

Dale:  Yup, I can see them. And so, my answer would be I think there's going to be some impact. And the reason I say some instead of major is because it takes a long time to drive change in healthcare. I actually think that we could see major impact if the economic models of healthcare in the US change faster, but at the rate that [inaudible] is going and the way the economic model is changing, I think it's gonna be a little harder to extract that significant benefit from predictive analytics. But that's just my take. I appreciate everybody sharing your thoughts on that. And at the end of the presentation, we're gonna ask you that same question again and see if anything has changed.

[Dale Sanders]

Overview

So let's see here.  There we go. In overview, what we're going to do today is I'm gonna give sort of the human interest and color commentary about my experiences in predictive analytics and this odd parallel between "treating" terrorists and treating patients using predictive analytics. My background includes anti-terrorism work. In fact, that's what led me to healthcare. And then David is gonna give you his graduate-level crash course, which is gonna be invaluable.  I've gone through the slides myself and I think it's just gonna be a really interesting presentation, very valuable.

An Oddly Relevant Career Path

So in this oddly relevant career path of mine, I spent a number of years in the Air Force as a CIO.  I specialized in nuclear warfare operations and worked for an organization called the Strategic Air Command. And when I got out of the Air Force, I worked for TRW.  TRW was a big space and defense company at that time. Now, they've divested themselves of that and all they do is automotive parts.  On one of the major contracts that I had as an employee of TRW, I worked for the National Security Agency, and my team was responsible for protecting the US Nuclear Command & Control System. So we worked with the National Labs and the Department of Defense, and we would dream up all of these crazy different scenarios, everything from stealing a warhead to hacking into the Command & Control System, and the National Laboratories would work with us to develop all of these bizarre technologies to see how possible these scenarios were to pull off. And of course modeling and predictive analytics were a huge, huge part of what we did.

One of the interesting things that TRW also offered was their credit risk scoring. They were the owners of what's now known as Experian. So between the NSA work and this credit risk scoring environment, TRW had their hands on a lot of data, and in particular they had some of the leading engineers at the time around predictive analytics. The last project that I worked on at TRW was for the Pentagon and was called the Strategic Execution Decision Aid, and this was an attempt to help clarify and reduce the variability of decision making in the event that we faced a nuclear war. And so, as bizarre as all that sounds, it's amazing to me how parallel a lot of the processes are between this environment and healthcare.  And I hope you'll see a little bit more of that in the following slides.

Key Messages & Themes

So one of the key messages and themes today is that we tend to be fixated right now on predictions and interventions around readmissions. It's a little disappointing to me, because what we really should be working on is predicting and intervening on admissions. We're worried about readmissions, but we should be thinking upstream and we should be aiming much higher than that.

Predictions without interventions are useless and potentially worse than useless.  There are a lot of times when you can predict things and you look at it and you’ll go, wow, I wish I didn’t even know that because we have no means to intervene.  So if you don’t have a strategy for intervention to go along with your strategy for technology and predictive analytics, you expose yourself to a lot of problems.

Correlation does not imply causation, and I think David is gonna touch on this a little bit more, but in predictive analytics, especially in healthcare where there are humans involved, there are a lot of very complicated surrounding factors that have nothing to do with causation, yet you'll see these correlations in the data and the algorithms. And so, being able to set that aside and see through it is very, very important.

Missing data equals poor predictions, no matter what the environment, be it healthcare or terrorism or nuclear warfare. And in our environment, we're missing huge amounts of data around patient outcomes, familial data and genomics right now. We're not feeding that back to the point of care. We're not even collecting familial data.  We're not even collecting patient outcomes data.  And so, without that, we really can't close the loop effectively on predictive analytics towards patient care.

Some of the most important predictions, you don't need a computer to tell you. Nurses and physicians can tell you based on their experience. It's a lot more effective, it's a lot more accurate, and it's a lot less costly.

The last thing, as I mentioned earlier: when it comes to analytics, take care of the basics first. There will come a time for predictive analytics. And if you follow some of the advice that all of us have learned the hard way, you'll be prepared for predictive analytics, as well as a lot of other tools, by laying that foundation first.

Most Common Causes for Readmission

Speaking of the easiest things, the Robert Wood Johnson Foundation published in February of this year (2013) the most common causes for readmission. And they're listed here.  There is no caregiver or family support at home, patients are getting poor or inaccurate discharge instructions, patients didn't understand those discharge instructions, they were discharged too soon. And interestingly, patients were referred outside of the network of the hospital that treated them. So the hand-scribbled message down there at the bottom is that 'we don't need an expensive predictive analytics program to tell us what to do. We already know.'  When you think about it, if David and I were typical vendors trying to sell products instead of solutions, we'd be advocating predictive analytics right now. But we're not.  We're really not.  We're telling you to take care of the basic things first. Predictive analytics will come, but not right now.

Healthcare Analytics Adoption Model

Some of you have seen this analytics adoption model that we have been developing over the last number of years. More recently, we finalized it; this represents the final version. I in particular want to acknowledge Dr. David Burton and Denis Protti, dear friends and dear colleagues of mine, and the contributions they made to the development of this, in addition to a number of other folks.

But again, I wanna bring attention to this analytics adoption model that we've developed. By the way, this is based upon years and years of in-the-trenches experience in healthcare, and we really feel very strongly that this is a helpful model to follow deliberately.  We don't call for predictive analytics until we get up to level 7 on this model.

There’s a lot to do laying a foundation and taking care of other basic things before you get up to level 7. Now, there are some organizations in the industry that operate at these upper levels.

Places like Intermountain Healthcare are a good example of that. And so, they're prepared to take advantage of predictive analytics in that environment. They have the culture, and they have the technology to leverage it effectively.

I'm gonna ask all of you to take a look at this and rate your own organizations.  Where do you think you operate consistently, day in and day out, on this adoption model? I'm gonna pause and let you absorb this for just a minute, and then we're going to ask you to answer that question in another poll in a couple of slides. So where do you operate consistently, day in and day out, at your organization?  Take a look at that model there.

Okay.  Now, I'm gonna move on to this info.

So the goal that we had in developing this model was to have most of the content and the value on one page, so people could carry it around, and you can literally use this as a checklist to ask yourself and your organization where you're operating on the model. It's a little too detailed to go over in this presentation.

I'm gonna pause for just a minute to let you absorb a little bit of it.  And now we're gonna ask you: where do you think you operate organizationally, day in and day out, on this adoption model? Are you down here at fragmented point solutions? Do you have a data warehouse in place?  Have you standardized some of your vocabulary and patient registries? Automated internal reporting, is it consistent? Reliable? How is your external reporting? Can you adopt all the new requirements? Are you dealing with clinical effectiveness and accountable care? Are you really addressing the triple aim, which is patient centered care, population centered care and economics of care, at the point of care?

Dale:  Okay.  How are we doing on the poll, Tyler? Is it coming along?

Tyler:   We’ve got about 45% of attendees that have voted.

Dale:  Okay.

Tyler:   We’ll leave this up for another 15 to 20 seconds…Alright. We will close the poll now and let’s share the results.

Dale:  So that's interesting.  And this is consistent with other polls that we've had, where most organizations are still operating at levels 1 and 2 and trying to get up to levels 3 and 4. It's good to see, and I'd love to know the organizations that are operating at levels 7 and 8. We need to bring you to the forefront of attention in healthcare so that other organizations can learn from you and follow your role model. So it might be interesting to follow up with the folks that have responded in those upper levels.

Audience Poll

Analytics Adoption Model: At what level does your organization consistently and reliably function?

Okay.  Moving on now.  So here’s the poll.  We’ll share all that later.

Challenge of Predicting Anything Human

So let me just mention this. These are kind of [inaudible] thoughts here, but very important.  It really is quite important actually. There is this challenge of predicting anything that's human. If you read philosophy and the progression of math and physics over the centuries, you see this pattern emerging in which the maturity of a body of knowledge progresses along with the maturity of the mathematical models that surround that body of knowledge. So the more math we can apply to a body of knowledge, the more we understand it.

Astronomy is a good example of that. Astronomy was for many years just based upon observation.  We didn't really know how to model it mathematically. And so, we had some very odd interpretations of what was happening in the universe and the stars and the galaxies. But over time, as we became better at applying math to it, we could understand it, we could predict it, and it cleared up that body of knowledge. And the same thing happens in medicine. Sociology, the psychology of random violence, anthropology, anti-terrorism: these are very difficult environments to model with mathematics. Classical physics, quantum physics, electrical engineering, on the other hand, are quite easy to model, quite easy to predict with mathematics.

Medicine is somewhere kind of in the middle, where we do have some math associated with the body of knowledge in the field of medicine. Lab results are a good example of that. Those are numbers: measuring things, functional measurements of behavioral health, functional measurements of physical health, the biomechanics of devices in surgery. We're making inroads towards the development of that mathematical model, but we're actually still a long way away. And there's actually a very big gap here.  You see economics is this odd combination of math and human behavior; industrial engineering is the same thing. This graph really doesn't represent the enormity of the gap that exists between medicine and the harder sciences over here on the right. And we need to be aware of that, because it's important to understand that there is a limit to the understanding we can achieve right now given the math that we have available for medicine, and that cascades into these predictive algorithms.

You just have to understand there are some things about our ability to model human behavior that we don't understand yet, and that in turn should temper what you expect from predictive analytics.

Sampling Rate vs. Predictability

There's another important concept here that relates to healthcare, and to experimentation in general in the hard sciences, and that is that the sampling rate and the volume of data in an experiment are directly proportional to the predictability of the next experiment. So for example, if you looked at or were a part of a NASA or an Air Force space launch, you'd see that the data that's collected around that single space launch is at least a million times greater than the data that we collect around patient care. If you think about the patient care sampling rate that we have right now, it's only when you actually have an encounter; it's when you go into the doctor's office.  And the sampling rate, the volume of data that we generate, still isn't that much. A lot of times we like to think it's a lot of data, in particular the imaging data, but that [inaudible] data doesn't have very effective mathematical models around it yet, other than some measurements and things like that. But overall we haven't instrumented the patient like we have instrumented the space launch, and until we do that it's gonna be very hard for us to predict what's going to happen next in patient care, especially in populations.

Can We Learn From Nuclear Warfare Decision Making?

So going back to that bizarre background, what kind of lessons can we learn from nuclear warfare? Well, the odd thing is we had these "clinical" observations in nuclear warfare, just like we have lab observations, and that is we had satellites and radars that indicated some kind of an enemy launch.  That was our clinical observation.  Then we had to reach a predictive "diagnosis": "Are we under attack or not?"  Is this a false positive from those lab results? Is this a false positive from the satellites and radar? The decision making timeframe was like that in an ER or an ICU. We had less than 4 minutes to make decisions when those enemy submarines were launching from the east coast of the US. It was very time critical. And during that time, we had to make decisions about treatment and intervention, and that scenario was about launching on warning or not: do we try to ride this out, is it really happening, do we launch, because if we don't launch now, we're not gonna have the ability to launch later on. A very stressful environment. There are very direct parallels to healthcare.

Desired “Outcomes”

We also worked around desired "outcomes", and these are literally taken from what we knew at that time as the Single Integrated Operational Plan for the United States military. These were our desired outcomes when we were faced with this decision making environment; these are the outcomes that we were trying to achieve. First and foremost, there was retention of US society as it was described in the Constitution. Then we had to retain the ability to govern and command US forces, and minimize the loss of US lives and the destruction of US infrastructure.  And we had to do all of this as quickly as possible with minimal expenditure of US military resources. So again, you see the parallels.  These are the kinds of things that we need to [inaudible] in healthcare first: what are our desired outcomes, and we need to achieve them as cheaply and as quickly as we possibly can.

Where And How Can a Computer Help?

So we undertook the computerization of this process in the late 1980s and into the 1990s, when the Soviet Union was coming apart and the control of nuclear weapons was a big unknown in the political environment of the world at that time. And there was an act called the Nunn-Lugar Act that set aside tens of millions, which ended up being hundreds of millions, of dollars to improve our ability to react reliably and minimally in this environment. And that was my first exposure to big predictive analytic algorithms. So at the center of this, you have this poor stressed-out military commander trying to decide: how much time is left, what is our human intelligence saying about the situation, are the sensors right or wrong, which targets are under attack and what might that imply about the enemy, where is the president in all this, have we contacted him or her, which of our forces are available to respond, if we do respond, is that gonna escalate things, and what is the appropriate response given the situation in the world. And then all of that had to fold into a response and intervention.  As I mentioned, we had 4 minutes to make that decision.

What I observed as a member of the battle staff in these situations was enormous variability of decision making across the general officers at the middle of this decision and the command authorities at the civilian level.  Great variability in the way they approached this. And so, I was working on this for the military and for the Pentagon, trying to standardize and reduce the risk of disasters and horrible outcomes, when I started studying the use of computers within healthcare, thinking I would apply that back to the military. And I was introduced to Al Pryor and Homer Warner and Reed Gardner, Peter Haug, Scott Evans, a lot of brilliant people out at Intermountain Healthcare. David Classen.  And I learned at that time that healthcare was in fact a lot less computerized and a lot less advanced than we were in the military. And so, I made a pivot in my career towards healthcare, thinking that there was a lot of opportunity to help the world on a different level by applying what we learned in the military to healthcare.

This is a screenshot from the SAC Underground Command Post, just kind of a human interest story there. This is the environment we worked in, and you can imagine how complicated the predictive algorithms were for the general officers in a battle staff making those decisions.

Lessons For Healthcare

So what kind of lessons did I learn from this that I tried to apply to healthcare?  Humans didn't trust the predictive models. We modeled all this out and spent literally hundreds of millions of dollars on it, but the human decision makers didn't trust those predictive models when the timeframe was compressed and the consequences of a bad decision were extreme. So I'm led to believe that predictive analytics initially in healthcare will be much easier to apply culturally in slowly changing situations like chronic condition management, elective procedures, ventilator weaning, glucose management (and I should actually say glucose management in the ICU), and antibiotic protocols.  But in really compressed timeframes, where the outcomes and consequences are extreme, predictive analytics algorithms didn't have much effect on changing behavior.

The other thing that we found is that subjective human issues were not well modeled. So there was always this fear, and there still is this fear, of the "Rogue Commander" scenario, kind of the Red October scenario, in which a "Rogue Commander" or a terrorist group obtains a nuclear weapon; that is very difficult to model. So subjective mathematical models around human behavior are still just not there yet. That said, we still need to try to at least quantify those "Rogue Commander" kinds of scenarios, and in our situation, the "difficult patient".  We'll talk about that in just a little bit.

Without outcomes data, this is all guesswork. Now, the good news is that in nuclear warfare, we don't have much outcomes data, and hopefully that will always be the case. But it doesn't have to be the case in healthcare.  All of us collectively as professionals should be putting enormous pressure on our EMR vendors and entrepreneurs to solve this problem around outcomes data.  If we don't collect outcomes data, our predictive algorithms for patient care are going to be next to worthless.

Quantifying the Atypical Patient

So speaking of quantifying the atypical patient and that subjective human factor, one of the things we found at Northwestern is that about 30% of patients under chronic condition management fell into these atypical profiles. That is, they had a cognitive inability to participate in a protocol, an economic inability, a physical inability, a geographic inability, religious beliefs that prevented their participation, some sort of contraindications (sometimes genetic contraindications) to participation, or they were simply voluntarily non-compliant. They didn't wanna participate.

And so, if you're going to be a true accountable care organization, you have to accommodate these attributes of the patients in your population, because they are going to require a different strategy for outreach and treatment. Likewise, if you're going to try to predict the outcomes and the treatments and the responses of these patients, you also have to use these attributes in your models. So it's really important to start thinking about standard ways to quantify these atypical patients, so that we can fold those into our predictive analytics algorithms when the time comes.

Accounting For These Patients

So your algorithms must be adjusted. These patients are a unique numerator in the overall denominator of patients under your care. You need a data collection and governance strategy for these attributes. You need a different interventional strategy for each of those 7 categories. And interestingly, your physician compensation model must be adjusted for these types of patients. In the future, we're all going to be holding physicians accountable, if not already, for some level of accountable care and quality improvement.  But you can't hold physicians accountable for a patient that's voluntarily non-compliant with the protocol, or has some other attribute that prevents them from participating effectively in that physician's guidance. So that's why we need to start thinking about these things now, when it's early enough to plan and prepare for them.

Sortie Turnaround Times

One of the important parts of the order of battle is the sortie turnaround time of aircraft: being able to predict the arrival and turnaround time of aircraft so they can be returned to the battle space.  And that means the delivery of people, logistics, equipment, supplies, fuel…Over time, what we would do in the military, and I understand this is progressing in the commercial airline industry as well, is profile these incoming planes in as much detail as we possibly could, down to the pilots that were onboard, and we would compare those incoming profiles with our database of similar profiles and their turnaround times. And then we could predict more accurately the turnaround time of these aircraft back into the order of battle.  And if you've ever had the chance to speak to a military commander who has managed the order of battle, they're some of the smartest people that you'll ever encounter. It's an enormously complicated thing to manage. And so, these predictive analytic algorithms become very, very important in managing this very complicated battle space.

Patient Flight Path Profiler

So it hit me a few years ago at Northwestern that this has parallels to the arrival of patients in healthcare. As soon as we see a patient on their way in, in some fashion, in a data way, arriving into our healthcare environment, the sooner we can start profiling those patients and comparing them with those that have been through this before, and with how quickly they returned to a comfortable, good life. Then we can start understanding what we're doing right and wrong for these inbound patients, just like we could with those aircraft. David and I are gonna work more on this concept later, and hopefully we'll be able to demonstrate these products to you in the next few months.
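
A toy sketch of that profile matching idea, in Python: estimate a new patient's time back to a comfortable baseline from the most similar historical profiles, using a nearest-neighbors lookup. All of the features and numbers here are fabricated for illustration; they are not from any real product or data set.

```python
# Profile matching in miniature: predict a new patient's time back to baseline
# from the most similar historical profiles. All data fabricated for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Historical profiles: [age, number of chronic conditions, severity score]
profiles = np.array([
    [54, 1, 2.0], [67, 3, 4.5], [45, 0, 1.0],
    [72, 4, 5.0], [60, 2, 3.0], [50, 1, 1.5],
])
days_to_baseline = np.array([14, 60, 7, 90, 30, 10])  # observed recovery times

# Average the outcomes of the 3 most similar past patients.
model = KNeighborsRegressor(n_neighbors=3).fit(profiles, days_to_baseline)
incoming = [[65, 2, 3.5]]  # a newly arriving patient's profile
print("Predicted days to baseline:", model.predict(incoming)[0])
```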

Healthcare As a Battle Field…?

So healthcare as a battlefield is sort of an odd parallel, but the truth is there are a lot of parallels between the order of battle and the order of care, and that includes demand forecasting:  What do we need and when? And it's almost separate from the notion of measuring outcomes, although that's still very important.  So as the patient is arriving, or is becoming a member of your accountable care organization, you start profiling patients like this as quickly as possible, comparing them against the other millions of patients that you have in your enterprise data warehouse, exposing that to the predictive algorithms so that you can forecast the demand and the timing for quantities and types of people, equipment, supplies, medications, and facilities, and then applying that in the order of care when it's appropriate, making sure that everything is available when it's supposed to be and not sitting around unused when it's not needed.  And then ideally, as we measure outcomes, we'll fold that all back into the data warehouse and start optimizing this loop.

NSA, Terrorists, and Patients

So the interesting thing about the work at the NSA is that characterizing terrorists and characterizing patients end up being enormously similar processes, and it comes in three parts. We would spend a lot of time trying to identify those people in the world who might become a terrorist, and as it turns out, the indicators are very strongly associated with family and friends and your lifestyle behaviors.  That's what drives those algorithms, and it has interesting parallels to patients as well. We would characterize folks who were terrorists through their active participation in terrorism. And then of course there were those who were no longer terrorists, and in those cases we either reformed you, we jailed you, or you died. But that same basic model applies to the categorization and the management of patients as they progress from "you might have a disease" through "you do have a disease" through, hopefully, "you no longer have it because we treated you effectively".

Predicting Terrorist Risk

The important thing about that process is not just predicting the progression from here to here to here, but predicting the risk associated with it, because that's what starts driving your interventional strategies: it's when you really understand the risks. When we're dealing with terrorism risk, it's the combination of the probability of an attack, times the probability of success if the attack occurs, times the consequences. And we always have to understand what the costs of intervention and mitigation are, and whether they significantly outweigh the risks. Those are the same kinds of discussions we need to have around patient care. So predicting patient risk is a similar algorithm, but it's not about the success of an attack. It's about what's the probability of the disease and what's the consequence of that disease, where "disease" in this context is anything that could detract from health. It could be an adverse event, it could be an acute care condition. But we have to start folding predictive analytics and risk management into the same thought space.
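
To make that arithmetic concrete, here is a minimal sketch of the risk calculation described above. The probabilities and dollar figures are invented for illustration only; the point is that expected risk and the cost of intervening can be put on the same scale.

```python
# Illustrative risk calculation: risk = P(event) x P(harm if event occurs) x consequence.
# All numbers below are hypothetical, for illustration only.

def expected_risk(p_event, p_harm_given_event, consequence_cost):
    """Expected loss from an adverse event (e.g., disease onset or admission)."""
    return p_event * p_harm_given_event * consequence_cost

def intervention_worthwhile(risk_before, risk_after, intervention_cost):
    """An intervention makes sense when the risk it removes exceeds its cost."""
    return (risk_before - risk_after) > intervention_cost

# Hypothetical patient: 20% chance of an acute episode, 50% chance it leads
# to an admission, at an average admission cost of $12,000.
baseline = expected_risk(0.20, 0.50, 12_000)       # $1,200 expected loss
with_outreach = expected_risk(0.08, 0.50, 12_000)  # outreach cuts P(event)

print(f"Baseline expected loss: ${baseline:,.0f}")
print(f"With outreach:          ${with_outreach:,.0f}")
print("Intervene?", intervention_worthwhile(baseline, with_outreach, 400))
```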

We Know the Probabilities

We know the probabilities, right? This came from the folks at HCUP, who pushed this out the other day.  We know the number of hospital stays per 10,000. This is where we should be focusing our risk management strategy in healthcare and our predictive analytics strategy, because we know that these disease states are the primary drivers towards our hospitals right now.  We have to start thinking about the consequences in terms of dollars, what our strategies and costs to intervene will be, and how that relates back to our predictive analytics strategy.

Lessons from Healthcare

So what are the lessons that we learned from the NSA and terrorism? That multiple predictive models that "vote" in an ensemble are the most accurate method, not single models. In the absence of data, and until more data is available, multiple expert opinions are better than nothing, leveraging the "wisdom of crowds"; I'll talk a little bit more about that. Backward chaining and supervised learning are the most accurate: you know what the disease was, you know what the outcome was, and you're trying to work backwards from that to understand why it happened, rather than working forward from data trying to understand what might happen. It's a lot more complicated to work backwards, but it's also a lot less flexible; David will talk more about that later.  Interestingly, family, friends, and what you read were major predictors of terrorism risk when I was in that space. We don't collect familial data in the course of care, but it would have great value in healthcare.  We found that if you associate with more than one terrorist group, it's an even greater indication of your commitment to become a terrorist, and that has interesting parallels to comorbidity: the more chronic diseases you have, the higher the risk you represent to the accountable care organization.  And then, even when we did accurately predict terrorist activity, we found that the cultural willingness and ability to intervene were also incredibly difficult, and you can see that in the press today; there's more going on behind the scenes that we're not even aware of. Armed drone "assassinations" and, I think, TSA profiling are very difficult interventions, culturally very challenging.  In healthcare, the BRCA genes and prophylactic mastectomies are also a very interesting example of the brutal nature of interventions and how difficult they can be.

There's more reading on this topic; I won't go into it, but it has interesting parallels to risk management in healthcare that I think everyone would benefit from.

Suggestive Analytics

I wanna talk a little bit about suggestive analytics. I think it’s a lot easier than predictive analytics. It leverages the “wisdom of the crowds”. If you haven’t read “Nudge”, it’s a really simple easy read and you’ll get exactly what I’m saying. By surrounding the decision making environment with related analytics, you don’t have to predict anything. You’re just nudging people towards different behavior.

The example I use all the time is the way that our transactions around purchasing a book like "Nudge" are surrounded by all sorts of other suggestive analytics: other books, the ratings, different kinds of prices. And all of that is suggesting changes to my behavior in the purchase of that book.

We can do the same thing in healthcare through closed loop analytics at the point of care. That is, 2/3 of the user interface of the EMR really ought to be about analytics: patient specific information, patients like this one with comparative analytics, typical outcomes, population health metrics. And then cost of care: what's the average cost of care for this patient, what's the cost of care for this encounter, all of that surfaced by analytics at the point of care. And that's the notion of the triple aim.

The Antibiotic Assistant

The antibiotic assistant at Intermountain Healthcare was a program that David Classen and Scott Evans developed and worked on for many years. It's still there.  I was lucky enough to be associated with it when I was at LDS Hospital. It's a great example of folding patient specific information, the predicted efficacy of a protocol, and cost of care all back into the point of care. This is all driven by analytics, embedded within the patient's chart in the health system at Intermountain Healthcare. One of the earliest and, I think, most forward-thinking implementations of the triple aim.

The Antibiotic Assistant Impact

The antibiotic assistant had a huge impact.  Complications declined, doses declined, and the antibiotic cost per patient declined as well. And all we did was display the cost to physicians.

Stories of Correlation vs. Causation

There are interesting stories about correlation versus causation I wanna talk about, and this is important.  And again, David can talk more about this. But a few years ago, there was this great kind of joke, actually quite an impressive joke, by a professor from UC Berkeley, in which he produced a paper that said the production of butter in Bangladesh was responsible for, and could be correlated to, 75% of the movement of the Standard & Poor's 500. And what he was really saying is that you can tie just about any data together to show a correlation if you want to, but it doesn't have anything to do with causation. And we see that a lot in healthcare.

One of the bizarre things that I saw at TRW is we took those NSA-developed algorithms, we went down to our siblings in the credit reporting environment, and we said, we think we can help you predict your credit scores more effectively. But what immediately jumped out of that was that African Americans were scored as the highest risk in those credit algorithms. And of course that's an incredible confusion of correlation and causation.  The sociologists that we had on staff helped explain the bigger picture: that in fact, if you believed that correlation were causation, you would actually make the situation a lot worse.

Interestingly, in healthcare a few years ago we thought that hormone replacement therapy and cardiovascular disease in women were connected, that women undergoing HRT had lower rates of cardiovascular disease.  A lot of women were going to their doctors asking for hormone replacement therapy because of that. But when we really looked into it, what we found later was that those undergoing hormone replacement therapy were actually from higher income levels, had more leisure time, and could afford to exercise and eat a better diet. It didn't have anything to do with the hormone replacement therapy.

Audience Poll

So we're about to close on my section of the presentation.  I'd like for us to pop up the next poll and ask yourselves: how confident are you that your organization is prepared to combine the technology of predictive analytics with the processes of intervention?

And let's put a timeframe on this. Let's say today…no, let's not say today. I'm sorry.  Let's say within a year: how effective do you think your organization would be at combining predictive analytics with these processes and strategies for intervention?

Dale:  Tyler, I assume that's chunking away. There it is.  That's great; those seem like some optimistic numbers, which means that folks are thinking about this. And again, some of you are very confident. I really feel very strongly, for those of you that are confident about this, that we need to get your stories out to the rest of the industry.  If there's some way we can follow up with you and facilitate that, we would appreciate it.

Tyler:   Alright. We’ll leave this poll open for just another 5 to 10 seconds and then we’ll show the results…

Dale:  Are we good?

Tyler:   Here are our results.

Dale:  Alright. That's great. Thanks, Tyler.  Very interesting to see. Thanks for participating in that.

Wrap Up

Okay.  Wrapping things up then: we're in an extreme hype cycle. Be careful. Take care of those lower levels of the analytics adoption model first.  In the meantime, all of us in the vendor space will be developing good tools, so that when the timing is right, we'll be there for you. The human mathematical model in healthcare is years away, so manage your expectations carefully. We're underappreciating and underutilizing the impact of suggestive analytics, and we need to put pressure on the EMR vendors to accommodate it in their APIs. And then of course, you've heard this a million times: intervening is the hard part, predicting is the easy part.

Many thanks

And I think that is the end of my section. Here's my contact info, and we'll make these slides available to everyone later so that you don't have to copy it all down.  Thank you everyone very much for sharing your time. Now, David is gonna get into his content, which is going to be very, very interesting.

And David, thank you and I appreciate sharing the time with you too.

DAVID CROCKETT: THE GRADUATE CRASH COURSE

[David Crockett]

Thanks, Dale.  That was great. I could certainly listen to that stuff all day long.  In terms of a time check on what we have scheduled, we're now about halfway through, and we will be sure to have a bit of time at the end for questions and answers. So I will get going on my slides.

Objectives

So today, the topics that we'll cover…everybody can see that okay? I wanna talk about a bit of machine learning overview and examples of leading open source and commercial software. We'll also work through an example, building an actual predictive model on an example data set. And along the way we'll summarize some key lessons that we've learned.

Machine Learning 101

So welcome to your Machine Learning 101. The bad news: I'm gonna try to cram several semesters of machine learning and high performance computing into just three or four slides. But the good news: you get to avoid, you know, the boring, stuffy professors.

Machine learning is a discipline that deals with the design and development of algorithms, leveraging computers to deal with sets of data that are too big for us to look at by hand.

One major focus of machine learning is to help the computer recognize patterns in the data and make decisions based on those patterns. And Dale has done such a nice job, a first-rate job, of giving some examples from the military and other industries.

Machine Learning 102

So next, of course, is Machine Learning 102. Groups in every major industry and discipline face the challenge of extracting meaningful information from these large data sets.

And there's of course a wide range of applications for machine learning.  Some of the better known examples include things like detecting credit card fraud, optimizing internet search engines, DNA sequence alignment, stock market analysis, and speech recognition. But importantly, there are existing industries that are historically very, very good at managing populations and managing risk.  The two that come to mind immediately are gambling casinos and life insurance, things like that. And again, Dale did such a nice job of introducing that topic, pointing out the overlap with some unexpected areas and how in healthcare we can leverage that existing head start in this predictive analytics space.

Algorithms

The more popular machine learning approaches, shown here at the top, are supervised learning, sometimes also called classification, when the outcome is known ahead of time; and unsupervised learning, when the outcome is not available. And today, we'll talk mostly about supervised learning.

But one potential complication in the current healthcare system, as Dale alluded to earlier, is that comprehensive outcomes data is often missing. This lack of capture of the final outcome severely limits the utility of machine learning tools in this particular setting, and it's one of the big obstacles to widespread adoption and trust. Without a class outcome or a label to train the algorithm, a supervised model (sometimes also called structured or backward training, as Dale mentioned) cannot be easily built. Without the outcome itself, it's hard to train supervised learning models.
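
As a minimal illustration of the difference, here is a sketch on made-up data, assuming scikit-learn for convenience (any toolkit would do). The supervised model needs the outcome label to train; the unsupervised model only ever sees the inputs.

```python
# Supervised vs. unsupervised learning on the same synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # two input features per "patient"
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # known outcome label (e.g., readmitted)

# Supervised: the algorithm is trained against the known outcome y.
clf = LogisticRegression().fit(X, y)
print("Supervised prediction:", clf.predict([[0.5, 0.5]]))

# Unsupervised: no outcome available, so we can only look for structure.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("Unsupervised cluster assignments:", clusters[:10])
```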

The Modeling Process

Step 1

So a simple schematic of the predictive modeling process might be instructive to review.

The first step is to carefully define what problem you want the computer to address.  You gather the necessary initial data, including outcomes when available. Then we evaluate several different algorithm approaches to gauge performance.

Step 2

In step 2, we refine this process by selecting perhaps one or two of the best performing models. And now we test with a separate data set to validate the approach.

Step 3

The final step is to run the model in a real world setting. But a word of warning here, to avoid confusion: be sure to understand the purpose of, and the terminology around, training and test sets: how they can be used, how they should be split, and any subsequent independent validation using a new set of data. All those ideas are very key to understand.
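
A minimal sketch of that train/test discipline on synthetic data, again assuming scikit-learn; the split proportions here are common conventions, not a prescription.

```python
# Steps 1-3 in miniature: train on one slice of data, compare models on a test
# slice, and keep a final validation set untouched until the end.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # synthetic outcome label

# Hold out 20% as a validation set we never touch during model selection.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

# Split the remainder into training and test sets for comparing candidate models.
X_train, X_test, y_train, y_test = train_test_split(
    X_dev, y_dev, test_size=0.25, random_state=1)

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("Test accuracy:      ", accuracy_score(y_test, model.predict(X_test)))
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```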

A Few Definitions

A few definitions will be helpful for us to know as we move forward in this area of predictive analytics. Depending on the industry and the staff expertise, the terms used might vary. So the data that's being put in and used as input, different groups might call features or attributes or variables; and we'll also have a class or a label that tells us what the outcome is, a yes or a no. In terms of output, after a bit of math, the algorithm spits out a result, and this may be called a prediction or a forecast or a trend or an outcome, depending on the industry and the audience.

Feature selection is a great tool, a mathematical approach to simplify the input to only those variables having the most impact on the outcome.

And classification, as we mentioned earlier, is a category of supervised learning. It will classify an object into the group where it best fits, using the characteristics or features or attributes of that object, and place it where it belongs.

Feature Selection

So we're nearly done with the lecture material, so congratulations for hanging in there.  This slide lists just a few representative methods used in feature selection, sometimes also called attribute selection.  This technique is used to trim down a list of variables to only those that have the greatest effect or impact on a given outcome.  Approaches such as principal components or chi-squared may be familiar to some of you already, even if you didn't realize that they could be used for selecting the best set of features to train a predictive model.
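
As one small example, here is chi-squared feature selection with scikit-learn on synthetic data; keeping the "top 2" features is an arbitrary choice for illustration.

```python
# Chi-squared feature selection: keep only the inputs with the strongest
# statistical relationship to the outcome. Synthetic data for illustration.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(2)
X = rng.integers(0, 10, size=(200, 6)).astype(float)  # six candidate attributes
y = (X[:, 1] + X[:, 4] > 9).astype(int)               # outcome driven by two of them

selector = SelectKBest(chi2, k=2).fit(X, y)
print("Chi-squared scores per feature:", np.round(selector.scores_, 1))
print("Selected feature indices:      ", selector.get_support(indices=True))
```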

Know Your Data…

It is important to understand the data being used, but even more important to know the strengths and limitations of the various algorithms that you want to use. So for example, in general, linear regression is for continuous data and logistic regression is for categorical or discrete data.  [Inaudible] deals with missing data much better than other approaches. Support vector machine algorithms like SMO are extremely powerful as binary or non-linear classifiers, but they're very computationally demanding to train and run, and are very sensitive to noisy data.  And those are a few examples.

Insight #1

So lesson #1 that we wanna point out: in healthcare, the irony of a technology-driven, more generalized prediction model, one whose inputs are generic or global features, is that almost all targeted utility is lost. Prediction focused on a specific clinical setting or a specific patient need will always outperform a generic predictor in terms of accuracy and utility. The reason for this: the very features that characterize the condition well are the very attributes that train an accurate predictor.  If those features don't stand out above the background noise, the predictor only predicts the noise well. So the full power of prediction is best realized when specific variables are gathered, the target clinical need is met, and participants are willing to act.

Specific Improves Accuracy

Another way to state this is that specific trumps global. Now, I wanna emphasize the point that collecting specific data is always better than simply adding more data. Feature selection can really help in this regard.

So as shown here, the precision, or positive predictive accuracy, improves dramatically as we move from a generic hospital admission model running just below 80% to a predictor specifically focused on cardiac patients, up to about 90% accuracy. Why is this? It may seem obvious, but I wanna be sure: additional features that are characteristic of that population only can now be added and leveraged above and beyond the generic tool. Just one example of this type of additional data field might be lab values that are specific to cardiac patients, such as cardiac markers, etc.

Classification

And as mentioned earlier, one of the more popular applications of supervised learning is classification. There are various computational methods used in this approach, each with its own strengths and limitations. The next few slides will cover simple examples of rule based, regression based, and tree based classifiers, although other approaches have been used very successfully in many industry settings as well.

Classification – Rules Based

So rule based classification is perhaps the most straightforward to understand, and that's simply because the output is often human readable, meaning that I can look at the final model and see the exact rules that are being used to generate the classifier decision.  My kids often tease me, saying the true cost of completing graduate school and PhD work is not the time or the tuition.  The real cost is in terms of hair: how much hair was lost. So in that example, if my gender is male and I'm older than 40, I'd be getting a little thin on top, maybe running around a 65% probability or some similar metric. But if I'm male and over 40 and I have a grandpa or an uncle or a brother that has already lost his hair, it's much higher, over 80%, and there's a good chance that I can be forecasted to fall into that group as well by rule based classification.
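
A rule based model really can be that literal. Here is a sketch that encodes the hair loss rules above; the probabilities are the talk's illustrative figures, not estimates from real data.

```python
# Rule based classification: the model is just a set of human-readable rules.
# Probabilities are the talk's illustrative figures, not real estimates.

def hair_loss_risk(is_male: bool, over_40: bool, family_history: bool) -> float:
    """Return an estimated probability of significant hair loss."""
    if is_male and over_40 and family_history:
        return 0.80  # male, over 40, with an affected relative
    if is_male and over_40:
        return 0.65  # male and over 40 alone
    return 0.20      # everyone else (arbitrary baseline for the sketch)

print(hair_loss_risk(is_male=True, over_40=True, family_history=True))   # 0.8
print(hair_loss_risk(is_male=True, over_40=True, family_history=False))  # 0.65
```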

Classification – Regression

Regression simply assigns a weight or a multiplier to each variable, in terms of how much impact it has on our final outcome.  So you do the math, you sum up the whole expression, and you have your number. The same variables as before, but perhaps now a little less hair on this guy. In the details, you can get caught up in principal components or eigenvectors or ROC curves, but in fact regression can be one of the more straightforward ways to approach classification as well.
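
In code, that weighted-sum view of logistic regression looks something like the sketch below; the weights are invented for illustration, where a real model would learn them from training data.

```python
# Regression as classification: multiply each variable by a weight, sum the
# expression, and squash it to a probability. Weights are made up here.
import math

def logistic(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical learned weights for the same variables as the rules example.
INTERCEPT = -2.0
W_MALE, W_OVER_40, W_FAMILY = 1.2, 0.9, 1.5

def hair_loss_probability(is_male: int, over_40: int, family_history: int) -> float:
    z = INTERCEPT + W_MALE * is_male + W_OVER_40 * over_40 + W_FAMILY * family_history
    return logistic(z)

print(round(hair_loss_probability(1, 1, 1), 2))  # all risk factors present
print(round(hair_loss_probability(1, 1, 0), 2))  # no family history
```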

Classification – Tree Based

A decision tree, or tree based classifier, is similar to rules, but now instead of compounding a list of yes or no rules, we move into a flow diagram of trees and branches and leaves, with bagging and pruning. We can also move forwards and backwards along the branches to optimize the output for the class label or outcome. So here in this example, if I'm a male, yes, and over 40, yes, I could consider shopping for some alternative hair care products.  One disadvantage of the output of a tree model: although it's not hard to code and implement, it may not always be human readable, given the number of trees and branches that result from the model.
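
A minimal decision tree sketch on the same made-up variables, again assuming scikit-learn; the training data is fabricated to mirror the example, and a small tree like this one is still readable, unlike a large ensemble.

```python
# Tree based classification: the model learns branching yes/no splits.
# Training data is fabricated to mirror the hair loss example.
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: is_male, over_40, family_history
X = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 1]]
y = [1, 1, 0, 0, 0, 1, 0]  # 1 = significant hair loss

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["is_male", "over_40", "family_history"]))
```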

Insight #2

Lesson #2, the lesson learned and insight: integrated prediction is superior to a stand-alone application. What that means is, people who work in a database environment and database discipline understand keenly that data plus context equals knowledge. So prediction should be used in the context of when and where it's needed, with the specific clinical leaders that have the willingness to act on the appropriate interventions and measures. Furthermore, using prediction in context within a comprehensive and data rich warehouse environment will always be superior to a stand-alone application in a data silo environment.

The Cost of Readmissions

Dale mentioned this point earlier. To better illustrate it, let's spend a few minutes talking about hospital readmissions; the cost of patients making unplanned returns to the emergency room is [inaudible], to say the least.  Several software options are beginning to emerge to help hospitals avoid the financial penalties associated with reporting this metric. For the purpose of today's discussion, I simply wanna point out that a metric such as predicted hospital readmission can be enhanced by embedding it where a clinician or an administrator wants to get it and needs to use it.

Prediction in Context

So what does this term "in context" mean? Don't do prediction for prediction's sake.  A single box with a score, or arrows up and down, is fine, but the integration within a more complete clinical picture is where the real value is recognized.

So in context [inaudible], the value of the prediction will be enhanced when it's implemented and evaluated in the framework of the appropriate data collection. In other words, in a data warehouse environment where associated patient details are available on demand and can be viewed in concert with the predictor, it makes that predictor-driven intervention much more successful.

Data Warehouse Synergy

Now, a great example of this potential synergy is the existing Rothman Index, an early indicator of wellness. This is a very proven algorithm. It captures trends from multiple data feeds: the vital signs and lab values and nursing assessments. And this data, taken as a whole, will often provide early warning as the patient begins to fail, before even a careful human observer could possibly connect all the dots between so many unrelated data points simultaneously. Now, using this metric within a rich data warehouse environment would boost the chances for an impactful clinical intervention and enhance evaluation of its use, since views of finance or patient satisfaction, etc., are now available in that warehouse environment. So remember, one key to the success of an algorithm is, first, obtaining all the necessary data.  When you assess only part of the picture, it may yield an incorrect view for the prediction model.

Evaluating Performance

It's helpful to understand a few more definitions.  These first two on the left describe correctly identifying which outcomes are really true and which are really not true. If the classification does not correctly assign the outcome, it will be mislabeled, known as a false positive or a false negative.  In patient care, both of these have serious ramifications and need to be minimized.

Sensitivity, now in the right box, represents the true positive rate. Specificity yields the true negative rate. And positive predictive value, or precision as it's sometimes called, is one way to measure overall performance and also compare performance between models.

Again, sensitivity measures the proportion of actual positives which are correctly identified.  Specificity measures the proportion of negatives which are correctly identified.
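
These definitions reduce to simple counts from the confusion matrix. A sketch, with hypothetical counts:

```python
# Sensitivity, specificity, and precision (PPV) from the four confusion-matrix
# counts. The counts below are hypothetical, for illustration only.
tp, fn = 80, 20   # truly positive patients: caught vs. missed
tn, fp = 150, 50  # truly negative patients: cleared vs. falsely flagged

sensitivity = tp / (tp + fn)  # true positive rate: positives correctly identified
specificity = tn / (tn + fp)  # true negative rate: negatives correctly identified
precision   = tp / (tp + fp)  # positive predictive value of a flagged patient

print(f"Sensitivity: {sensitivity:.2f}")  # 0.80
print(f"Specificity: {specificity:.2f}")  # 0.75
print(f"Precision:   {precision:.2f}")    # 0.62
```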

Always a trade off…

But for those of you that have spent a considerable amount of time in airports, you can appreciate this analogy of airport security scanners. If the setting is much too sensitive, anything will set off the alarms: loose change or keys or belt buckles or hair clips. And that's very impressive if sensitivity is the only concern, but certainly not practical in terms of moving people through quickly and safely.

And this trade-off between sensitivity and specificity is often graphed as an ROC curve, or ROC plot as it's sometimes called.

ROC Curves

This is a simple schematic here on the left of our ROC plot, with sensitivity on the Y axis and 1 minus specificity along the X axis. We sometimes just hear the term area under the curve to describe how well some statistical model is performing. All this simply means is that as the metric moves up and away from this diagonal, which represents random chance, the 50/50 line, it moves closer to the top left, representing perfect prediction of 1. And this dark gray shaded area represents one way to gauge how well the predictor is performing. This is also sometimes known as the c-statistic; it's analogous to that. And precision can also be used to compare performance across different models.
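As a concrete sketch of the same plot, here is how the ROC curve and its area might be computed in Python with scikit-learn; the predicted probabilities below are hypothetical:

    # Illustrative sketch: ROC curve and area under the curve (the AUC,
    # or c-statistic). Assumes scikit-learn and matplotlib are installed.
    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve, roc_auc_score

    y_true  = [0, 0, 1, 1, 0, 1, 0, 1]                    # actual outcomes
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9]   # model probabilities

    fpr, tpr, _ = roc_curve(y_true, y_score)              # fpr = 1 - specificity
    print("AUC:", roc_auc_score(y_true, y_score))

    plt.plot(fpr, tpr, label="model")
    plt.plot([0, 1], [0, 1], "--", label="chance (AUC = 0.5)")
    plt.xlabel("1 - specificity")
    plt.ylabel("sensitivity")
    plt.legend()
    plt.show()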

Insight #3

So insight #3 and lesson learned: can we trust clinical predictors? From my time in laboratory and diagnostic settings, I'm of the opinion that a lot of clinicians don't trust computers to tell them what to do. Predictive analytics is a very powerful tool in this regard, but only when it's used with specific clinical focus, in the context of the necessary data, and implemented within a warehouse environment to prompt intervention and allow a better evaluation of that intervention. So in the end, the goal is simple, right: we want to leverage historical patient data to help improve current patient outcomes, and we should know first (65:41) our ability to interpret that data.

Open Source Tools

So I've listed just some representative examples of open source tools, including popular software such as R and Weka. The statistical package R is found at the CRAN network, (65:59) at the Fred Hutchinson Cancer Center. It's a widely used open source tool; thousands of specific libraries are packaged (66:07) for a variety of applications. As of September, this month, the R package repository had about 4,800 available packages, and that number is growing exponentially. These packages are submitted and shared to assist with statistical computing for topics such as biology or genetics, finance, neural networks, time series modeling and many others. In addition to R, there are a lot of examples here on this slide, excellent tools for forecasting or prediction. I'll give you an example.

Open Source Example

So around the year 2000, this random forest classifier was developed and trademarked by Breiman and Cutler out of Berkeley. Now, the license is exclusive to Salford Systems in the commercial release of the software, but you can find several different implementations of the same approach in open source software, whether it's C# or Python, Visual Basic .NET, MATLAB or Java. So, different toolkits and different implementations. The point here is that a similar approach can often be found in several different implementation languages or environments, and specific to different industries, and we can learn from those as a head start.
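As one of those open source implementations, a minimal random forest sketch in Python/scikit-learn might look like this; the data here is randomly generated purely for illustration:

    # Minimal sketch of the random forest approach in scikit-learn, one of
    # many open source implementations of Breiman and Cutler's method.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # synthetic stand-in data: 500 patients, 8 variables
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)

    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(forest, X, y, cv=10, scoring="roc_auc")
    print("10-fold cross-validated AUC:", scores.mean())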

Open Source Standards

I should mention briefly, also in open source, something that's pretty interesting: PMML, or Predictive Model Markup Language. It's a common file format that can be used to import and export models as you go between different software. For example, all of your existing models developed in one original software package can be imported into a new predictive environment, avoiding rebuilding everything one by one, and we can see the obvious advantage of that. So algorithms such as association rules or clustering, neural networks, decision trees, all of those can be effectively represented in PMML.
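As a rough sketch of that portability, one way to produce PMML from Python is the sklearn2pmml package; the package name, its Java runtime dependency, and the file name below are assumptions on our part, and other PMML exporters exist:

    # Rough sketch: export a scikit-learn model to PMML so it can be moved
    # between tools. Assumes the sklearn2pmml package (which needs Java)
    # is installed; the data here is synthetic and illustrative.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn2pmml import sklearn2pmml
    from sklearn2pmml.pipeline import PMMLPipeline

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)

    pipeline = PMMLPipeline([("classifier", LogisticRegression())])
    pipeline.fit(X, y)

    sklearn2pmml(pipeline, "diabetes_model.pmml")   # hypothetical file name

The resulting .pmml file can then, in principle, be loaded by any PMML-aware scoring engine rather than rebuilding the model by hand.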

Commercial Tools

On the commercial side, various tools and vendor solutions also exist. In fact, it can keep a person quite busy sorting through all the advertising and the Google search results. Now, if you're clever, you may have noticed that I haven't listed Health Catalyst on this slide. While we understand this predictive space very well, we simply don't do prediction for prediction's sake. In order to be successful, we feel that clinical event prediction and the subsequent intervention should be both context driven and clinician driven, and really part of the bigger picture of clinical change and that culture of improvement in healthcare.

Enhancing Commercial Tools

In fact, prediction should be used in the context of when and where it's needed, with clinical leaders that have the willingness to act on appropriate intervention measures. Importantly, an underlying data warehouse platform is key to gathering those rich data sets that are necessary for training and implementing predictors, and a data warehouse environment will actually feed into and enhance many of these vendor tools.

Insight #4

So insight #4 and lesson learned: don't underestimate the challenge of implementation. What is the state of the industry? From the previous three slides, we see that various options exist when it comes to developing predictive algorithms or stratifying patient risk. There are so many different choices that it's hard to decide which one is right for us, and we may have to adopt multiple options just to get the job done. This presents a pretty daunting challenge to healthcare personnel tasked with sorting through all the buzzwords and the marketing noise. So really, healthcare providers need to partner with groups that have a keen understanding of the leading academic and commercial tools, and the expertise to develop appropriate prediction models, to improve patient care and realize that value.

David: Tyler, let’s go with that poll question now if you have it ready.

Tyler:   Alright. We have launched the poll: of the solutions you are aware of, is your organization leaning toward open source tools, commercial vendor solutions, or not sure? We'll leave this poll open for about 35 seconds.

David: That’s looking pretty good. It seems like we have a very honest audience today and they’re answering a lot of not sure and that’s a great answer.

Tyler:   Alright. In about 5 seconds we’ll close the poll and show the results.

Tyler:   Alright. We’re showing 46% selected commercial vendor solutions, 43% said not sure, 11% said open source tools.

David: Thank you for that, Tyler. What that's telling me is that commercial vendors, their sales forces and their marketing, are about 50% successful. Other groups have existing expertise and are already aware of, perhaps, open source tools. But there's a large chunk of people out there that simply haven't made up their minds, or they're not sure what timing is appropriate, or what tool is appropriate and fits into their environment. Those are all great questions.

Predictive Modeling Demo

I want to launch now into a few slides and actually go through a working model. We'll have some fun with this. I want to show step-by-step how to build a simple prediction model using the (72:07) open source tool Weka. The software was developed by Ian Witten et al. in the computer science department at the University of Waikato in New Zealand.

Weka File Explorer

So the main screen of the Weka File Explorer looks like this. We'll open a data set, shown in the screenshot there already. This diabetes data set comes from the Pima Indian population, a common and popular training and test data set that ships with Weka. It includes 8 variables, 1 through 8: number of times pregnant, plasma glucose, blood pressure, triceps skin fold thickness, 2-hour serum insulin, body mass index, pedigree (whether you have relatives with diabetes), and age. And the outcome, the class or label as they call it, is either tested_negative or tested_positive, and we can see there are 500 in blue tested negative and 268 in red tested positive. So this is our opening screen.
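For readers who want to follow along outside of Weka, the same Pima Indians diabetes data is, to our knowledge, hosted on OpenML under the name "diabetes" (an assumption worth verifying; scikit-learn fetches it over the network):

    # Sketch: load the same Pima Indians diabetes data set in Python.
    # Assumes the data set is still available on OpenML as "diabetes".
    from sklearn.datasets import fetch_openml

    pima = fetch_openml("diabetes", version=1, as_frame=True)
    print(pima.data.shape)              # (768, 8) -- the 8 predictor variables
    print(pima.target.value_counts())   # tested_negative: 500, tested_positive: 268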

Weka Classify – Zero Rules

Now, we start by choosing a classifier, the second tab over, and ZeroR, Zero Rules, is a great starting place to give an idea of baseline performance, because zero rules are used, hence the name.

Next, we want to talk for just a moment about data options. We can ask it to use the entire data set for training, assuming that we're going to supply a corresponding test set later.

Another good and popular option: we can tell it to do cross validation, 5 folds or 10 folds. This is similar to leave-one-out: it will split the data into 10 pieces, train on 9 and use the 10th piece for testing, and it will do that 10 times. Or we can simply ask it to do a percentage split, 1/3-2/3 or 70/30, something like that, for training and testing.

And the third step: we'll simply click on start. Now, after a few seconds, the model is built, and we'll see some summary statistics over here, including, on the right side, that 65% were correctly classified, and an area under the curve of (inaudible). The diagonal line is like flipping a coin, 50/50, heads or tails, and the closer the curve moves to the top left, the better the sensitivity and specificity of that predictor.
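As an analogue of this ZeroR baseline outside of Weka, scikit-learn's DummyClassifier simply predicts the majority class; here is a minimal sketch, again assuming the OpenML copy of the data:

    # Baseline analogous to Weka's ZeroR: always predict the majority class
    # (tested_negative, 500 of 768 cases, so roughly 65% accuracy).
    from sklearn.datasets import fetch_openml
    from sklearn.dummy import DummyClassifier
    from sklearn.model_selection import cross_val_score

    pima = fetch_openml("diabetes", version=1, as_frame=True)
    X, y = pima.data, pima.target

    zero_r = DummyClassifier(strategy="most_frequent")
    print(cross_val_score(zero_r, X, y, cv=10).mean())   # ~0.65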

Weka Classify – One Rule

A simple next step is OneR, which stands for One Rule: we ask the computer to choose the single best rule for predicting which patients test positive for diabetes. Again, we'll use 10-fold cross validation and click start.

True positives now jump to about 72%, and our area under the curve does a pretty good job, coming up to 66%.

And the rule is plasma glucose: that's the best single rule it chooses. Above 154, we're going to call everybody positive. And it takes about 5 seconds to build this model.
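scikit-learn has no OneR classifier, but a one-level decision tree (a "stump") captures the same idea of picking a single best-split rule; a rough analogue, under the same data assumption as above:

    # Rough analogue of Weka's OneR: a one-level decision tree that picks
    # the single most informative split (often plasma glucose on this data).
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    pima = fetch_openml("diabetes", version=1, as_frame=True)
    X, y = pima.data, pima.target

    one_rule = DecisionTreeClassifier(max_depth=1)   # a single if/else rule
    print(cross_val_score(one_rule, X, y, cv=10).mean())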

Weka Classify – JRIP

We'll now move from One Rule to the next classifier we can try, one that allows multiple rules to be combined. This is called JRIP. It implements a rule learner, repeated incremental pruning to produce error reduction, also called RIPPER, first proposed by William Cohen.

Although the true positives move up only slightly, to 76%, the area under the curve takes a nice jump, now to nearly 74%.

And we can see now a combination of plasma glucose plus body mass index, insulin level, pedigree (whether you have relatives with diabetes), the number of times pregnant, and age, all snapped together. There are 4 total rules, it took 6 seconds to build, and we see improvements in accuracy.
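scikit-learn has no JRIP/RIPPER implementation; as a plainly swapped-in stand-in for a small combined rule set, a shallow decision tree joins a handful of conditions across the same variables. A hedged sketch, again using the assumed OpenML copy of the data:

    # Stand-in for a small combined rule set (NOT JRIP/RIPPER itself):
    # a depth-3 decision tree combining a few conditions on glucose,
    # BMI, insulin, pedigree, pregnancies, and age.
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    pima = fetch_openml("diabetes", version=1, as_frame=True)
    X = pima.data
    y = (pima.target == "tested_positive").astype(int)   # binary 0/1 labels

    rules = DecisionTreeClassifier(max_depth=3)
    print(cross_val_score(rules, X, y, cv=10, scoring="roc_auc").mean())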

Weka Classify – Regression

The last one I want to show is regression. So now we'll choose Logistic, and again 10-fold cross validation.

And now the true positive rate moves up slightly, to 77%, but the area under the curve improves much more, to above 83%. So if I were choosing a predictor at this point in time to work in this diabetes setting, in a given patient population, this would be a pretty good candidate to move forward with. As mentioned in the introduction, there are other classifiers, but I should back up and show this first.

So the weights, the multipliers, can also be detailed and easily seen. And this one, again, took only about 6 seconds to build the model.
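The equivalent step in Python would be a cross-validated logistic regression, then printing the fitted weights (the "multipliers" on each variable); a sketch under the same OpenML data assumption as the earlier blocks:

    # Logistic regression analogue of the final Weka step: 10-fold
    # cross-validated AUC, then the fitted weights per variable.
    from sklearn.datasets import fetch_openml
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    pima = fetch_openml("diabetes", version=1, as_frame=True)
    X = pima.data
    y = (pima.target == "tested_positive").astype(int)

    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print("10-fold AUC:", scores.mean())        # roughly in the low 0.8s here

    model.fit(X, y)                             # refit on all data
    for name, weight in zip(X.columns, model.coef_[0]):
        print(name, round(weight, 3))           # the per-variable multipliers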

Other classifiers would also be great to try: nearest neighbor, support vector machines, neural networks. Many of those are also available inside of this Weka package.

Machine Learning Models

So, some key points to summarize here as we're getting close to finishing. The predictive model is actually the simple, easy part. The important part clinically, for real value, is what was found, what does it mean, and what do we do about it. Domain expertise is a must, both in terms of technical algorithm knowledge and clinical expertise, paired together. And in terms of clinical utility and trust, I again want to emphasize that a specific focus will always trump or outperform a global or big data approach, and that culture and the willingness to intervene are absolutely critical to making improvements in your healthcare system.

Predictive Analytics:  Insights to Implementation/Intervention

So let's summarize just a bit here. In order to be successful, we really feel that clinical event prediction and the subsequent intervention must be both context driven and clinician driven, and the underlying data warehouse platform is really key to gathering the data sets necessary for training and implementing a predictor successfully. Notably, prediction should only be used in the context of when and where it's needed, with clinical leaders that have a willingness to act on appropriate interventions and measures. The more specific term is prescriptive analytics, which includes the probabilities, evidence, recommendations, and actions for each predicted category or outcome. Specifically, predictions should link carefully to clinical priorities and measurable events such as cost effectiveness, clinical protocols or patient outcomes. And finally, these predictor-intervention sets can best be evaluated within that same warehouse environment.

So with that, I think I'll pass it along. A big thank you to the audience, and we'll turn it back over to Dale to hold the question and answer session.

[END OF TRANSCRIPT]