Population Stratification Made Easy, Quick, and Transparent for Anyone


Eric Just:                       I'm not going to go into a lengthy bio here, but here on the front page I do want to just call out that in my prior career I was a software developer, I've been a data architect, a data analyst, and an analytics team manager. And a lot of those experiences have really come together to help create some of the vision for the product that we're going to see today. And if I go back and rewind time and imagine what it would be like if I had these tools then, I think there is so much more I could do. So I get really excited about this, primarily because I see it as the outgrowth of many, many years of experience and feedback, not just from me, but from all the members of our team who are working on this product, and our professional services teams who are out there with Health Catalyst clients using it all the time.

Eric Just:                       I'll start with just a brief opener: this is about patient stratification. If you look at the $3.5 trillion spent on healthcare annually in the US, a disproportionate share of that can be attributed to just 5% of the population. So the use case that we're going to be talking about today is, how do we identify that 5% at the highest risk? Or even the patients who are at risk for becoming high-risk patients? This is the use case of patient stratification. Out of all of these patients, how do we find the ones who are going to benefit most from a care management or population health intervention?

Eric Just:                       So I'm going to start with an introduction, just discussing why stratification for population health is so challenging. Some of it is specific to the population health stratification use case, and some of it is mixed up with things that are just challenging in analytics in general. So I'm going to mix and match those in this introduction section, and then we'll get to learning more about the product and the platform.

Eric Just:                       We did a poll of analysts across the country, and we found that 50-80% of analysts' time is spent doing other things, not analytics. Things like preparing data using SQL, or "sequel", a database language that we use to get data from a relational database. Repeating tasks, doing the same things over and over again, and we're going to double-click on that in just a moment. Inefficient communication, so getting the right business input to the problem that they're trying to solve. And doing simple tasks that could be automated.

Eric Just:                       Now this is just a general problem with analytics. If analysts are spending 50-80% of their time doing other things, it makes it very difficult to get them to spend time on some of the most challenging analytics tasks, like stratifying patients. So just a little bit more about repetitive tasks. When you look at the work a lot of analysts are doing in environments where reuse is not promoted, they spend time almost reinventing the wheel for every different query. So if they're asked to do a new task they might have to integrate diagnosis data from multiple data sources to answer it. And then they might have to do that same work again to answer another question.

Eric Just:                       Things like identifying which encounters were inpatient encounters. Things like calculating length of stay, or identifying the diagnosis codes that identify heart failure patients. These are all things that analysts are doing, and it really feels like reinventing the wheel. And reinventing the wheel is just challenging, right? If you're reinventing a definition for each of these every time you approach a problem, it's not the best use of time. Nor is it the best use of compute resources.
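
Eric Just:                       Just to make that concrete, here is the kind of hand-rolled logic an analyst ends up rewriting over and over. This is purely an illustrative sketch in a SQL Server-style dialect; the table names, columns, and the handful of heart failure codes are made up for the example, not our actual schema or a complete code list.

```sql
-- Illustrative only: hypothetical tables, columns, and a partial code list.
-- Each analyst re-derives "inpatient," "length of stay," and "heart failure"
-- by hand, hard-coding the logic into every new query.
SELECT
    e.PatientID,
    DATEDIFF(day, e.AdmitDateTime, e.DischargeDateTime) AS LengthOfStayDays
FROM Encounter AS e
JOIN Diagnosis AS d
    ON d.EncounterID = e.EncounterID
WHERE e.PatientClass = 'Inpatient'                       -- re-invented per query
  AND d.DiagnosisCode IN ('I50.21', 'I50.22', 'I50.9');  -- hand-picked HF codes
```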

Eric Just:                       So on the right-hand side here I just have a screenshot of actual reinventions of the wheel throughout the 20th century. These were patents that were filed, none of them actually turned out to be commercially successful. So just a reminder to us all, that reinventing the wheel is not something that’s a valuable task.

Eric Just:                       When an analyst is going after a population of patients, they're typically doing this in partnership with a subject matter expert, like a population health business user in this case. And there should be a very efficient cycle of communication between the analyst and their subject matter expert, and it really revolves around refining and iterating on that definition. And the problem here really amounts to a language barrier. And when I say language barrier, I'm not necessarily talking about the languages that they speak, but the languages that they work in.

Eric Just:                       So an analyst typically works in a language called SQL, I talked about it before, S-Q-L, to write a lot of this code as a database query. And they're translating the business user's requirements into that code. And when they're done with that, or even while they're developing it, it looks like this. I'm going to scroll through this very long and complicated query, which identifies patients who had certain medications prescribed on discharge. And you can see things like embedded diagnosis codes, which we just saw, and medications that are hard-coded in here, medication lists. There are a lot of different blocks to this logic, and then you finally put it all together at the end, and what you can see is that it's obviously very complex.

Eric Just:                       It's difficult or impossible for a business user to understand what the analyst is doing, so it becomes a little bit of a black box … or actually a big black box. And the challenge with this is that it also leads to variation, right? When you do something this complicated there's no guarantee that two analysts are going to do it the same way. So are you really getting the same answer? And there's little visibility for that business user, who may actually want to change some of the code groupings that are in this query. So this is a very inefficient way for the analyst to get input from their business users.

Eric Just:                       I just want to take a moment to thank Elizabeth Coudare for providing that example query.

Eric Just:                       And it goes back to what I mentioned before, this number. We're going to play Jeopardy here: I'm going to give you the answer, and then tell you what the question was. 30% is the answer, and the question, again from our poll of analysts, was: what percentage of analysts or data scientists agree with the following statement? Analytic results in my organization are consistent; if you ask different analysts to do the same analysis you will get the same results. That's bad. That means that even the individuals generating the answers don't feel that their process is consistent.

Eric Just:                       And you know what? There’s so much complexity in some of the code, you may not even get the same answer from the same analyst if you ask them at different times.

Eric Just:                       To add to the challenges, specifically with stratification, these are the most complex use cases that you can deal with from a population definition standpoint. So if defining a clinical population isn’t difficult enough, this is what a typical population health stratification query would look like. Find me polychronic patients who were recently discharged who are at high risk for readmission. And an analyst will know that it’s a lot more complicated than it reads. An analyst will know that they need to integrate claims and clinical data to get the full picture of the patient.

Eric Just:                       An analyst will know that they actually need a definition for each of the different chronic conditions. They also need to put that together into a definition of what it means to be polychronic: which chronic conditions are we counting, and how many of them? They also know that "recently discharged" needs to be refined: from where, and how long ago? And if there's a machine learning algorithm involved in the readmission risk, finding those high-risk-for-readmission patients, there are a lot of inputs that need to be gathered. There's a growing awareness that the real challenge with machine learning is getting the data, and getting the data prepared in such a way that it can be used. So getting those inputs is a big challenge. And then actually generating and running the machine learning model, capturing the output, and then using that in the query.
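
Eric Just:                       And just so you can picture the shape of that work, here is a rough skeleton of how an analyst might stitch those pieces together by hand. Everything in it, the table names, the 30-day window, the risk threshold, is a hypothetical placeholder for decisions and content the analyst would otherwise have to build themselves.

```sql
-- Hypothetical skeleton; every table, window, and threshold is a placeholder.
WITH ChronicConditions AS (        -- needs a separate definition per condition
    SELECT PatientID, COUNT(DISTINCT ConditionName) AS ConditionCount
    FROM PatientCondition
    GROUP BY PatientID
),
RecentDischarges AS (              -- "recently discharged": from where, how long ago?
    SELECT DISTINCT PatientID
    FROM Encounter
    WHERE PatientClass = 'Inpatient'
      AND DischargeDateTime >= DATEADD(day, -30, GETDATE())
),
ReadmissionRisk AS (               -- output of a separately built ML model
    SELECT PatientID, RiskScore
    FROM ReadmissionModelOutput
)
SELECT c.PatientID
FROM ChronicConditions AS c
JOIN RecentDischarges  AS r ON r.PatientID = c.PatientID
JOIN ReadmissionRisk   AS m ON m.PatientID = c.PatientID
WHERE c.ConditionCount >= 2        -- "polychronic": which conditions, how many?
  AND m.RiskScore >= 0.7;          -- "high risk": a threshold everyone must agree on
```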

Eric Just:                       Now unfortunately, many times when it's implemented it looks like this: a big black box where we've got all sorts of different component pieces to the query, but it all has to be stitched together into a really gnarly SQL query. And I can say from experience that I've been involved in three-month-long conversations about how to identify an asthmatic, so it's not just a technical challenge, it's getting everyone to agree at the very lowest levels: the population definitions for chronic conditions. So unfortunately many projects, because this is so complex, really never finish because, again, the analyst is reinventing each and every one of these as part of the process. They may get stuck on a definition of one of the chronic conditions. They may get stuck on the machine learning model. And it is just so complicated that the project really never moves.

Eric Just:                       But the truth is that these can actually be broken out into their component parts. And it can be a lot easier, and that’s what we will be demonstrating a little bit later.

Eric Just:                       So I'm going to reiterate here, I'm going to go back and just review what we've talked about regarding the challenges with stratification for population health. First challenge: analysts are valuable, specialized resources. They're often used inefficiently because they're operating in environments that prohibit reuse. That creates a lot of waste.

Eric Just:                       Analysts are typically able to provide little visibility to their subject matter experts, and that lack of visibility just creates variation and more inefficiency. And then finally, with regard to stratification specifically as a use case, these are the most complex definitions. With no agreement on the building blocks or the component parts, combined with that lack of visibility, project timelines just become untenably long. Almost un-doable at times.

Eric Just:                       So I wanted to outline those, and I think the rest of the presentation is really structured around how we address each one of these challenges. So this really now becomes our agenda, and we’re going to go through each of these challenges and talk about how Health Catalyst and our products can help solve some of these challenges.

Eric Just:                       The first challenge is about reuse. The main way that we address reuse is at the platform level. One of the things that we provide is a platform called the Data Operating System, and the Data Operating System supports reusable content. One of the ways that we do that is through DOS Marts, our DOS Mart suite, which was launched a few months ago as part of our rapid response analytics solution. DOS Marts are really data models that center around a specific domain. We have DOS Marts for clinical, claims, cost, surgery, person, and terminology. We're going to be hearing more, especially about terminology, in just a moment. But they really represent curated content based on decades of experience from Health Catalyst team members and our clients.

Eric Just:                       And the idea is that we're trying to get the smaller percentage of data points that are used in 80% of the downstream use cases modeled and easy to use. And what this prevents is reinvention of the wheel. If we've already integrated those multiple data sources, if we've already defined basic calculations like length of stay, the analysts don't need to do that ground-up reinvention every time. There's a place for them to go, and it's there and ready for them to use. In fact, it's a better wheel.

Eric Just:                       The results are more consistent because analysts are using the same starting materials, which are bigger building blocks than going straight to writing that complicated SQL. And that leaves analysts free to focus more on creating value-add business analytics versus writing those complex SQL statements.

Eric Just:                       Another part of our platform that promotes reusability is master reference and terminology data sources. This is all represented in our terminology DOS Mart. And there’s a variety of different terminologies that exist in our platform. And these are things like diagnosis codes, ICD diagnosis codes, CPT procedure codes, lab codes, medication codes. All of these are really, in my view, the dictionary of terms that we use in health IT to answer questions. So having that dictionary in the platform is absolutely critical.

Eric Just:                       And the dictionary on its own is not as valuable as the higher-level concepts we can build on it. So the dictionary is the foundation, and then we actually need to build on these dictionaries to promote more reusability in the platform. And what I mean by that is one of the most fundamental building blocks that we have in our platform, called a value set. And if you don't know what a value set is, it's basically a grouping of terms or codes that define a particular concept. So if the terminology contains a list of diagnosis codes, a value set might be the diagnosis codes that specifically identify a heart failure patient. And our platform comes with over 6,000 value sets built in. And those value sets are sourced from the Value Set Authority Center, for example, which has a large number of value sets available from CMS and HL7.
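
Eric Just:                       To make that a bit more concrete, here is the kind of query a value set enables. This is just an illustrative sketch; the table and column names are stand-ins, not the actual terminology DOS Mart schema.

```sql
-- Illustrative sketch: reference a curated value set by name instead of
-- hard-coding diagnosis codes. Table and column names are stand-ins.
SELECT DISTINCT d.PatientID
FROM Diagnosis AS d
JOIN ValueSetCode AS v
    ON  v.Code       = d.DiagnosisCode
    AND v.CodeSystem = d.CodeSystem
WHERE v.ValueSetName = 'Heart Failure';   -- curated grouping, e.g. sourced from VSAC
```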

Eric Just:                       CMS defines the Chronic Conditions Warehouse, and these are value sets that define chronic conditions. You'll hear a little bit more about that in just a bit. Our platform also includes Bundled Payments for Care Improvement, or BPCI, value sets. So just a host of these value sets that then become the building blocks for building more sophisticated queries. And the idea here is, again, that we don't have to reinvent the wheel. If CMS defined heart failure diagnosis codes, that's probably a good place for me to start. Of course there's always refinement that goes on, but I don't have to reinvent that list if it's there in the platform, easy to use and easy to reference.

Eric Just:                       So this is just a screenshot of the Value Set Authority Center website. This is the source of many of the value sets that are built into our platform. And then I will also add that our data governance tool, Atlas, has value sets built in as well. There's a tab called Value Sets and it allows you to browse the different value sets that sit in the platform. It allows you to see more details, so if I clicked on one of these you'd be able to see the individual codes that comprise it, and maybe information about the source. So we really view these value sets as a fundamental, first-class citizen in our data operating system, and a foundational building block for building more complex queries and, most importantly, promoting that reuse and getting the analysts out of having to reinvent the wheel every time they write a query.

Eric Just:                       Problem number two: analysts are able to provide so little visibility to subject matter experts that it creates variation and more inefficiency. We want to tighten that communication cycle.

Eric Just:                       I’m now going to discuss how we handle providing greater visibility through providing authoring tools that can now break some of this logic out from the database, the complicated database code, and into the world that a business user would understand.

Eric Just:                       I'm going to introduce Population Builder; this is a product that we launched, again, a few months ago as part of our rapid response analytics solution. Population Builder is a tool that greatly accelerates the creation of populations in the DOS platform. It's an easy-to-use tool that enables users to develop, analyze, and visualize populations up to 90% faster. And I'll go out on a limb and say sometimes even quite a bit more than 90%, when you think about how long it would have taken to write that really big query we saw presented earlier.

Eric Just:                       It brings this concept of reusable content and authoring to non-technical users, to business users. And I think you'll be able to see that; I'm going to demo the tool in just a moment. And then, most importantly, it takes the populations that are defined in the tool, the lists of patients, and publishes them into the DOS platform. And when data from Population Builder is published into the DOS platform, it can be referenced from anywhere. So if I define a heart failure population in Population Builder, I can then see that population in other Catalyst tools, I can write queries against it for my own custom analytics, or I can even access it from third-party tools.
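
Eric Just:                       Just as a hypothetical, once that heart failure population is published it behaves like any other table in DOS. Something along these lines, with a made-up table name, is all a downstream query needs.

```sql
-- Hypothetical example: a published population joined like any other table.
-- The population table name here is made up for illustration.
SELECT p.PatientID, e.EncounterID, e.AdmitDateTime
FROM PublishedPopulation_HeartFailure AS p
JOIN Encounter AS e
    ON e.PatientID = p.PatientID
WHERE e.AdmitDateTime >= '2019-01-01';
```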

Eric Just:                       So Population Builder improves that SME and analyst collaboration, subject matter expert and analyst collaboration. I liken it to co-piloting. Now there's a visibility that wasn't there before. I don't have to walk my business user through complex code, I can show them in Population Builder. And maybe they can even go in there and start editing codes, or dragging in additional new blocks of logic. So it really changes the whole conversation from a very technical one to what it really needs to be focused on, which is: how do you want to identify this population?

Eric Just:                       Let's get back now to our list of challenges, and we're going to talk about the last challenge, which is that these are the most complex definitions in terms of defining a population. How do you agree on the building blocks? How do you create enough visibility that you can do this quickly and easily?

Eric Just:                       Do you remember this? This is our black box of all the pieces and parts that are needed for this specific stratification use case of polychronic patients who were recently discharged and high risk for readmission. Here is where we’re going to show how this becomes a lot more transparent, and how we break this into component parts.

Eric Just:                       So we do this through the use of predefined content. And this is where we start to talk about our specific product, the Population Builder stratification module. The stratification module, in a nutshell, is predefined content that provides up to 100% efficiency gains in stratifying patients for population health. And what is in the box here? We already showed Population Builder; now this is the bundle of content that is an add-on to Population Builder. It includes a chronic condition library of all these various chronic conditions. It includes a definition of that comorbidity population, an out-of-the-box definition that identifies patients who have two or more different chronic conditions.
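
Eric Just:                       Just to sketch the idea behind that comorbidity definition, not the shipped logic itself, it boils down to something like counting distinct chronic conditions per patient; the table here is a hypothetical stand-in for the output of the chronic condition library.

```sql
-- Hypothetical sketch of the "two or more chronic conditions" idea.
-- PatientChronicCondition stands in for the chronic condition library output.
SELECT PatientID
FROM PatientChronicCondition
GROUP BY PatientID
HAVING COUNT(DISTINCT ChronicConditionName) >= 2;
```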

Eric Just:                       It includes standard risk models, things like LACE, Charlson-Deyo, and Elixhauser. It includes a definition of transitions of care: patients who are currently admitted or had at least one visit in the last 72 hours. And high ED utilization, where we're using an HCUP definition. It's a little bit more complex than just saying over X visits for all patients in the last few months; it looks at the different patient types and has different thresholds for different types of patients.
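
Eric Just:                       And again, just to sketch the transitions-of-care idea rather than the exact shipped definition, it amounts to something like the following, with an illustrative encounter table.

```sql
-- Hypothetical sketch of transitions of care: currently admitted, or had a
-- visit in the last 72 hours. Encounter is an illustrative stand-in table.
SELECT DISTINCT PatientID
FROM Encounter
WHERE DischargeDateTime IS NULL                              -- currently admitted
   OR EncounterDateTime >= DATEADD(hour, -72, GETDATE());    -- visit in last 72 hours
```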

Eric Just:                       And then finally, our machine learning based predicted readmission. So all of these now become component parts of a much more complex use case, but they're a lot easier to use. All of the different pieces that we talked about before are represented in this content library. And the great part of all of this is that it's built on a foundation of integrated claims and clinical data. Part of the stratification module is generating that chassis, such that all of these different pieces are running off of an integrated repository of claims and clinical data. Which is a huge challenge in healthcare analytics in general.

Eric Just:                       So, I'm going to end with a brief conclusion and then we will have time for plenty of Q&A. Going back to our agenda, based on the challenges in stratification: analysts are operating in environments with limited reuse, forcing them to reinvent the wheel. I hope that I demonstrated that our Data Operating System is indeed a platform that supports reusable content. I showed the DOS Marts, and I showed the terminology DOS Mart that included the standard terminologies and the value sets that become the building blocks for almost everything we have demoed in the products today. So that platform support, it's extremely important to have that at the platform level.

Eric Just:                       Analysts are typically able to provide so little visibility to subject matter experts that it creates variation and more inefficiency. I hope the demo of Population Builder demonstrated that there is a better way than writing all of this in complicated code. There is a way that provides visibility such that the business user can understand it, and maybe even author their own populations. That was Population Builder. Now both of these are foundational to our last piece here. And we talked about the challenges of stratification in general, just being such a complex definition that it relies on so much content that it's really hard to invent from the ground up.

Eric Just:                       And then I hope I demonstrated that our predefined content, through the stratification module, makes this a much, much easier task. The last demo was basically dragging in three filters that represent months and months of work.

Eric Just:                       So I love ending these with a success story, and there was a link to this success story in our marketing material. We have wonderful partners at Hospital Sisters Health System. They are the alpha site for the stratification module, and this is where we saw that 100% efficiency gain. So we will send out the link in the follow-up to the webinar, and you can read this story. It's a great story about how an organization had a very manual process, how they adopted our tool, and how their care management and population health professionals can now focus on what matters most, which is really getting to the answer rather than trying to do all this manual work.

Eric Just:                       I’m going to end by saying thank you so much for your time. I know everybody’s time is valuable. And I really appreciate that you would spend the time with us today. I also want to say thank you to the many, many teams involved, both professional services and our technology organization who have developed this. We’ve got an incredible team of developers, informaticists, architects who are putting together all the pieces to create something that is greater than the sum of its parts. And I hope I demonstrated that today as well.

Eric Just:                       So I’m going to pause and allow Sarah to provide some content on our summit and then we’ll get into Q&A.

Sarah Stokes:                Great, thank you. Yeah, if you'll just click over to the next slide for me. Okay, before we move into the Q&A, we do have a few giveaways for complimentary Healthcare Analytics Summit registrations. This is an annual event with more than 1,000 provider and payer attendees, and it's occurring September 10-12 this year in Salt Lake City, Utah. The event will feature brilliant keynote speakers from the healthcare industry and beyond, and this is just a glimpse at some of the speakers we will be highlighting at that event. And again, it's in just over a month now.

Sarah Stokes:                So for our first giveaway: if you know that you're able to attend and you're interested in being considered for complimentary passes for a team of three to attend the Healthcare Analytics Summit, please answer this poll question. We'll give you just a few moments here to do that. And this is your reminder, too, that we are rapidly approaching the Q&A, so if you have a question that's top of mind, please do submit that and we'll see if Eric can get to it during our Q&A time.

Sarah Stokes:                Okay, I’m going to go ahead and close that poll. You’re going to have to act fast on these. All right, and then in our next poll question, if you know that you’re able to attend and are interested in being considered for a complimentary individual pass to attend the Healthcare Analytics Summit, please answer this poll question. So maybe if you don’t have a full team that you think can travel, maybe you still want to attend. So let us know in this poll question. And we’ll give you just a few moments here again. People are a little quicker on the draw for this one.

Sarah Stokes:                Okay, we're going to go ahead and close that poll. And then we have one more to launch before we go ahead and dive into the Q&A. All right, so lastly, as Eric mentioned, while today's webinar was focused on the Population Builder stratification module, some of you may want to learn more about this tool, or maybe you'd like to learn about other Health Catalyst products or professional services. If you fall into this category, please answer this poll question. And we will go ahead and leave that one open as we dive into our questions.

Sarah Stokes:                All right, so I think we’ll just start at the top, you can let me know if we need to move on down.

Eric Just:                       First question. In order to provide a robust report we need valuable data that needs to be obtained from external sources, like ADT or ER discharge summaries, for example. Are we assuming that this portion is being integrated into the risk stratification report? That's a challenge most of us come across.

Eric Just:                       So the product that I showed, and the process of stratifying populations, is based on data in the data operating system. So if data can come into the data operating system, it can be used. The other thing I should mention about Pop Builder is that it's very customizable, so if there are data sources coming in that are not represented in our DOS Marts, that data can be easily configured and wired into Population Builder.

Eric Just:                       But at a fundamental level, everything you see is data that's coming into the DOS platform. And we do have capabilities to bring ADT and ER discharge summaries, for example, into the platform. So the answer in some cases is that some of that data will flow into DOS Marts and just be available there. And in other cases it could be implemented as an extension, where we pull a data element that may not be represented in the stock components. And it's not a super technical process; the tool is good at being able to pull in data from data sources outside of the standard data models that we have behind it.

Sarah Stokes:                Okay. Next question comes from Anu, who asks: there are several BI tools out there and healthcare organizations have a hard time choosing the one that suits them best, but it comes with a cost. How do you break the ice to get the organization's buy-in?

Eric Just:                       That's a good question. It's probably the subject of a webinar on its own. But really, tie it to value. Show how you can accelerate the path to getting the answers you need, and more importantly, how you're getting real results with it. And that's part of the reason why Health Catalyst has the strategy that we do around proving our value to clients with results, improvements, and success stories; you'll see them on our website. But I think tying it to value is the most important part of getting the organization's buy-in.

Sarah Stokes:                Okay, this next one is a multi-layered one that comes from Jay who asks, what interoperability standards are used in your DOS? How does the data get into DOS from EHR? And how do you query the data from DOS using standard SQL? Or if DOS provides an API for external access?

Eric Just:                       Yeah.

Sarah Stokes:                So a couple questions there.

Eric Just:                       When you're querying data from DOS, there are many organizations that are querying with direct SQL right now. And we do have some API access to DOS, but we are building out an even more robust API roadmap that will be the subject of a future discussion. And then interoperability standards: as many know, we acquired a company, Medicity, which played in the interoperability space. And with that came a lot of capabilities in HL7 and FHIR, and basically the whole menu of interoperability standards that we see today. So that is coming into DOS as well.

Sarah Stokes:                Okay. Next question is from Naresh who asks, well, he says thank you for the great presentation. So high praise there. And he says he has a specific question regarding master reference and terminology servers. Are the value sets updated automatically in the master reference tables?

Eric Just:                       Yes, they are. That is part of the terminology DOS Mart: keeping all of that information up to date. And they do change, so great question. The answer is yes.

Sarah Stokes:                Okay. And this was pulled off of another question as well. Monica was asking if you're able to see the SQL once you've run, or executed, a process through Population Builder…

Eric Just:                       I should have shown that. But there is a way in the workspace. So as I was defining that hypertension population, there was a button that I could press that says "generate SQL" and it pulls it up in a window. You can copy and paste it, or you could just publish it and have it be executed automatically in the platform, but there is a way to look at the SQL that is generated. And honestly, it's easier to read than you might expect, it's quite human readable, because it's generated by a program after the criteria have been put in.

Sarah Stokes:                Okay, next question comes from Sam, who asks: what have you found to be the most reliable data sources to feed into the tool, CCDA, QRDA, direct feeds from the EDW, claims? Any thoughts on how to deal with garbage-in, garbage-out problems?

Eric Just:                       Another webinar. For sure, but great question. Truthfully, I believe one of the most reliable data sources is the EHR. Can you bring up the question again, I just want to see-

Sarah Stokes:                Yup.

Eric Just:                       The CCDs and QRDAs are good formats, but it's wildly subject to how they've been implemented in the EHR, and whether the EHR's workflow matches up to those file formats. So it's very dependent on the implementing EHR. Claims, again, can be a good, robust data source, but there's a lot of variation in the formats that are provided in claims. So it becomes very challenging. And then, thoughts on how to deal with garbage-in, garbage-out, data quality.

Eric Just:                       Building data quality into the process is extremely important. And it's part of what we do when we implement: we do data quality checks and really help people understand the data coming in. And when people open up the Catalyst platform and start to find issues with data, more often than not it's a garbage-in, garbage-out problem. So we're working on getting out ahead of that even better than we have before.

Sarah Stokes:                Okay, next question’s from Angela who just asks, have you used the tool with only claims data?

Eric Just:                       Yes. So Population Builder can run what we call grains, and the grains basically define what the major source of the data is. So we can run a patient grain, which typically uses clinical data. We can run a member grain, which typically uses claims data. Or we can run the stratification module, where we're integrating those data sources. I didn't demonstrate Population Builder in claims-only mode, but there's another mode where you're dragging in filters that are specific to claims. So as long as that's a data source in the DOS platform, it absolutely works.

Sarah Stokes:                All right, I'm going to kind of merge these next two together. So Don asks, can you integrate this product with Red Cap? And then Monica had asked, can you use this in collaboration with Epic? So just kind of asking about what specific tools you're able to tie in with.

Eric Just:                       Okay, I’ll talk about Red Cap first because Red Cap is a very valuable data source for this. For those that don’t know, Red Cap is a data collection platform that’s often used in research. It allows researchers to collect data elements that may not be in the sources themselves. And it relates back to what I said about the platform and the tool being completely extensible, so we don’t have a standard Red Cap snap-in for Population Builder.

Eric Just:                       But the data from Red Cap can easily be configured, through standard product, to get pulled into the data operating system, and we can create the required filters to be able to query it within the Population Builder tool. So there is an integration path with Red Cap.

Eric Just:                       In terms of collaboration with Epic, obviously some Epic data sources can flow into our platform, so that's one way. Another way is deploying the application within the workflow, doing what we call a closed-loop configuration. And that is possible as well.

Sarah Stokes:                All right, our next question comes from Krish, who asks: how do you integrate and normalize data, including claims, images, and visuals from disparate sources, for Pop Builder?

Eric Just:                       Yeah, that's part of the implementation process. It goes back to the DOS Marts. I mentioned DOS Marts as a reusable content component of our data operating system, but that's also where that data becomes integrated and normalized. And we have a process of pulling in data, reviewing data profiles, looking at data quality; all of that is part of the DOS Mart implementation, so that once those are in, Population Builder can be spun up very quickly. It can be a tricky and long process at times, but we have a tool set that makes it much faster than it would be to do manually.

Sarah Stokes:                Okay. We are at time. You're okay to go over a couple minutes though, right?

Eric Just:                       Absolutely.

Sarah Stokes:                Okay, so we will take a few more questions. This next question comes from Sam who asks, can you export all assumptions, parameters, goodness of fit, et cetera of the analytical models you’re using?

Eric Just:                       There are export capabilities with respect to that. I believe the question was based around the machine learning algorithm, and all of that data exists in the platform and is part of the output configuration.

Sarah Stokes:                Okay. Next is from Jay who asks, are you planning to adopt FHIR standard into your product in some way?

Eric Just:                       Absolutely. FHIR is a very important standard for us. And again, a subject of another webinar, but there’s a lot of points where FHIR can be brought into our platform and we’re building that.

Sarah Stokes:                Okay, next question comes from Alex, who asks: it seems for the most part that the information you use to stratify your patients comes from claims data. As most of us know, claims data typically comes with a three-month lag, and by the time this data comes in your high-risk patients could have been readmitted already. With that being said, how are you and your team deploying real-time data with machine learning that could be clinically impactful in a time-sensitive manner? Also, is this built on R Markdown?

Eric Just:                       Absolutely. So again, we're using a combination of claims and clinical data. In the example that I did, of polychronic patients who are at high risk for readmission, the data around the polychronic conditions was being pulled from an integrated repository of claims and clinical data, so we could look at either or both of those sources to determine if the patient had the condition. The middle part, where I showed recent discharges, that was from, I wouldn't call it real time, but a daily feed of data from the EHR. And then finally, the last piece, readmission, was probably a mixture of both pieces.

Eric Just:                       So we're blending the claims data with more timely data sources. And we also have a component of our data operating system called Fabric Real Time that is pulling data into the platform in closer to real time than a daily feed.

Sarah Stokes:                Okay, next question is from Todd, who asks: does Population Builder enable a user to query who is the next most valuable patient that I need to intervene with, and why? Such as queries based on prospective cost or prospective decline in health.

Eric Just:                       That's a great question, and it can enable that kind of query. That's just another question that you can ask it. I showed one example, but there are a lot of different ways that you can mix and match those tools to get at what you're talking about there. And a lot of it is also how you configure the output, and how your downstream application would look at the output to be able to see, perhaps, the factors that caused that patient to be high risk.

Sarah Stokes:                Okay. Next is from Angela and she asks, has this tool been used with a primary psychiatric population?

Eric Just:                       I don’t know if I can answer that specifically, I will say that it could. I don’t know of a specific example where that’s been done. But I would say it’s absolutely possible to do that.

Sarah Stokes:                Okay, and I think this will be our last question here. It comes from Vinod who asks, how often do you update and take into consideration the geographical location of the populations?

Eric Just:                       That's a great question, and probably something that's hard to represent in the demo that I did. We're pulling all of the geographical information directly from the sources, so whatever is in the EHR or other sources pertaining to that patient is what's coming in regarding patient location. Again, I think the concept of extensibility comes into play, so where you had a more timely and more accurate feed of patient location, that could easily be brought in to augment the data in the tool. But most of the organizations that we're dealing with are getting their locations directly from the source systems that feed the data operating system itself.

Sarah Stokes:                Okay. Well, I think that's going to wrap things up here. A lot of positive words coming through now, but no more questions, it seems. Okay, on that note, we want to thank Eric for taking the time to present to us today. And we also want to thank all of you for joining us. Please do us a favor and complete the short survey as you exit the presentation. Your answers will help us deliver more valuable content in the future.

Sarah Stokes:                As a reminder, by midday tomorrow you will receive an email with links to an on-demand recording, the presentation slides, and a transcription. We'd also like to invite you to join us for our next webinar, Population Health Management: Path to Value, which will be held next Wednesday, August 7th. This presentation will feature John Moore, president and founder of Chilmark Research, who will delve into the state of the US healthcare market, analyze our slow move to value-based care, and provide key takeaways to help you better prepare for that shift. We hope that you will join us for that.

Sarah Stokes:                On behalf of Eric and all of us here at Health Catalyst, thank you for joining us today. This webinar is now concluded.