Healthcare Mergers and Acquisitions: Reducing IT Consolidation Expenses with a Data Operating System

Dale Sanders:                        Hi everyone. Thanks for sharing your time with us today. I'll do my best to make it a good use of your time. It's an interesting topic for me, both personally and professionally. In this new role that I have on the vendor side of the house, I'm fortunate to have the opportunity to watch the entire industry, not only across our client base but also across client prospects, watching the evolution of mergers and acquisitions and the effect that data has on the pre- and post-merger activities of these organizations. It prompted me to put some of these thoughts and observations together in the slides here today. Thanks, Russ Tabot, for this cool little graphic showing an unusual acquisition, one fish to another.

Alright. In full disclosure, everyone knows I work for Health Catalyst, and the technology that we produce is definitely related to the concepts that I advocate in this lecture. No sense in getting around that. That said, it's not a sales pitch. The technology strategies and concepts that I advocate herein are the same ones that I would follow myself if I were still an operational healthcare CIO. These webinars are always grounded in our operating principles, and in particular this one, which is about transparency.

We take these operating principles extremely seriously. We tell the truth and we face the truth about ourselves. We act as the same company, culture, and people in all settings. It doesn't matter where you see us, you'll see the same behaviors and the same values. We treat all confidential information appropriately, whether it's ours, another vendor's, a client's, or a patient's. The bottom line here is that we recommend the best solutions for our clients, whether or not those solutions come from Health Catalyst. We're firm believers in what goes around comes around, the power of karma, and operating in a more selfless mode than a selfish one. Everything that you see today is within the context of this operating principle and our values.

Today's story: we're going to talk about digitizing healthcare, emphasizing that EHRs were just the beginning. We invested a lot of money and time in EHRs, but the reality is we are just beginning to digitize the patient and the process experience in healthcare. I'll talk a little bit about some high-level concepts around mergers, acquisitions, and partnerships, and the IT strategy associated with those. Then I'm going to wrap it up by talking about the data operating system and connecting it back to the role that it plays in mergers, acquisitions, and partnerships.

For those of you … I did notice that I think 13% of the attendees were executives. I would ask that you hang in there today. We're going to talk some technical talk here. The reason I think it's important for executives to stay with these conversations is the recognition that, for better or worse, faster or slower, your company now runs at the speed of software. That's just the reality we live in. If we're going to improve healthcare in this country, make it better, faster, more affordable, and more accessible, we have to develop better software and make it available to all parts of the industry.

Everything that you want and need to do today strategically is either helped or hindered by software and data. My closing thought on this is that all C-levels need to be a little bit Chief Information Officer and a little bit Chief Digital Officer. That's just the nature of the world we operate in. If you decide to delegate all of that knowledge to others, you're delegating what amounts to the future of your company. Now certainly trust is an important part of our culture, but you have to understand the domain within which you're operating. The domain of healthcare today is driven largely by software, as are all industries.

Just a reminder that digital disruption is already happening. The world's largest taxi company owns no taxis. The world's largest voice and video communications companies own no telco. The most popular media company owns no content. The largest lodging company owns no property. The most valuable retailer owns no inventory. The world's largest software vendors don't write the apps. This is a very different world, digitally and economically, than we've ever seen before. The rules about what makes great products and great economic models are being turned upside down. I would say that in the digital future of healthcare, the world's largest and most successful companies will own no hospitals.

There's Mr. Burns, mwaa, rubbing his hands together. There's a little tongue-in-cheek humor there, friends, but the reality is there's something on the horizon, and I would imagine that this statement, though somewhat joking, will largely be true within the next five to ten years. Speaking of the digitization of health, I've always found interesting parallels between personal health and car maintenance, where an ounce of prevention is worth a pound of cure. Every 10 hours, Tesla collects the equivalent of one million miles of driving data from all of their cars. Think about that. Every 10 hours, the equivalent of one million driving miles of data about their cars, their performance, where they're going, where they've been. That amounts to 25 gigabytes per car per hour. That's a lot of data on an hourly basis. In healthcare, having had to collect this data and calculate it myself, we collect about 100 megabytes per patient per year on average. Now that's not counting digital images, which are a slightly different form of data, so I'm not including them.

By the way, neither is Tesla. Tesla right now has some experiments where they're collecting imagery data from their cameras, but largely the data they're collecting is not imagery, it's telemetry and the performance of the car. That's the same kind of data, categorically, that we need to collect about patients. So there you are in your Tesla, hands-free driving, data being streamed to the cloud on a constant basis, and Elon Musk omnisciently watching over all of us. This is a quote from the Chief Commercial Officer of Innovation at Hitachi. Hitachi makes a lot of the sensors in all automobiles, not just the automobiles that are headed toward autonomous driving. He said recently at a conference, "We can fix problems in your car and make it safer long before you know you need it. We believe that 10,000 fatalities and 500,000 injuries per year will be prevented in the U.S. alone. That's what you can do with data and predictive and preventive maintenance in a car." They're achieving that through the digitization of the automobile experience.
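To make the scale gap concrete, here is a quick back-of-the-envelope comparison using only the figures quoted above; it assumes decimal units (1 GB = 1,000 MB), which is an assumption on my part:

```python
# Rough comparison of Tesla telemetry volume vs. healthcare data per patient.
# Figures from the talk: 25 GB per car per hour; ~100 MB per patient per year
# (digital images excluded in both cases).

TESLA_MB_PER_CAR_HOUR = 25 * 1000   # 25 GB/hour, assuming decimal units
PATIENT_MB_PER_YEAR = 100           # average, excluding imaging

# One hour of telemetry from a single car equals how many patient-years of data?
patient_years_per_car_hour = TESLA_MB_PER_CAR_HOUR / PATIENT_MB_PER_YEAR
print(patient_years_per_car_hour)   # 250.0
```

On those assumptions, a single car produces as much data in one hour as roughly 250 patient-years of healthcare records.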

I would just mention that in the digital future of healthcare, we have to think about digitizing two things. We have to digitize the patient, that’s the asset that we’re trying to manage and improve, and we have to digitize the processes associated with that patient, so how we optimize the experience for the patient. So digitizing the thing, the asset, and digitizing the process. You could argue that we’ve digitized the process with EHRs, laboratory systems, radiology systems, registration, scheduling, even human resources, materials management, general ledger. Those are all digitization of processes, but we haven’t digitized the patient outside the ICU. The ICU is the most digitally oriented part of the patient experience right now. You could argue it has some problems too, but we’re not digitizing the patient experience to the extent that other industries have digitized their assets like automobiles.

I use this slide over and over, I think I’ve been using it now for something like six years, just to reinforce that 80% of the factors affecting health outcomes fall outside the traditional delivery of healthcare. We only traditionally have about 20% influence on length of life and quality of life. If we’re going to be an accountable care organization, we have to expand our skills and our data ecosystem up and down this continuum. We have to collect data about behaviors. We have to collect data about social and economic factors, the physical environment of our patients. We have to roll that together with a better digitized experience around the clinical care that we provide today.

The breadth of the human health data ecosystem is significant, and I've developed a growing appreciation for this over the last few years. Identifying what we need for personalized care and community health means collecting data in each one of these bubbles. This cartoon, as rudimentary as it appears, is actually our strategic data acquisition roadmap at Health Catalyst, and I would advocate that it be yours as well in your health system. You need to start thinking about more than just the lower left portion of this cartoon. You need to start asking yourself how you're going to round out a more robust picture of healthcare encounter data, for one.

How are we going to add biometric data to our strategy? What are we going to do about collecting outcomes data from the patients? Are we going to utilize consumer-based data to round out our risk assessments and predictive models? Are we going to collect the small but very important socioeconomic data about that patient? What are we doing about genomic, epigenetic, familial, and microbiome data, and about rounding out the digital image and the digital appreciation that we have for the patient at the center of this diagram? That's the digitization of the patient that I'm talking about. And I might emphasize, by the way, that we have barely any data on healthy patients. What little data we are collecting around patient care right now is based on a very thin slice of patients who, for the most part, present for treatment. It's really important that we start collecting data on healthy patients to try to understand why they are healthy, and to use them as role models for patients who seek better health.

Humans tend to gravitate toward freedom of choice. We'll have six billion smartphones by 2020 and a population of eight billion people. What does this mean? If we watch the evolution of healthcare, or rather of IT in general, we went from mainframes, where there was really freedom from choice, to client-server and thick PCs in the 80s and 90s, to web applications and thin PCs in the 2000s, and finally to mobile smartphones, where we have almost ultimate freedom of choice in the software we use to augment our lives personally. Well, in contrast to that, we went the other direction in healthcare IT during similar periods. In the 1990s, healthcare was characterized by sort of best-of-breed architectures. At the center of those best-of-breed architectures were HL7 engines and messages tying together disparate systems such as EMRs, lab systems, radiology, pharmacy systems, registration, scheduling, and billing.

Well, what we found is that they were fragile and expensive to maintain, but they did have flexibility. We decided the fragility and expense of maintaining those HL7 interfaces wasn't worth the pain, so we migrated toward a single vendor and a single data model, where all of those formerly disparate systems are now provided by monolithic vendors. They tend to be expensive to buy. They're certainly less fragile than HL7 in some ways; in some ways they're not. They're very difficult to upgrade because of the tight coupling between all these systems, whereas you had loose coupling in the diagram on the left; you could upgrade lab without affecting pharmacy, for example. And they're certainly less flexible. You can't move products and vendors in and out of this architecture very well, so you're locked in.

In the meantime, this is what happened to us as consumers. This is a screenshot from my iPhone. I counted: I have 103 applications on my iPhone from 89 different vendors. They're all built on a common platform with open software standards. That's what makes this experience and this freedom of choice possible. I wouldn't expect, nor would I hope for, a single vendor to meet all of my needs. We're way beyond that today. I wouldn't even want that, for all sorts of reasons.

Now let's talk a little bit about mergers, acquisitions, and partnerships, and some of the IT strategies that can help or hurt in these situations. Let me just emphasize that it's not over. The rate of M&A activity in healthcare has been staggering. For 11 straight quarters, we've had more than 200 M&As in healthcare, totaling $49.6 billion. This is from a report published by PwC last month. Eleven straight quarters of more than 200 M&As in healthcare. If anyone thinks it's going to stop, I think we still have quite a bit of runway left in this activity.

One of the interesting things that McKinsey points out is that top performers across all industries in M&A focus first on data integration, and have a plan to do so within six months post-merger. Just put that in the back of your mind. We'll talk a little bit more about this later, but data integration is critical to the post-merger value of the activity, critical. The longer you delay that data integration, the longer you delay the value that you hope to achieve from the M&A. Another interesting point that McKinsey mentions is that 40% of the M&A value in healthcare is directly tied to IT strategy. If you don't think through your IT strategy, both pre- and post-merger, it's likely you're going to lose 40% of the value that you hope to achieve. You're going to leave that on the table.

What I see, with very rare exception, is that IT strategy plays almost no part in the discussions of healthcare mergers and acquisitions. There are a few exceptions, some of our notable clients being very good at this, but I would say in 90% of what I watch in the industry from an M&A perspective, the CIO and the IT strategy are blocked out of the conference room. I think that's a reflection of our low digital quotient in healthcare. We still don't appreciate the importance of IT strategy and IT as an enabler of value in healthcare. It's still largely perceived as an administrative burden and overhead, not as a strategic asset.

A couple of my assertions around this. I like to say that your company is not integrated until your data is integrated. Boy do I see this in the real world. We’ll go through an M&A, no data integration strategy was lined up in the pre-merger activity, post-merger six, seven, eight, nine months into it, the new organization still can’t produce basic KPIs and basic financial performance indicators about the new company. Even more difficult is calculating clinical quality measures that have a risk based financial component associated with them. I see this all the time. It’s not unusual to see even 9 to 12 months later enormous struggles around producing basic KPIs around the new company.

I'd argue that HIEs are not sufficient for data integration, not even close. They are a very small piece of what needs to be integrated in the data ecosystem of an M&A, a teeny tiny slice. We're not talking about just clinical data that needs to be integrated. We're talking about supply chain, human resources, finance, GL. The entire ecosystem of your IT infrastructure needs an integration strategy, and it needs to be focused on data. This is where I have a very firm opinion that ripping and replacing EMR and ERP systems with a single common vendor is not an affordable or timely strategy. Unfortunately, I see this being the default strategy in healthcare, especially among the C-suite. They immediately assume that after a merger or acquisition, we have to figure out a way to achieve a common EHR or a common ERP platform across the new organization. I'm just saying that is absolutely not affordable or timely anymore. I can't imagine advocating this if I were still an operational CIO.

Finally, I assert that M&A strategy in the digital world is more about data acquisition than bricks and mortar. This is not just about acquiring hospitals and clinics. You have to think about what data you're acquiring as part of the merger and acquisition that will help you deliver better, faster care to the broader scope of patients you now serve as a consequence of the M&A. What does the data acquisition strategy look like? I would argue that in some cases you might care little or nothing about the bricks and mortar, and the M&A strategy might be all about data. Not necessarily bricks and mortar, or even personnel or other assets; what you're going after in the future of healthcare is data as a strategic asset.

Let me talk a little bit about the way you can integrate IT, and bounce that against a traditional diagram of the technology stack. At the lower level of the IT stack you have computing infrastructure, servers, networks, data center storage, all that sort of thing. In your IT strategy, pre-merger, post-merger, you can look at this and you can say well we might be able to consolidate some of that, especially data centers, virtual data centers. We might be able to consolidate networks. We’re certainly going to have to consolidate identity management across the network. There’s usually some opportunity in an IT integration strategy at this level, although it’s becoming less and less important in terms of financial value to the organizations to focus on this layer.

The most important part of the IT strategy at this level is actually starting to plot a strategy toward the cloud, toward the large cloud vendors like Amazon, Google, and Microsoft, and figuring out how to leverage the utility computing that's available there. I can tell you right now, if I were still a practicing CIO, the last thing I would be doing is investing more in my own data centers. There's absolutely no way that that's scalable anymore. It's also a lot less secure than what Google, Microsoft, and Amazon can provide now.

Moving up the stack, we get to databases and operating systems: Oracle, SQL, MySQL, Hadoop, Spark, iOS, Android, Linux, Windows. Those are harder to integrate, because when you start playing with this level of the technology stack, you start to affect the applications above it, and it becomes harder and harder to integrate at this level without affecting the applications that reside on top. So it's a little challenging to build an IT integration strategy at this level.

Now, the data content layer that sits on top of this is the layer at which opportunity for integration starts to emerge that's affordable, high value, and quick to deliver. I'll talk a little bit more about that in just a second. Software applications in healthcare are currently very difficult to integrate and consolidate, because if you start to consolidate at this level, it's a vertical problem through the entire stack, sometimes all the way down to the computing infrastructure, and certainly through the databases and operating systems. In order to achieve consolidation at the application level, the current software environment in healthcare requires that you integrate all three of these layers in the technology stack. That's what makes it so expensive.

Unfortunately, there's a tendency in healthcare to start here and assume that this is the place to start the integration strategy. But as McKinsey and others have observed across industries, and this is not just my opinion, the data content layer is the only layer in the stack that can be peeled away without impacting the other layers. You can peel data content away from data models, populate new applications with it, reuse it, and repurpose it, and it has no vertical effect on the other three layers of the stack. So starting your IT integration strategy at the data content layer is the way to make the value of the merger and acquisition bear fruit sooner.

All right, let's talk about some of the common motives behind M&A in healthcare for a second. Certainly we're always interested in economies of scale. That tends to be the most dominant motive right now in healthcare M&As, the assumption being that we can be more efficient through consolidation and shared services and other infrastructure, including IT. We can combine scarce resources and achieve economies of scale. We can move into complementary markets, either by geography or by product; we can expand into geographies where we haven't provided services in the past, or we can expand into services and products that we've never offered before. We can spread the risk and reduce it through larger populations and greater revenue, another important motive. Sometimes we can find opportunities to improve an underperforming organization or asset, and extract that hidden revenue and hidden opportunity that's being left on the table. Those are the most common motives that I see in M&As in healthcare right now.

IT for the most part falls under economies of scale. That's where the mindset of most executives in healthcare is right now: that IT is going to make us more efficient through shared services and infrastructure that leverage economies of scale. I would say that's a little short-sighted. Outside a very small number of forward-thinking organizations, there generally isn't a strategy of any significance beyond the hope of administrative savings through IT consolidation. The strategic value of data acquisition in an M&A is still largely ignored. McKinsey has a really good report that I highly recommend called Understanding the Strategic Value of IT in an M&A. It's a great read, and an easy read. I respect the McKinsey folks probably more than any other consulting company for their thoughtfulness and their insights. I highly recommend that report.

Some of what they suggest in that report is that 10% to 15% of the cost savings come from a successful IT integration strategy. If you think about the size of many M&As and the financial justification for them, leaving 10% to 15% of your opportunity on the table is in some cases hundreds of millions, if not billions, of dollars in today's M&A world. It's certainly hundreds of millions, and absolutely, positively tens of millions, of dollars that you're leaving on the table if you don't have a well-thought-out, insightful IT integration strategy. Among the behaviors McKinsey suggests lead to a successful IT strategy, number one is that the acquirer first gets its own IT house in order.
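As a purely hypothetical illustration of that 10% to 15% range, here is a small sketch; the deal's expected total savings figure is invented for the example:

```python
# Illustrative only: the share of total expected M&A cost savings that a
# successful IT integration strategy accounts for (10-15%, per the talk).

def it_savings_range(total_expected_savings: int) -> tuple[int, int]:
    """Return the (low, high) dollar share attributable to IT integration."""
    return (total_expected_savings * 10 // 100, total_expected_savings * 15 // 100)

# Hypothetical deal expecting $1B in total cost savings:
low, high = it_savings_range(1_000_000_000)
print(low, high)  # 100000000 150000000
```

In other words, skipping the IT integration plan on a deal of that hypothetical size means walking away from $100M to $150M.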

Quite often this means pausing, and if you have a strategic view of M&A, you'll have to think about this one, two, three years ahead of the actual M&A activity. You've got to get your own IT house in order first before you take on the chaos of another system and try to integrate its IT into yours. It's not linearly more complicated, it's exponentially more complicated to bring on another system if you have problems with your own IT house as the acquirer. Some forward-thinking organizations, mostly outside of healthcare, are developing what's known as service-oriented architectures, anticipating the need to be flexible and adaptable. That term is getting a little outdated, and I'll talk more about it later, but it's essentially a service-oriented architecture that looks a lot like the data operating system I'll describe soon.

Second, IT leaders are heavily involved in the due diligence process prior to the acquisition. The CIO and the Chief Digital Officer are sitting at the table; they're not locked out of the conference room. In fact, a lot of the forecasts for revenue growth and cost synergies have historically been driven by financial rules of thumb, and they ignore the challenges of integrating the business and clinical functions that IT enables. IT can play an important part in the due diligence, especially with a data warehouse. If you understand your own environment, you can model the acquisition yourself, and you can negotiate the acquisition from a position of strength, because you know your own P&L and your own operations well enough to leverage that data for negotiating strength.

Then third, carefully plan the post-merger integration during the due diligence phase and factor those costs into the overall acquisition cost of the M&A. Those are the three dominant behaviors that McKinsey has identified among the leaders in M&A. By the way, it's hard to drive value from M&As. We all know that. M&As traditionally underperform their aspirations, but if you follow these three practices as they relate to IT, you'll at least mitigate that risk and hopefully capture that 10% to 15% that you'd otherwise leave on the table.

Then finally, I'd just emphasize that the M&A should include a data integration plan that can be implemented within six months post-merger. Six months post-merger, you have to have the ability to integrate key data from the new organization into the acquiring organization so that you can manage the new company effectively. I would emphasize that this is not a systems or applications integration plan; this is a data integration plan, and it's largely focused on the availability and leverage of a data warehouse. That's the centerpiece of a data integration plan that's active within six months post-merger. It's a data warehouse that enables that.

The prevailing solution in healthcare is to rip and replace ERP, EHR, and other systems. That’s just the default solution I see over and over again. I would argue you’ll be finished in three to four years if you’re lucky. You’ll be 74% on average over schedule, you’ll be 59% over budget, and you’ll deliver 56% less value than predicted. Those are the numbers. Those are not my opinions. Those are the numbers studied across all industries by McKinsey and by Standish. In the meantime, while all of this is going on, your risks as an organization in a fast moving industry are going up, and your margins are going down. I just can’t imagine following a rip and replace strategy in today’s world. There’s just too much risk and too much cost. There’s just not enough juice in the orange to make it valuable.
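To see what those overrun statistics do in practice, here is a sketch applying them to a notional project; the baseline schedule, budget, and value figures below are invented for illustration, while the percentages are the ones cited above:

```python
# Apply the cited averages -- 74% over schedule, 59% over budget,
# 56% less value than predicted -- to a hypothetical rip-and-replace plan.

planned_years = 3.0         # hypothetical planned schedule
planned_budget_m = 200.0    # hypothetical planned budget, in $M
promised_value_m = 500.0    # hypothetical predicted value, in $M

actual_years = planned_years * 1.74                # 74% over schedule
actual_budget_m = planned_budget_m * 1.59          # 59% over budget
delivered_value_m = promised_value_m * (1 - 0.56)  # 56% less value

print(round(actual_years, 2), round(actual_budget_m), round(delivered_value_m))
```

On those invented baselines, a promised 3-year, $200M project delivering $500M of value becomes a roughly 5-year, $318M project delivering about $220M.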

So let me share a few numbers. This is just about EHRs. I'm talking a lot about EHRs, but you have to think about all the disparate information systems in your organization that might be affected by a rip-and-replace strategy, and it's more than just EHRs. You have dozens of applications in today's healthcare systems. These are my personally calculated numbers based on firsthand experience in this world. This is what I've observed. As a minimum, and I encourage all of you to get out your calculators and see if this feels right in your organizations, ripping and replacing an EHR is going to cost you at least $13,000 per employee. If you've got 8,000 to 10,000 employees in your organization, multiply that by $13,000. That's the minimum you're going to spend on a new EHR. It's tens to hundreds of millions of dollars. At a minimum, you'll spend $41,000 per physician. Again, these are conservative numbers. They're very defensible, and they're still very, very high. Very expensive; hundreds of millions of dollars is not uncommon in the journey to rip and replace or install a greenfield EHR.
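Those per-head minimums make the math easy to run for your own organization. A minimal sketch follows; the organization size and physician count are hypothetical, while the per-employee and per-physician figures are the minimums quoted above:

```python
# Minimum EHR rip-and-replace cost estimates from the talk:
# at least $13,000 per employee, or at least $41,000 per physician.
COST_PER_EMPLOYEE = 13_000
COST_PER_PHYSICIAN = 41_000

def min_cost_by_employees(employees: int) -> int:
    """Lower-bound cost estimate from headcount."""
    return employees * COST_PER_EMPLOYEE

def min_cost_by_physicians(physicians: int) -> int:
    """Lower-bound cost estimate from physician count."""
    return physicians * COST_PER_PHYSICIAN

# Hypothetical 10,000-employee system with 800 physicians:
print(min_cost_by_employees(10_000))   # 130000000 -> at least $130M
print(min_cost_by_physicians(800))     # 32800000  -> at least $32.8M
```

Even at the conservative per-employee minimum, a 10,000-employee system is looking at $130M or more before the project overruns discussed above.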

In some ways, ripping and replacing an EHR is harder than a greenfield installation. I point to one of my old organizations, Intermountain, where we had Help, a homegrown EHR, for many years, and the pain of ripping and replacing Help has been tremendous. God bless Intermountain and Cerner for the effort they're going through right now to do so, but there's a reason that Help has been around since 1967 and is hard to replace. The pain of ripping and replacing an EHR is orders of magnitude greater than starting from a greenfield implementation.

Becker's recently published a summary of a report indicating that 61% of healthcare officials report a terrible or poor ROI on EHRs. That was a survey of 1,100 healthcare officials, a very informed audience. 61% believe that our current investment in EHRs has turned out to be terrible or poor. If that's the state of affairs, and yet our default solution to integration is to rip and replace EHRs and invest even more, it just doesn't pass the farm-boy common sense test that I would advocate.

I had a conversation about a year ago with a good friend of mine, and I asked him, "Would you ever consolidate to a single EHR?" This was his response: "Oh no. I cannot imagine. We've looked at it a few times, but the total costs are in the billions, and for what? Minor incremental value? We're using our data warehouse and building a software services layer to tie them together until there are better options." This is from the largest for-profit health system in the U.S., which has the internal staff and IT capability to build this sort of temporary solution, this software services layer, on its own.

Now, the interesting thing is you can extend the life of an existing EHR with a good data strategy. Even though it sounds like I might be criticizing EHRs today, the reality is we can make them better with a good data strategy. Here are the typical six stages of a technology life cycle: development of a product, introduction of the product to the market, the growth phase, the maturity phase, and then sometimes the decline and exit of that product from the market. You can stretch this curve both horizontally and vertically based on market demand. With new technology and evolution of the product, you can keep bumping this curve up. If you focus on reinventing the product around the midpoint of its maturity life cycle, you'll survive. If you wait too long, the product is going to exit the market and other options will appear.

Microsoft is the perfect example of a company that's managed to keep bumping its Office and Windows products up through technology and by driving other forms of demand. They keep bumping this curve up. Microsoft's products have been in the market now for, gosh, I don't even know, it has to be at least 30 years. That's pretty amazing when you think about it. They get criticized, they have in the past, less so under Satya Nadella, but they've done a great job of constantly bumping this curve so that they're not losing the market. Well, the demand for EHRs has been stretched by federal incentives. This demand for EHRs is a [drovid 00:37:13] option, and the growth and maturity in the market was a bit of an artificial economic incentive. Those days are over. That demand is not going to drive adoption or extend the life cycle of the products anymore. From this point forward, EHRs will hopefully be driven more by free market demand than by federal incentives.

I would argue that the underlying software and database technology of most EHRs commoditized a long time ago. Normally you could extend this life cycle, bump this curve up, by refreshing the technology, but most EHRs are based on pretty old technology and pretty old software applications. If they're not reinventing themselves now, in [inaudible 00:38:00] fashion, I think all EHRs risk being commoditized and replaced by more innovative products fairly soon. The bottom line is we can stretch the life cycle and the value of EHRs with a data operating system and open APIs. If the EHR vendors would open up their APIs, we could actually bump them up, just like Microsoft has done with its products. We can increase their value. We can add to their stickiness in a client environment. A lot of EHR vendors don't quite appreciate this yet, though they're starting to. We're starting to see app stores and things like that appear, and that's good. That will increase the life cycle and the value of EHRs. It's not as if I'm advocating we replace EHRs; we can do things to extend their life by using something like a data operating system.

Tyler:                                          Dale, can we take just a pause here for a minute? We have had a couple of questions, and we'd like to remind people that yes, we are recording and will be sending an email afterwards with a link to the recording. The slides you can access right now through the handouts pane of your control panel. Also, please remember you can type questions and comments into the questions pane of your control panel as well.

Dale Sanders:                        Great, thanks friend. Appreciate it.

Okay, so let's talk about a data operating system here. We're counting down, we've got about 20 minutes, so I want to get through this and have a little bit of time for Q&A. Starting at the bottom of the stack, we have an amazing new capability in cloud computing thanks to Amazon, Microsoft, and Google. I mean, it's revolutionary, and I use that term very conservatively now. My IT career spans, I'm now entering my 34th year, going back to 1983 when I joined the Air Force as a command, control, communications, and intelligence officer. I've seen everything evolve, and what's happening in the cloud is phenomenal. It's going to change not just the nature of computing, it's going to change society, and we're already seeing it.

We have an amazing capability on the technology layer, the databases and operating systems that sit on top of that cloud. Oracle, SQL Server, MySQL, all of this technology that was incredibly expensive and hard to maintain and work with is now cheap, affordable, incredibly effective, and flexible. Then at the top of the application stack we have this incredible software development environment characterized by things like Git, Eclipse, Angular, D3, Mono, Node, Python, and R. It's unbelievable how fast and efficiently software applications can now be designed, developed, and managed. It's unlike anything I've ever seen in my career. Nothing even comes close.

So we've done great things in three of the four layers of the stack, but what we haven't done for software developers and application engineers is make it easier for them to work within the data domain of their software. That's the focus of the data operating system. That's the missing piece. I believe, as a computer scientist, the missing piece of the stack that we haven't addressed yet, but are now addressing through the data operating system, is providing domain-specific data content preloaded, pre-managed, and pre-organized for the benefit of software developers at the top of the stack, in a way that also takes advantage of all of the incredible capability underneath it. That's where the data operating system resides in the technology stack.

Imagine your smartphone for just a second. Imagine it sitting on top of a cloud-based lake of data. Imagine all of the healthcare and health-related data in your organization, publicly available health data, the data of your health system and its partners, all in this data lake, peeled away from their applications. You've peeled the data content out, remember the technology stack, we've populated it in the data lake, and instead of restricting it to analytics, we've now exposed it to software application development. Now software developers, not just analytic reports and things like that, can take advantage of this data lake. It's not unusual for us to see 100 different disparate sources of data at one of the Health Catalyst clients in that data lake, integrated and bound together in logical groupings. We're starting to push up against 100 terabytes of data and more in these larger systems. It's an enormous amount of data.

Now imagine that this is organized and optimized for the software developers who are writing applications on mobile devices. They don't have to reinvent the wheel of data content. All they need to do is put their awesome new software applications at the top of the stack, over the top of this data, where the heavy lifting of aggregating it, organizing it, and making it available has already occurred. Here's what's considerably different: the applications they are developing are contributing data back to the DOS. We're giving them the opportunity to leverage existing data in the ecosystem because we peeled it out of legacy systems and made it available in the data lake, and we're also giving these application developers the opportunity to add content back into that lake that didn't exist before.

Patient-reported outcomes is kind of a classic example. Most healthcare systems today have no patient-reported outcomes of any significance to contribute into the lake, but now we can seed, not concede, but seed the data content for an application developer who can build a patient-reported outcomes application. That software engineer can now make that available to patients for their use. Now, as I discussed earlier, you have the opportunity for dozens or hundreds of applications of your choice meeting your specific needs. There's not a monolithic vendor doing this. There are 100 vendors on your iPhone with healthcare domain-specific applications tailored specifically to you. If you're a cardiologist, an orthopedic surgeon, a respiratory therapist, or a CFO, your iPhone is going to be populated with applications that are unique and tailored to your needs, both as a workflow tool and as an analytics tool. This is entirely possible, friends. It's happening right now. This is not some big dreamy thing that I'm talking about. This is happening right now.
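The read-write pattern Dale describes here, where an application both consumes curated data and contributes brand-new content like patient-reported outcomes back into the lake, can be sketched in a few lines of Python. This is a toy illustration only; the `DataLake` class and its methods are hypothetical and are not an actual Health Catalyst or DOS API.

```python
class DataLake:
    """Toy stand-in for a DOS-style data layer (hypothetical API):
    applications can read pre-integrated content AND write new
    content back, rather than treating the lake as read-only."""

    def __init__(self):
        self.tables = {}  # table name -> list of record dicts

    def query(self, table):
        """Read existing, pre-organized content."""
        return list(self.tables.get(table, []))

    def contribute(self, table, record):
        """Write-back: an application adds content that
        didn't exist in any source system before."""
        self.tables.setdefault(table, []).append(record)


lake = DataLake()
# Data peeled out of a legacy system seeds the lake...
lake.contribute("encounters", {"patient": "p1", "dx": "I10"})
# ...and a patient-facing app contributes brand-new content.
lake.contribute("patient_reported_outcomes",
                {"patient": "p1", "pain_score": 3})

print(len(lake.query("patient_reported_outcomes")))  # 1
```

The point of the sketch is simply that the same data layer serves both reads and writes, which is what distinguishes a DOS from a read-only warehouse.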

Using related concepts, DOS is a combination of the following. It's not exactly precise to use these metaphors, but it's close. It combines the transaction-level functionality of a health information exchange, sharing single records between disparate applications; the traditional use of a clinical data repository for clinical care and research; and a read-only enterprise data warehouse, all rolled together. Then it makes that available as a read-write environment for application developers. A wordier definition is this: DOS is a platform of constantly updated raw and organized data from multiple transaction systems within a domain, such as healthcare, that enables rapid development of and changes to the software applications built upon it.

DOS amounts to a hybrid architecture. Gartner calls it a Hybrid Transactional/Analytical Processing (HTAP) architecture. If you have a Gartner account, read that report; if you don't, find a way to get ahold of it. It's very readable for IT as well as executives. From that report I quote, "Traditional data warehouse practices will be outdated by 2018, so we have to evolve toward a broader data management solution for analytics." That's exactly what the DOS is doing. If you've got a biology background, you know the value of hybrid vigor: the superiority of the hybrid over either one of its parents in one or more traits. That's what an HTAP architecture gives you.

I won't go into the nitty-gritty details of these attributes of a data operating system; I'll skim over them. I want to focus especially on attribute five, a microservices architecture. But really quickly, by the way, I would hold these up against any vendor, or against your own strategy going forward: how capable are other vendors, and how capable are you as an IT organization, at providing and meeting these attributes? Reusable clinical and business logic. Streaming data, real-time data, not just batch-oriented data. Can it support structured and unstructured data, including images? Does it have closed-loop capability, in that you can take the output of the data operating system and bend it back into the workflow and the point of decision making, whether that's an EHR, a mobile device, or even a text message as a closed-loop example? Does it support a microservices architecture? I'll come back to that. Does it natively support machine learning and pattern recognition? Can you lay this data operating system over the top of any data lake? In other words, is it loosely coupled?
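As a minimal sketch of what closed loop means here, bending an analytic output back to the point of decision making rather than leaving it in a report, consider the following Python fragment. The risk scores, the threshold, and the `send_sms` stub are all invented for illustration; a real implementation would call an actual messaging gateway or write back into the EHR.

```python
def flag_high_risk(patients, threshold=0.8):
    """Analytic output: patients whose risk score crosses a threshold."""
    return [p for p in patients if p["risk_score"] >= threshold]

def send_sms(phone, message):
    """Stand-in for a real SMS gateway; closing the loop means the
    analytic result reaches the workflow, not just a dashboard."""
    print(f"to {phone}: {message}")

patients = [
    {"id": "p1", "phone": "555-0101", "risk_score": 0.91},
    {"id": "p2", "phone": "555-0102", "risk_score": 0.35},
]

# The "loop" closes here: analytics feed an action at the point of care.
for p in flag_high_risk(patients):
    send_sms(p["phone"], "Please schedule a follow-up visit.")
```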

Let me back up to microservices architecture. One of the reasons a rip-and-replace architecture, or rip-and-replace strategy, doesn't apply anymore is that modern software is developed around what's known as a microservices architecture. If you think about Amazon and Facebook, and even a lot of the other apps you have on your iPhone, those are being constantly updated in small little micro updates, sometimes as often as every other day. Tiny little updates, constantly. Ripping and replacing those applications is no longer necessary because they're constantly evolving through minor updates. Rip and replace is actually an artifact of old software architectures that can't evolve on a microservices basis.
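To make the contrast concrete, here is a toy Python model of why microservice-style updates avoid rip and replace: each service version-bumps independently while the rest of the fleet keeps running untouched. The service names and version numbers are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    version: str

# A fleet of independently deployable services (names are made up).
fleet = {
    "scheduling": Service("scheduling", "1.4.2"),
    "billing":    Service("billing", "2.0.1"),
    "pro-survey": Service("pro-survey", "0.9.0"),
}

def deploy(name, new_version):
    """A tiny, frequent update to ONE service; nothing else is touched,
    so there is no monolithic rip-and-replace upgrade window."""
    fleet[name] = Service(name, new_version)

deploy("pro-survey", "0.9.1")
print(fleet["pro-survey"].version)  # 0.9.1 -- updated
print(fleet["billing"].version)     # 2.0.1 -- untouched
```

A monolith, by contrast, forces the whole fleet to a single version in one risky upgrade, which is exactly the six-month upgrade cycle described below.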

The lucky thing about Health Catalyst is we started our software development life cycle after microservices architecture became the dominant design pattern. We're lucky enough to have an architecture now that's probably 70% or 80% microservices based, and we're headed toward 100%. The challenge that a lot of vendors have in healthcare IT is that they started their software engineering environment decades ago, before microservices were a standard pattern and before the tools were available to support them. Those legacy applications at some point have to go through a major re-architecture to take advantage of microservices design patterns. But the rip-and-replace mentality, the six-month upgrades that ERP and EHR systems typically require, that's a symptom of an old architecture, not a microservices architecture.

The interesting thing is, now that we have microservices applications at Health Catalyst, we're running up against a problem at some of our client sites where the IT shop has traditionally only upgraded applications on a fixed schedule, say once a month, or the first Tuesday of every other month, for example, when they introduce changes into their environment. I was one of those CIOs. That's the sort of change management process you had to have because of the old architectures that were so tightly coupled and so complicated to upgrade. But now we have the opportunity to push out updates to applications on literally a daily basis if we wanted to, yet the IT organizations still have a mindset built around old software updates. So it's delaying the time to value for updates. We're going to have to go through a cultural change in healthcare IT to accommodate this new mindset that allows for constant updates of applications and the constant delivery of new value.

This is the diagram of a data operating system. You'll see Health Catalyst referenced on here, friends. I have no problem with you peeling the Health Catalyst name off of here and putting your organization's name on it. This is an architecture that I think is the future of not just healthcare software but other industries too, and other industries are already doing this. You have the low-level data. We've peeled data out of the hundreds of different systems that are in your organization now. We've made that available for software development, not just analytics as a traditional data warehouse would. We have read-write content enhancement available at this level. We have abstracted the complexity of that data in the fabric application layer. Then at the top layer we're leveraging all of those great software tools that I mentioned earlier in the microservices architecture. And we've purposely designed this so that clients can build applications against it, and third-party developers can too. It's the standard kind of architecture, the standard kind of thing that's happening in Silicon Valley.

What about the cost of this kind of thing? This is very interesting. Data operating systems and their recent predecessors, what I would call old-fashioned data warehouses, cost a fraction of ripping and replacing EHRs, ERPs, and other applications. If what you're trying to achieve first is data integration, the first step should be some version of a data operating system, or if nothing else an old-school data warehouse. Typically it's $3 million to $5 million to install a data warehouse in a medium to large size organization. It can be deployed and operational in a few months, certainly within the six-month window that McKinsey suggests. It's about $2 million to $3 million per year to operate and evolve. Now compare that to the numbers I shared earlier. This works out to about $340 per employee versus $13,000 per employee for an EHR.
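As a back-of-the-envelope check on that per-employee comparison: the $13,000 EHR figure comes from numbers Dale cited earlier in the webinar, and the headcount below is an assumption chosen only to show how a roughly $5 million deployment yields the $340-per-employee figure.

```python
dw_cost = 5_000_000        # upper-bound data warehouse / DOS deployment cost
ehr_per_employee = 13_000  # rip-and-replace EHR cost per employee (cited)
employees = 14_700         # ASSUMED headcount, picked to match the $340 figure

dw_per_employee = dw_cost / employees
print(round(dw_per_employee))                     # 340
print(round(ehr_per_employee / dw_per_employee))  # 38, i.e. roughly 38x cheaper
```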

Even if you are committed to a rip-and-replace strategy for some reason that I probably wouldn't understand or advocate, it's still worth spending a small amount of money to achieve rapid value from data integration, using a data warehouse or a data operating system to do so. That's a pretty low-risk number in the big scheme of things compared to $13,000 per employee. There's a great white paper written by Woven Ware that healthcare IT folks in particular should read, but I would also suggest that CFOs and other C-levels read it so that they're informed about mergers and acquisitions. It's called Stop the Rip and Replace Spiral: A Practical Guide to Application Modernization.

By the way, Amazon went through this from 2001 to 2009. They created what amounted to a retail data operating system. They implemented concepts similar to DOS, as well as what's known as a DevOps culture, in their software development, and the results were dramatic. They could deploy applications 30 times faster. They had a 200x reduction in lead times for new features, 60 times fewer failures, and, an interesting one, 168 times faster recovery, as a consequence of following these new design patterns and the concepts of a data operating system.

Alright, we are almost done. The moral of this story: the digital future of healthcare is about data that persists for a patient's lifetime. Think about that, okay? The data in healthcare persists for a patient's lifetime, but applications are going to churn constantly on top of that data. They're going to last for months, or at most a few years. That's what we're seeing in all software industries. Data lasts forever; applications churn constantly. It's all about choice, adaptability, and personal software preferences. I'd suggest that the wrong IT strategy is going to haunt M&A value for decades, not just a couple of years; it will last for a long time, and rip and replace provides incremental value for a massive investment. Focus on integrating first at the data layer, not the application layer, and make sure it's done through a data operating system or an old-school data warehouse. That is the end. Tyler?

Tyler:                                          All right thank you so much Dale.

Dale Sanders:                        Thanks friend. I’ll take questions if anybody’s got time to stick around.

Tyler:                                          Wonderful. It looks like we do have some questions coming in. First, before we get to those, as Dale said, this webinar is meant to be educational and not a sales pitch. However, we do get many requests about Health Catalyst, who we are and what we do, so we do have a poll question for those who are interested in having someone contact them for a product demonstration of our solutions. While we leave that up, we'll get to our first question, which is, "Can you provide an estimate of the yearly maintenance cost of a data operating system once installed and deployed?"

Dale Sanders:                        Yeah, it's from Gabriela Ramirez. Gabriela, I mentioned that in one of the slides there, friend. For a medium to large size organization it's somewhere around $2 million to $3 million, something on that order. There's a truth to this, and we're finding this at Health Catalyst: we have to make this scalable down to smaller organizations. It's not easy, but for smaller organizations where the IT budget is in the $4 million to $5 million range, we think the cost has to come down to certainly less than a million dollars, probably more like $700,000 per year.

Tyler:                                          I would like to let everybody know we are at time. Dale, are you willing to hang on to answer any further questions that we may have?

Dale Sanders:                        Sure. I can hang on for a little bit.

Tyler:                                          All right, that's great, but we do want to let folks know that we did reach time, and that we will be sending out an email with links to the recording of the webinar and the presentation slides to everybody. We do have one other question. "You mentioned getting the C-level IT involved in the process early on during evaluation. What can an organization do if they haven't been, but they're midway through an M&A?"

Dale Sanders:                        Well, it's never too late to invite your IT leadership and your digital leadership into the process, right? That's what I would suggest, friend. It's never too late. Include them, and start planning those, well, especially the post-merger data integrations, just as quickly as you can. That's the simple answer.

Tyler:                                          We don’t have any other questions trickling in except for one, which is “What resources would you recommend to executives who may not be tech savvy?”

Dale Sanders:                        Well, in this context, that document that I referenced from McKinsey is probably the best. It covers the M&A strategy, or the IT strategy associated with M&As, very well. Ernst & Young also has a very good white paper on the role of IT in M&A that I would recommend. You could Google either one of those; I'm sure they'd come to the top. Those are the two I'd start with. If you have an account with Gartner, Gartner has great content on this. Not everybody has access to it, but Gartner has, I would say, the most robust content that would be helpful to C-levels. They've got a good dozen articles and white papers on this topic.

Tyler:                                          All right. Well I don’t see any other questions coming in at this time, so at this time we’d like to thank everybody for joining us. Again, shortly after this webinar you will receive an email with links to the recording of the webinar and the presentation slides. On behalf of Dale Sanders as well as the rest of us here at Health Catalyst, thank you for joining us today.

Dale Sanders:                        Thanks everybody. Have a great day.

Tyler:                                          This webinar is now concluded.

