Realizing the Promise of Precision Medicine
Eric Just: Dr. John Halamka is a practicing emergency physician and professor of medicine at Harvard Medical School. In his role as Chief Information Officer of Beth Israel Deaconess Medical Center, he’s responsible for all clinical, financial, administrative, and academic information technology, serving 3,000 doctors, 18,000 faculty, and three million patients.
Dr. Halamka also chairs several health IT interoperability initiatives. He specializes in the adoption of electronic health records and the secure sharing of healthcare data for care coordination, population health, and quality improvement. I think many on the call also recognize Dr. Halamka as one of the most prolific and engaging health IT bloggers, and as an organic farmer. He can tell you what he does at four o’clock every morning to start his day.
Dr. Halamka is going to be presenting on realizing the promise of precision medicine. We chose this topic because we believe precision medicine has profound implications for patient and clinical outcomes and is already beginning to impact everyday medical practice. We’ll get a look into real world examples of precision medicine, and emerging tools, like machine learning and blockchain. Thanks for joining us Dr. Halamka. We’ll turn it over to you now.
John Halamka: Great. Thanks so much for having me. As you say, this is such an important topic, because some of us on this webinar are doctors, or nurses, or pharmacists, or social workers, but all of us are patients, and really, don’t we want the experience of medicine to deliver that care, which is partly care we desire, that’s respectful of our needs, but also very effective for who we are as individuals?
So let’s start with a couple of definitions. Let me go to that next slide. You often hear this term, personalized medicine, or precision medicine. What is the difference between those two concepts? Paul Cerrato and I recently wrote a book on realizing the promise of precision medicine, and here’s how we define it.
Let’s imagine you’re a 50 year old female. Maybe you have diabetes. Well, personalized medicine would be that you are getting the treatment protocol, that you are getting care that has been shown to be effective for 50 year old females with diabetes. So it’s a cohort; it’s a population-level protocol, guideline, or pathway. That’s of course a whole lot better than what we have today, so we can’t criticize it. But that isn’t exactly precision, because precision would incorporate such things as, what is your exposome? What have you personally experienced in life? Maybe it’s certain diseases you’ve been exposed to, or the environment you’ve lived in, or the diet you’ve consumed. Maybe it’s the genome that you have, the probability of developing diseases, or the deterministic information about what diseases you’ve actually had, or immunizations.
You can see the differences. As we think about the future of how we’re going to deliver the highest quality care, with the greatest safety at the lowest cost, we’ll probably move along a continuum from this idea of population health, and population-level personalization, to precision for the individual. You can actually see, given incumbent EHR systems, how we’re going to have to make this journey over time, because do we actually see in current electronic health records a lot of genomic data? Well, you’ll see some genomic data and biomarkers, especially around cancer care, but certainly it isn’t very common for individual genomes to be comprehensively stored and interpreted in the electronic health record.
Let me give you an example: me. I’m the second human sequenced in the Personal Genome Project, and if you want to go look at my genome in all its glory and interpretation, it’s at personalgenomes.org. I’m patient two. What you’ll see on that particular website is my full genome, my microbiome, my medical record, my laboratory tests, really everything that’s very precise to me: what I’ve been exposed to, what I’ve experienced, down to the level of individual mutations.
You also see the interpretation of those individual mutations as high risk, high likelihood, or low risk, low likelihood. Because if I’m going to have a disease that has no impact whatsoever, I don’t so much care. But if I’m going to have a disease that’s going to be devastating and high impact, I’m going to care, and I’m going to want to change my lifestyle, my medications, or my care pathway.
So what did my clinician do, given that we don’t really have a lot of this data for precision medicine in the EHR yet? My clinician was able to say, “Okay. I am aware that there are certain high risk factors you have.” For example, I will likely die of prostate cancer. Many men on this webinar will die with prostate cancer, but I will die of prostate cancer.
Here’s an interesting contrast. The New England Journal of Medicine would tell us that for men in general, as a population, prostate specific antigen, or PSA, testing hasn’t really reduced morbidity and mortality over the course of the last decade, so doctors probably shouldn’t be doing yearly prostate testing. But wait, I have a genome that tells me I’m going to die of this problem. And The New England Journal would also tell us that for men, and I’m 56 by the way, so in your fifties, you probably should monitor your cholesterol, LDL, and triglycerides. But I’m a vegan, with no family or genetic risk for heart disease or stroke. So what’s the point in me testing LDL cholesterol and triglycerides? My cholesterol is 72. You can see it on the website.
There we go. Population or personalized medicine would say, “No prostate testing and lots of cholesterol testing.” Precision medicine would say, for me, “Test the PSA on a yearly basis, understanding there might be some false positives we’ll have to deal with, and don’t bother with cholesterol, LDL, or triglycerides.” That is actually how we’ve shaped my care.
You can see the differences. You can see how the data requirements are different. And the analytical techniques, those are going to be kind of interesting, because we’re so used to doing generalized analytics, columns and rows in a SQL database, saying, “Does this person have this disease or not?” But remember, things like the genome are probabilities; they’re not deterministic. So I actually have to deal with the fact that, what if the prevalence of prostate cancer is 5%, and I have an increased risk of 50%?
Does that mean I have a 50% chance of getting prostate cancer? No. It means that I have a 7.5% chance. I have a risk 50% greater than the prevalence in the community, in my population. These things, like Bayesian statistics, and machine learning, and more fuzzy ways of looking at the potential direction of your health, are very different than the categorical columns and rows we’re so used to.
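To make the arithmetic concrete, here is a minimal sketch of that calculation, using the illustrative numbers from the example above (a 5% prevalence and a 50% relative increase):

```python
# Illustrative numbers from the example above: a 5% population prevalence
# of prostate cancer and a genomic result indicating 50% greater risk.
prevalence = 0.05          # baseline chance in the population
relative_increase = 0.50   # "50% greater risk" from the genome report

# The genome does not say "50% chance of disease"; it scales the baseline.
individual_risk = prevalence * (1 + relative_increase)

print(f"{individual_risk:.1%}")  # 7.5%, not 50%
```

The point is that genomic risk is multiplicative over a baseline, which is why probabilistic tools like Bayesian statistics fit this data better than categorical columns and rows.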
Eric Schmidt at our last HIMSS event said, “It’s not that Google is any smarter than anybody else, but they have more data,” and once you have more data, then using some of these more advanced techniques, so machine learning and deep learning, you’re able to identify patterns to a greater extent than others. I think what you’re going to see as we move forward into this world beyond population health and personalized medicine to precision medicine, it’s going to be those that aggregate the most genomes, those that aggregate the most exposomes, those who aggregate a rich clinical history they can get from patients, that will be able to have the best machine learning techniques that will predict the best care for you as an individual based on similar patients like you.
Let’s go to the next slide. Let’s talk through some other examples. My wife recently received a letter from her insurance company, a very fine insurance company, that said, “You know, after your cancer treatment,” she had breast cancer, stage three, diagnosed in December 2011, “you’ve been put on 22 milligrams of depot Lupron,” because this is an estrogen positive breast cancer, and you really want to remove all estrogen from the body, because otherwise you might have tumor recurrence. The insurance company said, “You’re on 22 milligrams per quarter, which is some protocol that was written for you. We don’t know exactly why or how, but we’re not going to pay for it anymore.” Why? Because there was a paper written in 1990, 28 years ago, with 13 Norwegian women, for whom 11 milligrams was a sufficient dose.
My wife’s Korean by the way, so it’s pretty unlikely that a study from 28 years ago with no repeatability on a cohort of Nordic women would really be applicable to my wife, and yet the insurance company was using it to decide what the right dose of depot Lupron was. So of course I did what anyone would do: I called the CEO of the insurance company and said, “Let’s write a New England Journal article together about care management, and what could we learn from this example of highly non-precision medicine being used to decide whether payment would be given or not.”
Interestingly enough, the insurance company has since reversed its policies, has really rethought how it should adjudicate these kinds of things, and recognizes that in order for it to make a decision about an individual patient, you really need the clinical record of that patient. You really need to understand why a protocol was chosen for that patient, and you really need to have the data about a patient be a little more specific than 13 Norwegian women.
I think what we’re starting to see is that even at the payer level there is an understanding of the importance of big data sets and machine learning techniques. It’s not a surprise that a lot of these startups that we’re seeing in the world of machine learning and big data analytics are now getting acquired. Just the other day a company out of San Diego called Interpreta was acquired by Centene, the largest Medicaid processor in our country, because Centene felt it had better have the capability of looking at genomic information and trying to give patients the right care, at the right time, at the right cost, rather than just making up rules that are sometimes highly arbitrary.
Here’s another example of getting a bit more precise. A year ago I was diagnosed with primary hypertension. No other medical conditions, a body mass index of 21, lots of exercise. As you heard, I get up at four AM and shovel manure. Running a farm keeps you very fit. And yet my blood pressure was 170 over 100, so what to do? Well, my clinician said, “We could use ACE inhibitors, beta blockers, calcium channel blockers. You’re a 56 year old male, why don’t we start on something like ACE inhibitors?” It turns out that I have a comorbidity, and that comorbidity is a supraventricular tachycardia; my heart races on occasion to 180 beats per minute. It’s a little irritating, makes me a little dizzy.
“Wait a minute. Okay. If we’re looking at you, the person, we have to control your blood pressure. But as well, we really want to try to block that supraventricular tachycardia. So for you beta blockers would be a better choice.” That’s great. So I was given the standard dose for a 56 year old male, 50 milligrams of Metoprolol XL. If anyone’s ever taken a beta blocker, what you’ll know is it’s like having two negative cups of coffee. Oh my God, 50 milligrams of Metoprolol, hardly get out of bed in the morning, can’t get through a lecture, just don’t feel like you want to even be yourself because your mood and your energy are bad.
Okay. Well, let’s think about it. Maybe I metabolize Metoprolol differently than others. How do we figure this out? What if there’s an app on your phone that says, “We’re going to take objective data, your pulse, your blood pressure. But also, how do you feel? What’s your mood? Are you dizzy?” Then, based on that objective and subjective data, we can alter the dose. In my case, we used an app we created called BIDMC@Home. I’ll show you that app in a minute.
We were able to say, “Let’s go to 25 milligrams.” Blood pressure’s still under control, but mood’s not great. Energy’s a little better. “Let’s go to 12.5.” At 12.5, blood pressure’s 110 over 70, mood is great, energy is great. We have now achieved for you the cure of your hypertension and your SVT, yet retained your mood and your energy. Ideally, wouldn’t it be wonderful if some day we could get 3D printers for medications in our homes, and dial in the dose that’s optimal for us, and have the pills printed from a stock of, in this case, Metoprolol XL? But no, in my case I just used a pill splitter and kept splitting pills in half. It got the job done, and so today I am taking 12.5 and that’s good. So again you see the difference. What would have been personalized? I’m a male. I have hypertension. And what’s precision? My comorbidities, my mood, and my metabolism of that drug.
Let me also reflect, my father passed away five years ago. He had multiple sclerosis for 23 years. He had five heart attacks. His bone marrow stopped functioning. He was a real scientist. He really wanted to experiment with all the latest bio-engineered drugs, and so he offered himself up for clinical trials. He was always the first to try new treatments for multiple sclerosis.
What he found over time is that, surprise, when a medication comes out and says it’s effective for multiple sclerosis, that means it’s effective for maybe 25% of the patients with multiple sclerosis. They don’t quite know which 25%. So his challenge was, he often found with these emerging medications that the cure was worse than the disease in terms of fatigue and stiffness and pain. He even had certain side effects that resulted in him having syncopal episodes, blacking out.
Again, what you’d love to see is, can we analyze something about patients like you? Whether it’s exposome, phenotype, or genome, and say, “Yes, there are 100 MS drugs, but you know, these five are likely to be effective, and don’t go anywhere near these other 95.” You’re starting to see startups come up with this kind of analysis.
Recently, Roche bought Flatiron. Why did they buy Flatiron? Because Flatiron had two million curated cancer records showing who got cured, and with what side effects, and now Roche can better target therapies. My experience in life, me, my wife, my father, really suggests this is a direction we need to head in, and these emerging techniques in machine learning are really going to help. Next slide.
Let’s talk about some of these emerging tools. I was in a meeting this morning with a major cloud hosting provider, and we were talking about ways in which the big data analytics of the future are going to be more than just the meaningful use common data set, problems, meds, allergies, and labs, because certainly, some internet of things data from your home is going to be relevant.
If you’re a diabetic, your glucometer readings. If you have hypertension, like me, you can get two or three blood pressure readings a day, and look to see what happens when I wake up, when I’m stressed, or when I go to bed. I recently installed a fascinating device from Nokia under my mattress. It actually measures my sleep pattern. It shows when I’m in light sleep, deep sleep, when I’m REMing, when I’m snoring, if I have apnea, all these interesting things.
What we’ve actually discovered is we now know a little bit about how I sleep. I’ve always slept three or four hours a night, but now we sort of figured out that I close my eyes, and one minute later I’m REMing, and then I have a period of extraordinarily deep sleep, then light sleep, and then I wake up. Again, I’m now able to quantify some interesting precision items about me, and understand how to tailor my care, knowing really what my sleep pattern is all about. This kind of data will be increasingly important, and we’ll see it incorporated not only into patient phones, where it’s going today, but into electronic health records, and into some of these big data analytics, as we correlate clinical and financial and patient generated data together toward our progress on precision medicine.
Machine learning tools, let me quickly comment about those. I was a graduate student at MIT in 1996. Back then, we used LISP, a programming language with lots of parentheses, to create artificial intelligence. We would use rules, like, this is what a mammal does. This is what a mammal eats. This is what a mammal looks like. And then, we would be able to say, “And here is a giraffe. Is it a mammal?” You know that figuring out if a giraffe was a mammal could take months of programming? It was hard, hard work.
Well, today you look at the emergence of Google, DeepMind, and Amazon Web Services, and their machine learning components. They’re like Lego blocks that you can simply use as a service, and do some very rapid machine learning analysis on a set of inputs, asking, can you create a prediction, an output, based on a set of inputs? Can you find patterns?
We’re seeing some fascinating things, like Google, with its million retinal scans, can figure out who has heart disease, and we don’t actually have any idea why. But there is something in a million images that clearly is a pattern which is indicative of heart disease. Now, to show you how subtle this is, because in many ways machine learning is a black box, we don’t know exactly what it’s seeing. I heard, and this may be rumor, so it could be fake news, that in examining a million eye images a machine learning tool identified which eyes were male and which were female.
You think, “Wow. That’s amazing, because no human can do that.” Well, guess what it turned out to be? It’s the eyes with mascara that seemed to be female, mostly. So, okay, this was the machine learning black box figuring out that something which really had very little to do with the eye itself was correlated with gender identity.
One of the things I’m seeing, and I’m sure we have to be a little careful of this because we’re at the peak of the Gartner Hype Curve, is that most every day I hear a pitch that involves blockchain and cryptocurrency, that somehow these new technologies will create a revolution. The kinds of pitches I’m hearing a lot of today are, “We know that machine learning techniques are more powerful with bigger training sets and more data. The challenge is getting the data.”
So what if we said, “Patients, we are going to give you a payment for your clinical data, your internet of things data, or your genome.” I’m hearing a lot of pitches about, “We’re going to do an initial coin offering, and then we’re going to create a new coin. Hey, we’re going to call it the Catalyst Coin.” Don’t worry Health Catalyst, I just made that up. You get a Catalyst Coin every time you contribute your genome, and it could be an anonymous contribution. It doesn’t necessarily have to be person identified, because that way we can achieve a much larger corpus of genomes to analyze as we look to a precision medicine future.
Hard to know if any of that’s going to succeed or fail. I just mention it because I am seeing so much in the world of cryptocurrency and blockchain, and how that’s going to result in more data contributions, and public ledgers of data, these sorts of things. I think it’s very, very early.
On the concept of blockchain, I am an editor of Blockchain in Healthcare Today, a new peer reviewed journal. What we’re seeing there is that people use blockchain really more on the auditing side, or the consent management side, maybe the identity management side. There’s nothing really analytical about blockchain, so I only mention this because of the startups that I’m hearing from, and not because it is an analytic technique.
We’re also seeing vast adoption of FHIR. The Fast Healthcare Interoperability Resources standard is now in every iPhone. In iOS 11.3 you’ve natively got FHIR based APIs that can ingest medical records from Epic, Cerner, and Athena, and that’s going to be expanded, I’m sure, to others. You’ve got new interoperability opportunities, where patients are gathering their data from multiple sites, and curating that data, and potentially could contribute it to clinical trials, clinical research, or clinical care coordination.
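To give a flavor of what travels over those APIs, here is a hedged sketch of a FHIR blood-pressure Observation built as JSON, the kind of resource a phone app would send or receive. The helper function and patient ID are made up for illustration; the LOINC codes shown (85354-9 panel, 8480-6 systolic, 8462-4 diastolic) are the ones conventionally used for blood pressure.

```python
import json

# Illustrative sketch only: a minimal FHIR Observation for a blood
# pressure reading, assembled as plain JSON. Field layout follows the
# FHIR Observation resource; the patient ID is hypothetical.
def bp_observation(patient_id, systolic, diastolic):
    def component(code, display, value):
        return {
            "code": {"coding": [{"system": "http://loinc.org",
                                 "code": code, "display": display}]},
            "valueQuantity": {"value": value, "unit": "mmHg",
                              "system": "http://unitsofmeasure.org",
                              "code": "mm[Hg]"},
        }
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "85354-9",
                             "display": "Blood pressure panel"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "component": [
            component("8480-6", "Systolic blood pressure", systolic),
            component("8462-4", "Diastolic blood pressure", diastolic),
        ],
    }

obs = bp_observation("example", 110, 70)
print(json.dumps(obs, indent=2))
```

Because the payload is just structured JSON over a REST API, a patient-facing app can gather resources like this from Epic, Cerner, or Athena and carry them wherever the patient consents.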
Lots of interesting trends in 2018, as we are starting to get ubiquitous data transmissions from the home. Patients taking an interest in curating their data, providing us with some patient reported outcomes, and some subjective information, and novel interoperability through FHIR. That’s going to give us a whole new realm of data sets to analyze. Let’s go to the next slide.
This is BIDMC@Home. What I was getting at was that we do need a little bit of translation between the data that is collected on your phone and getting it to the doctor’s office. There are a lot of challenges. What data do I trust? What if you’re wearing, I don’t know, a heart rate monitor, and your heart rate monitor says your heart rate’s 20? Do I trust that? Do I call an ambulance? Do I put that in the electronic health record? Do I tell the doctor, “Here are 10,000 normal blood pressure measurements for you to review”? Given clinicians are so burdened already, you can imagine that’s not going to be so popular.
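One way to think about that translation layer, purely as a sketch with made-up thresholds rather than clinical values, is a triage step that discards implausible device readings, flags plausible-but-abnormal ones for a clinician, and quietly stores the normal ones:

```python
# Plausibility and "normal" ranges here are illustrative, not clinical guidance.
PLAUSIBLE = {"heart_rate": (30, 220), "systolic_bp": (70, 250)}
NORMAL = {"heart_rate": (50, 100), "systolic_bp": (90, 140)}

def triage_reading(kind, value):
    """Decide what to do with a home-device reading before it reaches anyone."""
    low, high = PLAUSIBLE[kind]
    if not (low <= value <= high):
        return "discard"   # likely a sensor artifact, e.g. a heart rate of 20
    n_low, n_high = NORMAL[kind]
    if not (n_low <= value <= n_high):
        return "flag"      # plausible but abnormal: surface to a clinician
    return "store"         # normal: keep it without burdening the clinician

for kind, value in [("heart_rate", 20), ("heart_rate", 72), ("systolic_bp", 170)]:
    print(kind, value, "->", triage_reading(kind, value))
```

A filter like this is what keeps 10,000 normal blood pressure readings out of the clinician’s inbox while still escalating the measurements that matter.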
Here’s what we’ve decided to do at the moment, and it’s early. What we say is, “Patient, on your phone, here’s your care plan for today.” Every day, based on your electronic health record and a little tool that we built inside our electronic health record called the care plan creator, we show you what it is you should be doing: here are the meds you should be taking, the doses you should be taking, the diet you should be eating, the activity you should be doing.
So patients help us close the loop. Did you take your medicine today? Did you go for that walk today? Did you have a low sodium diet today, yes or no? Then we’ll pair that with the internet of things data that you agree to share. Maybe you have decided to share your bathroom scale data, or your blood pressure data, or pulse ox, or maybe your spirometry data, and yes, there are even spirometers for $10 that plug into your iPhone and gather your forced expiratory volume data. You’ve got all kinds of fascinating things that look at your care plan, your compliance to that care plan, and the effect of compliance, or non-compliance. Next slide.
Our idea is that if we can gather this information and show you, based on the data from your home, how well you’re doing, we can actually show you some insights as to whether or not your behavior is leading to better health. You say, “I see. I didn’t take my Lasix, and my weight went up by seven pounds, and my pulse ox went down, and I feel short of breath. I’d better stop eating salty snacks, and I’d better take my Lasix, if I’m going to help my congestive heart failure get better.”
This is the sort of thing that we believe is going to be very cost effective to do as the payment-reimbursement models change. We’ve been in a fee-for-service world. We’re of course moving to a value-based purchasing world. In the past, more hospital days or more images were a profit center. Today, redundant and unnecessary testing, or admission to the hospital, is a cost, not a profit center, because we’ve got a risk contract with a fixed amount to keep the patient well.
That means our incentive is, yes, put bathroom scales in congestive heart failure patients’ homes. Yes, fund visiting nurses, and home care, and telecare, and telemedicine, so we can take the data gathered about you and keep you healthy without a hospitalization or unnecessary imaging.
I really look forward to these next couple of years, where we’re going to see three great trends: emerging new biology, the genome, plus new data sources, the apps and the devices we’ve talked about, plus new reimbursement methods. They will incentivize us to keep patients well, and do more telecare, and keep you healthy in your home. That’s, I think, how medicine is going to change. It’s certainly the way I would like it to change, and I believe that my daughter, who’s 25, will experience a very different kind of care than I did. Let’s go to the next slide and talk in a little more detail about some machine learning applications.
I’ve mentioned now that we have all these novel sources of data, and that we’re moving to a different kind of care delivery. Some of the problems we have to solve are actually rather mundane. When will a patient be readmitted? Will a patient need an ICU? Will a patient show up to an appointment? How much OR time is necessary for this particular patient? These are actually questions that can be answered very well using machine learning techniques, and I know because we’ve done them.
Last November, for example, we launched a project to ask, could we free up operating room time by analyzing, using machine learning techniques, patients’ experience in the past? Imagine this: you need your appendix removed, but you’re a 25-year-old healthy male with no comorbidities, and Dr. Famous will be doing your surgery, and Dr. Famous has done 22,000 appendectomies in the past.
Suddenly, we have a different expectation: 25 year old, no comorbidities, Dr. Famous, that’s a 25 minute operation, not a two hour block. That’s based on millions of patients like you in the past that we can analyze via machine learning. Our hope is this: can we analyze all of our surgeons, look at a machine learning assigned OR time, and free up OR capacity, as opposed to building new ORs, dealing with OR congestion, or having to force midnight surgeries or something?
What happened when we did the analysis? What we found was that just by changing the schedule of 15 very productive surgeons we were able to free up 30% of OR capacity, by optimizing OR times using machine learning techniques based on the experience of surgeons in the past. It’s a very real world example of how machine learning can help with workflow and process improvement in a hospital today.
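The scheduling idea can be sketched in a few lines. The data, the key structure, and the 90th-percentile rule here are invented for illustration; they are not BIDMC’s actual model, which learns from far richer features.

```python
# Toy history of past case durations in minutes, keyed by surgeon,
# procedure, and patient risk class. A real system would learn from
# millions of cases with many more features.
history = {
    ("Dr. Famous", "appendectomy", "healthy"): [22, 25, 24, 28, 23, 26, 25, 27],
}

def scheduled_block(surgeon, procedure, risk, default=120):
    """Pick a block length covering roughly 90% of similar past cases."""
    durations = history.get((surgeon, procedure, risk))
    if not durations or len(durations) < 5:
        return default  # too few similar cases: keep the standard two-hour block
    ranked = sorted(durations)
    # 90th-percentile duration, so most overruns are still covered
    return ranked[min(len(ranked) - 1, int(0.9 * len(ranked)))]

# A healthy patient with Dr. Famous gets a short block...
print(scheduled_block("Dr. Famous", "appendectomy", "healthy"))   # 28
# ...while an unseen combination keeps the conservative default.
print(scheduled_block("Dr. Famous", "appendectomy", "complex"))   # 120
```

Replacing fixed blocks with an estimate conditioned on the surgeon and the patient is exactly where the freed-up capacity comes from.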
25% of our patients don’t show up to the specialty appointments they book. Isn’t that interesting? It may take two months to get a specialist appointment, so a 25% no show rate seems high. Similarly, we’re using machine learning techniques to see if we can predict who is going to show up and who is not, and figure out if there’s an intervention we can make. Maybe they don’t have a car, so calling an Uber and getting them to their appointment solves the no show problem and uses our capacity in a much more intelligent way, especially with a very limited resource like specialty appointments.
Of course, how do hospitals work in the United States? Well, there’s a lot of Brownian motion. You’re admitted, we’re going to do a whole bunch of stuff, and you’re going to be discharged. Shouldn’t we be able to, with machine learning techniques, say, “Actually, for patients like you, the discharge date should be Tuesday at four”? Now, we know that between now and Tuesday at four there are 100 things to do. How do we schedule those hundred things so that you are discharged in a timely manner that’s going to give you that safe, low cost, respectful care? As opposed to just random chance.
Again, similarly, starting with machine learning, figuring out an outcome, and then working backwards to achieve that outcome is, we think, going to be really important and will change the patient experience very significantly. Those are our machine learning projects. We actually have three of them in production. We’re moving on to a variety of others, and the number of use cases around machine learning is very, very significant. You’ll see, we are not trying to replace doctors here. This is trying to make workflow better based on the experience of the past, helping people, whether it’s a doctor, a nurse, a social worker, or a pharmacist, practice at the top of their license, reducing their administrative burden, helping them focus. Given the burnout rates we’re seeing across all clinicians, this kind of tool is, we think, really important. Next slide.
I want to reflect for a moment on the hype cycle, because of course the Gartner Hype Cycle tells us that we go from an introduction of a technology to the peak of inflated expectation through the trough of disillusionment and the plateau of productivity. We can see a variety of techniques are in that peak of inflated expectations for 2018. That includes some of the things I’ve already talked about, machine learning and AI, but it also includes blockchain and cryptocurrencies. Let’s ask ourselves, okay, well it seems like blockchain has some potential, could it really be helpful in medicine, or is it just all hype? Let’s go to that next slide.
What did we do? We recognized that blockchain, which is not a database and not an analytic tool, but a good auditing tool, could be used in a care coordination experiment. Today, about 25% of our patients cross what I’ll call ACOs, what we would call the expected in-network doctor cohort. So that, yes, you’re at Beth Israel Deaconess, but because you’ve got an ear problem, you go to Mass Eye and Ear, you go to Partners, you go somewhere else.
So even in a world of perfect interoperability, of FHIR and meaningful use, it’s a little hard to know where you’ve been for care. So we asked the question, could we use an Ethereum blockchain to take ADT data and figure out simply where your records are? We’re not storing records in the blockchain, just a public ledger that says, “Your records are here.” Now, even that turns out to be slightly problematic, because what if we don’t put any data whatsoever about you on the blockchain? All we say is, “Yes, you do have records at the Betty Ford Clinic. Yes, you do have a substance abuse and mental health clinic visit. Yes, you do have an HIV clinic visit.” Just the fact that we’ve disclosed that you have medical records in these physical locations is disclosing a clinical condition.
So we had to add to this the smart contract idea that is part of some blockchain implementations. Which is, okay, fine. Here’s a public ledger. The public ledger has certain things in it, but who can see it, and what can it be used for? This Harvard-MIT blockchain pilot continues. We’re moving on from this example of simply tracking where your records are to really asking, can we do more advanced consent management in the blockchain? Let’s imagine you have all kinds of consent preferences, and you put those in a public ledger, and then applications can derive your consent preferences from that public ledger. We’re working on that now. In summary, blockchain is just a public ledger of information. You write once; you never erase. There are some use cases that could be useful.
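As a toy illustration of that data model, not the actual Ethereum pilot, here is a sketch of a record-location ledger: append-only entries keyed by a salted hash of the patient identifier, with a consent check playing the role the smart contract plays on chain. All names and the salt are hypothetical.

```python
import hashlib
import time

def ledger_key(patient_id, salt):
    # Never store the raw identifier; hash it with a per-system salt.
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()

class LocationLedger:
    """Append-only ledger of where records exist, gated by consent."""
    def __init__(self):
        self.entries = []   # write once, never erase
        self.consent = {}   # key -> set of parties allowed to query

    def append(self, key, institution):
        self.entries.append({"key": key, "institution": institution,
                             "ts": time.time()})

    def grant(self, key, party):
        self.consent.setdefault(key, set()).add(party)

    def locations(self, key, requester):
        # The "smart contract" role: consent decides who may even learn
        # that pointers exist for this patient.
        if requester not in self.consent.get(key, set()):
            return None
        return [e["institution"] for e in self.entries if e["key"] == key]

ledger = LocationLedger()
key = ledger_key("patient-123", salt="demo-salt")
ledger.append(key, "BIDMC")
ledger.append(key, "Mass Eye and Ear")
ledger.grant(key, "dr-smith")
print(ledger.locations(key, "dr-smith"))     # both institutions
print(ledger.locations(key, "unknown-app"))  # None: no consent granted
```

Note the ledger never holds clinical data, only pointers, and even the pointers are invisible without consent, which addresses the Betty Ford problem described above.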
Let’s move on to the next slide. I believe we are now at our time where we’re going to do some Q&A. I certainly look forward to the discussion.
Chris Keller: Thank you, Dr. Halamka. Yes, before we move on to the Q&A, which, by the way, Dr. Halamka, loved your material, great set of input from our attendees today. We’re looking forward to those questions, which Eric will facilitate in a few minutes. Before we do that, we have a very exciting event that happens in the fall. It’s called our Healthcare Analytics Summit. I’m showing some material about that. We have a great lineup of speakers. This is an annual event, with more than 1,000 providers and payers who attend. We’re going to do something special. For those of you who want to attend, we’re going to ask two poll questions.
The first poll question gives away a complimentary set of three registrations for a team of three. Let me go ahead and go to my polls, and we will throw up that poll. If you are interested and you know you can attend, and you’d like to put your name in the giveaway, go ahead and answer this question please. We’ll take one moment. We’ll also remind people that to have your question asked of Dr. Halamka, you’ll need to submit it via the questions pane in the GoToWebinar control panel. Okay. We’re going to stop that poll and ask the second question. The second poll is a free, or a complimentary, giveaway for an individual registration. Go ahead and take a moment if you know you can attend and would like to be considered. Very good. I’m going to close that poll. Thank you very much.
We’re excited about that event in the fall. Before I pass the time over to Eric, while today’s topic was an educational webinar on the importance of precision medicine, we often have attendees who want to know more about our cloud-based Azure application development, analytics and AI platform, called the Data Operating System, DOS, and our professional services. If you’d like someone from Health Catalyst to contact you to learn more, please answer this last poll question. As I push this out, I’m going to turn the baton over to Eric to go ahead and kick off Q&A.
Eric Just: All right. Thanks Chris. I guess the first question that I have is about data availability. I remember, going back earlier in my career, one of the first projects I was on was a project with a surgeon, and I was the analyst on the project. He referred to my role as the catcher. He said, “You’re going to be catching all the data we’re putting into the system, and it’s your job to take that data and make great insight out of it.”
I believe he truly imagined that in my role as catcher, there were going to be these perfect pitches right through the strike zone. It wasn’t long before we realized that most healthcare data pitches are really wild pitches. When I think of the data that’s required for precision medicine, I think of how highly specialized, precise, and targeted that data collection is going to be, the more we learn about underlying diseases.
I guess the question is, how do you make sure that you’re capturing the right data within the appropriate workflow, such that you have the right data to make those decisions on patients in precision medicine? It seems like it’s going to really expand from rigid workflows to very specialized workflows that are very dependent on the patient. It’s not just more data, but it’s additional workflows. Wonder if you could comment on that Dr. Halamka?
John Halamka: Sure. We’ll just have doctors enter the data, because they’ve got lots of free time. Just kidding. No. You are absolutely right, that the provenance of the data is so important. I’ll give you a case example.
It turns out, this was a decade ago, that Beth Israel Deaconess decided that it should stratify its quality metrics by race-ethnicity. So I created a very complex list of races and ethnicities, and asked triage clerks in the middle of the night to enter each patient’s race-ethnicity with great precision. Do you know that 90% of the patients at Beth Israel Deaconess are Haitian-Creole? Who knew? Because it turned out, if you carriage-return through the fields and ignore them completely, it defaults to that.
To your point, you need to understand where in the workflow is a data element captured? Who is capturing it? What is its appropriate and useful purpose? Because it is quite common that people say, “I want to do this study,” and I say, “That’s a wonderful idea.” But I guarantee you, that data element is not gathered by somebody who has high skill, great attention to detail. It’s just not going to be helpful to you.
I think more and more, as we look to the future, I’m guessing we’re going to capture fewer data elements with higher granularity and specificity. I’m starting to see us move from an era of regulation, with lots and lots of expanding quality measures, to slight deregulation, and maybe less stringent quality measures, which will allow us to focus on getting someone in the workflow, whether it’s the doctor, the nurse, or the patient themselves, to enter data that we can trust. Certainly, the idea of patients curating, especially things like social determinants of health, is going to give us higher quality data.
Eric Just: Excellent. One role that I’ve heard developing is this concept of healthcare data digitician, so somebody who’s making sure that you have the right data on the patients. Do you see a place for that role? Is that a role that you’ve considered in your discussions on precision medicine?
John Halamka: The answer is sure. There are maybe three thoughts that I’ll give you there. One is, more and more, as I do these machine learning activities, I’m being asked to submit unstructured data. So rather than relying on what somebody entered in a structured data field, we have the machine learning application read the narrative, read the dictation, and then derive that information from the unstructured data. That’s an interesting idea, because you might actually get richer information than you could get from a categorical format.
We have hired scribes. We’ve used them in a limited way, but in the emergency department, for example, we’ve discovered that if you have somebody who has great interest, they actually can gather the data with higher accuracy and completeness, if they are that person you describe, kind of a data curator, following the clinician around and getting the data in right.
And then of course we have talked about devices generating data. With those devices, yes, there may be telemetry from the internet of things, but maybe we’re going to start seeing ambient listening: Siri and Alexa, with appropriate permissions and business associate agreements, and remember, those permissions and BAAs don’t exist yet, providing us some interesting and more complete data.
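The first of those three thoughts, deriving structured elements from narrative text, can be illustrated with a toy sketch. The note text and regex patterns below are entirely hypothetical and vastly simpler than a real clinical NLP pipeline; they only show why narrative can carry information a categorical field would miss.

```python
import re

# Hypothetical clinical note -- illustrative text, not real patient data.
note = ("62 y/o male presents with chest pain. "
        "BP 142/88, HR 96. Hx of type 2 diabetes. Denies smoking.")

extracted = {}

# Pull structured vitals out of the free-text narrative.
m = re.search(r"BP (\d{2,3})/(\d{2,3})", note)
if m:
    extracted["systolic_bp"], extracted["diastolic_bp"] = map(int, m.groups())

m = re.search(r"HR (\d{2,3})", note)
if m:
    extracted["heart_rate"] = int(m.group(1))

# Narrative carries context a checkbox would miss -- here, negation:
# "denies smoking" means the smoking flag should be False.
extracted["smoker"] = not bool(re.search(r"[Dd]enies smoking", note))

print(extracted)
```

Real systems use trained language models and negation-aware medical NLP rather than regexes, but the derivation step is the same: structured elements come out of the dictation instead of relying on what was typed into a field.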
Eric Just: Thank you. The next question is really about the pace of discovery that needs to take place in order to make precision medicine a reality. Precision medicine seems to require such an accelerated pace of transforming research discovery into clinical practice compared to what we have today. In our discussions with health systems across the country, there are actually still quite a few holdouts who really see a big seam and a division between research and clinical practice. They talk about how we need to set up separate, specialized research systems, and they have to be cordoned off from all the systems that we use for clinical care. I guess, what advice would you have for CIOs at organizations that have a precision medicine strategy, regarding the integration of research and clinical care infrastructure?
John Halamka: This is a brilliant question. Yesterday, I was at a major medical center outside of the Boston area. I won’t mention which one. We were looking at some of the data analytics planning, some of the innovation that they had. They said, “You know, we of course have completely separated our clinical and research requirements, our data storage, and our workflow processes.” I said, “That seems entirely loony to me,” because there is no clear demarcation between what is data for clinical use, and data that is for research use.
Of course there are regulatory issues, and 21 CFR Part 11, and all that. I get that, but, to me, I think we have to have the attitude that everything about a patient, with appropriate human subjects IRB approval, and consent and all the rest, is important to gather for both clinical care and research, and there should not be a firewall between those two data sets. Think of yourself as building a data lake, and it is filled with richness, and then it is used for the appropriate purpose based on, as we talked about, the metadata and provenance of the information.
Eric Just: Yeah. That’s brilliant. I think there’s been a lot of progress made in data security and the ability to have data integrated at one layer, but secured and available to the appropriate parties at the top level of the data stack. Out of curiosity, do you see there being a need for massive regulatory reform, or do you feel like a lot of the issues are really based on interpretation, and maybe lack of technical vision for how they can be implemented at a system level?
John Halamka: What a fascinating question. Remember, I served the Bush administration for four years, and the Obama administration for six, and was there during a lot of the seminal regulatory work that brought us meaningful use, the HIPAA Omnibus Rule, the Affordable Care Act, and ICD-10. I think, based on what I see from CIOs, from administrators, from docs, this is not an era for more regulation. People are saying, “We now have these new tools, and we have these new techniques, and we need time for experimentation and pilots. We need … ” Maybe that’s fine, government, you can tell us, “Go and achieve an outcome,” but don’t tell us how.
I really believe, and this is going to sound a little strange, but the next several years belong to the private sector. They belong to the innovators. They belong to the 26-year-olds in their garages. I actually would hope we don’t have more regulation in the near term, because I think doing so would constrain innovation.
Eric Just: Excellent. Here’s a question about the pace that we can expect to see precision medicine truly impacting. We know that there’s great stories about it, and there’s great pockets of it, but on a massive scale, there’s plenty of skeptics about precision medicine having a place in a value-based health system, when you really look at the total cost, so the increased funding for research, the increased technology investment that would be required. What do you see as a reasonable timeframe for the precision medicine to be driving value into our health system on a national scale?
John Halamka: Well, of course you ask a really provocative question. I’m guessing, Health Catalyst, you’re somewhere near Utah. I’m just guessing, but let’s say the United States comprises several countries: the east coast, the west coast, the Midwest, the south, and Texas. I say that because you actually have to look at each region of the country slightly differently.
I could argue that in a value-based purchasing world with risk contracts, we’re already starting to get to elements of precision medicine on the east coast today. Why? Because our financial livelihood depends on it. What do I mean? Today, in our ACO of 3,000 doctors covering New England, we have 26 different EHRs. You say, “Wait a minute. 26 EHRs, how could you let this happen?”
These are EHRs I didn’t buy, I didn’t implement. It’s a two-doctor practice in some rural area. We don’t own them. We can’t really tell them what to do, but we can say there are certain data elements we need. There are about 150 data elements that we need. They’re all part of the meaningful use common data set, and we aggregate, for every patient encounter at every site of care, inpatient, outpatient, urgent care, all these data elements that are necessary for us to do comparative benchmarking, quality analytics, care pathways, care management, and these kinds of things, into a single normalized data set.
We do that purely because that’s how we’re paid. I get $10,000 to keep you healthy. Unless I do that, I can’t manage your care, I can’t manage risk. Whereas, when I visit the Midwest, the Midwest says, “We’re still getting a lot of fee-for-service here.” They haven’t quite got to that level of data aggregation and curation.
I guess I feel like some of this precision medicine stuff we’ve been talking about is going to be happening in the next year or two in some areas, and five to 10 years in other areas, and that’s okay. The early pilots will tell us what works and what doesn’t work. I have a really great sense that even some of these larger companies, the Googles, and the Amazons, and the Apples, will be great facilitators of the acceleration you describe.
Eric Just: Okay, great. Thank you so much. One last question from me, and then we’ll cut over to the audience questions. I see genomics, of course, as a huge piece of the data puzzle. I think when people think of precision medicine, oftentimes genomics is the first thing that comes to mind. What do you see as the data element, or set of data elements, that is going to drive precision medicine right after genomics? What other missing pieces of data should organizations be planning on as they put in place a precision medicine strategy?
John Halamka: I use this term, exposome, which … What do I really mean by that? Again, it’s the experiences you have, the drugs and the bugs that you have been exposed to. It’s the things that are going to change your physiology. Let me give you a quick example.
My father grew up and lived in Iowa. It turns out that if you actually look at who gets multiple sclerosis, it’s people in the northern latitudes. It may very well be that there’s a virus that lives in the northern latitudes that confuses your immune system, and makes your own system attack myelin. So to know that he grew up in Iowa, and when he was there, and what he experienced, would fundamentally change the kind of diagnostic certainty I would have about multiple sclerosis. So it’s your medical record, plus your genome, plus your exposome.
Eric Just: Exposome is a great concept. Perfect. Thank you. We’re going to go to some audience questions here. We have a question, this is a little bit long but I think it’s a really good question. It says, “A lot of health IT suppliers and buyers have been looking to population health as the way forward for health, for care, and in population health the lens seems to diverge out in looking at populations and/or large patient panels for insights. Now we are increasingly hearing of the promise of precision medicine, which seems to contrast with population health by converging in or down to the level of the individual patient. This seems to be at the opposite side of the philosophical spectrum. Do you see these two domains as complementary, or at odds with each other? Why?”
John Halamka: I see them as complementary. As I tried to say in my introduction, I think of this as a journey. I was trained in medicine in the early eighties. In the early eighties, I was told that Azithromycin was a really great drug for community-acquired pneumonia. Why was I told that? Because I trained in county hospitals, and it was cheap.
But you know, if you’ve ever taken Azithromycin, it makes you vomit. So wouldn’t it be good to know, as a first cut, that for populations Azithromycin causes vomiting? And the precision is whether the patient in front of you does or doesn’t process Azithromycin or a macrolide alternative. I think we’ll start with gathering data about populations, and focus our care so that we get protocols, and guidelines, and pathways that are at least a bit more refined than, “I learned that in 1980, and therefore I will do it, and it’s seemingly based on my anecdotal experience that you get care.” No, it’s based on populations like you, to the point where now I can refine it even further and get more specific. I think they’re very complementary and on the same journey.
Eric Just: Thank you. Another question from the audience is, how do you get the ML results back into the workflows you spoke of?
John Halamka: Whoever asked that is a brilliant person.
Eric Just: That’s Nick Furci by the way, brilliant person.
John Halamka: Okay. The answer is, that’s so right. If you have some disconnected app, or disconnected website, and you force a clinician to leave the EHR workflow to go use that disconnected website or app, they’ll hate you even more. They’re already frustrated by their EHR. So look at the FHIR clinical decision support, or CDS Hooks, spec.
What does that do? It enables me to be in an EHR, Epic, Cerner, Meditech, Athena, whatever, and call out to a clinical decision support cloud. That could certainly be a machine learning application, it could be an analytic application, it could be a set of rules, all kinds of things, and I get a pleasing result back in my EHR, in my workflow.
This, I think, is a big trend as we look to this FHIR API enablement. The idea that I can send data to and from other apps inside a workflow is really key. Because you know, Epic, Cerner, Meditech, Athena, they all innovate. That’s fine, but I think more innovation’s going to be coming from smaller, agile companies creating a set of services that can be consumed by these external calls. The FHIR CDS Hooks spec is going to be a really essential linkage back to the workflow.
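The call-out pattern Dr. Halamka describes can be sketched as a CDS Hooks service handler. The top-level field names (cards, summary, indicator, source) follow the public CDS Hooks specification, but the hook handling, the service logic, and the alert content here are illustrative assumptions, not anything from the webinar or a real vendor integration.

```python
# A minimal sketch of a CDS Hooks service: the EHR invokes a hook (e.g. when
# a chart is opened), the cloud service returns "cards" that the EHR renders
# inside the clinician's workflow.
def patient_view_service(request):
    """Handle a 'patient-view' hook invocation and return decision support.
    The service behind this could be machine learning, analytics, or rules."""
    patient = request.get("context", {}).get("patientId", "unknown")
    card = {
        "summary": "Pharmacogenomic alert",
        "indicator": "warning",  # per spec: info | warning | critical
        "detail": (f"Patient {patient} may be a poor metabolizer; "
                   "consider an alternative agent or adjusted dose."),
        "source": {"label": "Example decision-support service"},
    }
    return {"cards": [card]}

# Simulated hook invocation as the EHR would send it.
response = patient_view_service({"hook": "patient-view",
                                 "context": {"patientId": "123"}})
print(response["cards"][0]["summary"])  # Pharmacogenomic alert
```

In production this handler would sit behind an HTTPS endpoint the EHR discovers and calls, which is what keeps the clinician inside their workflow instead of in a disconnected app.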
Eric Just: Great. This next question is about liability management. How do you expect liability management will change at hospitals? That is, do you think it will move in parallel with research demonstrating value in new delivery paradigms, or will it lag, and why?
John Halamka: Sure. Let me give you an example. It turns out, about a decade ago, one of our major payers in Massachusetts said, “We don’t like paying for MRIs, CTs, or PET scans. So what we’re going to have you do is call a 22-year-old triage clerk, and argue for 20 minutes over the appropriateness of every high-cost radiology test you ordered.”
We thought, “God, that doesn’t sound right. It sounds like a huge waste of time and energy. Can’t we agree on a community standard, and that will be the American College of Radiology rule sets customized by our cohort of payers and providers in our region? Therefore, if you call out to this cloud-hosted service, and you perform the community standard, you will then have done what is considered appropriate practice.”
I mention this because in the world of malpractice assertions, really it’s not a question of good outcome, bad outcome. It’s, did you follow the community standard? My experience is that as we get more auditable, repeatable kinds of protocols and pathways, based on community standards, that give you that information inside your workflow, you’ll actually see … well, maybe not fewer assertions, but fewer malpractice settlements, and potentially a reduction in malpractice premiums. That would be my hope.
Eric Just: Excellent. Thank you. The next question, I’m going to combine two questions. One person said, “Can you comment on pharmacogenomics’ relevance today, if any?” Another person asks, “How do pharmacists get involved in data and analytics and using precision medicine?”
John Halamka: That’s really important. There are some obvious things. In your genome, do you have a G6PD deficiency, so that you can’t process sulfa drugs? I mentioned this company that was sold, a cloud-hosted genomics analysis company in San Diego. Their idea was that, yes, send me your meaningful use common data set, and send me genomic markers or genomes that you may have on the patient, and I will, within a few seconds, be able to say, “Do not use this particular medication in this particular patient, because they’re not able to metabolize it.”
So certainly the idea that a metabolic yes or no, based on biomarkers, could be important. But it does get more subtle, and that is, my wife, as a Korean female, turns out to process Taxol differently than would a Nordic woman. The interesting issue is we had to look at what doses of Taxol would be appropriate for a Korean woman, and the answer is no clinical trial has ever been done. So we used anecdotal evidence, and we looked across all the patients of the Harvard-associated hospitals, and determined that cutting her Taxol dose in half seemed like a reasonable thing to do to avoid side effects like neuropathy.
This idea of getting to the point where we can decide, based on race, ethnicity, or some other characteristics about you, how to avoid side effects and get a better therapeutic effect, is important. We did it with my wife based on anecdotal data, and there are companies emerging that try to do it a bit more scientifically.
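The metabolic yes-or-no check described in this answer amounts to a lookup from genomic markers to contraindicated drugs. The sketch below is a toy illustration under that assumption; the marker names and drug pairings are simplified textbook examples, not clinical guidance, and real services reason over full genomes plus the clinical record.

```python
# Toy contraindication table keyed on genomic markers -- illustrative only.
# G6PD deficiency impairing sulfa-drug handling is the example from the talk;
# the other entries are common pharmacogenomics examples, not from the source.
CONTRAINDICATIONS = {
    "G6PD_deficiency": {"sulfamethoxazole", "dapsone", "primaquine"},
    "CYP2D6_poor_metabolizer": {"codeine"},
}

def check_order(patient_markers, drug):
    """Return the patient's markers that flag this drug order as unsafe."""
    return [marker for marker in patient_markers
            if drug in CONTRAINDICATIONS.get(marker, set())]

# Within seconds of receiving markers plus a proposed order, the service
# can answer "do not use this medication in this patient."
flags = check_order(["G6PD_deficiency"], "primaquine")
print(flags)  # ['G6PD_deficiency']
```

The subtler dosing cases, like the Taxol example, need population-stratified evidence rather than a binary table, which is exactly why no simple lookup existed for that decision.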
Eric Just: Thank you so much. We’re nearing the top of the hour. Dr Halamka, do you think you could go a few minutes over? We do have some additional questions. If you’re not available, we can just move on, but we probably have another five minutes of questions that we could answer, if you have time.
John Halamka: Go for it.
Eric Just: All right. The question is, the next question is, does precision medicine change the way we underwrite health insurance?
John Halamka: Wow, fascinating question there. The answer is I hope not. I mean, that is, maybe if it makes it cheaper, great, but don’t ever use it as a preexisting condition. Let me give you a quick story.
When I was sequenced, I had to sign a 35-page consent, because this was before the era of the Genetic Information Nondiscrimination Act, or GINA. What did we do? The CEO of my health insurer, Charlie Baker, now the governor of Massachusetts, and my employer, Paul Levy, the CEO of Beth Israel Deaconess, stood on stage with me and said, “I will insure you forever. I will employ you forever. Here’s the genome, released, no risk.”
I guess it would be great if being a vegan, with no heart disease or stroke risk, I could get a reduction in my premium. Call it a good eating discount. But I would hate to see people penalized because of genomic risk. I think that would truly be discrimination.
Eric Just: Yeah. That movie, Gattaca, I don’t know if you’ve seen that, but that was-
John Halamka: Yes.
Eric Just: … where that played out. I think that was a good indication of what people’s fears are about all this.
John Halamka: Agreed.
Eric Just: I’m going to combine two questions again. The first question is, how do we as patients get our genome mapped, and use that for future healthcare, especially those of us with chronic conditions? The other person asks, if genomics is a key factor in precision medicine, should we have precision medical care be started at birth? If we do, should physicians get basic data science training?
John Halamka: Wow, all great questions. The Personal Genome Project, which is at personalgenomes.org, is enrolling and creating 100,000 genomes. Francis Collins, I hear, is going to have a million patients sequenced in the project that he’s currently leading through NIH.
I think you’re seeing that your clinician might get directly involved in certain kinds of sequencing, depending on your disease states, but you’re also starting to see, as a society, this notion of having sequencing done to help us prevent illness in the future, and to help give you better care today. It’s going to be important, and NIH is making a pretty big investment in it.
And yes, I mean ask yourself this interesting question. I am, as I said, 56. I have glaucoma, and I, by the way, inherited glaucoma from my father, who got it from his father. I actually didn’t really get the straight family history on glaucoma, and it was discovered after I had some slight field loss in my left eye.
Wouldn’t it be cheaper for society if at birth we looked at the genetic risks you have by sequencing you, and said, “By the way, you should eat this, exercise this, take this medication, and get yearly eye exams,” so that we can actually avoid costs to society in the future? Given the genome is 500 bucks per person to sequence, I bet we could easily avoid $500 in lifetime cost per person by doing early sequencing near birth.
Eric Just: Great. This next question is something that you’ve kind of answered, but I wanted to ask it to make sure that there’s not additional commentary you could add to this. The question is, will the EHR be the precision medicine … Sorry. Will the EHR be where the precision medicine information, such as genomics, resides for clinicians at the point of care, or will it be in some other type of application?
John Halamka: I use this term, EHR-plus, which is: the EHR will be with us forever, because it is a way of capturing data, being compliant, and ensuring that there’s data integrity. But the EHR has to be surrounded by an ecosystem, an ecosystem of cloud-hosted services.
For example, at the moment we are not putting internet of things data inside the EHR itself. What we’re doing is putting it in an external cloud, and then making insights about the internet of things data available inside the EHR. I think you’re going to see this hybrid: some data in the EHR, but a whole lot of other data, be it genomics, internet of things data, or subjective data from the patient, stored elsewhere but linked back to the EHR.
Eric Just: Great. This one is specifically about the BIDMC@Home application. It says, “How is medication adherence in the BIDMC@Home documented? Does the patient input the data, or is there another manner of documentation? How reliable is the data? Patients can document medication administration without actually taking the medication.”
John Halamka: Yeah. I’ll answer this in two ways. What we show the patient is a very simple touchscreen-based list: here are the three things you should take, did you? And the patient or their family member just touches: yes, yes, no. So sure, could they fake it? Yeah. But really, this is a measure for us, we hope, of a shared medical record between patients and their clinicians, where there is transparency from both, for the benefit of both.
Now, sure, there are emerging companies like Proteus. Proteus puts a little bit of an ion in each pill, and gives you a wearable, so that we can actually detect when that ion appears in your bloodstream. That is the validator that you really took the pill, because it’s in your bloodstream. We’re not quite there yet. We’re simply asking patients and families to self-report.
Eric Just: Thank you so much. I think we’ll end here, at five minutes over. Thank you so much for going over, and thank you for sharing your thoughts and vision about precision medicine. It’s been a lot of fun working with you.
John Halamka: Excellent. I look forward to the continuing dialogue.