Data for Improving Healthcare vs. Data for Exasperating Healthcare Workers


The phrase “healthcare data” evokes either fear and loathing or understanding and resolve in the minds of administrators, clinicians, and nurses everywhere. Which emotion it brings out depends on how the data will be used. Data employed as a weapon for accountability generates fear. Data used as a teaching instrument for learning inspires trust and confidence.

Not all data for accountability is bad. Data used for prescriptive analytics within a security framework, for example, is necessary to reduce or eliminate fraud and abuse. And data for improvement isn’t without its own faults, such as the tendency to perfect it to the point of inefficiency. But the general culture of collecting data to hold people accountable is counterproductive, while collecting data for learning leads to continuous improvement.

This isn’t a matter of eliminating what some may consider to be bad metrics. It’s a matter of shifting the focus away from using metrics for accountability and toward using them for learning so your hospital can start to collect data for improving healthcare.

Data for Accountability

Data for accountability is a regulatory requirement. It’s time-consuming and can be less than perfect. But it has made a difference in some areas: readmission and cancer rates are decreasing, and patient satisfaction scores are increasing.

But an accountability model of results improvement places the focus on people rather than processes. This watchdog approach tends to make people defensive and resistant to learning, and it impedes continuous improvement. It initiates the cycle of fear, first described by Scherkenbach (Figure 1), in which fear of repercussion leads to denial and blame shifting, followed by other negative behaviors that perpetuate a culture of fear. Yet most errors that jumpstart this cycle are the result of flawed processes rather than flawed individuals.


Figure 1: The Cycle of Fear (Scherkenbach, 1991).

Data for accountability can be not only punitive, but also detrimental to improving outcomes. For example, a well-known CMS core measure is the 30-day, all-cause, risk-standardized readmission rate following hospitalization for heart failure. In the process of complying with this metric, it’s possible that patients who should be back in the hospital are not being readmitted. And in the process of hitting a metric for administering beta-blocker therapy, it’s possible to obscure notes to ensure a favorable entry in a patient’s record.

Data collection for accountability is time-consuming. According to a survey published in Health Affairs, physician practices in four specialty areas spend more than 785 hours per physician each year reporting on quality measures. Staff spend 12.5 hours and physicians spend 2.6 hours per week on this work. And when anticipating reward or punishment based on a metric, healthcare systems dwell excessively on data accuracy. A lot of effort goes into determining which patients should or should not be included in any given metric.

Deming and the Over-Emphasis of Numbers

Deming said the right amount of effort focused on a process improves it. The right amount of focus means recognizing the many different factors that play into a number and learning what you can, but not over-emphasizing the importance of that number.

Too much focus on one number sacrifices quality in other areas. For example, say there are five key factors that impact diabetes care, but only one is tied to an accountability report or a bonus metric. The other four may suffer because clinicians are focusing on the one incentivized factor rather than on all five that affect the patient’s health. This is sub-optimization, a negative byproduct of collecting data for accountability.

Because of the sheer number of regulatory metrics and the limited time available, many organizations sub-optimize: they pour their effort into the regulatory metrics that hold them accountable for compliance, and can’t focus on improving other metrics that have a greater impact on cost of care and patient health.

Extreme focus invites gaming the system by manipulating numbers. Because so much emphasis is placed on the metric rather than the improvement, there are a variety of disingenuous ways to game the system: forging documents, stretching the truth, burying notes, refusing to see patients who could negatively impact a metric. Gaming the system goes all the way to cheating. Processes aren’t improved at all; only the reporting is.

If organizations put as much effort into improving the process as they do into documenting their results, they wouldn’t need the regulatory metric in the first place. To avoid penalties down the road, they spend thousands of hours refining a definition for accountability when the time could be much better spent improving processes.


Figure 2: Ways to get a better number (“Healthcare: A Better Way,” 2014).

This is like the student who studies just to pass the test but never gains any knowledge. He memorizes answers to specific problems without realizing that the whole reason for the test is to understand physics or philosophy, not just to get an A. When healthcare workers go back and alter data, it doesn’t improve patient care.

Data for Improving Healthcare

Data for improving healthcare, or data for learning, requires an internal strategy that addresses specific areas of clinical process improvement, quality, and cost control. While accountability data is driven by external reporting requirements, clinical or operational data for outcomes improvement is driven by a strategy to better understand a process and the root causes of its failures. This analysis deepens understanding of patient populations, as well as clinical and cost outcomes, by revealing the process variation that leads to different results.

Data that’s tracked to establish a baseline, and then compared against that baseline to determine whether an improvement occurred, is data for learning.

If the focus is on learning rather than reporting, the data can be much more revealing. Consider again a cohort of heart failure patients. If we measure whether patients are taking a medication simply to track compliance, learn from the results, and help more patients stay on therapy, then the data’s purpose changes. It becomes about learning and improvement.
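To make the baseline-comparison idea concrete, here is a minimal sketch in Python. It assumes hypothetical monthly medication-compliance rates for a heart failure cohort before and after a process change; every number and variable name is illustrative, not drawn from any real dataset or tool.

```python
# Data for learning: compare a metric against its own baseline to see
# whether a process change helped. Every number here is hypothetical.

# Monthly medication-compliance rates for a heart failure cohort
baseline = [0.62, 0.60, 0.63, 0.61]  # months before the process change
current = [0.68, 0.70, 0.69, 0.71]   # months after the process change

baseline_mean = sum(baseline) / len(baseline)
current_mean = sum(current) / len(current)

# A sustained rise over the baseline suggests the new process is working;
# the goal is directional learning, not audit-grade precision.
print(f"baseline: {baseline_mean:.1%}  current: {current_mean:.1%}  "
      f"change: {current_mean - baseline_mean:+.1%}")
```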

Data for Learning Has Its Drawbacks

The people and processes involved with collecting data for learning can still fall into some of the pitfalls associated with collecting data for accountability. A lot of time is spent reporting on quality metrics, often at the expense of actual improvement. Too often, the focus is on simply reporting or achieving a ranking. It’s up to each healthcare organization to use data and metrics effectively to learn about its own processes.

Time is also a factor with data for learning. Too often, perfect data becomes the goal for learning and improvement work, when all that’s needed is data sufficient to reveal whether a new process is better than the old one. Clinicians sometimes shoot for a level of data accuracy required only of audited financial reports or a double-blind study, when all they need to know is whether a new process is helping patients more than the old one. Should a new discharge process be adopted, or is the existing process better? A quick solution might be to set up a quasi-experimental A/B test, as sketched below: try the two processes in two units for a month, get rough numbers, and then determine which works better. A service line could start using those kinds of numbers without their being 100 percent accurate. Eighty percent accuracy can still be very useful, and much more cost effective.
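As a rough illustration of that kind of “good enough” comparison, the sketch below runs a simple two-proportion z-test on made-up readmission counts from two units trialing different discharge processes. The test itself is standard statistics, but every count in the example is hypothetical.

```python
# Quasi-experimental A/B comparison: two units run two discharge
# processes for a month; compare readmission proportions with a
# simple two-proportion z-test. All counts are made up.
from math import sqrt

readmits_a, patients_a = 18, 120  # unit keeping the existing process
readmits_b, patients_b = 10, 115  # unit trying the new process

p_a, p_b = readmits_a / patients_a, readmits_b / patients_b
pooled = (readmits_a + readmits_b) / (patients_a + patients_b)
se = sqrt(pooled * (1 - pooled) * (1 / patients_a + 1 / patients_b))
z = (p_a - p_b) / se

# |z| near 2 is roughly a 95 percent signal; a clear directional
# difference is often enough to pick a process and keep iterating.
print(f"existing: {p_a:.1%}  new: {p_b:.1%}  z = {z:.2f}")
```

Rough numbers like these won’t survive peer review, but they don’t need to; they only need to tell a service line which process to scale while it keeps measuring.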

Developing New Measures for Learning

Eventually, the industry will adopt new measures that matter most to patients, such as how quickly someone can return to work following hip replacement surgery. The International Consortium for Health Outcomes Measurement (ICHOM) focuses on outcomes measures (metrics for learning) and has developed Standard Sets for 13 conditions. ICHOM plans to publish additional sets covering 50 percent of all diseases by 2017. Health Catalyst’s care management applications will be able to capture this type of data, as well as patient-reported outcomes, both subjective and objective.

Accountability vs. Learning: Another Perspective

In January 2003, Brent James, MD, Executive Director of the Intermountain Institute for Healthcare Delivery Research; Don Berwick, MD, the former CMS Administrator; and Molly Coye, MD, the former Chief Innovation Officer for UCLA Health, wrote in the journal Medical Care about the connections between quality measurement and improvement. The article describes two pathways for “linking the processes of measurement to the processes of improvement.”

  1. The Selection Pathway (accountability)

The selection pathway is based on ranking and reputation. Patients select clinicians, employers select health plans, rating agencies accredit hospitals, and doctors refer to other doctors, all based on measured performance. The purpose behind this pathway is not only to hold people accountable, but to judge whether a physician or hospital is good or bad. Supposedly, when people self-select away from a bad physician or hospital toward a good one, that motivates a change in behavior. This pathway doesn’t lead to improvement by underperformers, but the outcomes of care appear to improve because of the self-selection process.

  2. The Change Pathway (learning)

The change pathway seeks to understand processes in order to improve them. People and organizations are not labeled as good or bad; processes are labeled as more or less effective. This way, the data’s purpose becomes helping an organization learn about the breakdowns and failures in a process, not judging whether people or organizations are performing poorly.

The article makes this analogy: “A grocery shopper can select the best bananas without having the slightest idea about how bananas are grown or how to grow better bananas. Her job is to choose (Pathway I). Banana growers have quite a different job. If they want better bananas, they have to understand the processes of growing, harvesting, shipping, and so on, and they have to have a way to improve those processes. This is Pathway II.”


Figure 3: In this illustration of accountability (the Selection Pathway), outliers below a minimum standard are identified and reduced, but no real improvement occurs.


Figure 4: In the more effective approach to improvement, the focus is on better care processes to shift the overall curve.

The Synergy of Data for Accountability and Improvement

Regardless of the intent behind data collection, it must be used responsibly. Extravagant executive dashboards can be an overuse of data, especially if no improvement comes from the effort; the diverted effort has nothing to do with outcomes improvement and distracts from other metrics that deserve attention. A bottom-up approach, where improvement efforts are designed around actual processes, is preferable to a top-down approach that creates fire drills and produces wasted work.

As James and others wrote: “‘Pathway I’ (Selection) can be a powerful tool for getting the best out of the current distribution of performance. ‘Pathway II’—improvement through changes in care—can shift the underlying distribution of performance itself.”

We are getting smarter about this as an industry. It’s fair to say that regulatory and clinical metrics can play on the same team. What needs to change is the overriding focus on accountability and judgment, so everyone can achieve the desired goal of learning and improvement.
