The U.S. spends 18 percent of its Gross Domestic Product (GDP) on healthcare, yet, by some measures, is the least healthy of all its peer countries. Approximately $1 trillion is thought to be wasteful spending and 14 percent of that ($140 billion) is due to clinical waste. Healthcare organizations can change this wasteful trajectory by applying quality improvement methods to improve their processes.
Because healthcare is complex, many in the industry believe that the controls and standardization suggested by quality improvement methods are difficult to adopt. But general quality improvement methods—defining quality, developing improvement measures, identifying variation, using control charts, and running Plan-Do-Study-Act (PDSA) cycles—have been successfully applied to healthcare processes and outcomes.
With the proper application of data and analytics within the appropriate quality improvement framework, healthcare organizations can approach quality control and improvement scientifically—and effectively.
Quality improvement methods have been commonly used in agriculture and manufacturing environments built on processes, but some believe these methods can’t be applied to healthcare because of its craftsmanship nature. Patient care isn’t typically viewed as a process that can be improved. Clinicians rely on their expertise to care for patients, making tailored decisions one case at a time. One of the biggest barriers to quality improvement in healthcare is not understanding that systems and processes may coexist with personalized care. With this understanding, quality improvement efforts can center on routines while clinicians still deliver unique patient care.
When applying quality improvement methods and tools to healthcare, there are five guiding principles healthcare organizations should consider.
Simply exposing clinicians to ideas and discussing case studies around quality improvement doesn’t motivate them to adopt improvement initiatives. Quality improvement theory and methodology are better learned through hands-on improvement work—applying them to the actual clinical environment. Identifying an area that is important to clinicians and creating the platform for improvement will facilitate adoption.
Getting agreement on the definition of quality in any particular context establishes what to measure and how to collect data on those measures. The Institute of Medicine (IOM) developed a quality framework around six aims for healthcare systems, but the most salient one for defining quality states that measures should be patient-centered: “Providing care that is respectful of, and responsive to, individual patient preferences, needs, and values and ensuring that patient values guide all clinical decisions.”
Definitions of quality should include what’s important to the patient (patient-reported outcomes, or PROs). Are patients with chronic disease getting the best care? How is their quality of life? Health systems are still learning how to routinely measure PROs to understand if they are focused on improvements that matter to the patient. In addition to focusing on the patient-centered dimension of care, quality improvement efforts also focus on safety, effectiveness, efficiency, and timeliness.
The IOM quality framework also defines quality in terms of healthcare equity: “Providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status.” A good operational definition of quality extends improvement to all population segments and closes care equity gaps.
Data and measurement power quality improvement, but this is where healthcare is tougher than other industries. When clinicians first hear about quality improvement measures, they equate them with performance measures, which imply accountability. It’s important to separate measures for improvement from measures for accountability.
Measures for accountability are typically converted to percentages. For example, an accountability measure collects data on the percent of ER patients who waited for more than 30 minutes. Management is held accountable for keeping wait times under 30 minutes. An improvement measure collects actual wait time data in minutes to measure system (not people) performance, so a process can be improved. Improvement projects may add to the existing measures workload, but improvement measures create high-value data that leads to dramatic improvement, ultimately saving time and resources.
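The distinction between the two kinds of measures can be sketched in a few lines of Python. The wait-time figures below are made up for illustration; the point is that the accountability view reduces the data to a single pass/fail percentage, while the improvement view keeps the raw minutes so the team can study how the system actually behaves.

```python
from statistics import mean, median

# Hypothetical ER wait times in minutes for one week (illustrative data only).
wait_times = [12, 45, 28, 33, 19, 52, 24, 31, 27, 38]

# Accountability view: a single percentage, useful for reporting,
# but it hides where and why delays occur.
pct_over_30 = 100 * sum(t > 30 for t in wait_times) / len(wait_times)

# Improvement view: the raw minutes describe system (not people) performance
# and can be plotted over time to study variation in the process itself.
print(f"Accountability measure: {pct_over_30:.0f}% waited over 30 minutes")
print(f"Improvement measure: mean {mean(wait_times):.1f} min, "
      f"median {median(wait_times):.1f} min, max {max(wait_times)} min")
```

On this sample, half the patients exceeded the threshold, but only the raw data reveals that the worst waits reach 52 minutes, which is what an improvement team would investigate.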
Several frameworks have been adopted for quality improvement in healthcare:
The Model for Improvement framework proposes that an improvement team should ask three fundamental questions:

1. What are we trying to accomplish?
2. How will we know that a change is an improvement?
3. What changes can we make that will result in improvement?
To answer these questions, the improvement team sets goals, aims, and interventions in pursuit of high-value improvement measures.
Each intervention goes through a PDSA cycle to test its validity and to adapt it to the specific context.
While it can appear simplistic compared to other methodologies, the PDSA cycle, in repeated application, is the backbone of quality improvement:

1. Plan: define the change to test and predict its effect.
2. Do: carry out the test, ideally on a small scale.
3. Study: compare the results against the prediction.
4. Act: adopt, adapt, or abandon the change based on what was learned.
An improvement project usually involves several PDSA cycles. The key to quality improvement success is understanding that PDSA is an iterative process. After each cycle, the improvement team assesses the success of the associated intervention. At some point, the intervention is adopted or abandoned, which indicates the end of PDSA cycles for that intervention. Then the team can move to the next intervention. Reaching the global aim indicates completion of the overall quality improvement project.
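The iteration described above can be sketched as a small loop. Everything here is hypothetical scaffolding: in practice Plan, Do, Study, and Act are human activities, not function calls, and the stand-in "intervention" below simply simulates a wait-time test that improves each cycle.

```python
def run_pdsa_cycles(intervention, max_cycles=10):
    """Repeat PDSA for one intervention until it is adopted or abandoned."""
    for cycle in range(1, max_cycles + 1):
        prediction = intervention["plan"]()                  # Plan: predict the effect
        result = intervention["do"]()                        # Do: test on a small scale
        verdict = intervention["study"](prediction, result)  # Study: compare to prediction
        if verdict in ("adopt", "abandon"):                  # Act: decide and move on
            return verdict, cycle
        # verdict == "adapt": refine the change and run another cycle
    return "abandon", max_cycles

# Illustrative intervention: measured ER wait time (minutes) improves per cycle.
results = iter([34, 31, 27])  # made-up data for three test cycles
intervention = {
    "plan": lambda: 30,  # prediction: waits at or under 30 minutes
    "do": lambda: next(results),
    "study": lambda pred, got: "adopt" if got <= pred else "adapt",
}
verdict, cycles = run_pdsa_cycles(intervention)
print(verdict, cycles)  # the change is adopted on the third cycle
```

The loop mirrors the structure of a real project: each intervention cycles until adopted or abandoned, and a project strings many such loops together, as in the 36 cycles over 10 interventions documented by Zafar and others.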
This improvement process is well-illustrated in a recent paper by Zafar and others, which documents 36 PDSA cycles over 10 interventions during an improvement project to reduce COPD readmissions. This is an excellent example of properly applying quality improvement in healthcare and shows how the model works with aims, measurements, and change theories.
Just as PDSA cycles power quality improvement, data powers PDSA cycles. Improvement teams study data to learn about problems within a system or process, and then implement improvement steps. Comprehending variation in data is a vital component of this study.
A deep knowledge of the Model for Improvement framework helps the team accomplish the improvement goal. Part of this knowledge comes from understanding variation in data and the causes of that variation.
Healthcare processes involve both intended and unintended variation. Intended variation is purposely deciding to do something a different way. It’s what defines patient-centered care. Clinicians sometimes resist the idea of reducing variation because it’s part of their everyday practice. They purposely prescribe one dosage or treatment to one patient, and another dosage or treatment to the next. Intended variation is a desirable practice and part of the job description.
The theory of variation was proposed for identifying and removing unintended variation. Multiple systems with multiple sets of unintended variation create significant unnecessary costs. Unintended variation occurs when several clinicians in the same practice prescribe different antibiotics for the same problem without a specific rationale for, or awareness of, the variation. If the variation isn’t thoughtful or it’s out of habit or convenience, then it’s unintended. But if each clinician has a rationale behind their individual choices and the variation continues, then it’s intended.
Walter Shewhart developed the concept of common cause and special cause variation. Common causes are an inherent part of a system or process that impact all people and outcomes. Special causes arise from specific circumstances that impact only a subset of people or outcomes. Understanding common cause and special cause variation helps health systems identify the changes they can make to result in improvement. Studying common cause and special cause variation is the cornerstone of improvement because it shows why the variation occurred and suggests the most effective approach to address it. Shewhart’s control chart method provides this insight.
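A minimal sketch of Shewhart's method for individual measurements (an XmR chart) shows how control limits separate the two kinds of variation. The infection rates below are invented for illustration; sigma is estimated from the average moving range divided by the standard d2 constant of 1.128, a common convention for individuals charts.

```python
from statistics import mean

def xmr_limits(baseline):
    """Center line and 3-sigma limits for an individuals (XmR) control chart.
    Sigma is estimated from the average moving range (d2 = 1.128)."""
    center = mean(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    sigma_hat = mean(moving_ranges) / 1.128
    return center, center + 3 * sigma_hat, center - 3 * sigma_hat

# Hypothetical monthly infection rates per 1,000 line-days (illustrative data).
baseline = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2, 2.1, 1.9]  # stable period
new_points = [3.6, 2.0]                               # recent months

center, ucl, lcl = xmr_limits(baseline)
for month, rate in enumerate(new_points, start=len(baseline) + 1):
    if rate > ucl or rate < lcl:
        print(f"Month {month}: {rate} outside limits -> special cause, investigate")
    else:
        print(f"Month {month}: {rate} within limits -> common cause variation")
```

The spike to 3.6 falls beyond the upper control limit and signals special cause variation worth investigating, while 2.0 stays within the limits and reflects only the common cause variation inherent to the process, which is exactly the distinction the Cincinnati case below turns on.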
A case study from Cincinnati Children’s Hospital Medical Center shows the impact of special cause variation on catheter-associated bloodstream infections (CA-BSIs). The hospital had multiple improvement projects working on the common causes of variation associated with CA-BSI rates. It was showing significant improvement over an eight-year period, and then rates unexpectedly increased beyond the upper control limit. The hospital conducted a series of investigative studies that indicated special cause variation in two units. It discovered that a new medical device had been introduced that wasn’t part of the system. Because it was monitoring through control charts, it was able to pinpoint when the problem occurred and remove the special cause. Control charts are one of five tools health systems can use to learn from variation in data.
These tools allow improvement teams to see the status of a whole system and use discoveries—revealed by variation in the data—to review and change processes.
The complexities of healthcare operations and the vast amount of variation and waste in U.S. healthcare make the undertaking of a quality improvement initiative seem like a distant possibility. But healthcare quality improvement is achievable when systems use the five principles outlined in this article as their guide, from getting clinician buy-in to using an improvement framework that’s based on scientific methodology. Health systems can change the dynamic and pace of quality improvement work by starting small—testing on a small scale and learning from those tests.