Healthcare Culture: Choosing a Systems-Based Approach Over Punishment and Reward


Most detective fiction follows a standard formula: a crime is committed, several clues are discovered, and, just before the story ends, the last of the evidence falls into place to reveal the villain. It starts with a victim and ends with a perpetrator. Life is rarely so neat. Yet, when something goes wrong or right, it’s safe to bet that the hunt for a scapegoat or a hero has begun.

Organizations in many industries have cultures of punishment and reward. In healthcare, this culture can put patients at risk. Despite efforts to avoid or change a shame-and-blame environment, healthcare is still prone to blaming individuals. A team member’s (nurse, physician, surgical tech, etc.) fear of repercussion (e.g., demotion, loss of respect and credibility among coworkers, or job loss) can outweigh their willingness to report an adverse event. This prevents their coworkers and organization from acting to mitigate harm or prevent similar problems in the future.

The Causal Chain of Adverse Events

CSI: Crime Scene Investigation was a popular television drama that aired from 2000 to 2015. An episode early in the show’s run, “Chaos Theory,” breaks from the familiar victim-perpetrator pattern. The victim, Paige Rycoff, is a college student who appears to have vanished from her dorm. 

When Paige’s body turns up, the autopsy reveals that she died from blunt-force trauma and internal bleeding, but it’s unclear what caused the trauma. Soon, however, the team works out a chain of unfortunate events: Before Paige left her room, she had taken out the trash. The trash chute in the hall had snapped shut and pulled the trashcan out of her hands. She had gone to retrieve it from the dumpster in a dark alley behind the dorm. As she climbed up the back of the dumpster, a driver sped by, unaware she was there. Swerving to avoid a parked car, he swiped the dumpster, crushing Paige against the building.

When forensics presents the evidence to Paige’s parents, they refuse to believe it was an accident. They want to blame an individual and are unwilling to accept that a tragic chain of events caused their daughter’s death.

Real Accountability Rests on the System

Just as some unfortunate incidents aren’t crimes but true accidents involving many factors, medical errors are rarely the sole responsibility of the individual who performed the procedure, ordered the test, or prescribed the medication.

Adverse events in healthcare often result from a causal chain like the one leading to Paige’s death: A nurse doesn’t receive a complete patient profile and is unaware that the patient they’ve admitted for surgery has diabetes, or whether the condition was well managed before surgery, so they don’t monitor the patient closely after surgery to keep the diabetes controlled. As a result, the patient has a poor outcome: a surgical site infection that requires significant additional care.

Who’s at fault in the above scenario? The nurse, who had no previous contact with the patient? The patient for not notifying hospital staff? Another individual?

It’s more likely the diabetes-related surgical infection was the result of a system that did not make current, thorough patient records, or the relevant preventive protocols, immediately accessible to frontline staff.

“Hindsight Bias” Overlooks All Contributing Factors

In the landmark study To Err Is Human, the National Academy of Medicine (NAM; formerly the Institute of Medicine) writes about the dangers of “hindsight bias” when reviewing adverse events in healthcare. Hindsight bias, the study explains, “misleads a reviewer into simplifying the causes of an accident, highlighting a single element as the cause and overlooking multiple contributing factors.”

As a result, blame lands on one person or one factor, and the organization is no closer to understanding the real problem, or improving its process. “Hindsight bias makes it easy to arrive at a simple solution or to blame an individual, but difficult to determine what really went wrong,” the report states.

The NAM study urges improvement of the system, not of the individual: “When [accidents] do occur, they represent failures in the way the systems are designed.” This is reminiscent of the work of one of the world’s leading quality improvement experts, W. Edwards Deming.

Deming is widely considered one of the greatest influences on Japan’s postwar rise to become the world’s second largest economy by the late sixties. His methodology, which looks at outcomes as the result of countless interlinked processes rather than standalone actions, is highly applicable to healthcare improvement.

Under Deming’s theory of management, the Deming System of Profound Knowledge®, improvement means everybody wins (from leadership and team members to consumers). With this systems approach, no individual or lone factor is to blame for failure or success.

The Weakness of a Punishment-and-Reward System

It’s time for the healthcare industry to question the value of punishing or rewarding individual workers for a system they didn’t design and have little power to change. If the system is responsible for most performance, then searching for scapegoats to blame or heroes to praise is irrational at best. 

In the report A Tale of Two Stories, the National Patient Safety Foundation (NPSF) suggests two different views of adverse medical events:

  1. First stories—the ultimate events, which grab attention. Examples of first stories are a wrong-side surgery or the over-administration of a sedative. First stories make the news; they’re immediate and visible, but superficial.
  2. Second stories—the often-overlooked chain of contributing factors that leads to the first stories. Second stories include the failure to properly prepare for surgery or a poorly implemented medication-reconciliation process. They’re enmeshed in the culture of the organization and its longstanding beliefs and practices. Second stories are essential to understanding the issue and making effective improvements to the system.

It’s important to remember that even though the NAM and the NPSF studies focused on adverse events, singling out good individual outcomes is just as irrational and just as harmful to the culture of the organization. There are bad first stories, like the surgery gone wrong, but there are also good first stories, like the surgery that goes well. In both cases, it’s important not to overlook the systemic second stories that contributed to the outcome, good or bad.

A Forward-Thinking Healthcare Culture Doesn’t Blame the Individual

An important recommendation in the NAM study is that administrators and managers provide a safe environment for clinicians and other staff to report errors without fear of retribution. As Dr. Lucian Leape of the Harvard School of Public Health said in a conversation with the Institute for Healthcare Improvement, “Creating an atmosphere where people on the front lines trust that management is indeed really interested in understanding errors rather than in just punishing people makes a tremendous difference in how people feel about their work.”

In the years since the publication of the NAM study, healthcare organizations have widely acknowledged the significance of a safe reporting environment, but full acceptance might still be lagging. For example, a 2011 study of Brazilian ICU nurses suggests that even though administrators purport to champion a culture of no punishment, nurses who do muster the courage to report adverse events may still be shamed or punished in more subtle ways.

Why Punishment and Rewards Persist

The manager who punishes or rewards people is assuming, consciously or not, that those they manage would not behave appropriately otherwise. The punishment and the reward are an attempt to manipulate behavior. External reinforcements, such as punishments and rewards, undermine the very psychology of motivation, creativity, and excellence.

One high cost of these external reinforcements is their tendency to dampen people’s intrinsic motivation: the natural wonder and playful engagement that most individuals have as children, the same passion that might also drive emergency department staff to think fast to save a life.

Few people would argue against the NAM’s call to provide a safe place for clinicians and staff to report their mistakes so they can learn, improve, and raise the effectiveness of the whole organization. But what about when the cost of a mistake is too high, such as when it could jeopardize the safety or health of a patient? In these situations, the strong tendency is to assume that trying to force behavior is not only justified but practical.

A clinical administrator recently described a time when she was managing the emergency department at another hospital. Her triage nurse had reported what appeared to be a case of sepsis. It turned out to be a false alarm, and the rapid-response team sternly scolded the nurse for wasting their time. As you can imagine, the nurse was very upset. The administrator wanted her nurses to feel safe enough to gain confidence and be assertive without fear of being second-guessed or shamed for making the inevitable mistakes that everyone makes when learning and innovating. But what if it had been a dangerous medication error instead? Would shame and punishment have been justified then?

Before we answer that question, let’s ask a few more. What was the process? Was there detailed standard work for her nurses to follow? What failsafe measures were incorporated? Were the nurses trained on the job by those who understood the processes thoroughly and could coach them sufficiently before leaving them on their own? Notice that these questions apply equally to the sepsis false alarm and to a serious medication error. Where the risk is higher or the cost of failure is greater, the details of the process may need more precision and greater collaboration, and new nurses may need to be trained longer, but this is a difference of degree, not of method.

Providing a safe working environment includes not only avoiding punishment but avoiding rewards. It’s easy to see why singling out individuals for punishment is bad, though it may not be as obvious why it’s bad to reward individual performance. As human behavior scholar and writer Alfie Kohn explained in an interview a few years ago, “Rewards, like punishments, are ways of doing things to people, whereas what tends to be much more effective is to work with people to try to set up reasonable goals and to solve problems collaboratively.”

Kohn and other writers, such as Daniel Pink, have helped raise awareness of the hundreds of studies in recent decades that contradict some of our most basic assumptions. These studies have consistently found that rewards and punishments are not only ineffective but actually harm performance in the long run.

Kohn points out that attempts to control behavior work only temporarily. In his book Punished by Rewards, he offers another reason external reinforcements like these are so pervasive: they’re easy to use. He writes:

“Good management…is a matter of solving problems and helping people do their best. This…takes time and effort and thought and patience and talent. Dangling a bonus in front of employees does not. In many workplaces, incentive plans [and punishments] are used as a substitute for management: pay is made contingent on performance and everything else is left to take care of itself.”

It’s not that managers are inherently lazy. Rather, it takes a profound commitment from all of executive management and, as Kohn says, far more energy and attention to detail to provide a working environment in which people can flourish naturally. In that kind of environment, performance is propelled by people’s own intrinsic drive, with no need for management to resort to the old manipulative reinforcements of blame or praise.

Build a System for Success

Managers need not tolerate deliberately poor behavior, but, as Deming teaches, most poor behavior is not deliberate. Most people want to do good work but are limited by a poorly designed system. 

Leadership is accountable for funding, designing, creating, and maintaining that system. Leaders may delegate the actual work; in fact, they should delegate the details and refinement of each part of the system to the team that works in that area and has the deepest familiarity with it.

One of the points Deming emphasizes most in his teachings is the proper accountability of management. In Out of the Crisis, he writes: “…the aim of leadership is not merely to find and record failures of [individuals], but to remove the causes of failure: to help people to do a better job with less effort.”

Leave Blame and Shame for the Crime Dramas

While blame-and-shame cultures make for interesting detective novels and TV programs, a systems approach in healthcare is the only way to resolve problems and change outcomes for the better. Effective leaders will embrace the difficult but satisfying work of guiding an organization toward improvement by committing to the systems approach rather than falling back on the scapegoat approach, even during the greatest challenges.

Additional Reading

Would you like to learn more about this topic? Here are some articles we suggest:

  1. 7 Features of Highly Effective Outcomes Improvement Projects
  2. The Best Way Hospitals Can Engage Physicians, Nurses, and Staff
  3. Governance in Healthcare: Leadership for Successful Improvement
  4. 5 Principles of Adaptive Leadership and Why It’s a Critical Skill for Healthcare Leaders
  5. Build a Mission-Driven Culture in Healthcare