Heart surgery programs have their ups and downs. An “up” is the indescribable thrill when cases come out of the OR with no complications, require no blood products for bleeding, and leave the hospital quickly. A “down” is the times when patients develop complications. Even top-performing programs have a rate of events described as “major morbidity” of around 10-15%. Beyond the stats, some bad cases seem to strike a collective nerve and hurt team morale in unexpected ways. Occasional errors, even ones that cause patient harm, are an inevitable and acceptable fact of engaging in high-risk work. A leader’s legitimacy depends on being good at learning and improving after a bad error; this is how team morale avoids getting derailed. Repeating the same mistakes can cause a crisis that erodes trust in the surgeon and eventually the hospital. In my 8 years of leading various cardiac surgery programs, I’ve never found any task harder than leading a heart surgery team through a period of crisis. One approach I use to tackle such a daunting task is to write a summary of my experience and fill in the gaps with what I’ve learned from research on this topic. This helps me learn from my mistakes and improve future performance. I was curious why I couldn’t find any surgeons writing about this topic, and that’s the reason for this post.
A bad event or cluster of events does not usually signal a cause for alarm. In most cases, people look past it and continue to see the risk of surgery at that program as predictable, controllable, and therefore acceptable. Under standard circumstances, the lead surgeon collaborates with staff in the hospital’s QA/QI department to develop a tool that depicts quality in a comprehensive and easy-to-understand way. This is often a “dashboard” that incorporates outcome measures, such as post-surgical mortality and major morbidity, as well as process measures, such as the use of recommended evidence-based practices. Safety depends on variables that are less predictable and contingent on context, making it more ambiguous to define. The ability to digest ambiguous data and transmute them into a refined and comprehensive end-product is the hallmark of a good leader. Broad trust that this end-product is a plausible account of the true state of the program is the hallmark of a leader with credibility. The program of such a leader is given the benefit of the “law of averages” – a cluster of bad events will be balanced by better outcomes in the future.
As bad outcomes accumulate, feelings of uncertainty emerge. Risks start to seem less predictable and less controllable, placing a spotlight on both the program and its leader. A quality-safety dashboard is no longer reassuring. Conventional hospital resources are not helpful in re-establishing confidence. QA staff and peer review committees are poorly equipped to understand, evaluate, or communicate problems that occur in the operating room and cardiac surgical ICU. A high-risk program like heart surgery is accountable to a wide range of stakeholders – referring physicians, front-line staff, members of hospital administration, and the board. Usually, these individuals don’t have independent access to reliable information about the program. This creates a volatile world where subjective opinions about quality and safety, largely based on intuition, fill the vacuum created when objective data are no longer suitable.
The dilemma is that objective data are more likely than subjective opinions to be based on reality. We all trust our intuition, but decisions grounded in reality are far more likely to be correct. This distinction is important because decisions during a crisis can be really hard. Decades of research from the social sciences show that decisions under uncertainty are often flawed by our excessive reliance on subjective intuition. This reliance triggers logical errors and mental shortcuts at the exact time when rational thinking is needed most. Our intuitive mind overestimates vivid dangers, falls into ruts, manages multiple pieces of data poorly, and is swayed unduly by desire, emotion, and even the time of day. It is affected by the order in which information is presented and by how problems are framed. Bottom line: it’s too unreliable to restore a sense of control and predictability to a program shrouded by uncertainty.
Hospital response to a crisis reflects the flaws in decision making by its individuals. When uncertain about how to respond to a crisis, organizations often fall back on the one thing they do well and stop doing other things. They respond with a hierarchical, top-down management strategy. This pattern is so common that it has a name in the literature: “threat rigidity”. It restores a sense of control, but often at the expense of productivity and sometimes common sense. For example, when Chrysler Corporation faced the oil crisis of the 1970s and rising gasoline prices, it continued large, efficient production runs of its most fuel-inefficient cars until inventories overflowed. The Saturday Evening Post continued to raise its prices as circulation dropped. These responses reflect the dysfunction caused by threat rigidity.
Another irrational tendency is to treat threats more urgently than opportunities. A well-developed “fight or flight” mechanism tuned to all threats improves an individual’s odds of survival. The disadvantage for hospitals is that it handicaps promising innovations by creating an artificial requirement for their benefits to exceed overweighted losses. The cost-cutting efforts I’ve seen at hospitals where I’ve worked serve as a classic example. These initiatives almost always fail to consider the opportunity costs of revenues that are lost when resources are restrained – preventing a loss is valued more than achieving a gain.
Irrational decisions are so hard to detect because they are often made unconsciously. As an example, consider how you would rate a movie whose ending (the last 1%) was stupid or annoying but whose prior 99% was very entertaining. Many would rate this movie poorly. We judge movies this way because the ending influences how we remember the overall story. This is analogous to judging a cardiac surgical program with excellent overall results as poor quality on the basis of its last bad outcome. The thought process in both cases feels quite natural. A bad ending to a movie or a recent bad surgical case plays a disproportionate role in our memory of quality, but neither should be able to erase an otherwise favorable past. From a rational, economic point of view of utility gained, rating 99% of an experience as favorable should result in a good or excellent overall rating. The downside of missing a potential danger to patients can be lethal, so it is fair for hospitals to use any means to root dangers out. In the case of heart surgery, “gut feelings” about bad outcomes can serve as a warning of potential dangers that might exist in the program. This type of signal is valuable, but it is not a rational way to uncover the root causes of quality problems. Quality is not a concept that lends itself to such a superficial approach.
Getting at root causes starts out simple. In response to an adverse event or other concerning issue, the broad array of involved team members opens a dialogue about all relevant details, often in the format of a debrief. A systematic and intensive discussion continues until everyone shares the same understanding of what happened, each from their own disciplinary perspective. If there is respect among the staff, it is my experience that healthy conflict eventually gives way to a feeling of order and clarity about what happened. If uncertainty persists, often the best approach is simply to get the team moving in some general direction consistent with the goals of the program and hospital. As things start moving, the insightful leader points out the cues created by these actions so the team learns more accurately where they were, and gets a better idea of where they are and where they want to be. These cues better inform the group’s thoughts about root causes, which leads to more accurate conclusions about the program and more helpful action plans for improvement.
As these initial insights are gained, a complex organizational dynamic takes over: the rumor mill. Inherent to the idea of accountability in cardiac surgery is that important decisions are made by an extended group of stakeholders who often have incomplete information about the program. One or a few well-articulated opinions can rapidly cascade from person to person. This can lock in a collective point of view that blinds people to contrary evidence. New information is withheld to avoid going against this momentum. Soon, it becomes difficult to question the underlying assumptions of apparently obvious conventional wisdom. This phenomenon is an important cause of incorrect conclusions about a cardiac surgical program with recent bad outcomes.
Leaders are needed to cut off false cascades, maintain team morale, and get past a crisis. The ones who succeed are usually skilled at a process described as sensemaking. One of the best sensemakers was Ernest Shackleton. During his legendary exploration of the Antarctic from 1914 to 1916, Shackleton’s ship became trapped in pack ice with no way to break free and return home. At the point of greatest despair, he knew that his true enemy was not the weather but the anxiety and slow-burning pessimism in his crew. The team desperately needed a sense of purpose and a belief that things would work out OK, which Shackleton provided. His deft use of sensemaking to leverage his team’s performance is widely believed to be why his entire crew survived in the face of seemingly insurmountable obstacles.
A member of any team takes an intellectual risk by speaking up. It is easier to do this in a culture where people feel free to share their true opinions and there is a strong sense of curiosity about how to improve outcomes. This culture can take months or years to develop. Without a skilled sensemaker, it can be wiped out by a single week of poor outcomes. Even the most battle-hardened surgical teams can experience guilt, culpability, anxiety, alienation, and detachment after a bad week. The collateral injury is damage to the team’s resilience and psychological defenses, which makes them worry that their only “reward” for speaking up will be retaliation. The doctor who referred the patient for surgery suffers from the thought: “If I’d only referred the patient somewhere else…” The surgeon carries the added guilt of feeling directly responsible for someone else’s death. Volatile times like these can distract us all from noticing the subtle and ambiguous cues to what is actually going wrong.
Feeling guilty after a patient death is normal (only a psychopath would not experience at least some guilt), but admitting to it is a gross violation of the surgeon’s credo: sometimes wrong, never in doubt. In fact, the qualities of a psychopath – fearless, confident, charming, ruthless, focused – were the norm of behavior for many of the cardiac surgeons I trained under. It does not take a psychoanalyst to see that the overconfident, misanthropic persona long associated with heart surgeons masks many insecurities and much hidden pain. Denial only amplifies the guilt, enhances its collateral trauma to the surgeon, and leads it to emerge as poor treatment of other health professionals, hindered teamwork, or disrespect toward patients. Modern cardiac surgery has moved on from the heyday of the psychopathic surgeon.
A problem occurs when surgeons ruminate over this guilt and let it hinder their ability to lead the team in its time of need. Resolving a crisis quickly requires these feelings to be put aside. Shackleton’s personal diary described his crushing guilt over placing his entire team at risk. He acknowledged that he had ignored advice and warnings about the hazards he was facing. The morning after his ship was crushed by the ice, he assembled the team and announced briefly and calmly: “Ship and stores have gone – so now we go home.” With brilliant simplicity, he veiled his own guilt and despair in order to empower his team, describing current reality while remaining optimistic about survival.
The best weapon against false cascades is strong social relationships. These create an ease of idea flow between team members. When teams provide each other with respect and support, avoid placing blame, forgive mistakes, inspire one another, and show gratitude, there is a synergy that delivers far more than the contributions of individual members. Over 2000 years ago, Aristotle recognized the value of this type of teamwork. But Aristotle’s ideal stands in stark contrast to the impasse between hospital employees chronically frustrated by rules and routines that deplete their initiative, and managers frustrated by the lack of ingenuity in their workforce.
Teamwork among physicians is hindered by a more pervasive problem: the tendency of hospitals to place the most value on star doctors who outperform others. The trouble with this approach was best modeled in an evolutionary biology study of chickens. Productivity (measured in eggs per week) in an average flock of chickens was compared to that of a second flock of highly productive chickens, in which only the most productive were selected for breeding. After six generations, the first group thrived, while almost all of the second group were dead, pecked to death by their flockmates. Chickens in the wild have a natural pecking order in which the dominant, most aggressive chicken succeeds by suppressing the less dominant. Breeding chickens in captivity succeeds when selecting less aggressive chickens that are able to collaborate in the group. Hospitals often select physicians using the failed superchicken model, hoping that patient outcomes will improve by picking superstars and giving them the resources. The result in times of crisis is often the same as in the chicken experiment – aggression, dysfunction, waste, and poor collaboration. Sensemaking isn’t widely used in heart surgery because its value is grossly underestimated by surgeon superstars who just can’t see the point.
It is possible that the end of the sensemaking journey will not uncover a safe way for the program to move forward. A harsh truth is that some cardiac surgery programs are unsafe and need to be shut down. Highly publicized stories about these programs (e.g., the Bristol heart surgery scandal) often reveal system-wide problems with the ability to verify quality of care. Effective clinical governance means determining when a run of bad outcomes signals a cardiac surgery program with serious, uncorrectable quality problems. Patient lives and a surgeon’s career depend on this determination. Hospitals have a moral obligation to make sure this decision is not flawed.
It becomes easier to form honest conclusions about the cardiac surgery program when the hospital takes down the constructs that support false cascades. First, hospitals should get rid of their standard pecking order and replace it with a culture that emphasizes teamwork. Recruiting people with a track record of teamwork helps; it is difficult to train someone to be passionate about it. Second, the performance of physicians should be measured in a way that does not lead to destructive competition. Any business, like healthcare, that depends on teamwork should be wary of this type of competition. It is as wrongheaded for hospitals to rank their physicians against each other as it would be for parents to rank their own children. Third, hospitals influence teamwork through the composition of surgical teams. Frequent changes in team composition have a negative impact on situational awareness, the information base within the team, and team cohesiveness. Strong teams have pervasive standard operating procedures, homogeneous training requirements, and well-defined norms.
If sensemaking is built on a foundation of well-informed decisions, teamwork serves as the “bricks” and multidisciplinary team meetings are the “mortar”. Frequent meetings to discuss the program help increase the number of individuals who can evaluate it. These independent evaluations generate new information that is not available from a centralized decision-making process. Redundancy of correct information is the key. We cannot know who will have the opportunity to provide the new insight into a troubled program that proves influential in breaking a false cascade.
A cardiac surgical team in a crisis requires the right decisions when the course of action or current state of affairs is unclear or not routine. The essence of being effective at these moments is the ability to improvise. Failure of the program may not indicate weakness or frailty in its lead surgeon, but merely an occupational hazard of working in systems that often underappreciate, devalue, and disenfranchise anyone who tries to improvise. I am proud to say that I currently work for a hospital that has all the necessary bricks and mortar and provides the right support. That doesn’t make the task any easier. Putting aside personal feelings to make decisions under uncertainty, in the context of rapidly spreading false information, is one of the most challenging roles anyone could ever perform. The reason I want to get good at this is that it certainly isn’t going to work out better if left up to the superchickens.