Life, liberty, and the pursuit of the right to speak up

Six months before the Space Shuttle Challenger disintegrated over the Atlantic off Florida in 1986, engineer Roger Boisjoly wrote a portentous memo. In it, he warned that if the weather was too cold, O-rings in the solid rocket boosters could fail. It was the job of these O-rings to seal the joints between the segments of the SRBs—two huge, towering silos of rockets, made by contractor Thiokol in Utah, that helped lift the Shuttle into space. The memo was, in a sense, a cri de coeur, emerging from a task force that had been formed in part on Boisjoly’s recommendation to study the effects of cold on the seals and the boosters. But the task force’s efforts had become bogged down in paperwork, bureaucratic meddling and procurement delays, and were ultimately governed by bosses who probably did not want to hear bad news. Boisjoly realized that the task force had no power, no resources, and no management support.

The memo was ignored.

On January 28, 1986, in the first moments after ignition, the O-rings failed to seal and were burned away, releasing a black puff of smoke during lift-off. This left only a layer of insulating putty to seal the joint. At 59 seconds after launch, the putty, too, gave way. Hot gases surged out of the joint in a visible plume. At 73 seconds, an adjacent strut holding the SRB gave way and the entire assembly disintegrated, killing all seven crew members. Boisjoly, who had been watching the launch on television, was initially relieved: he had feared and predicted that a catastrophic failure would happen at initial lift-off. Seventy-three seconds into the launch, however, his worst fears were confirmed after all. He left the room and went directly to his office, he said later, where he spent the rest of the day in shock.

Well into the subsequent investigation, attention turned to a meeting at Thiokol and a teleconference with NASA on the eve of the launch. Thiokol managers, at the urging of Roger Boisjoly, had initially recommended that NASA delay the launch. After NASA pushed back strongly, Thiokol managers asked for a five-minute caucus off-line. During it, Roger and his colleague Arnie Thompson argued strenuously for a delay, but were unsuccessful. Once back online, Thiokol managers, the only ones talking, told NASA that they did not have conclusive data to suggest the launch had to be delayed. NASA asked if there were any objections or other points of view. Roger did not speak up.

A few years later, I invited Roger to come lecture to my students at Linköping University in Sweden. As he retold the events of that evening, and of watching the subsequent launch, he teared up and could no longer speak.
Roger had been shunned by his former colleagues and managers. He had stayed on at Thiokol as a senior engineer involved in redesigning the faulty seals, but he was not allowed to interact with the client, NASA, nor was he given much responsibility by his bosses. He told, instead, of being put through hell on a day-to-day basis. He became sick and depressed, left Willard, the one-company town he and his wife had chosen to make into their ultimate retirement dream, and was diagnosed with PTSD, or post-traumatic stress disorder.

Challenger put American corporate culture on trial. The country whose founding documents declared respect for the life and liberty of all rose to greatness on the back of rugged capitalism, relentless innovation, hard work and an emphasis on individual responsibility. That much is known, often celebrated, and occasionally copied the world over. The country has a robust civil society, and is the source not only of my doctorate, but of my best friends and my Best Man. But, as Michael Moore would point out in Bowling for Columbine and Capitalism: A Love Story, it also promoted corporate cultures of strict hierarchies, of undemocratic practices, of fire-breathing bosses unwilling to lose face in public, of watered-down language euphemizing risk because of liability concerns, of workers hunkered down and bullied into silence, anxious about losing a job and its healthcare benefits.

The Challenger accident inspired a scientific literature larger than that of most other high-visibility disasters. Analyses ranged from those—in predictable American order—that emphasized engineers’ individual responsibility to speak up; to subtle and nuanced assessments of the gradual fine-tuning and revision of risk as normal, daily engineering work; to a comparison of the NASA organization with a psychiatric patient—decaying, but dramatizing its own ideal character while managers narcissistically lost sight of reality. Virtually all analyses spoke of, or alluded to, ‘organizational failure.’ Why, the investigation had asked, did NASA continue to fly with known problems in the years before, and why, on that particular flight, did managers decide to overrule the concerns of their engineers and keep going?

In 2003, the same questions had to be asked again. Space Shuttle Columbia burned up on reentry into the atmosphere. The sort of foam strike that ultimately sealed its fate had been a recurring problem, much like O-ring erosion. Not getting through to the hierarchy was not a problem unique to NASA, or to aerospace. Two years after Columbia, Amber Marie Rose was killed in a car crash in Maryland. Her death became the first to be linked to ignition switches in the Chevrolet Cobalt that spontaneously cut off power. American corporate culture was on trial again. Engineers at General Motors had known about the problems for a long time, but had been coached not to write memos with words like “safety” and “risk” in them. Or even “defect” or “problem.” Rather, they were counseled to use euphemisms like “issue, condition, matter.” In an incisive and acerbic 1946 essay called Politics and the English Language, George Orwell explained how language could be “designed to make lies sound truthful and murder respectable.” Thanks for pointing that out, George. Amber’s parents lost their daughter to an “issue.”

I was in Korea a few weeks ago. My experience there, however fleeting, confirmed that the deference to hierarchy there, much maligned by European and American colleagues, can indeed be rather stifling. Not much later, I was in America again, this time addressing a corporation’s leadership, with a smattering of trainers and lower-level managers in the room as well. Before the day-long meeting, I sat with the chief, telling him that if he wanted the meeting to be successful, we had to move beyond job titles and rank. This was not a corporation, by the way, that does something as innocent as making flip-flops for a living. No, they do dangerous stuff. What mattered, I pleaded, was whether people knew what they were talking about. In other words, I was making the case for deference to expertise. The chief agreed wholeheartedly. Absolutely, he said, absolutely. He agreed that a good safety culture is one that allows the boss to hear bad news. He wanted such a culture. He might even have believed he had one already.

Spurred on by me, he said as much in his introductory remarks to the gathered people. But as the day developed, good intentions withered. I ended up not so much talking with the group about accountability as being lectured about accountability by the corporation’s managers. Accountability was critical, the chief said, and he was not going to let the day be concluded in a way that risked watering down accountability. Accountability was all about holding people accountable, he said.

Right.

Then the day was over, and the managers left. As I was packing up, various people came over to me, timidly, like civilians emerging from their shelters after a bombardment, blinking into the daylight and wondering what was left to salvage.
They had been anonymous all day, all but crouching in the shadows of the room, never saying a word, wary even of having their existence or presence acknowledged. But once safely huddled around me, story after story came out of them. How event outcome is a huge determinant in managerial judgments of worker behavior, despite professed claims to the contrary; how hindsight bias is alive and well, and goes unrecognized; how signals of potential danger do not always go up the line; how managers firmly believe work-as-imagined is work-as-done; how rules are ultimately seen as more important than expertise. They told of a culture where people get to speak because they are the boss, not because they know what they are talking about. Eerily, in echoes of science’s reflections on NASA, they painted a picture—confirmed during the meeting—of senior managers dramatizing and fictionalizing the organization’s ideal character as one of upstanding, straight-shooting, rule-abiding, conformant workers, even while those same managers were narcissistically blinded to a much messier, more nuanced, less linear and more complex reality.

These “practices are not anomalous,” Gretchen Morgenson wrote in the New York Times on 8 June 2014. “In fact, they seem to have become commonplace elsewhere in corporate America: an indifference to misconduct, a drive to avoid personal responsibility, and a scorched-earth retaliation against critics or adversaries.” What an interesting observation. Stronger calls for accountability, and firmer commitments to hold people accountable, lead to anything but accountability. People do not tell their accounts when and where and to whom it matters; they shirk responsibility and are bullied to the point where they dare to come out only when the threat has passed.

It was whispered to me afterward that the people who came over after the meeting were “like, ten levels down” from the chief in the corporation. Why anyone would need more than ten levels of managerial hierarchy to run a corporation is, by the way, either a perverse absurdity or a true mystery to me. But it should not have mattered. That was the whole point of the day, and an agreed one at that. Here it was, coming out of hiding: the bad news that the bosses never heard. Almost thirty years after Roger Boisjoly’s frustrated and traumatic attempts, here it was again. Or still. In a country that proudly claims to protect free speech like no other. Yet it is a country where such speech can often be deemed to fall outside the limited bandwidth of political correctness or continued employability; a country where the 43rd president moved aggressively to limit the rights and protections of people who speak up, of whistleblowers like Roger Boisjoly. Korea might have a problem with juniors who are not legitimated to speak up, with a stifling deference to hierarchy, with seniors who cannot afford to lose face.

Sure. And the pot called the kettle black.

I once ran a large research center. It had problems, it needed an economic turnaround, and there were legacy “issues” (to speak with GM) and entitlement expectations that could not be met. My leadership style was quickly branded as “Dutch,” which apparently meant not being afraid of confrontation and conflict, and not being shy to expose bad news. In debriefing me upon my change of positions in the university, the Pro Vice Chancellor told me I had been “too honest.” I have wondered about that ever since. How can you be too honest? Can you, instead, be a little bit honest? That is like being a little bit dead. Should we not aspire to turn honesty into one of those rare binary categories of the human experience? You are either honest or you are not. The right to speak up can never become real without accepting such honesty—throughout corporate hierarchies and beyond.

Lives, if not liberty, depend on it.


Photo by Rebecca Barray/CC

16 Comments

  1. Tanya Hewitt

    This post reminds me of a conference I went to last fall.

    The keynote speaker described a case study of a patient safety issue at that hospital. A key factor was the changeover of residency at the beginning of July, which coincidentally matches up with staff vacations. Overall, there was a complex network of relationships, learning exercises, and an unfortunately unclear prescription hierarchy (if I recall correctly). The speaker went into such ideas as the silos in the hospital not communicating effectively when transferring patients between them, medical hierarchies competing amongst themselves to give the best treatment according to their discipline, and the stifling of necessary expertise (the nurses, and the patient and patients’ families). The talk also went into quality tools, ideas of continual improvement, and how some small projects had shown how these ideas could help that hospital. The speaker intentionally used the case study as a leverage point to talk about all sorts of necessary patient safety concepts, not all specifically related to the post mortem of the incident. It was a difficult presentation: it exposed a poll revealing that many of those who filled it out would not want themselves or their family members treated at that institution, and as the speaker walked the audience through some uncomfortable, rarely-spoken-of topics, it seemed as though a new conversation could take place to get at the heart of patient safety.

    Until the question period.

    The first question went back to the case study: who was the attending, what drug was administered at what time and by whom, and which nurse had not done the medication reconciliation as part of administering the drug.

    Other questions followed – fixated on individual responsibility, on accountability, on deference to a medical hierarchy, and on the incompetence of the practitioners involved.

    The case study was meant to get the audience’s attention. The talk was about systemic issues in that hospital that give rise to problems – issues that are rarely addressed, and never truly spoken of, even at conferences on patient safety. The question period dwelt entirely on the case study, focused exclusively on blame and the lack of medical heroism.

    I was just saddened, and was reminded of how difficult this safety differently journey truly is.

    1. Dave Christenson

      Thank you, Tanya, for your very perceptive reply and great example.

      It seems that others would rather critique the technical aspects that they are comfortable commenting on, revert to the accountability of individuals, or critique the tactics used by practitioners, often through a hindsight-biased lens and counterfactual reasoning. “Systemic issues that give rise to problems” are often identified using reliability or resilience analysis, but it seems that we are uncomfortable taking the next steps to enrich our cultures with interventions that support collective learning, mindfulness and resilience. Identifying the issues and how we got there is just a beginning.

      We began our change efforts in the wildland firefighting domain by getting the mantra “It’s not about who, but about what” out there, to refocus people on learning rather than blaming when trying to introduce HRO. They still often defaulted to critiques of tactics, equipment, technology and so on, because, we believed, they were uncomfortable discussing human factors, sense-making, or their beliefs. The science of human factors was introduced through leadership development, and they locked onto the Swiss Cheese Model to push accountability away from the sharp end up to the blunt end; they still often refuse to look at other models, as this one satisfices so well.

      Yes, working with people can often be difficult. Meeting the challenge can often be very rewarding, though, when “strategies that support people in organizations in developing better capabilities to make reinterpretations of their belief systems, their sense making processes, their learning, to ‘see broader’ the influences of their acting on other interpretations and actors, etc.” actually effectively change and continuously improve an organization. I am learning about these intervention strategies from a Dutch consultancy, Apollo 13; the quote is from recent communications with Bert Slagmolen, PhD, who leads this group.

      The processes of HRO, Resilience Engineering, and others that help us see the issues and label them, are portals we need to have the courage to walk through and learn how we can best help people inside those organizations. Appreciative (Cooperrider) and Humble Inquiry (Schein) may be most helpful in generating Positive Organizing (Sutcliffe) next steps.

      In the 1979 second edition of The Social Psychology of Organizing, Karl Weick writes on page 12: “I feel there is a need for a dialectic between criticism and affirmation as modes of apprehending organizations. At the moment we are heavily into criticism. A balancing of affirmation would lead to more activity of this kind: ‘The critic (of poetry or art) more commonly looks for interpretations that discover aspects of an artistic expression making it more interesting or more beautiful than when first observed, or developing the uncertainties of simultaneous attraction and repulsion. Truly distinguished pieces of criticism are almost always ones in which a critic enlarges our appreciation of the beauties and complexities of art that is loved (March 1976, p. 18).’”

      We have real artists in our midst in many organizations: people who are creative, innovative, strong relational leaders with exemplary emotional, social and cognitive intelligence. Engaging them through compassionate coaching will inspire constructive, self-designed solutions to issues, in addition to emergent expressions of their genius.

      Cheer up, Tanya. We have a lot to look forward to!

  2. Spacedout

    STS-3 had a primary O-ring blow-by; it also had a defective case segment, which was discovered by J Newman, an Air Force inspector. What is missing here is that it was the big tank that blew up: the large tank had leaks, and the escaping flame from the booster rocket ignited them when the pilot pulled back on the throttle.

      1. William R. Corcoran, PhD, PE

        Plotting the data would have made the problem transparent.

        Engineering 101

        If you have data, plot it.
        Think about the most significant variable dependencies.
        Make it easy for your bosses to make the right decision.

        OBTW: Who had the power to ask Roger to plot the data?
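
        A minimal sketch of the point, in Python with matplotlib. The temperatures and incident counts below are hypothetical stand-ins, not the actual flight data; the dashed line marks a hypothetical forecast far colder than any prior experience.

        ```python
        # Hypothetical illustration: a simple scatter plot makes a
        # cold-weather trend visible at a glance. None of these numbers
        # are the real Challenger data.
        import matplotlib.pyplot as plt

        launch_temp_f = [53, 57, 63, 66, 67, 70, 70, 72, 75, 76, 79, 81]
        o_ring_incidents = [3, 1, 1, 0, 0, 1, 0, 0, 2, 0, 0, 0]

        plt.scatter(launch_temp_f, o_ring_incidents)
        plt.axvline(31, linestyle="--", label="forecast launch temperature")
        plt.xlabel("Joint temperature at launch (°F)")
        plt.ylabel("O-ring distress incidents")
        plt.title("If you have data, plot it")
        plt.legend()
        plt.show()
        ```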

  3. Chris

    All that has been said is so, so true.
    In business there are conflicting priorities, and most often our leaders reveal the real values of the organisation when it is in conflict over these priorities. In the aftermath of major incidents, and in the inquiries that typically follow, are those who set the environment for the decisions that were eventually made ever really held accountable? When one considers the likelihood of a major incident, the most common outcome of a production-first decision is that production is achieved and there is no incident. This just reinforces the decision-making of those who are often disconnected from direct consequence. Would they make the same decision if it were their brother, their wife, their child who would suffer the direct consequences?

  4. William R. Corcoran, PhD, PE

    The engineers among us often miss the fundamental engineering error of Challenger.

    The Safe Operating Envelope (SOE) of the O-ring was not compatible with the SOE of the system.

    The SOE of the system allowed low-temperature launches of which the O-ring was not capable.

    “This is not Rocket Science.”

    The SOE of a system should forbid operation outside the SOE of any safety-critical or mission-critical sub-item.

    The SOE of every safety-critical or mission-critical item should include all states allowed by the SOE of the system.
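
    A minimal sketch of that containment rule in Python (the envelope fields and temperature limits here are hypothetical illustrations, not figures from the Challenger investigation):

    ```python
    # Hypothetical sketch: the system SOE must lie inside the SOE of
    # every safety-critical sub-item. Numbers are illustrative only.
    from typing import NamedTuple

    class Envelope(NamedTuple):
        min_temp_f: float
        max_temp_f: float

        def contains(self, other: "Envelope") -> bool:
            """True if `other` lies entirely within this envelope."""
            return (self.min_temp_f <= other.min_temp_f
                    and other.max_temp_f <= self.max_temp_f)

    system_soe = Envelope(min_temp_f=31, max_temp_f=99)   # what the system permits
    o_ring_soe = Envelope(min_temp_f=53, max_temp_f=99)   # what the sub-item tolerates

    # Flag the design error when the system permits states a critical
    # sub-item cannot tolerate.
    if not o_ring_soe.contains(system_soe):
        print("Design error: system SOE allows operation outside the O-ring SOE")
    ```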

    Who teaches this in engineering courses?

    1. William R. Corcoran, PhD, PE

      Engineers and non-engineers should know the following inescapables:

      When an item fails, either it was operated outside its safe operating envelope (SOE) or its SOE was wrong or both.

      It’s easier to keep a hardware fix fixed than it is to keep a brainwave fix fixed.

  5. Rob Long

    Unfortunately, people are not absolutely honest, and human decision-making is not binary. There are many reasons why we don’t tell the truth, and many complex reasons why we even lie to ourselves. I find it amusing that in the engineering and systems approach to safety, human non-rational decision-making is never discussed. Everything is binary, black and white, mechanistic and clear. Yet human decision-making about risk is a ‘wicked’ problem, not just about O-rings but rather about cultures and sub-cultures of socio-political practice.

      1. William R. Corcoran, PhD, PE

        Some inescapables of transparency

        There are inescapable statements about transparency. Some of them may be disputed by some people, but the dispute does not alter the inescapability. Here are some of the inescapables:

        • Any harmful item (condition, behavior, action, or inaction) that was not detected was insufficiently transparent under the circumstances of nondetection.

        • The above applies to each and every activity during which there was an opportunity for detection.

        • One measure for making a previously undetectable (nontransparent) item detectable is to increase its transparency.

        • One measure for making a previously undetectable (nontransparent) item detectable is to alter the circumstances of detection such that the item, as it is, becomes transparent.

        • Nontransparency or insufficient transparency is seldom, if ever, a root cause since it usually stems from more important underlying conditions, behaviors, actions, and/or inactions.

    1. Ron Gantt

      Great question, William. I would recommend Diane Vaughan’s book The Challenger Launch Decision, which looks at the working environment that contributed to the decision to launch the shuttle despite the raised concerns. It would likely answer your question most effectively.
