Danger was the safest thing in the world if you went about it right

This seemingly paradoxical statement was penned by Annie Dillard. She is neither a safety professional nor a line manager steeped in safety experience. Annie is a writer who, in her book The Writing Life, became fascinated by a stunt pilot, Dave Rahm.

“The air show announcer hushed. He had been squawking all day, and now he quit. The crowd stilled. Even the children watched dumbstruck as the slow, black biplane buzzed its way around the air. Rahm made beauty with his whole body; it was pure pattern, and you could watch it happen. The plane moved every way a line can move, and it controlled three dimensions, so the line carved massive and subtle slits in the air like sculptures. The plane looped the loop, seeming to arch its back like a gymnast; it stalled, dropped, and spun out of it climbing; it spiraled and knifed west on one side’s wings and back east on another; it turned cartwheels, which must be physically impossible; it played with its own line like a cat with yarn.”

When Rahm wasn’t entertaining the audience on the ground, he was entertaining students as a geology professor at Western Washington State College. His fame for “doing it right” in aerobatics led King Hussein to recruit him to teach the art and science to the Royal Jordanian stunt flying team. While performing a maneuver in Jordan, Rahm’s plane plummeted to the ground and burst into flames. The royal family and Rahm’s wife and son were watching. Dave Rahm was killed instantly.

After years and years of doing it right, something went wrong for Dave Rahm. How could this have happened? How can danger be the safest thing? Let’s turn our attention to Resilience Engineering and the concept of emergent systems. When we view Safety as an emergent property of a complex adaptive system, Dillard’s statement begins to make sense.

Clearly a stunt pilot pushes the envelope by taking calculated risks. He gets the job done, which is to thrill the audience below. Rahm’s maneuver called “headache” was startling: the plane stalled and spun towards earth, seemingly out of control. He then adjusted his performance to varying conditions to bring the plane safely under control. He wasn’t preoccupied with what to avoid and what not to do. He knew in his mind what the right thing to do was.

We can apply Richard Cook’s modified Rasmussen diagram to characterize this deliberate movement of the operating point towards failure, coupled with action to pull back from the edge. As the operating point moves closer to failure, conditions change, enabling danger to emerge as a system property. To Annie Dillard, this aggressive heading-in, pulling-back action was how danger was the safest thing in the world if you went about it right.
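
To make the boundary dynamics concrete, here is a toy, one-dimensional sketch in Python. It is an illustrative assumption layered on top of the diagram, not Cook’s or Rasmussen’s actual formulation: the operating point drifts toward the failure boundary under economic and workload pressure, and a corrective pull-back happens only after the marginal boundary is crossed.

```python
# Toy sketch of the operating point in the Rasmussen/Cook boundary model.
# The numbers and the one-dimensional framing are illustrative assumptions,
# not part of the original model: 0.0 is "far from failure", 1.0 is the
# acceptable performance (failure) boundary, 0.7 is the marginal boundary.
import random

FAILURE_BOUNDARY = 1.0
MARGINAL_BOUNDARY = 0.7

def simulate(steps=200, pressure=0.02, noise=0.05, pull_back=0.15, seed=1):
    """Drift the operating point toward failure; pull back past the margin."""
    random.seed(seed)
    point = 0.3
    history = []
    for _ in range(steps):
        # Economic and workload gradients steadily push toward failure,
        # plus everyday performance variability.
        point += pressure + random.uniform(-noise, noise)
        if point >= FAILURE_BOUNDARY:
            history.append(("accident", point))
            break
        if point > MARGINAL_BOUNDARY:
            # The "stunt pilot" senses the margin and adjusts performance.
            point -= pull_back
            history.append(("pull-back", point))
        else:
            history.append(("routine", point))
    return history

if __name__ == "__main__":
    for step, (state, position) in enumerate(simulate()):
        if state != "routine":
            print(f"step {step:3d}: {state} at {position:.2f}")
```

Setting pull_back to zero in this sketch shows the complacent alternative: with nothing countering the pressure gradient, the operating point drifts steadily across the marginal boundary and, eventually, into failure.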

“Rahm did everything his plane could do: tailspins, four-point rolls, flat spins, figure 8’s, snap rolls, and hammerheads. He did pirouettes on the plane’s tail. The other pilots could do these stunts, too, skillfully, one at a time. But Rahm used the plane inexhaustibly, like a brush marking thin air.”

The job was to thrill people with acts that appeared dangerous. And show after show, Dave Rahm pleased the crowd and got the job done. However, on his fatal ride, Rahm and his plane somehow reached the tipping point, a non-linear complexity phenomenon and a point of no return, and sadly paid the final price.

Have you encountered workers who behave like stunt pilots? A stunt pilot will take risks and fly as close to the edge as possible. If you were responsible for their safety or a consultant asked to make recommendations, what would you do? Would you issue a “cease and desist” safety bulletin? Add a new “safety first…” rule to remove any glimmers of workplace creativity? Order more compliance checking and inspections? Offer whistle-blowing protection? Punish stunt pilots?

On the other hand, you could appreciate a worker’s willingness to take risks and to adjust performance when faced with unexpected variations in everyday work. You could treat a completed task as a learning experience and encourage the worker to share her story. By showing Richard Cook’s video, you could make stunt pilots very aware of the complacency zone and of how, over time, one can drift into failure. This could lead to an engaging conversation about at-risk vs. reckless behaviour.

How would you deal with workers who act as stunt pilots? Command & control? Educate & empower? Would you do either/or? Or do both/and?

Exploring an anthrocomplexity-based approach to Safety

30 Comments

  1. Shane Durdin

    I think in high-performing organisations, or even those industries that are considered high risk, there is already an element of understanding the point of failure, or at least the tipping point; however, that ‘point’ implies that one thing ‘goes wrong’ or fails and then the rest of the system collapses. Understanding precursors to failure is important, I believe, but none more important than identifying them through listening, and then communicating them. As Hollnagel better explains, it is about the resonance of the organisation (of the system). There will always be resonance, drift, precursors (call it what you will), but what it comes down to is the level of resilience the system or organisation has to absorb the resonance and curtail the drift before the ‘tipping point’ is reached. There are times where we ask our workers to be ‘stunt pilots’ and take risks, because sometimes you have to take a risk to achieve something. We have to be careful here not to imply that people are the problem because they are stunt pilots and take risks. There is more at play here, such as the system environment, physical environment, system capability, and people’s knowledge and competence, all of which are interconnected. It may not be the stunt pilot who fails, but the capability of the system to handle what the stunt pilot was trying to achieve, or (to go one further), what the organisation (the audience) wanted the stunt pilot to achieve.

    1. Gary Wong Post author

      Good points, Shane. I totally agree that understanding precursors to failure through listening and communicating is paramount. In a future blog I’ll describe how we can use narrative (i.e., stories) and convert stories into data points. The data can then be plotted to create a 3D landscape with peaks, valleys, and outliers. The picture is a representation of the present safety culture and helps us see where interventions ought to be directed.
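
      For readers who want to picture that landscape, here is a minimal, hypothetical sketch in Python. The two signifier axes, the random scores, and the binning are illustrative assumptions for this comment, not the actual narrative method: each story is assumed to have already been scored on two 0-to-1 scales, and the number of stories in each bin becomes the height of the landscape.

      ```python
      # Hypothetical sketch: rendering signified stories as a 3D landscape.
      # The axes ("perceived risk", "perceived control") and the random data
      # are placeholders, not the method described in the post.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(42)

      # Stand-in for a few hundred frontline stories, each scored on two scales.
      risk = np.clip(rng.normal(0.6, 0.2, 300), 0, 1)
      control = np.clip(rng.normal(0.5, 0.25, 300), 0, 1)

      # Bin the stories on a 2D grid; the count in each bin is the "height",
      # so clusters of similar stories appear as peaks and sparse bins as
      # valleys or outliers.
      counts, xedges, yedges = np.histogram2d(
          risk, control, bins=12, range=[[0, 1], [0, 1]]
      )
      xcenters = (xedges[:-1] + xedges[1:]) / 2
      ycenters = (yedges[:-1] + yedges[1:]) / 2
      X, Y = np.meshgrid(xcenters, ycenters, indexing="ij")

      fig = plt.figure()
      ax = fig.add_subplot(projection="3d")
      ax.plot_surface(X, Y, counts, cmap="viridis")
      ax.set_xlabel("perceived risk")
      ax.set_ylabel("perceived control")
      ax.set_zlabel("number of stories")
      plt.show()
      ```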

  2. William R. Corcoran, PhD, PE

    Does the Generalized Peter Principle apply?

    The Generalized Peter Principle:

    Anything that works is used in more and more challenging situations until it is involved in a disaster.

    Is this a Lesson of Challenger? Columbia? San Onofre Replacement Steam Generators?

      1. William R. Corcoran, PhD, PE

        Bob Latino,

        Thanks for the question.

        Normalization of Deviance can be one manifestation of the Generalized Peter Principle (GPP).

        The GPP is a much richer, more fundamental phenomenon.

        The GPP is built into the human condition.

        It is as real as the blind spot and the negative afterimage.

        If something does not work, there is an inclination not to use it again.

        If something does work, there is an inclination to use it again for the same purpose and for perceived similar purposes without exploring the drawbacks in much detail.

        The more it works, the more we trust it to work.

        We get away with it without realizing that we are headed over the cliff.

        Do you have any examples or counterexamples?

        1. Bob Latino

          Thanks as always Bill.

          Take something as simple as routine hand washing by caregivers before and after every interaction with a patient. While intellectually we know it is the right thing to do in order to reduce the risks of infection, emotionally we see it as a time-consuming burden considering the number of patients we have per day, and the limited time we have to tend to each of them.

          So we routinely take a short-cut due to the time pressures. When we take this short-cut, nothing bad happens (no bad consequences). As a result, we set a new, lower standard. This behavior continues until an HAI (hospital-acquired infection) does result, and then all of a sudden the behavior is highly visible and unacceptable. Discipline is levied at this stage (which seems hypocritical to me).

          However, in a non-failed state, it was the status quo, observed and accepted by Leadership, as there were no consequences for the negative behavior.

          I often use this as an example of Normalization of Deviance and was curious if it fit the Peter Principle paradigm as well?

          Good to hear from you Bill, take care.

          Bob Latino

          1. William R. Corcoran, PhD, PE

            Bob,

            Another example of the Generalized Peter Principle is using natural gas to blow foreign material from new systems.

            Removing foreign material by natural gas blowdown worked and was done over and over until it was involved in an explosion. See http://www.courant.com/community/middletown/hc-gas-blow-regs-0928-20110927,0,6904217.story

            The construction companies knew how to work the blow, but didn’t know how the blow worked. Eventually they did the blow at a time of low wind velocity with a discharge into an enclosed courtyard.

            They knew how to work the thing without knowing how the thing worked.

            Other examples?

            1. Bob Latino

              Isn’t this true of most any new technology?

              We may know how to use the technology (or aspects of it), but we likely do not know how that technology works (e.g., iPads, iPods, iPhones, etc.).

              I know I often find where industrial mechanics are well-versed in how to fix a component (repeatedly:-), but lack the understanding of that component’s purpose when looking at the overall system.

              I may know how to fix the braking mechanism but do not know the function of that brake on the overall drive mechanism, should the brake fail. I find this ‘silo’ mentality often.

  3. Joe Evola

    1. The Peter Principle seems apt.
    2. The old ‘Safety 1’ process has never fully satisfied me because it seems that it doesn’t actually prevent accidents — although I’m sure it prevents some and reduces the consequences of others. I’m intrigued by the new ‘Safety 2’ (resilience) model, but I’m struggling to see how it will overcome the same human nature that fools and convinces itself that what it’s doing is reasonable and safe. If it doesn’t, then it will be just another academic exercise.
    3. The Rasmussen model has the acceptable performance boundary on the left. I’d be inclined to draw the safety limits to the top or to the right. (I know, odd comment…)
    4. The whole cycle reminds me of the old saying: If it moves, tax it; if it keeps moving, regulate it; if it stops moving, subsidize it. In the systems’ view if it’s operating, squeeze resources; if it operates near the safety limits, add more rules; if you have an accident, throw money at it.

    1. William R. Corcoran, PhD, PE

      Safety II (Safety 2, Safety Too, or whatever) creates a new line of inquiry, a new dysfunctional barrier, and a new corrective action topic.

      It used to be that the investigators and regulators would beat up on the victim organization for not learning from adverse experience.

      Now the victim organizations can be chastised for not learning from beneficial experience.

      How is this going to play out?

      What’s in the pipeline?

      Who is going to be first to benefit (Besides the merchants of resilience)?

  4. Richard Cook, MD

    Comments raise difficult questions. My understanding is that the marginal boundary is essentially the most prominent feature of daily ops. Routine ops are mostly inside the marginal boundary, with occasional periods of out-of-boundary activity.

    In contrast, operators have little reliable information about the actual location of the acceptable performance boundary — that information comes from accidents themselves which are, thankfully, relatively rare in most ops settings. In my presentations, the acceptable performance boundary fades out on the screen while the marginal boundary remains clear.

    Experience with ops outside the marginal boundary that do not produce overt accidents tends to convince everyone (operators and managers) that the marginal boundary is too conservative and that there is productivity to be gained by redrawing that boundary further ‘out’, away from the economic and workload failure boundaries. Indeed, the episodes of ops outside the marginal boundary are experiments with the system intended to discover the actual location of the acceptable performance boundary.

    My belief is that the number and variety of such experiments grow as the overt accident rate falls. [NB: This refers to the recognized accident rate, not the actual one. The recognized rate may be lower than the actual if information flow in the system is poor or suppressed, e.g. if accidents are routinely discounted or hidden.] It is natural for expensive systems to emit such experiments when the acceptable performance boundary location is uncertain.

    Paradoxically, in “super-safe” systems such as commercial aviation and some nuclear power plants, the intentionally great marginal distance works to undermine the basis for safety that is implicit in the model. The marginal boundary for these systems was intentionally set a generous distance from the acceptable performance boundary. This was done, in part, because it was essential to get regulatory and political acceptance for the creation of these systems. The economic pressure gradient for these systems is sometimes high enough that the system owners will rationalize operating outside the boundary because it is so conservative. Decisions to extend the lifetime of some reactors or to operate at greater than design power are examples.

    Rasmussen’s model is not normative. It does not tell us what to do. It is descriptive. It is a model of how the world (seems to) work. It was developed not as a means for getting safety but in order to better show how hazard and pressures combine to generate behaviors within complex systems. It can reveal some of the difficulties that the real world presents those who seek to manage those systems but it says nothing about what those people should do.

    1. William R. Corcoran, PhD, PE

      Dr. Richard,

      Thanks.

      One of the problems is that operators often know how to work the thing without knowing how the thing works. Rasmussen talks in terms of skill-based performance, rule-based performance, and knowledge-based performance.

      Knowledge-based performance is notoriously unreliable. Nevertheless, much of the operator action that goes to resilience is inherently knowledge-based.

      The same thinking applies to engineers and managers as to operators.

      1. Richard Cook, MD

        For many processes, operators are the only people who know much about the system. Researchers are continually impressed with how much operators know about the processes they work with and their ability to detect, analyze, and correct faults in these systems — often in spite of the poor design of the instruments and controls and the conflicts and pressures that define modern workplaces.

        Knowledge-based performance is not unreliable.

        SRK and the boundary model are both models of performance — one for individual cognitive agents and the other for complex systems. Just as one can build bad finite element models in structural engineering, one can build bad agent and system models in cognitive and systems engineering. Like other models, their use requires expertise to gain anything of value.

        Neither SRK nor the boundary model explicitly involves resilience. Rasmussen is, of course, from the pre-resilience era, and we cannot now ask him about how SRK relates to resilience. Knowing him as I have, though, I imagine that he would find resilience a bit “squishy”. Rasmussen was a rather hard-edged engineer who had little patience with speculation and great respect for data and measurement.

  5. William R. Corcoran, PhD, PE

    Dr. Richard,

    One of the troubles with squishy approaches is that they are difficult to capture in training materials. Many high hazard industries have requirements that people be trained to do what they do.

    Can you point me to any resilience training materials?

    All the best,

    Bill

    1. Gary Wong Post author

      Bill: Regarding resilience training materials, you may wish to check out Erik Hollnagel’s Resilience Analysis Grid (RAG). http://bit.ly/18fAxuL

      It’s a good starting point to learn more about Resilience Engineering and the four cornerstones of resilience: respond, monitor, learn, anticipate.

  6. Gary Wong Post author

    Bill and Joe: David van Valkenburg has written an excellent article in Drilling Contractor magazine describing how they applied a Safety-II approach in the oil industry. He explains that by using the Functional Resonance Analysis Method (FRAM) for their investigation, they were able to move attention away from humans as the problem (i.e., Safety-I) and focus on the process and on why workers did not follow rules and adjusted their behaviour (Safety-II). http://www.drillingcontractor.org/?p=32256

      1. Gary Wong Post author

        Bill: From what I read about BSEE on Wikipedia: “The Bureau was established in 2011 in response to the regulatory failure of Minerals Management Service (MMS) in the Deepwater Horizon oil spill to replace the Bureau of Ocean Energy Management, Regulation and Enforcement and MMS, which existed since 1985. The agency exercise…the authority to inspect, investigate, summon witnesses and produce evidence, levy penalties, cancel or suspend activities…”

        I could be wrong but I envision a lot of Safety-I command & control behaviour and looking for people to blame when things go wrong.

  7. Gary Wong

    Bill: Regarding regulatory agencies, I have some familiarity with Workers Compensation organizations. Where I live in British Columbia, since 1917 employers have funded a system of compensation and prevention in exchange for protection from lawsuits by workers injured or killed in work-related incidents. Last year I was in New Zealand observing the rollout of WorkSafe NZ, which was formed in December 2013.

    There is a perception held by critics that WCBs are not in the business of safety but are really monopolistic insurance companies. Their role is to broker the premiums received from employers and the money given to workers in the form of wage compensation, medical aid and rehabilitation.

    I may be overgeneralizing a bit by saying WCBs primarily aim at preventing workplace injury – particularly serious injury, illness and disease; the focus is on what goes wrong, i.e., Safety-I. They have the legislative authority to monitor compliance with occupational health and safety law and regulation; investigate serious incidents; and, in certain cases, levy financial penalties or other sanctions against employers for safety infractions. A command & control mentality coupled with people as the problem points to Safety-I.

    WorkSafe NZ literature states “we will support, monitor success, and enforce the law against wrongdoing.” Success will be measured by a 25% reduction in workplace fatalities and injuries by 2020, reinforcing the Safety-I paradigm.

    But it’s not all doom and gloom in my mind. I’m also aware of WCB employees who “get it” and wish to advance Safety-II thinking. Their internal challenge is cultural, struggling to change an organization so dedicated to Safety-I. We need to help them tell Safety-II stories and go for the butterfly effect.

    NZ and AUS readers: Please correct me if I’m wrong, but I understand WorkSafe NZ’s statute is based on the Australian Model Law (AML). It introduces a higher penalty regime aimed at the PCBU (person conducting a business or undertaking). The bottom line: if a frontline worker is killed, a director or senior manager could be punished for lack of due diligence.

    So how does a CEO stay on top of what is happening way down below in the ranks? Can a CEO rely on safety reports which come out a month later and are often filtered and cleansed by line managers and safety professionals? “Do you want to meet the 25% reduction goal? Hey! We can do that! We just manipulate the Safety-I data.”

    There is an alternative solution – implement a Human Sensor Network that connects Safety-I and Safety-II stories told by frontline workers to the C-suite. I can further outline this resiliency idea in another Safety Differently blog.

  8. Randy Cadieux

    Great article, Gary! This reminds me of my time in the US Marine Corps several years back when we deployed a new aircraft into combat. We had been training for years. We knew how to operate the aircraft as a crew. We knew our tactics. However, the complex adaptive system of training in a controlled military operating area is much different than the complex adaptive system of a combat theater of operations. We realized that our tactics had to be adapted and adjusted in real-time to meet the operational demands of combat operations. These were experiments to see what needed to be improved. We also had an inherent understanding of safety because 1) most pilots and aircrew like to come back home at the end of the mission, and 2) a safety mindset and Operational Risk Management were viewed as a professional way of doing business. So, in a way the convergence of operational creativity and a periodic check on safety boundaries helped enable us to be successful at our mission. After we would get back from a mission we would debrief, we would share stories and we would tell others about what we were doing. Then others would understand and use the same controlled adaptations, and we could then expand our success while staying within our safety margins.

    The stories were a powerful force. In general, military pilots (and aircrew) love to tell stories, and I believe storytelling is an underutilized technique in most industries. I just read your paper “The Power of Storytelling in Mining” and found it very powerful. Leaders and managers can gain a lot by letting workers share their stories and using that data to improve organizational success.
