Zero pessimism

The Enlightenment once suggested that if we are smart, if we think harder about a problem with those minds we can trust, then we can make the world a better place; we can constantly improve it. Modernism says that technical-scientific rationality can create that better, safer, more predictable, more controllable world for us. We might achieve workplaces without injuries, incidents or accidents. If, for example, we plan the work carefully, if we design well and train, discipline, supervise and monitor the people who are going to execute the work (just like Frederick Taylor recommended), we can eventually live in a world without human error. This ideology of constant improvement, and the vision of an immaculate “city on the hill,” is deeply embedded in the zero visions of many industries and organizations, from road traffic to construction. Networks and forums for vision zero exist in countries around the world. Membership in such networks, and the commitment it implies, can get organizations to realize safety improvements, because they need to back up that commitment with resources. But these were already very safe and committed companies: being a high achiever partly explains one’s membership in such a group. Very little is typically known, however, about the exact activities and mechanisms that lie underneath the reductions in harm that committed companies have witnessed, and little research has been conducted into them.

One important reason for this is that the goal, the zero vision, was never driven by safety theory or research. It has grown out of a practical commitment and a faith in its morality. It is defined by its dependent variable, not its manipulated variables. In typical scientific work, the experimenter gets to manipulate one or a number of variables (called the independent or manipulated variables). These are in turn presumed to have an effect on one or a number of dependent variables. In this, safety is always the dependent variable—it is influenced by a lot of other things (independent or manipulated variables). Increases in production pressure and resource shortages (independent variables), for example, push the operating state closer to the marginal boundary, leading to a reduction in safety margins (the dependent variable). A decrease in the transparency of interactions and interconnections (the independent variable) can increase the likelihood of a systems accident (the dependent variable). Structural secrecy and communication failures associated with bureaucratic organization (independent variables) can drive the accumulation of unnoticed safety problems (the dependent variable). Managerial visibility on work sites (an independent variable) can have an impact on worker procedural compliance rates (the dependent variable).
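To make the distinction concrete, here is a minimal illustrative sketch (not from the article; the variable names, coefficients and noise term are all hypothetical assumptions) of why a safety outcome behaves as a dependent variable: the only things anyone can set directly are the inputs, and the incident count then emerges from them plus variation that nobody controls.

```python
# Illustrative sketch only: a toy model of manipulated vs dependent variables.
# All names and coefficients (production_pressure, training_hours, transparency)
# are hypothetical and chosen purely for illustration.
import random

def simulate_incidents(production_pressure, training_hours, transparency,
                       n_sites=100, seed=42):
    """Return yearly incident counts for n_sites hypothetical work sites.

    Only the inputs (the manipulated/independent variables) can be set directly;
    the incident count emerges from them plus noise outside anyone's control.
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sites):
        # Assumed relationship: pressure erodes margins, training and
        # transparency restore some of them. Entirely illustrative.
        expected_rate = max(
            0.1,
            2.0 + 1.5 * production_pressure
            - 0.8 * training_hours
            - 1.0 * transparency,
        )
        # Residual variation: the part of the dependent variable that depends
        # on things no manager or worker directly controls.
        noise = rng.gauss(0, 0.5)
        counts.append(max(0, round(expected_rate + noise)))
    return counts

# A manager can only turn these knobs...
baseline = simulate_incidents(production_pressure=1.0, training_hours=0.5, transparency=0.2)
improved = simulate_incidents(production_pressure=0.6, training_hours=1.5, transparency=0.8)

# ...and the outcome then is what it is: lower on average, but never guaranteed zero.
print(sum(baseline) / len(baseline), sum(improved) / len(improved))
```

The point of the sketch is only that turning the knobs shifts the odds in the right direction; it cannot legislate the output down to zero.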

Zero vision gets this upside down. It tells managers to manipulate a dependent variable. Safety research, by contrast, is mostly about manipulated variables, though it also considers which dependent variables are worth looking for (e.g. are incident counts meaningful dependent variables to measure? Can we develop new indicators of resilience?). Mostly, though, theories specify the kinds of things that engineers, experts, managers, directors, supervisors and workers need to do to organize work, communicate about it, and write standards for it. What they need to manipulate, in other words. Outcomes (measured in terms of incidents or accidents, or in terms of indicators of resilience) then are what they are. Zero vision turns all of this on its head. Managers are expected to manipulate a dependent variable—a contradiction in terms. Manipulating a dependent variable is something that science considers to be either experimentally impossible or professionally unethical. And the latter is what zero vision can become as well. With a focus on the dependent variable—in terms of how bonuses are paid, contracts are awarded, promotions are earned—fraudulent manipulation of the dependent variable (which is, after all, a variable that literally depends on a lot of things not under one’s control) becomes a logical response.

Not surprisingly, there is no evidence that zero vision has an impact on safety that is any greater than the next safety intervention. This may not matter, however, because zero visions are a strong instrument of what is known as bureaucratic entrepreneurialism. They allow people involved in safety to say two things simultaneously: that great things have already been accomplished because of their work, and that more work is necessary because zero has not yet been reached. And because it never will be, or because the organizational fear of backsliding away from zero can be maintained, safety people will stay relevant, employed, contracted, funded. Whether people in these positions genuinely believe that injuries and accidents can be fully expunged is hard to know. But they have to be seen to believe it—in order to attract investments, work, federal grants, contracts, regulatory approval, and affordable insurance.

Does a zero vision have practical benefits, though? Defining a goal by its dependent variable tends to leave organizations in the dark about what to do (which variables to manipulate) to get to that goal. Workers, too, can become skeptical about zero sloganeering without evidence of tangible change in local resources or practices. It is easily seen as leadership double-speak. Not only does the vision itself fail to practically engage workers, there is nothing actionable (no manipulable variables) in a mere call to zero that they can identify and work with.

A zero vision also tends to stigmatize workers involved in an incident. One of the most deeply rooted instances of this can be found in medicine, which has had its own version of vision zero handed down through decades, centuries even. Many there are still battling the very idea that errors don’t occur. They are faced daily with a world where errors are considered to be shameful lapses, moral failures, or failures of character in a practice that should aim to be perfect. Errors are not seen as the systematic byproduct of the complexity and organization and machinery of care, but as caused by human ineptitude; as a result of some people lacking the “strength of character to be virtuous”. The conviction is that if we all pay attention and apply our human reasoning like our Enlightenment forebears, we too can make the world a better place. The 2000 Institute of Medicine report was accompanied by a political call to action to achieve a 50% reduction in medical mistakes over five years. This was not quite a zero vision, but halfway there. And commit to it we must: it would essentially be our moral duty as reasonable humans. It may have exacerbated, in medicine and elsewhere, feelings of shame and guilt when failures do happen, and led to underreporting, fudged numbers and stifled learning. It is quite befuddling, then, that many industries in Australia and elsewhere are moving in exactly the opposite direction (by basically declaring that they want zero injuries or incidents) from where many safety and human factors people want medicine to go: toward acknowledging that errors and failures are a normal, though undesirable, part of being in that business.

Investigative resources are easily wasted too: if zero is assumed to be achievable, then everything is preventable. And if everything is preventable, everything needs to be investigated, including minor sprains and papercuts. And if an organization doesn’t investigate, that can even have direct legal implications. A documented organizational commitment to zero harm can lead a prosecutor to claim that if the organization and its managers and directors really believed that all harm was preventable, then such prevention was reasonably practicable. They are liable if harm occurs, after all, since they or their workers must have failed to take all reasonably practicable steps to prevent it. Accidents are evidence that managerial control was lost; that a particular risk was not managed well enough. Such failures of risk management open the door to look for somebody who was responsible, on whose account we can put the failure, including that of managers and directors. The 2011 harmonized OHS legislation gives prosecutors precisely that power (even though it has not been tested in court yet).

A zero vision is a commitment. It is a modernist commitment, inspired by Enlightenment thinking, that is driven by the moral appeal of not wanting to do harm and of making the world a better place. It is also driven by the modernist belief that progress is always possible, that we can continually improve, always make things better. Past successes of modernism are taken as a reason for such confidence in progress. After all, modernism has helped us achieve remarkable increases in life expectancy, create fantastic technologies, and reduce all kinds of injuries and illnesses. With even more of the same efforts and commitments, we should be able to achieve more of the same results, ever better! But a commitment should never be mistaken for a statistical probability. The statistical probability of failure in a complex, resource-constrained world—both empirically, and in terms of the predictions made by theory—simply rules out zero. In fact, safety theorizing of almost any pedigree is too pessimistic to allow for an incident- and accident-free organization. Look at man-made disaster theory, for example. On the basis of empirical research on a number of high-visibility disasters, it concluded that “despite the best intentions of all involved, the objective of safely operating technological systems could be subverted by some very familiar and ‘normal’ processes of organizational life”. Such “subversion” occurs through usual organizational phenomena such as information not being fully appreciated, information not being correctly assembled, or information conflicting with prior understandings of risk. Barry Turner, father of man-made disaster theory, noted that people are prone to discount, neglect or leave out of discussion relevant information. So no matter what vision managers, directors, workers or other organization members commit to, there will always be erroneous assumptions and misunderstandings, rigidities of human belief and perception, disregard of complaints or warning signals from outsiders, and a reluctance to imagine worst outcomes—as the normal products of bureaucratically organizing work.

Not much later, Perrow suggested in his work on Normal Accidents Theory that accident risk is a structural property of the systems we build and operate. The extent of their interactive complexity and coupling is directly related to the possibility of a systems accident. Interactive complexity makes it difficult for humans to trace and understand how failures propagate, proliferate and interact, and tight coupling means that the effects of single failures reverberate through a system—sometimes so rapidly or on such a massive scale that intervention is impossible, too late, or futile. The only way to achieve a zero vision in such a system is to dismantle it and not use it at all. Which is essentially what Perrow recommended societies do with nuclear power generation. Some would argue that Perrow’s prediction has not been borne out quantitatively since the theory was first published in 1984. Perrow’s epitome of extremely complex and tightly coupled systems—nuclear power generation—has produced only a few accidents, after all. Yet the 2011 earthquake-related disaster at Fukushima closely followed a Perrowian script. The resulting tsunami flooded low-lying rooms at the Japanese nuclear plant, which contained its emergency generators. This cut power to the coolant water pumps, leading to reactor overheating, hydrogen-air chemical explosions and the spread of radiation. Increasingly coupled and complex systems like military operations, spaceflight and air traffic control have also produced Perrowian accidents since 1984. Zero seems out of the question.

Diane Vaughan’s analysis of the 1986 Space Shuttle Challenger launch decision reified what is known as the banality-of-accidents thesis. Similar to man-made disaster theory, it says that the potential for having an accident grows as a normal by-product of doing business under normal pressures of resource scarcity and competition. Telling people not to have accidents, or trying to get them to behave in ways that make having one less likely, is not a very promising remedy. The potential for mistake and disaster is socially organized: it comes from the very structures and processes that organizations implement to make them less likely. Through cultures of production, through the structural secrecy associated with bureaucratic organizations, and through a gradual acceptance of risk as bad consequences are kept at bay, the potential for an accident actually grows underneath the very activities an organization engages in to model risk and get it under control. Even high-reliability organization (HRO) theory is so ambitious in its requirements for leadership and organizational design that a reduction of accidents to zero is all but out of reach. Leadership safety objectives, maintenance of relatively closed operational systems, functional decentralization, the creation of a safety culture, redundancy of equipment and personnel, and systematic learning are all on the required menu for achieving HRO status. While some organizations may hew more closely to some of these ideals than others, none has closed the gap perfectly, and there are no guarantees that manipulating and tweaking these attributes will bring an organization to zero or keep it there.

The call to industry should be this—don’t worry about the dependent variable. It is what it is. Worry instead about the manipulable variables, and proudly talk about those. Compare yourselves on what you do, not on what the results are.

Please see also: Donaldson, C. (2013). Zero harm: Infallible or ineffectual. OHS Professional. Melbourne, Safety Institute of Australia: 22-27.

11 Comments

  1. Andrew Townsend

    Sidney – In my experience ‘Zero Harm’ is counterproductive to improving the probability of not having accidents. Any disagreement or opinion about an imperfection in the system is inhibited (through self-censorship or deliberate stifling) from travelling upwards through an organisation. One is branded a heretic or not a team player for saying “Hang on a moment fellas, this won’t work because….”. No system is, or ever will be, perfect. Yet for those on high, to admit imperfection risks being crucified by the media and politicians. Fear of being seen to be imperfect puts the whole system into stasis and, ironically, the system becomes more imperfect because of it.

    This problem goes beyond just the organisational level. It is societal. Try telling the average beneficiary of fossil fuels that the only way not to have accidents is not to drive, fly, have electricity, televisions (yuk), health care etc… They will give you an incredulous stare. They cannot compute that the demand for cheap gasoline risks another Gulf of Mexico or Piper Alpha. Somehow we have to get this message, that perfection is an unattainable illusion, to more than just the OHS community.

  2. John Culvenor

    The “vision”, if it is just that, is no more than barracking for safety.

    Akin to cheering on a horse in a race. It’s not affecting the race.

    Many programs (safe culture programs, leadership programs) have the potential to descend into barracking.

    This then manifests itself in “leaders” walking around “showing their commitment”.

    How do they show their commitment?

    (a) By doing their job well? No. Firstly that is too hard. Secondly they don’t know how to in regard to safety. Thirdly, no one would see it.

    (b) By walking around telling other people “at the coal face”, whose jobs they don’t understand, what they are doing wrong? Yes, that’s easier and it is visible.

  3. John Culvenor

    Sidney: The “newness” of prosecution powers in this part of the article could do with some exploration: “Such failures of risk management open the door to look for somebody who was responsible, on whose account we can put the failure, including that of managers and directors. The 2011 harmonized OHS legislation gives prosecutors precisely that power (even though it has not been tested in court yet).”

    However, I can’t think of an example where a person responsible for a problem could be prosecuted under the “new” laws but could not have been under the existing laws.

    http://safedesign.wordpress.com/2013/06/23/the-australian-non-revolution-in-safety-laws/

  4. mikebehm

    I recall Sidney’s slide at the SIA conference of the worker in a hospital bed with a computer; his tongue-in-cheek message was that this is how to avoid lost-time accidents. The multinational I worked for in the ’90s would collect the dependent variables on every facility worldwide in each division every month and order them from best to worst. As a corporate staffer, I got the opportunity to learn about the North American facilities, and particularly the 40+ in the Division I mostly supported. It took me about a year or so to figure out that the ones at the top were manipulating the numbers. They worried about the DV and how they ‘looked’ to corporate and trade associations. One day the Divisional CEO asked me which facilities he should expect a late-night phone call from due to some major EHS issue. I told him he should worry about the ones at the top of the monthly stats chart. He was puzzled, so I explained that the ones at the bottom are transparent and are learning about their deficiencies. They’re concerned about what their employees think, whereas the ones at the top are too concerned about what you think. The lowly corporate staffer did not get fired. We had a good relationship, and he understood what I was saying.

    1. Peter Sinclair

      Absolutely agree; and unfortunately I have experienced this.
      Organisations, particularly management, need to understand that without transparency and truth in safety they cannot hope to progress.

  5. Rob Robson

    A very interesting article written by Mary Rowe, then Ombuds at MIT in Boston, on the topic of Zero Barriers complements this idea of Zero Pessimism quite nicely. Although it was written to address the very common “craze” of Zero Tolerance for certain behaviours in healthcare as a way of promoting meaningful and effective conflict management, there are some important parallels and lessons for system safety practitioners. Mary and her co-author, Corinne Bendersky, propose Zero Barriers for workers to provide accounts of circumstances that may encourage or facilitate behaviours (this is the essence of accountability: creating conditions where workers feel safe to tell their stories), instead of the popular, and totally unsuccessful, approach of Zero Tolerance based on multiple rules, regulations and constraints.

    The article is entitled Workplace Justice, Zero Tolerance and Zero Barriers. I am not sure if I can post it on the blog.

  6. Andrew Rae

    Sidney: Thanks for this article. It articulates clearly the potential problems with “zero x” as a slogan. It would be good to have empirical evidence of the practical effects of adopting zero harm as a goal/slogan/policy, but adopting zero is more likely to be an effect than a cause in itself.

    My suspicion is that it does have an effect by encouraging elimination bias – excessive focus on some sources of risk to the detriment of others.

    Sadly, the original Balfour Beatty “Zero Harm by 2012” WAS focussed on actions as much as outcomes, a detail lost in the spread of the idea.

  7. Stavros Prineas

    ‘Zero vision’ is a bit like ‘waging war’ on error, or terror, or drugs. Americans seem to be especially fond of ‘stretch goals’ while ignoring the personal and ethical cost of the belligerent perfectionism that often accompanies them.

  8. Pete Smith

    The idea of the immaculate ‘city on the hill’ is a powerful metaphor.

    Like all utopian ideas, it is a tempting vision strongly dependent on the notion of ‘everyone being just like me’.

    Where it fails is in its underestimation of human nature.

    In terms of the increasingly bureaucratic healthcare environment, the divide between management and clinicians is real and worsening as offices become more sterile and removed from the clinical interface.

    Therefore it is easier to imagine everybody being compliant (just like me) with issued mandates, and easier to get offended when they aren’t.

    The goal of Zero Harm can be approached through sheer luck or good management, and it is important to not confuse the two.

    Clinical improvements from reflective learning can deliver safety results, whether that exposure is from simulation or real-life events, but it can only do so to best effect when dominant belief patterns on both sides of the divide are broken down to the extent that the dialogue between decision makers and those providing care becomes rational, exploratory and collaborative in nature.

    But there I go again. My utopia assumes that everyone is ‘just like me’, whereas we all bring to work our different types of brains.
