My grandfather was a Military Policeman in the Royal Air Force during the Second World War. My father was a Royal Marine. I remember as a child seeing a clipping of an old article in a national newspaper with a large photograph of him accompanying a story about their deployment in the Middle East. This made me very proud, but it never made me want to join up. My brother thought differently and he went on to serve in the Royal Navy for almost 20 years.
In thinking about why the military life was not for me, one thing has always stood out – I don’t really like being told what to do. This, I’m sure, would have been generally seen as a career handicap in that environment, so I believe I made the right choice. I understand the need for obedience in a live conflict, where questioning orders may be life-threatening, particularly as one rarely has all the facts to hand to make an informed choice. Yet there are examples of commanding officers who gave wrong orders with disastrous results – General Custer, for example, or pretty much everyone involved with trench warfare in WWI. On balance, the military (at least the better-trained ones) probably get it right more often than not and, as Taleb points out in Antifragile, those approaches that have proven to be well-founded over many years of stress-testing should be respected, even if open to challenge.
Many organisations also use rules and compliance as their approach to safety management. But in the workplace where risks are, generally, less clear and obvious and where a loss of life is never an acceptable outcome, is an environment of rules and compliance appropriate or necessary? In an era where businesses are more open than ever; where employee engagement requires buy-in and understanding of corporate strategy; and where teams with a greater degree of control over their work are shown to perform better, why does safety largely remain in a hierarchical rules-based world?
Fundamentally, rules are about control. If we can control what is happening, we can determine outcomes. This is usually well-meaning, but just as no battle plan survives its first encounter with the enemy, no plan can fully anticipate every potential outcome in an ever-changing work environment, particularly once we throw human variability into the mix.
Rules attempt to remove this variability, but in doing so can also remove innovation, entrepreneurialism and responsiveness. In effect, it lobotomises the organisation. When an unusual situation occurs, we no longer have the capacity to respond unilaterally. Many major accident investigations point out opportunities to have prevented the situation escalating had the people involved had either the risk awareness to identify it (Deepwater Horizon), or felt confident enough to act without someone else’s authority (Piper Alpha). Both mindsets are quashed by compliance cultures.
Chief among the compliance requirements are the so-called golden rules, life-saving rules or some similar variant. Again, well-meaning, these are typically based on those activities most likely to have caused fatal accidents. However, it is overly simplistic to believe people will stop doing something life-threatening because there is a rule in place. If the threat to life wasn’t enough to prevent it, is a threat of dismissal going to?
Like many bureaucratic systems over time, rules become more important than that which they were intended to protect. I have seen someone punished for non-compliance with a seatbelt rule while reversing a vehicle at very slow speed. Conversely, I have seen use of a live ignition source in a flammable atmosphere deemed not to be an offence because the rule in place related to smoking in designated areas, not to use of the lighter. If the outcome is completely at odds with the risk, something is broken.
People respond better to being cared for than being told what to do. A fatal risk programme that identifies those major risks and provides information to help workers make better risk-informed decisions will contain essentially the same information as a set of rules. But much more buy-in is achieved from a message that says, “We care about your wellbeing – please remember these principles, it could save your life” rather than “Don’t do this or we’ll fire you.”
The wording, tone and symbols used dramatically change the impact of the message. An example where this is used to great effect can be found in the safety principles and safety habits posters here.
So do we need rules at all? Clearly it is unreasonable to expect workers to make every single decision based on a detailed risk assessment of the circumstances at hand. In these instances rules can be helpful to provide a rapid solution. Yet there are other occasions where existence of the rule implies safety right up to the point where the rule is breached, when this may not be the case if there are additional risk factors involved.
As in the military example, rules are most beneficial when not all the information is known (or even knowable) and people cannot make risk-informed decisions. This is typically the case in complex systems with high hazard potential where a quality decision can only be reached by careful consideration of all factors by a multi-disciplined team pooling their knowledge. In a nuclear waste facility dealing with plutonium contaminated material, for example, there is a safe upper limit for the surface density of an array of stored material. It is not possible for a process operator to make risk-informed real-time decisions about the structure of the array, but (in combination with other controls elsewhere) a simple rule can be established about the number of containers allowed in a stack.
Limiting rules to certain circumstances where risk is higher has the benefit of emphasising the importance and so making compliance more likely. They must also be quite specific. The broader the rule – always wear a seatbelt – the more likely it is to be seen as inappropriate in some circumstances and therefore optional. After all, most of us have broken the speed limit because we know it isn’t realistic in all situations and there are times when it can be broken without high likelihood of accident (with apologies to all traffic police).
Professor Andrew Hopkins has stated (Working Paper 72 – National Research Centre for OSH Regulation) that the control pendulum has swung too far towards risk management and needs to swing back to more rule-compliance, arguing that operationally workers need the simplicity provided by rules. He does, however, recognise the need for balance between the two. As ever, there is no black and white, right or wrong in safety. How do we find the balance in the grey zone?
- Be careful of phrasing. Couch requirements in terms of supporting safe action, not in the language of absolutes and threats;
- Impose rules only where risk is high, to emphasise their importance;
- Impose rules for specific, usually complex situations, where local decision making is difficult;
- Use the rules to build a framework within which workers are given the licence to use their core skills to change, adapt and improve;
- When the framework becomes challenged or changed, involve the workers in consideration of the implications.
Rules are the power tools of safety. They are labour saving devices that do most of the work, but have their own significant risks if mishandled and fail when it comes to the precision needed for a fine finish. For that we need to overlay the hand tools of carefully applied risk management. It’s slower, it takes more focus and more expertise but it can achieve that final few percent of improvement.
What are the alternatives to rule-based behavior?
In my experience, safety (behavioral) rules are great for developing automatons or droids; decision-making skill training is more productive for developing the adaptive capabilities to cope with variable situations or contingencies, if for no other reason than rule-makers cannot anticipate every necessary response for every aspect of every contingency.
Hello Craig, nice read and one of a few subjects I speak on when selling Safety in a different way. Like anything, and broadly speaking, there needs to be a healthy level of flexibility, otherwise things break. When I speak of rules I mean the enormous collection of rules we have created and continue to create, especially when there is an incident or occurrence in the world – suddenly a regulation pops up to address it. This puts us all into the same mould; personally I appreciate and enjoy how we are all so wonderfully different.
So in a nutshell, my motivational talk revolves around the fact that we “are” independent free-thinking beings, and when you look at the statistics (road accidents, for example) rules are not the complete answer – we break rules because we’re told we have to.
I have a saying that I use in my talks – “Have to, Need to, Want to”. In closing – as grown-ups we only really do what we want to, so the idea behind selling safety rules (which I embrace) is that they only have true value when the work community, whatever that may be, wants to follow them. Wanting is owning the concept.
The ubiquity of road rule breaking is always a good example
Permits, pergolas, fishing spots, trailer registrations, burning rubbish, using water, travelling with fruit, the things we do in our garage – the list goes on, and it’s all about having to, needing to, wanting to.
Congratulations on your text. Nice insights about rules. With simplicity, you questioned the real value of rules: do rules apply in all circumstances? However, you conclude that rules are good in some contexts and should be enforced, because those who designed them are smarter than the operators involved in complex and difficult decision-making situations. The riskier the situation, the more procedures can help. Too loose is bad, too tight is good. At this point I disagree. As an aviation practitioner and researcher into rules/procedures for abnormal and emergency situations in the cockpit, I can assure you that the more uncertain a situation is, the less pilots use procedures. Actually, they only consult the checklist to get some insights that can help them understand, diagnose or solve the problem. If they don’t find useful information in the checklist, other sources are deployed, such as maintenance records, maintenance technicians or other manuals. These results are in line with the ‘procedures as resources for action’ approach: procedures are only one among many different resources of information that support people in coping with or avoiding constraints. If we do see procedures as supportive tools, then there is no right or wrong.
Thanks for your comments. My final bullet list noted rules for complex, but specific, situations. The emphasis on ‘specific’ was deliberately intended to exclude emergency situations, as these cannot be predicted and prescription can therefore lead to wrong choices. By reducing reliance upon compliance we help people respond better to emergency situations, and airline pilots are an excellent example. I don’t have a lot of knowledge of the aviation field, but it will be interesting to see whether the broader adoption of safety management systems breeds more of a compliance culture.
Nice. Matthieu Weggeman wrote in his 2007 book that (paraphrasing somewhat) making many rules is a great solution for environments with many asocial people, or people who aren’t particularly fond of thinking.
Thought provoking post, Craig. I’d like to share my view:
Safety-I view: Rules are necessary because they create governing and controlling constraints. Workers are expected to stay within the clear boundaries imposed.
Safety-II view: Rules are acceptable if they allow a worker to proactively respond to varying conditions and adjust performance to get the job done.
Complexity-based view (Safety-III?): Rules create the conditions that enable safety to emerge as a property of a complex adaptive system. However, adding more and more rules can lead to a tipping point that allows danger to emerge. The challenge is not knowing where the tipping point is. Over time a worker enters the zone of complacency and begins to drift closer and closer to the brittleness boundary. When s/he falls over the edge, failure occurs and there is a plunge into chaos.
We should stop forcing workers to remember a ton of rules and carry a thick rule book. Instead, replace them with heuristics, a few simple rules to help them deal with uncertainty, volatility, unpredictability, ambiguity. In the chaos of the battlefield, US soldiers follow 3 heuristics to increase the chances of staying alive: Keep moving, head to higher ground, stay in communication.
I think my major theme principally aligns with your Safety-II interpretation, but there is probably an overlap into complexity, where we work together to amend the boundary conditions if the envelope starts to become challenged. With safety as an emergent property of a complex system, I am still trying to determine whether there is a way we can practically control routine properties to manage what emerges (it’s not as if we have a Mandelbrot set for safety). I’m not sure if rules come into this – it is possibly more about developing processes that allow emergence. Although this depends on whether we are talking about hard and fast rules, or heuristics, as you say.