During engineering school in the late 1960s I was taught to ignore friction as a force and to use first-order approximate linear models. When measuring distance, ignore the curvature of the Earth and treat it as a straight line. Break things down into their parts, analyze each component, fix it, and then put it all back together. In the 1990s another paradigm, coined Systems Thinking, came about: we jumped from Taylorism to embrace the Fifth Discipline, socio-technical systems, and Business Process Reengineering. When human issues arose, we bolted on Change Management to support huge advances in information technology. All industries have benefited from and been disrupted by business and technological breakthroughs. Safety as an industry is no exception.
In January 2000, Stephen Hawking stated: “I think the next century will be the century of complexity.” In the spirit of safety differently, let’s explore safety from a non-linear complexity science perspective. Safety is no longer viewed as a static product or a service but as an emergent property of a complex adaptive system (CAS). Emergence is a real-world phenomenon that Systems Thinking does not address or, perhaps, chooses to ignore to keep matters simple. Glenda Eoyang defines a CAS as “a collection of individual agents who have the freedom to act in unpredictable ways, and whose actions are interconnected such that one agent’s actions changes the context for other agents.”
As the eras of business have evolved from scientific management to systems thinking, safety has evolved in parallel. The graphic below is a modification of an Erik Hollnagel slide presented at the 2012 Resilience Learning Lab in Vancouver, extended to an Age of Cognitive Complexity.
- “Different is more” which means the greater the diversity of agents, the greater the distributed cognition. Think wisdom of crowds, crowdfunding, crowdsourcing.
- “More is different” which means when you put the pieces of a complex system together you get behavior that is only understandable and explainable by understanding how the pieces work in concert (see Ron Gantt’s enlightening posting). In a CAS, doing the same thing over and over again can lead to a different result.
- “Different is order within unorder” which means in a complex environment full of confusion and unpredictability, order can be found in the form of hidden patterns. Think of a meeting agenda that shapes the orderly flow of discussion and contributions of individuals in a meeting. In nature, think of fractals that can be found everywhere.
When working in a Newtonian-Cartesian linear system, you can craft an idealistic Future state and develop a safety plan to get there. However, in a CAS, predicting the future is essentially a waste of time. The key is to make sense of the current conditions and focus on the evolutionary potential of the Present.
Is the shift to complexity-based safety thinking sufficient to warrant a new label? Dare we call this different paradigm Safety-III? It can be a container for the application of cognition and complexity concepts and language to safety: Adaptive safety, Abductive safety reasoning, Exaptive safety innovation, Viral safety communication to build trust, Autopoietic SMS, Dialogic safety investigation, Heuristics in safety assessment, Self-organizing role-based crew structures, Strange attractors as safety values, Cognitive activation using sensemaking safety rituals, Feedback loops in safety best practices, Brittleness in life saving rules, Swarm intelligent emergency response, Human sensor networks, Narrative safe-to-fail experiments, Attitude real-time monitoring as a safety lead indicator, Cynefin safety dynamics. Over time I’d like to open up an exploratory dialogue on some of these on the safetydifferently.com website.
- ‘Safety is clearly an emergent property of systems.’
- ‘It is not possible to take a single system component, like a software module, in isolation and assess its safety. A component that is perfectly safe in one system may not be when used in another.’
- ‘When accidents are seen as complex phenomena, there is no longer an obvious relationship between the behavior of parts in the system (or their malfunctioning, e.g. ‘‘human errors’’) and system-level outcomes.’
- ‘Investigations that embrace complexity, then, might stop looking for the ‘‘causes’’ of failure or success. Instead, they gather multiple narratives from different perspectives inside of the complex system, which give partially overlapping and partially contradictory accounts of how emergent outcomes come about.’
- ‘The complexity perspective dispenses with the notion that there are easy answers to a complex systems event—supposedly within reach of the one with the best method or most objective investigative viewpoint. It allows us to invite more voices into the conversation, and to celebrate their diversity and contributions.’
- By taking complexity theory ideas like the butterfly effect, unruly technology, tipping points, and diversity, we can understand that failure emerges opportunistically, non-randomly, from the very webs of relationships that breed success and that are supposed to protect organizations from disaster.
- ‘Safety is an emergent property, and its erosion is not about the breakage or lack of quality of single components.’
- ‘Drifting into failure is not so much about breakdowns or malfunctioning of components, as it is about an organization not adapting effectively to cope with the complexity of its own structure and environment.’
- ‘Karl Weick in a 1987 California Management Review article introduced the idea of reliability as a dynamic non-event. This has often been paraphrased to define safety as a ‘dynamic non-event’…even though it may be a slight misinterpretation.’
- ‘Safety-I defines safety as a condition where the number of adverse outcomes (accidents/incidents/near misses) is as low as possible.’
- When there is an absence of an adverse outcome, it becomes a non-event which people take for granted. When people see nothing, they presume that nothing is happening and that nothing will continue to happen if they continue to act as before.
- ‘Safety-II is defined as a condition where as much as possible goes right.’
- ‘In Safety-II the absence of failures is a result of active engagement. This is not safety as a non-event but safety as something that happens. Because it is something that happens, it can be observed, measured, and managed.’
- Safety-III is observing, measuring, maybe managing, but definitely influencing changes in the conditions that enable safety to happen in a CAS. In addition, it’s active engagement in observing, measuring, maybe managing, but definitely influencing changes in the complex conditions that prevent danger from emerging.
- 18:29 ‘If we could start an emerging dialogue amongst our workers around the topic of conditions, we accelerate the learning in a really unique way.’
- ‘We started off with Safety-I, Safety-II. That was our original model. What we really recognized rather quickly was that there was a Safety-III.’
- ‘Safety-III was developing this concept of expertise around recognition of changes in the environment or changes in the conditions.’
- ‘The fast pace of change in business today arguably requires a taste for chaos and an ability to cope well with high uncertainty.’
- ‘What is called ‘innovation’ may actually be just coping with an ever-faster pace of change: anticipating the changes on the horizon and adapting promptly to the changes that are already occurring. This sort of flexible adaptability is genuinely antithetical to orderly, rule-governed, stable behavioral protocols.’
- ‘Safety has traditionally been prone to orderly, rule-governed, stable behavioral protocols. For the sorts of organizational cultures that succeed in today’s environment, we may need to create a more flexible, adaptable, innovative, chaos-tolerating approach to safety.’
- ‘With the advent of ‘big data,’ business today is ever-more analytic. I don’t know whether it’s true that safety people tend to be intuitive/empathic – but if that’s the case, then safety people may be increasingly out of step. And safety may need to evolve in a more analytic direction. That needn’t mean caring less about others, of course – just using a different skill set to understand why others are the way they are.’
- I suggest that different skill set will be based on a complexity-based safety approach.
- Four essential abilities that a system or an organisation must have: Respond, Anticipate, Monitor, Learn. Below is how I see the fit with complexity principles.
- We respond by focusing on the Present. Typically it’s an action to quickly recover from a failure. However, it can also be seizing an opportunity that serendipitously emerged. Carpe diem. Because you can’t predict the future in a CAS, having a ready-made set of emergency responses won’t help when unknowable and unimaginable Black Swans occur. Heuristics and complex swarming strategies are required to cope with the actual.
- We anticipate by raising our awareness of potential tipping points. We don’t predict the future but practice spotting weak signals of emerging trends as early as we can. Our acute alertness may sense we’re in the zone of complacency and need to pull back the operating point.
- We monitor the emergence of safety or danger as adjustments are made to varying conditions. We monitor the margins of maneuver and whether “cast in stone” routines and habits are building resilience or increasing brittleness.
- We learn by making sense of the present and adapting to co-evolve the system. We call out Safety-I myths and fallacies proven to be in error by facts based on complexity science. We realize the importance of praxis (co-evolving theory with practice).
I like the thinking and I think it is a fascinating discussion point. My frustration lies with the never-ending need to make things sound complex and intelligent. Yes, the topics are complex, but the method of communication does not need to be. What’s missing? Plain English…
• Character
An inescapable fact is that the character of an industry is largely a mirror image of the character of its regulators and, of course, vice versa. Competence, integrity, compliance, and transparency, or their lack, seldom exist on only one side of an industry-regulatory interface.
Amen to plain English! I tend to read the reviews first. That helps a lot in some cases.
• Elementary Failures
An inescapable fact is that the competent investigation of every harmful event reveals that the causation of the harm includes the failure to apply elementary principles of design, human factors, engineering, science, operations, communications, administration, and/or management. Often there is a contemptuous/dismissive disregard for even needing to know what these elementary principles are, much less to flow them down to where they might need to be applied.
Examples?
Counter-examples?
I think that your article is a really helpful contribution and brings together a number of key themes: systems theory, complex systems, resilience, cognitive complexity, measuring safety, etc. I am reading your article whilst at a conference about Process Safety, and I see a lot that the process safety community could gain from these thoughts. A key issue moving forward is how to investigate and translate these concepts about complexity into pragmatic tools or methods for application to the socio-technical systems that so need them. The challenge is not to become too pragmatic too quickly, completely removing the key characteristics of complexity we are trying to investigate, while staying practical enough that industry will start trying the tools out. I think that any such tools or methods should clearly include human factors approaches, to consider people, but also include the other parts of these systems in the analysis: equipment, material and energy streams, control systems and procedures. All of these components form the complex system and they should all be investigated and explored together.
What do other people think?
Ben
This was my first thought, too. There is a wealth of good cutting edge information out there, as evidenced by the many good articles on this site. But out in the working world, little progress has been made. The vast majority of accident investigations, site audits, safety management reviews etc carried out simply do not have the time, the budget, the capability or, unfortunately in many cases, the will, to incorporate this degree of complex thinking. It’s OK applying it to Deepwater Horizon, but not to someone losing an arm on a construction site. There is a disconnect between academia and industry that is more important to solve than working out what the next safety number should be (although that does not detract from the excellence of this particular blog).
“And it ought to be remembered that there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things, because the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly, in such wise that the prince is endangered along with them.”
Niccolò Machiavelli
“The Prince”
Chapter VI
http://www.gutenberg.org/files/1232/1232-h/1232-h.htm
1. Causation and Effect:
Every effect (with all of its attributes/ properties/ characteristics) is the result of its causation.
2. Sensitivity:
Any item, that if eliminated, negated, or otherwise changed affects an attribute/ property/ characteristic of the effect, was part of the effect’s causation.
3. Indifference:
Any item that can be eliminated, negated, or otherwise changed without affecting an attribute/ property/ characteristic of the effect was not part of the effect’s causation.
4. Totality of Conditions, Behaviors, Actions, and/or Inactions:
An effect is not the result of one “cause”, but rather is the result of a totality/ summation/ aggregation of conditions, behaviors, actions, and/or inactions.
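Read as a procedure, the Sensitivity and Indifference principles amount to a counterfactual toggle test: remove or negate an item and see whether the effect changes. Here is a minimal sketch in Python; the effect() model and the condition names are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of the Sensitivity/Indifference test described above.
# The effect() function and condition names are hypothetical; a real
# analysis would substitute the actual model of the harmful event.

def effect(conditions: set[str]) -> bool:
    """Toy model: harm occurs when an energized line meets a missing guard."""
    return "line_energized" in conditions and "guard_missing" in conditions

def classify(conditions: set[str]) -> dict[str, str]:
    """Toggle each item and see whether the effect changes (principles 2 and 3)."""
    baseline = effect(conditions)
    result = {}
    for item in conditions:
        changed = effect(conditions - {item}) != baseline
        result[item] = "causal (Sensitivity)" if changed else "not causal (Indifference)"
    return result

print(classify({"line_energized", "guard_missing", "blue_hardhat"}))
# blue_hardhat drops out under Indifference; the other two satisfy Sensitivity,
# and together they illustrate principle 4: the effect comes from a totality.
```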
• Elementary Failures
An inescapable fact is that the investigation of every harmful event reveals that the causation of the harm includes the failure to apply elementary principles of design, engineering, science, operations, communications, administration, and/or management. Often there is a contemptuous disregard for even needing to know what these elementary principles are, much less to flow them down to where they might need to be applied.
Thanks for your insights Gary. I agree that complexity is getting increasing attention in safety and in a broader scope as well. In my opinion we are moving toward perspectives that put safety in its place in the whole instead of looking at safety in isolation. Therefore at some point in time, we will abandon the word safety in the ‘title’ of the perspective and it will be some sort of organisational model which brings together a number of schools of thought. And I don’t have a fancy example ;).
And I notice that the name can make a big difference in the use and abuse of the theories behind the label so we have to be careful about that.
Safety is part of quality.
Safety is part of fitness for intended purpose.
Safety is part of performing satisfactorily in service.
Safety is part of do it right the first time.
Safety is part of meeting requirements.
Hi Gary
Grøtan (2013) compiled an interesting model where he contrasts safety from a Compliance vs a Resilience perspective, incorporating a contrast between a rational approach that counters pathogenesis and an emergence approach that facilitates salutogenesis. His focus seems to be the safety of engineered systems rather than safety amidst social complexity.
Grøtan, TO (2013) To Rule, or Not To Rule is Not the Question (For Organizing Change towards Resilience in an Integrated World). In I. Herrera et al., eds. 5th Symposium on Resilience Engineering, Managing Trade-Offs. Soesterberg, The Netherlands; Resilience Engineering Association, pp. 43–48.
Thanks for the reference, Liza. Although “salutogenesis” isn’t a ‘plain English’ word, I’m all for introducing new language into the world of safety if it helps to explain. When Antonovsky coined this term in 1979, he described the relation between health, stress, and coping. I particularly like his focus on factors supporting health and well-being rather than disease. I connect health and well-being with Safety-II and disease with Safety-I. Safety-III then could be influencing conditions that allow salutogenesis to emerge, the evolution from disease and health care to health creation and sustainability.
Earlier this month I attended the Qantas Group Safety Conference.
Amongst the speakers, Kelvin Genn, Dr Andrew Hopkins, Dr Rob Lee and Lt. Col. Martin Levey to name a few, all provided very strong, thought-provoking notions that a ‘Safety III’ is emergent. Each presented insights slightly different from the next, reflecting their own world view, yet very much consistent with the principles that set it apart from Safety I & II.
The term ‘Continual Improvement’ is widely used to describe evolution with regard to system elements, CAS could very well be the ‘evolution’ of how we adapt and innovate the way in which we work and investigate our successes along the way.
How would these ideas be used to extract the lessons to be learned from recent safety fubar fiascos such as the Afghan Hospital Bombing, the GM Ignition Switches, and the Takata Air Bags?
Bill: I look at your 3 fiascos and one word comes to mind: culture. Culture is a crucial part of most organizations’ success or failure. It’s also one of the most difficult things to influence. Culture is a complex issue. Because safety is an emergent property, we can’t create, control, or manage a safety culture. So we need to apply the principles of complexity science to crack the safety culture nut.
Consumer Reports states: “Nailing down the root cause and determining which of Takata’s several inflator designs is implicated has been tough for Takata, the automakers, and independent investigators to establish.” Of course it is! Finding root cause is Safety-I. It’s nailing a screw using a hammer.
So what would we do differently in a complexity-based safety approach? The safety culture, people’s attitudes and the conversations they have when they interact are all interwoven – co-evolving together. We can’t command nor control what’s happening but we can shape and shift.
1. The starting point is making sense of an organization’s narratives. Narratives are the experiences, anecdotes, and observations that happen every day in offices, crew meetings, work breaks, and on the job. Written stories are the most common form; pictures and audio recordings can be used to capture qualitative data from field workers. We now have software that can convert qualitative narratives into quantitative data mapped onto 2D graphs or 3D landscapes (a rough sketch of this step follows this list).
2. The maps help us to visually search for emerging patterns across the narratives. The observations and experiences underlying these patterns indicate where and how change efforts should be focused. The question now becomes: “How might we shift the stories from here to over there?”
3. Modulators influence and shape a culture. Modulators are different from Drivers. Drivers are cause & effect oriented (if we do this, we get that) whereas Modulators are catalysts we deliberately place into a system to perturb it. We cannot predict what will happen. It sounds a bit crazy but it’s done all the time. Think of a restaurant experimenting with a new menu item. The chef doesn’t actually know if it will be a success. Even if it fails, he will learn more about clientele food preferences. We call these safe-to-fail experiments, where the knowledge gained is worth more than the cost of the probe.
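For the curious, here is a rough sketch of the qualitative-to-quantitative step from point 1 in plain Python with scikit-learn. TF-IDF plus PCA is just one plausible pipeline, assumed here for illustration; the narrative software mentioned above uses its own (different) methods, and the sample narratives are invented.

```python
# A minimal sketch of step 1: turning free-text narratives into points on
# a 2D map so patterns can be eyeballed. TF-IDF + PCA is one plausible
# pipeline, assumed purely for illustration.
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

narratives = [
    "Crew skipped the pre-job briefing because the shift started late",
    "Supervisor praised the team for stopping work when the alarm sounded",
    "New hire was unsure who to ask about lockout procedure",
    "Toolbox talk turned into a real discussion about near misses",
]

# Vectorize each story, then project the high-dimensional vectors to 2D.
vectors = TfidfVectorizer(stop_words="english").fit_transform(narratives)
points = PCA(n_components=2).fit_transform(vectors.toarray())

for text, (x, y) in zip(narratives, points):
    print(f"({x:+.2f}, {y:+.2f})  {text[:50]}")
```

Clusters of nearby points on such a map would then be the candidate patterns to explore in step 2.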
Regarding comments on plain English, I understand the point made. When chatting with executives, I rarely mention complexity or Safety-I, II, III. With front-line workers, I stay away from words like culture, intervention, root cause analysis, zero harm, or “I’m here to listen to you…”
• Culture as Insufficient Causation
An inescapable fact is that when safety culture is blamed as causation there are many other similarly harmfully dysfunctional conditions, behaviors, actions, and/or inactions that have equal or better claim to be included in the causation.
Gary, regarding further publications I have attempted to demonstrate the application of complexity ideas to accident prevention in a couple of articles. Both received quite a bit of criticism. These are posted on my website so they are free:
http://carrilloconsultants.com/wordpress/wp-content/uploads/Complexity-Safety-NSC-Journal1.pdf
http://carrilloconsultants.com/wordpress/wp-content/uploads/2012/12/Relationship-based-Safety5.pdf
Thanks, Rosa! One of the reasons for my post was to gather more publications that connect complexity and safety. These two are terrific additions to my library.
Regarding your comment on receiving quite a bit of criticism, what were the negatives? What I hear most often is it’s challenging the prevailing safety paradigm in which a lot of time and money have been invested. As a counter, I show the Berkana Two Loops Systems Change video http://berkana.org/about/our-theory-of-change/.
Rosa and Gary,
The link below takes one to the report on the EPA FUBAR at the Gold King Mine resulting in severe contamination of the Animas River system. What are the strengths and weaknesses of the report relative to this discussion?
http://www.usbr.gov/docs/goldkingminereport.pdf
Bill: The Gold King Mine report properly describes the technical causes of the failure; that is a strength. However, it is at best a partial representation of the mine’s complex adaptive system and is thus limited in value; that is a weakness.
A full investigation applying a complexity-based approach would include (as the USACE peer reviewer pointed out) exploring EPA internal communications, administrative authorities, decision paths taken, and interactions with the onsite contractor. In other words, exploring the EPA culture as it pertains to this incident.
Perhaps the evaluation team recognized this would be a messy undertaking and chose to sidestep it, stating they did not believe they were requested to perform an investigation into a “finding of fault,” and that those separate investigative efforts would be performed by others more suited to that undertaking.
Gary,
Thanks ever so much.
Would you be so kind as to provide a link to the best complexity-based investigation report you can readily locate?
What are the principles of causation that apply to complexity-based analysis?
Bill: One report that comes to mind is one I published with a colleague and presented at a conference in 2010. I can’t say it’s the best but it was the first one using the Cynefin Framework in a safety investigation. It’s easily accessible from my website: http://gswong.com/?wpfb_dl=5
I don’t have an exhaustive list of principles but here are guidelines I follow:
– Cause & effect relationships may or may not exist. You just don’t know.
– Feedback loops render “If-then” linear statements meaningless.
– Stories provide context and allow correlations to be analyzed. But don’t confuse correlation with causation.
– Searching for patterns can lead to the discovery of “strange attractors”.
Marg Wheatley in her book Leadership and the New Science wrote: “And human networks always organize around shared meaning. Individuals respond to the same issue or cause and join together to advance that cause. For humans, meaning is a “strange attractor”—a coherent force that holds seemingly random behaviors within a boundary. What emerges is coordinated behaviors without control, and leaderless organizations that are far more effective in accomplishing their goals.”
So instead of looking for cause, let’s invest the time finding the strange attractors that offer better reasons why people behave the way they do.
Gary,
Thanks.
Would you be so kind as to elaborate on your statement quoted below?
“Cause & effect relationships may or may not exist. You just don’t know.”
I’m taking some advice from Stephen Covey: “Seek first to understand.”
Good to see a reference to Habit 5, one that helps us move from Independence to Interdependence on the Maturity Continuum!
There are 3 basic systems: Ordered, Chaotic, Complex. C&E “if-then” relationships exist in the Ordered system. Everything is random in the Chaotic system and no C&E relationships exist. And then we have the Complex system where there is “order within unorder.”
The order in a Complex system comes from C&E relationships that form patterns. Some patterns are hidden and some are quite visible. Think of the sky as a complex system with wind, water vapour, air contaminants. When the conditions are right, visible patterns called clouds emerge.
Clouds are formed due to cause & effect relationships. If air containing water vapour is cooled below the dew point, then moisture condenses into droplets on dust particles. If droplets clump together, then clouds or ice crystals are created.
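That “if cooled below the dew point” relationship can even be put into numbers. A small sketch using the Magnus approximation (a = 17.27 and b = 237.7 °C are the commonly quoted constants; it is an approximation, not an exact law):

```python
# The "if cooled below the dew point, then condensation" relationship,
# made concrete with the Magnus approximation.
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    a, b = 17.27, 237.7  # standard Magnus constants (degrees Celsius)
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Air at 20 °C and 70% relative humidity condenses once cooled to about 14 °C.
print(f"{dew_point_c(20.0, 70.0):.1f} °C")
```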
Since we’re on the subject of C&E relationships, a huge problem I see is how people confuse correlation with causation and jump to conclusions.
Example:
Observation: When there is a fire, firefighters always appear.
Conclusion: Firefighters cause fires.
Observation: When there is a big fire, many firefighters appear.
Conclusion: Many firefighters cause bigger fires.
Sounds ludicrous but it happens way too often. Two very popular business books “In Search of Excellence” and “Good to Great” have been criticized for confusing correlation with causation. If you do these things, then you too will be excellent.
When dealing with complexity, I think it’s better to err on the side of “I just don’t know” rather than jump to a cause & effect conclusion.
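A minimal simulation makes the firefighter fallacy vivid: let fire severity (the hidden confounder) drive both the number of firefighters and the damage, and the two become strongly correlated even though neither causes the other. All numbers below are made up purely for illustration.

```python
# Sketch of a spurious correlation produced by a confounder.
import random
random.seed(1)

fires = [random.uniform(1, 10) for _ in range(200)]          # fire severity
firefighters = [s * 3 + random.gauss(0, 1) for s in fires]   # response scales with severity
damage = [s * 5 + random.gauss(0, 2) for s in fires]         # damage scales with severity

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(firefighters, damage):.2f}")
```

The correlation comes out near 1.0, yet sending fewer firefighters would obviously not reduce the damage.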
Ok. Safety 3, fair enough.
But just for the record, about a year ago, in a discussion about Safety “insert numeral” with my pal Mike Behm, he invented “Safety 3.14159265359”, or “Safety Pi”: “The full circle of safety”.
So, no going any further than 3, it’s taken!
When I read and lament how some safety regulators and business execs deal with safety, I sometimes think it is a full circle. Unfortunately we’re going round and round and end up back at square 1: Safety-I…!
XXVII
Myself when young did eagerly frequent
Doctor and Saint, and heard great argument
About it and about: but evermore
Came out by the same door where in I went.
http://classics.mit.edu/Khayyam/rubaiyat.html
Agree, I would not want to promote a complete circle of constant failure
🙂
A little Learning is a dang’rous Thing;
Drink deep, or taste not the Pierian Spring:
There shallow Draughts intoxicate the Brain,
And drinking largely sobers us again.
Fir’d at first Sight with what the Muse imparts,
In fearless Youth we tempt the Heights of Arts,
While from the bounded Level of our Mind,
Short Views we take, nor see the lengths behind,
But more advanc’d, behold with strange Surprize
New, distant Scenes of endless Science rise!
So pleas’d at first, the towring Alps we try,
Mount o’er the Vales, and seem to tread the Sky;
Th’ Eternal Snows appear already past,
And the first Clouds and Mountains seem the last:
But those attain’d, we tremble to survey
The growing Labours of the lengthen’d Way,
Th’ increasing Prospect tires our wandering Eyes,
Hills peep o’er Hills, and Alps on Alps arise!
http://poetry.eserver.org/essay-on-criticism.html
Hi Gary
Can I suggest Alistair Mant (1997), Intelligent Leadership? Dr Mant describes two systems. One is a bicycle – you can take it apart, reconstruct it, or reset it to work efficiently again. The other is a frog – which cannot be disassembled and rebuilt, and must be understood through its complexity and environment.
Working in healthcare I find it useful to ask “is this system a frog or a bicycle?” Some systems for safety can take a simple approach and some require much more. Frogs and bicycles help us identify what we are dealing with.
This is the best question I have heard in a long time and I intend to use it frequently. Sounds like a great title for a conference presentation.
Great analogies, Fiona. I also like using a bicycle to describe a system where we can break things into their parts, analyze, fix, and put everything back together again. Besides a frog, I’ll talk about mayonnaise, which can’t be separated back into its ingredients. Unfortunately, there are some who believe you can break down a frog into its components, or, in the picture I used for the blog, a duck into mechanical parts.
To help people identify and make sense of the current situation, I use the Cynefin Framework. In this YouTube video https://youtu.be/N7oz366X0-8, Dave Snowden describes 3 systems – ordered, complex, and chaotic that underpin the Cynefin Framework.
The bicycle resides in the Cynefin Complicated domain and the frog in the Complex domain. In a healthcare setting, I’ll show a bandaid in the Simple (now renamed Obvious) domain which everyone knows how to use. In the Complicated domain I might show an MRI scanner. It’s technologically more challenging and requires an expert to use. In the Complex domain I would show a fully staffed operating room in action, and in the Chaotic domain an ER facility dealing with a terrorist bombing.
What are some of the recognized and generally accepted good investigation practices (RAGAGIPs) for investigating/ analyzing adverse events involving complex adaptive Systems (CAS)?
Will anyone provide the first RAGAGIP?
I hope that the “unhoppy” frog does not render this discussion “hopless.”
Hi Gary,
In response to your original question asking for additional publications I offer “The fifth age of safety: the adaptive age” by David Borys et al
Interestingly, I am involved in another ‘conversation’ about how to communicate/integrate Safety II into the risk management of financial institutions that see ‘no business case’ for its application.
I am suggesting a ‘stealthy’ approach using Hollnagel’s analogues of Reactive (Safety I) and Proactive (Safety II) Risk Management – it seems to work well with the COSO Control Framework and Simons’ Levers of Control model.
Ivan Pupulidy’s (July 2015) comment about Safety III was made in the context of understanding the ‘Margin of Manoeuvre’ (see Margin of Manoeuvre: A Safe Space for Emergency and Disaster Responders by Gibson and Ivan Pupulidy), or what could also be described as the ‘Safety Space’ (Reason) or the ‘Discretionary Space’ (Dekker). The size of this space is contingent on the level of interactive complexity present, but it also offers a very rich opportunity for developing a more humanistic and positive approach to dealing with the people aspects of safety at work.
Hope this helps
Hi, Richard. Thanks for adding to the library! I find it most educational that there are different theories describing the progression and growth of workplace safety.
Hale and Hovden (1998) put forth the original argument that safety has evolved through 3 ages: Technical, Human Factors, Management Systems. Glendon et al. in 2006 suggested safety was heading into the Integration Age or the Fourth Age of Safety. My sense is he was in a systems thinking paradigm, reinforcing the human aspects of a socio-technical system. He advocated an integrationist role empowering workers rather than a confrontational one in labour relations.
As you noted, Borys et al in 2009 offered the fifth age of safety, the ‘adaptive age’; an age which transcends rather than replaces Safety-I and adds Safety-II: “The adaptive age embraces adaptive cultures and resilience engineering and requires a change in perspective from human variability as a liability and in need of control, to human variability as an asset and important for safety. In the adaptive age learning from successful performance variability is as important as learning from failure.”
I saw the first 3 ages in Erik Hollnagel’s 2010 presentation and extended it with an Age of Cognitive Complexity. We are also transcending and not replacing (except for outing myths and fallacies) and making sense of a real world that includes both Order (obvious, complicated) and Unorder (complex, chaos) systems. Instead of heading down the safety risk management path, we are exploring safety risk and resilience from a complexity perspective as explained in Dave Snowden’s video: https://youtu.be/2Hhu0ihG3kY
Having spent a lot of time developing business cases, I can imagine what your ‘conversation’ might be like with the financial folks! I’d like to suggest a different response: “You’re right. There is no business case. That’s because a business case is the wrong tool for safety.” Here’s why:
Safety is a dynamic non-event so it doesn’t show up as a line item in a budget.
A business case is appropriate in a Newtonian-Cartesian linear world that is stable, predictable, and cause & effect “if-then” relationships exist. However, it doesn’t address complex phenomena like tipping points, butterfly effect, black swans, serendipity.
Safety is an emergent property of a complex adaptive system, which most industries are, including financial institutions. It’s not a product or a service we make, create, or account for on a balance sheet, income statement, or cashflow report. An analogy would be accounting for the aroma that emerges from a cup of coffee.
Because we must deal with complexity, the business case is the wrong tool. You can’t predict the future so any forecasted benefits are suspect. Risk mitigation strategies only deal with known risks. By definition, a business case has to ignore risks due to unknowables and unimaginables that can emerge from varying conditions coming together as a perfect storm.
We will measure the creation of conditions that enable safety to emerge.
We will be held accountable for what we can control: our execution efforts, including expenses for both Safety-I and Safety-II.
We can’t be held responsible for safety results because these are consequences that emerge from our complex adaptive system.
As financial experts, you face a similar challenge in meeting revenue targets that are our customers’ decision, not yours. Tough to be accountable for things you can’t control. Tougher to make it a business case.
Hi Gary, I do not consider myself a safety expert, but would like to introduce another angle to the discussion. I have seen a lot of activity around introducing a safety culture in mining in South Africa. Over the last 5 years the Government has launched a major campaign aimed at improving safety, with severe penalties where safety incidents occur. So from a management point of view there are tremendous incentives to improve.
And yet the results are often not great, in spite of a significant emphasis on safety, and ample resources being made available.
There is a deeper problem at work in many cases. Under pressure from poor financial results (a downward commodity cycle) many mines are cutting costs, but cost cutting usually just displaces costs to an area where they are more difficult to measure. Management then redoubles efforts to reduce costs and starts trying to manage the parts of the mining system more efficiently. Professor Russell Ackoff said many years ago that “when you optimize the parts, the overall system is not optimized any more, or when you optimize the overall system, there are parts that are not optimized.” In doing this they try to force certainty onto things that are inherently uncertain. Every aspect of the production process now needs attention and managers become like the boy at the dyke, pushing their fingers and toes into the holes, but soon finding they do not have enough fingers and toes. So the managerial span of attention becomes overloaded and the time and energy for conversations with employees around safety shrink. Managers then try to cover their backsides by commissioning thick safety manuals.
We have introduced a method of working which increases the productivity of mines drastically. We have observed in almost all cases that as production through the mine starts to increase and flow effortlessly, managers and workers are able to exit the firefighting mode they were operating in. And with this safety improves automatically. Typically within 6 months the time needed for a morning production flow meeting reduces from an hour to 20 mins. And 10 minutes of this is used for discussing a safety issue in depth. Conversely, when the safety incidents start to increase there is a high probability that the quarter’s production will be below target.
I have come to believe that managers and employees are able to deal effectively with safety issues through conversation, but not in an environment where firefighting (caused by financial considerations) is prevalent. And since firefighting in mining is primarily caused by efficiency measurements ideal for the obvious environment but misplaced for a complicated environment (where much of mining operates), we have lots of this.
More on this at http://stratflow.com/a-leadership-intervention-for-mining-talk-presented-at-wits-business-school-2015.
In a pressurized environment the best safety interventions will battle to perform well, so perhaps Safety 3 also needs to take into account the degree of firefighting in the company in question. The employee engagement scores of managers and employees may correlate with this. In the ideal situation an intervention would try to address both the worldview of managers on how a good business should be run and the safety culture.
These are just some ideas to get the conversation going.
Regards
Hendrik
Background: Hendrik and I have an ongoing conversation about connecting his work on SCRUM Production Flow and my exploration into complexity thinking and Dialogic OD. http://www.dialogicod.net
Hendrik: I agree with Ackoff’s systems thinking view that optimizing parts leads to sub-optimality. “Safety First”, “The customer is always right”, “People before Profits”, “No margin, no mission” are examples of optimizing one part at the expense of others. In complexity thinking, it’s not just about the parts but the relationships amongst the parts. The key is creating the conditions for autonomous agents to adapt, co-evolve the system, and enable desirable properties like Safety to emerge. BTW, non-human concepts like profit, mission, policies, and manuals are on my agent list because their presence influences the behaviour of other agents.
Regarding working in a pressurized environment with lots of firefighting going on, what if we presented Rasmussen’s model and asked employees to indicate where they believe is the company’s present operating point? http://wp.me/p2WOka-FH I presume the CFO would mark near the economic failure boundary, production supervisor near the unacceptable workload boundary, and frontline workers close to the accident boundary. I would hope the CEO’s mark would be somewhere in the middle. Anywhere else would be quite revealing.
As Richard Cook commented, the model is descriptive; it doesn’t tell you what to do. However, it can help to open eyes and start a dialogue on where the operating point should be and what we might do to get there. This is why Hendrik and I are curious about Dialogic OD. For those familiar with the Cynefin Framework, Dialogic OD is what we do in the Complex domain while the more recognized Diagnostic OD is performed in the Complicated and Obvious domains.
One more thought. At a daily SCRUM meeting, start by asking where the operating point is. If it has moved from the previous day, gather a few stories to understand why the change. At the end of the meeting, ask where will today’s planned actions move the operating point. Moving the operating point would be equivalent to moving the needle on a dashboard gauge.
Professor Hollnagel recently shared his thoughts on Safety III at http://safetysynthesis.com/safetysynthesis-facets/safety-i-and-safety-ii/and%20what%20about%20safety-iii.html
The road of progress is littered with the rusting hulks of attractive nonsense that drove into the ditch and abandoned wisdom in the breakdown lane from insufficient traction.
A thought:
Safety is a divergent, not a convergent problem; there is no one solution to getting work done safely – it is part ‘bicycle’ but mostly ‘frog’
As a consequence safety is full of paradoxes to manage:
– production versus protection
– stability versus change
– people versus systems
Perhaps Safety I and Safety II are simply another paradox which cannot be resolved but simply transcended by higher levels of thinking and acting (Torbert, Graves, etc etc) – which is notionally Safety III (or simply both Safety I and Safety II held in dynamic tension)
Another paradox – Perhaps safety is both the emergent property of a system and a process that can be improved, measured and influenced (and maybe managed).
I am influenced by this quotation from Hollnagel:
“Instead of defining safety as a system property, we argue that safety should be seen as a process. Safety is something that a company does, rather than something that it has. Safety is dynamic, it is under constant negotiation, and the safety process varies continuously in response to and anticipation of changes in operating conditions.”
If safety is a process then this means measuring safety performance through accident rates and other measures of failure is inadequate. What we need is a ‘balanced scorecard’ of leading and lagging measures to monitor the performance of the dynamic process of organisational effectiveness.
Hollnagel goes on say: “Safety should rather be tied to indicators of the dynamic stability of an organisation. An alternative measurement of safety would be one that accounts for the various parameters it actually relates to: technical control of the process at hand, available resources, social acceptance, and so on. Or as proposed by resilience engineering, the ability to respond, to monitor, to anticipate, and to learn.”
Any thoughts?
Safety is part of quality.
Safety is part of fitness for intended purpose.
Safety is part of performing satisfactorily in service.
Safety is part of do it right the first time.
Safety is part of meeting requirements.
Every system is perfectly designed to produce what it produces.
Every quality management system is perfectly designed to produce the quality that it produces.
Every safety management system is perfectly designed to produce the safety that it produces.
Thanks for your thought-provoking comments, Richard. Paradoxes are indeed intriguing and fascinating.
In safety, we inadvertently create goal-conflicting conditions that force workers to make trade-offs and oscillate – Hollnagel’s ETTO principle and, as you noted on your blog, Reason’s knot-in-the-elastic-band model. Besides bimodal either-or decisions (dyads), I include 3-way balancing acts (triads). An example is the Rasmussen operating point model balancing safety, productivity, and economy. Other triads include quality/resources/schedule, people/process/technology, compliance/confidence/caution, experience/rational/emotional, risk/reward/ethical.
Rather than a paradox, I think it’s plausible to see safety as both the emergent property of a system and a process. I liken it to looking into a box from two sides. An analogy would be treating light energy either as a particle or as a wave. Both models are useful because for the right situation they can explain real-world phenomena.
A limitation of the process view is how it is depicted as a linear flow: inputs -> actions -> outputs -> outcomes. One can be led to assume “if-then” cause & effect relationships exist. This has been the essence of Safety-I and, correct me if I’m wrong, Safety-II.
We also need to consider non-linear feedback loops (i.e., the butterfly effect). We can do this by perceiving safety as something that a company has, not as an owned possession but as something that emerges from the conditions. Emergence is why the whole is greater than the sum of its parts in a complex system.
Regarding alternative measures of safety, instead of quant stats and numbers, how about we measure Resilience? We monitor the current situation, anticipate varying conditions, respond by adjusting performance, and learn from the stories of how workers have resolved paradoxes, dilemmas, and conundrums.
Safety professionals could map collected stories on triad diagrams to visually show where the operating point is and thus design effective interventions to restore operating point balance. Finding the time to do this would come from dropping the preparation of Safety-I accident report summaries which nobody really reads.
Gary,
I think there is real mileage in applying the ‘triad diagram’ in the way you propose
Do you know anyone who is applying it in this way?
By the way I’m now seeing ‘triads’ (or lack of!) everywhere I look
Regards
Richard: Yes, we’ve been using triads for several years in narrative inquiry. When we listen to stories, we ask the storytellers to also provide their interpretations and meanings. They do this by marking on triads where the tension point was for the story told. We’re now applying this method to safety and the tension point becomes the Rasmussen operating point for the story told.
We call it safetySCAN http://cognitive-edge.com/?p=6051. To see how the triads are used, we have a demo site at http://bit.ly/1OhMyjx. You’ll also see dyads, stones, and the type of profile data we can collect for analysis.
It’s our belief that the storyteller is the only one who really knows what the meaning and message is behind the story. Any subsequent interpretation by a consultant or analyst reading the story is a guess, possibly educated but still blinded by the reader’s cognitive biases.
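For readers wondering how a mark on a triad becomes data: a point inside the triangle can be converted into three weights (barycentric coordinates) that sum to 1. The sketch below is hypothetical; the corner labels follow Rasmussen’s three boundaries, and the corner coordinates and example mark are assumptions chosen purely for illustration, not how any particular tool implements it.

```python
# A hypothetical sketch of reading a triad mark: a 2D point inside the
# triangle becomes barycentric weights for the three corners.

# Equilateral triangle corners in 2D (assumed layout).
CORNERS = {"safety": (0.0, 0.0), "productivity": (1.0, 0.0), "economy": (0.5, 0.866)}

def triad_weights(x: float, y: float) -> dict[str, float]:
    """Convert a mark (x, y) into three weights that sum to 1."""
    (x1, y1), (x2, y2), (x3, y3) = CORNERS.values()
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return dict(zip(CORNERS, (w1, w2, 1.0 - w1 - w2)))

# A mark placed low and to the left leans the story toward safety over economy.
print(triad_weights(0.35, 0.15))
```

Aggregating these weights across many stories is one simple way to show visually where the collective operating point sits.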
Thank you all for another excellent article and discussion. I never leave this site without learning and appreciate everyone’s contributions.
From my perspective as a practitioner, I found Dr. Bill’s quotation from Machiavelli’s “The Prince” particularly appropriate. Many of the “decision-makers” in organizations, or those who assume risk into the process, ascended to their positions largely without exposure to discussions like the one above. For them, the old paradigms worked, and the influence of the Fundamental Attribution Error supports the belief that accidents begin and end with poor behavior and flawed decision-making, without consideration of the network of influences that resulted in causation. It’s simpler and far less challenging.
As I read the original piece and everyone’s comments, I couldn’t help but recognize the pervasiveness of those elements of any reasonable complex adaptive system that ultimately influence safety. Sadly, I think, at least in the experience of my working career, safety has been late to the professional evolutionary process of the industrial age. I think Dr. Hollnagel’s timeline is a great representation of that reality. So much so that safety isn’t, in large part, brought in until after the fact and then applied to the process rather than being designed into the process.
I’m not one who believes that a “safety culture” is a unique and separate thing. I think culture is culture and whether or not that culture is one that can be described as “safe” requires an analysis of how well they learn, how open they are to discussion and reporting and how “just” their processes of accountability are.
I think, as professionals, we recognize the rationality that businesses and systems have constraints that influence and impact other subsystems and processes within those systems. If safety truly is “First,” it should be part of the discussions from the Board Room to the Break Room and all points in between. When risk is injected into the system, it should be tracked and consciously accepted as it moves downstream to the point where it may involve bone, blood and body. Risk Management implies that we’re managing the risks and that, to my mind, involves at least a recognition that those risks are present. For example, if we’re cutting training dollars or reducing the fixed costs of people, there should be a conscious consideration of how those decisions will ultimately affect the safety and available responsive/adaptive bandwidth of the process vs. person interface.
Again, thank you to all that have commented and continue to. The broader the discussions, the greater the audience, the greater my hope that someday this site will be simply called “safety” for what it is and what it requires without being different from the norm.
Happy Holidays to everyone!
Erik Hollnagel just suggested I include this link to complement the conversation:
http://safetysynthesis.com/safetysynthesis-facets/safety-i-and-safety-ii/and%20what%20about%20safety-iii.html
Thanks for the article. I am a real-world practitioner. The answer to your question is that calling it Safety III is just a fad. It has always been an issue involving environment, systems, and humans, so it has always been a complex adaptive issue. We typically do work in imperfect environments, with imperfect systems, and with people who make mistakes. If it helps others develop interest in safety issues by dividing a timeline into Safety I, Safety II, and Safety III, then so be it. However, it has always been a complex, adaptive challenge requiring complex, adaptive leadership.
Thanks for your comment, Joseph. Using “Safety-III” was just to get juices flowing. It seemed to work with over 50 comments posted.
I am using “Adaptive Safety” as the title for my 2-day workshops: https://www.southpacinternational.com/hoplab_content/adaptive-safety-2-day-masterclass-complex-adaptive-systems/