We have long struggled to capture the facts associated with an accident in order to prevent the next one. This has been largely effective when applied to machines: in mechanical systems, things are measurable, observable and more objective.
The information we get from people is always subjective. There are always issues of memory, shame, fear and politics that influence the ever-changing stories we tell. If you think about it, you have probably altered a story to make a point or to protect something that you value. Stories are altered both consciously and unconsciously – we are influenced by a myriad of things, including what we have seen in the past, other stories, and what we think the listener may want to know or hear.
Memory is truly malleable. People think in ‘story’ and these stories are not accurate recordings of an event, rather they are linked memories that help us make sense of events. Our experiences set us up to make sense of a situation, for ourselves. Daniel Kahneman tells us that we actually approach the present by already thinking of the anticipated memory we want to keep. This anticipation is based on our experiences. We then edit, reconstruct, change or alter the memories and stories naturally – and it does not stop there. The future literally changes how we remember the past. Everything is influenced by preconceptions, stereotypes, and biases.
Most formal schools of accident investigation openly admit that witness statements are notoriously inaccurate. Yet most investigative processes use participants’ memories to build a “factual” narrative. Once written down, the nature of the process changes memory into fact, and the credibility of the process that created the narrative can confer even more power on a story. Much like propaganda, these stories can be intentionally used as a trigger for learning or behavioral change. History is replete with stories designed to create such changes.
However, in today’s society, fact-checking by even casual readers is quite normal. A plausible story may elicit a Google search on a personal device that could challenge the contention of the organizationally approved story. More importantly, the community of practitioners, familiar with the work environment, often challenges the accuracy of the story and its conclusions. A story may not be enough to change old behaviors or to create new ones.
This realization has driven an unexpected shift to a different kind of story. The new story focuses less on the narrative and the actions or decisions of people, and more on the conditions that influenced those actions or decisions. The practice that emerged[1] is designed to map the network of influences, and it serves two main purposes. First, to increase field operators’ awareness of the influencing conditions and of how to recognize changing conditions. Second, to identify conditions that make the system more brittle, which forms a starting point for leadership actions designed to improve the likelihood of workforce success.
[1] The practice is called a “Learning Review,” which is currently being used by the US Forest Service and several high-risk industries throughout the world.
Editor’s note – This will be the first in a series of posts from Ivan on his work developing better learning and investigation practices. Keep an eye out for more from Ivan on this topic in the future!
Excellent summary Ivan. Thank you!
Awesome! Thank you for sharing this Sahika
The myth that ‘in mechanical systems things are measurable, observable and more objective’ assumes that such things are perceived apart from the human who looks at them. All meaning is attributed by humans who create significance according to their ideology/worldview, including a mechanistic interpretation of reality. The interpretation of conditions is no more reliable than the testimony of a witness, and any shift to conditions as somehow less subjective is again a shift away from the ‘too hard basket’ and promotes the elevation of objects over subjects. Could it be that Safety doesn’t have the mental equipment required to deal with the challenges of subjectivity and would prefer the myth of objectivity in another form?
Right on target Rob! Another question emerges from your response, “Could it be that constructing safety as a static element, removes the ability to be flexible enough to address complex systems?”
Ivan, I don’t think there is anything static about safety. All learning requires movement so risk cannot be static, because all learning requires risk.
Ivan: Your reply to Rob reinforces the paradigm that safety is an emergent property of a complex adaptive system. In this mindset it makes sense to focus on conditions and be on alert for changes that bring the operating point closer to a tipping point. Being safe then means staying in the Present and navigating the uncontrollable consequences that naturally emerge.
How do consequences ’emerge’? Do they have a life of their own? Or do consequences emerge because life is random and people are fallible? Can safety be a ‘property’? And can that too ’emerge’? And if the system has a life of its own, does it act like an archetype, just as Safety does? What if it is more than just a complex problem but rather a ‘wicked’ problem? How then can a focus on systems and conditions possibly help with understanding risk? Is it not the people who change and then the conditions emerge? Otherwise conditions would also have to act like an archetype.
Good questions, Rob. Emergence is a phenomenon of complex adaptive systems (CAS). Consequences can be positive (i.e., serendipity) or extremely negative (Black Swans). The idea of safety as a ‘property’ comes from various sources which I referenced in http://bit.ly/1ZFJUwP
I try not to see Safety as a problem but a state that emerges when conditions change. It’s conceivable that a state we call Danger could emerge from the same condition changes.
There certainly is a connection with ‘Wicked’ problems. These have no permanent solution so best to manage the evolutionary potential of the Present rather than some idealistic Future. In other words, be direction/vector-based rather than outcome-based.
Safety emerges out of relationships and interactions in which different ‘agents’ of the CAS learn with and from one another and take the other into account in their own decisions and actions.
We need to be mindful of changes in anything that acts on or within a CAS: an Object (bright sunlight, sudden wind, unexpected rain), an Event (earthquake, mudslide), an Idea (Brexit, Terrorism, Trumpcare).
Well said Gary – Have you considered that safety is as much an emergent property in a CAS as Uncertainty? If that is true, what does that say about safe systems?
Thanks for the response Gary. Interesting that the word ‘human’ or ‘person’ does not appear in your language. Yet unless a system has a life of its own (an archetype), all of these conditions would only be emergent because fallible people triggered something new. Does this mean a system can create something? Can a system invoke something apart from the humans in it?
And if you are aware of wicked problems and only focus on the present, how does one ‘imagine’ the future or perceive a potential in the present? Indeed, how does one tackle risk in the present without an imagination about the future?
Just a quick question if you don’t mind to define what you mean by ‘property’? Thanks.
Rob: It’s deliberate on my part not to use or overuse ‘human’ or ‘person’ to wean myself off the learned paradigm to blame someone. Humans certainly are in the mix of CAS agents and I would say the most unpredictable.
If ‘creating’ means a change in state, then a system can create or invoke something apart from humans. An example would be water constantly dripping into a bucket on a dry floor. This is a system composed of 3 agents. The water level will rise and eventually the bucket will reach capacity (the tipping point) and water will spill onto the floor. The system created something new – a wet floor and it emerged without any human intervention.
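The dripping-bucket system can be expressed as a tiny simulation. This is only an illustrative sketch: the capacity, drip size, and drip rate below are assumed values, not figures from the discussion. The point it shows is the one Gary makes – the state change (a wet floor) emerges from the agents’ interaction at a tipping point, with no human in the loop.

```python
# Sketch of the three-agent system: water, bucket, floor.
# All quantities are illustrative assumptions.

def simulate_bucket(capacity_ml=1000.0, drip_ml=0.05, drips_per_min=30):
    """Return the minute at which the bucket overflows (the tipping point)."""
    level = 0.0
    minute = 0
    while level <= capacity_ml:        # before the tipping point: floor stays dry
        level += drip_ml * drips_per_min
        minute += 1
    return minute                      # tipping point crossed: 'wet floor' emerges

minutes = simulate_bucket()
print(f"Floor stays dry for {minutes} minutes; then the new state emerges.")
```

Nothing in the loop models a person; the new state is produced entirely by the interaction of the agents and the passage of time.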
Another example is the weather system, where air as an agent interacts with water. We monitor the system by the change in its properties (temperature, barometric pressure, humidity, etc.). When the “right” conditions exist, sunshine, cloudiness, a rain shower, snowfall, a hurricane, a tornado, etc. emerges – is created, invoked.
I don’t have any special definition for ‘property’. In my mind, a property describes the attributes or characteristics of a substance – or, in our chat, of a system. It helps us to understand how that system will behave in a situation. The view I’m playing with perceives safety as an emergent property of a system. Safety isn’t created or made – rather, the conditions enable this property to emerge. Humans can control some conditions with rules, PPE, and so on but can’t control everything due to this thing we call Complexity.
I sense that many in safety do not distinguish between “tame” and “wicked” problems. The majority of our accident investigation methods assume a tame problem. They are solvable. Wicked problems, however, are the ones that keep you awake at night because you can’t find a solution.
Imagining the Future is necessary since it sets a direction, a heading. I caution people though when they establish the desired future as a goal, an outcome. It gets worse when they work from End in Mind back to Present to develop a linear project plan. This is a typical roadmapping exercise. It looks great on paper but if the environment is not stable and repeatable, conditions will inevitably change rendering the map somewhat useless. What’s really painful to watch is the poor project manager not understanding what’s happening and working hard to control deviations and stay on track to achieve irrelevant goals.
My suggested approach is ‘naturalistic’, not idealistic. You still imagine a Future but it’s fuzzier, like World Peace. In safety, we have Zero Harm. It’s the converse of a wicked problem without a permanent solution. It’s a direction that has the same chances to be achieved as World Peace. But it still serves a useful “True North” purpose.
How do we perceive a potential in the Present? We like working with known knowns. We see Risk as a known unknown. We start by doing experiments. We introduce a probe or catalyst like a safety rule modification, a new piece of equipment, a different method. We then monitor what emerges and, if positive, we amplify and accelerate. If negative, we dampen or extinguish. Essentially we co-evolve the system with the people being impacted. The potential for change doesn’t come from the probe or catalyst but from the interactions amongst people. The data we collect is in the form of stories they tell about their experience. Stories might be about the catalyst or potentially about some new ideas that surprisingly emerged. Cerebral popcorn!
BTW, the experiments are small in size. Why small? Because we can’t predict what unknown unknowns, unknowables, and unimaginables will emerge. We call these “safe-to-fail” experiments. If they do fail, the amount of pain is less than the amount of learning gained.
Thanks Gary, that’s interesting because I intentionally use the language of humans and people all the time and never think of blame. Unfortunately, our words, language and discourse ‘frame’ our semiosis and leaving such important language out of our discourse locks the safety industry into a semiotic of objects. For example, the language of absolutes (zero harm) regardless of intent or aspiration, sets a discourse and language in direct conflict with the realities of people and the world (fallible and random). This creates a discourse of denial which is most unhealthy for any culture and drives people to blame. So, ZH can never present a ‘true’ anything, rather the power of such language is anti-learning and anti-human. BTW, who put the bucket there?