I recently listened to one of Drew Rae’s DisasterCast episodes on interlocks. During this outstanding podcast he described how interlocks can be used as a safety mechanism to prevent certain accidents. I don’t want to go into great detail about interlocks in this post, but I found it very interesting when he got into the ways humans are involved with systems, how they work to create safety interlocks in a system, and how (sometimes) these can fail. The failure part is interesting, but success interests me even more, because humans create safety and successful outcomes most of the time, and I think this amazing capacity of humans to create safety and success can get lost in the noise surrounding a hierarchy of hazard controls.
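For readers less familiar with the concept, the basic logic of an interlock can be sketched in a few lines of code. This is only an illustrative model with hypothetical names (real interlocks live in hardware or safety-rated controllers, not application code): a guarded machine derives its permission to run from the guard state at the moment of the request, rather than trusting a stored flag.

```python
class GuardInterlock:
    """Toy model of a guard-door interlock: permission to run is
    re-derived from the guard state, never cached independently."""

    def __init__(self):
        self.guard_closed = False

    def close_guard(self):
        self.guard_closed = True

    def open_guard(self):
        self.guard_closed = False

    def run_permitted(self) -> bool:
        # Re-check the guard at the moment of the request, so an
        # open guard always blocks the cycle.
        return self.guard_closed


def cycle_press(interlock: GuardInterlock) -> str:
    """Attempt one machine cycle; the interlock gates the action."""
    if not interlock.run_permitted():
        return "blocked: guard open"
    return "cycle complete"
```

The point of the sketch is the design choice Drew Rae describes: the hazard-producing action is physically (here, logically) impossible unless the protective condition holds, rather than relying on a rule telling the operator to close the guard first.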
Before we go further, perhaps I should emphasize that I believe the use of a hierarchy of hazard controls is extremely important. Higher order controls are often more effective, take the emphasis off the employee, and may help optimize the operational environment for the worker. However, we should not be lulled into a false sense of security simply because someone has told us we can eliminate all risks (if you hear this, please challenge the statement with inquiry and dialogue, because zero risk is not achievable). Even if we are able to eliminate risks in some areas or substitute less hazardous work methods, we will likely trade one risk for another. (Yes, we must be careful of unintended consequences, and oftentimes we are blind to them in our zeal to do something.) So, we must be aware of the entire risk control process and manage it wisely.
In so many cases when an accident or incident occurs, there is a push by the organization to do something to prevent that incident from happening again. However, we must be cautious and guard against the “Do Something Syndrome,” which is described here in The Farnam Street Blog. We need to understand the types of controls in the hierarchy, the purpose of each one, and the specific contexts and system interdependencies that can facilitate their successful implementation (and potential failure). We also need to understand that for almost every choice we make we are giving something up, and in many cases this can affect successful (and safe) operational performance.
For a specific example, I need to take you back a few years, to my time as an Instructor Pilot in the Naval Aviation Training Command (Marine Aviators often serve as Instructor Pilots in Navy training units). While I was Director of Safety and Standardization for a U.S. Navy flight training squadron, we were undergoing the transition to a more modern training aircraft, which was much faster (and, some would argue, prettier) than our current aircraft. Both aircraft were designed for training brand new student aviators (you know, the kind with little to no experience, which meant that as Instructor Pilots we had our work cut out for us). During the transition there was a potential for aircraft flight paths to conflict. I am not sure how well experiments with two types of high velocity metal occupying the same space work in labs, but with airborne aircraft the result typically isn’t good. So, there was a concern about mid-air collisions.
To address this issue I suggested that we move toward a stricter form of procedural control, where aircraft would be assigned block takeoff and return times and be required to meet those windows. Great thinking, right? This way we would KNOW who was going to be where at what time, and things would be highly controlled, right? It was at this point that somebody with more experience reminded me of our need to maintain adaptability and flexibility, a hallmark of Navy and Marine Corps aviation. My option would have imposed too much control and too many restrictions, and might have caused the cancellation of so many missions that it would have reduced our production numbers. Safety has to work as a mission enabler; overly restrictive rules might improve safety, but at an excessive cost to production flexibility and, ultimately, organizational performance. We still needed to preserve the flexibility afforded to us through our existing procedural controls, which allowed for adequate margins of safety.
So, how did we balance the use of procedural interlocks with the need for flexibility and production output? The organization created additional procedural controls for the new aircraft, and we went back to the basics that had worked for the older aircraft. We reemphasized the need for continuous communication with other aircraft in our training areas and used the procedural controls the way they were designed. This allowed us to maintain adequate margins of safety while preserving the adaptability and flexibility the Instructor Pilots needed to accomplish training missions amid the changing dynamics of the operating environment (such as individual student needs, weather, and congestion at our outlying training airfields). This process allowed us to safely accomplish our production goals and train student aviators who understood the value of both adaptability and procedural controls. I am not saying that doing things the way an organization has always done them is the best way to create safety. In fact, “we have always done it this way” approaches can be terribly flawed, such as when there is a need to make sacrifice decisions that preserve safety over production. However, in an organization’s zeal to change, decision-makers can sometimes overlook the unintended consequences of their decisions.
So, what is the best way to protect people and other operational assets while meeting production goals? This post may not provide a clear answer, but using a system approach to analyze problems may help key leaders and decision-makers develop strategies. Drawing on a diverse audience of experienced workers and experts, including those who may be affected by the changes and decisions, can be a useful approach, because oftentimes they hold some of the critical information necessary to actively create safety during production work. The hierarchy of controls is important. In many cases there are ways to eliminate some risks; in other cases, engineering controls in the form of interlocks may be useful. In still other cases, a combination of controls, such as interlocks with Personal Protective Equipment (PPE), may be required. Many of these decisions will be based on the level of acceptable risk. An important point to emphasize: when deciding which types of controls to put in place for risk reduction, experts, experienced workers, managers, and leaders should discuss the potential benefits and the associated negative consequences before selecting a course of action. If this conversation does not happen, workers may end up having to create workarounds to address unforeseen negative consequences of safety interventions. A system approach may never be perfect, but it may help. Leaders and decision-makers will never be perfect, but they can learn and grow. During the process they can help lead their organizations into success.
Great post. Thanks for sharing your experience from the aviation space. More often than not, humans think in a structured way, and the saying ‘creatures of habit’ also rings true in many instances. In construction, the comfort of performing a task the same way comes from the fact that little complexity or thought process needs to be applied; it is the continuation of a sequence that may be efficient, safe, and cost effective, with a proven track record. Although the hierarchy of control is a valuable tool, its limitation lies in humans’ capacity to re-calibrate their thinking away from the norm; people can be afraid of something different or feel a sense of unrest because they are not used to a new method of performing the work. Just yesterday, an electrician was cutting penetrations in a plasterboard soffit using an attachment on his cordless drill that cut the penetration more easily and efficiently whilst encapsulating the moving parts and dust. A wonderful example of an engineered safety control, which removed the over-reliance on a lower control, P.P.E. Is this the beginning of a new norm, where solutions like interlocks re-calibrate human thinking around safety solutions instead of administrative controls and rules?
Thanks for your comments, and for sharing the example of the engineering control as well. I like your points about recalibrating thinking. It reminds me of a story from several years ago… I used to fly the KC-130 Hercules F and R models. They were older aircraft, and they had a form of Ground Proximity Warning System (GPWS), which basically squawks at the aircrew if they get too low to the ground. The thing never seemed to work right: it would pass the test and then not function properly (giving false warnings), or the test would fail. The norm was to simply pull its circuit breaker so we wouldn’t have to listen to it. Years later, when we transitioned to the KC-130J model, the plane had an awesome system called a Ground Collision Avoidance System (GCAS). It worked well. We had to “recalibrate” ourselves to get used to a warning system that actually worked. It was a great system that helped us tremendously. Both the legacy KC-130s and the J model had a Radio Altimeter (RADALT), which basically beams a signal to the ground and back to the aircraft to tell you how high you are above the terrain. I later flew the UC-12B (basically a King Air 200), which had a GPWS and RADALT that also worked well. Again, it really helped. Then I started flying the T-34C again as an Instructor Pilot. That aircraft (shown in the photo with this post) was very rudimentary and had no GPWS or RADALT, which placed much more emphasis on the aircrew to manage altitude margins. Having flown numerous aircraft and seen challenging situations, my preference was to have the engineering and warning controls, but to use them as a layered defense on top of solid airmanship and the adaptability required to effectively operate military aircraft. Thanks again for your comments!
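The physics behind the RADALT described above is simple enough to show in a few lines. This is a deliberately simplified pulse-timing model (real radio altimeters typically use frequency-modulated techniques, and the function name is mine): height above terrain follows from the round-trip travel time of a signal moving at the speed of light.

```python
# Speed of light in vacuum, metres per second
C = 299_792_458.0

def height_above_terrain_m(round_trip_seconds: float) -> float:
    """Height above ground from a signal's round-trip time.

    The signal travels down to the terrain and back, so the
    one-way distance is half the total path: h = c * t / 2.
    """
    return C * round_trip_seconds / 2.0
```

So a round trip of about two microseconds corresponds to roughly 300 metres (about 1,000 feet) above the ground, which gives a feel for how quickly such a system must measure and update at low altitude.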
Ben and Randy,
How does “situational awareness” apply?
The Navy seems to be pushing it.
Thanks for the question. Regarding Situational Awareness (SA) and how it applies, I am not sure whether you were asking how it applied in the story in my post or in general, so I will attempt to address both. Generally speaking, I think the Navy’s description of Situational Awareness essentially has to do with how closely one’s perception of reality matches reality, and how accurately one can project activities into the future. Basically, we can ask, “What is going on now, and how will actions unfold as time progresses?”
Relating SA to the story I mentioned really has to do with an understanding of the procedural controls in place and how aircrew would be required to understand the procedures and communication techniques as well as understand how their particular actions while executing those procedures might unfold several minutes into the future.
I don’t think simply using Situational Awareness as an accident prevention tool is effective at all. Simply saying that workers need better SA fails to take into consideration error-provocative environments and how complex situations can develop in high-risk work. It is like telling a worker to simply pay more attention and think about what they are doing. Without addressing the underlying problems associated with real hazardous work simple SA strategies will only be effective up to a point. You can use a hierarchy of controls to eliminate some risks, substitute less hazardous work methods, implement engineering controls, warnings, administrative controls, and PPE. However, how can you implement better Situational Awareness? I do not believe SA is a safety control, but it is how we perceive and act on information. I also think it is easy to simplify SA when SA really isn’t simple.
Great article Randy. Yours was another reminder of James Thurber’s observation that “there are no exceptions to the rule that every rule has an exception.” Perhaps it’s a lesson learned in aviation based on need but the concept of “creating safety” is a critical characteristic in operations that can be described as “safe.” While we certainly have to simplify for explanation and teaching purposes, I agree with the high-reliability tenet of resisting simplification. The modern world doesn’t come at us in an “If-Then” linear method and our ability to adapt to emerging conditions, often concurrent, is key to resilience and long-term reliability. As safety professionals, I think one of our greatest challenges is finding that sweet-spot in systems structure that actively maintains safety while enhancing performance reliability and organizational resilience. Resilience, or our ability to respond to and recover from adversity, is at the heart of creating safety. It does no good to reach the summit of performance if we can’t return to tell the tale. Thanks again for a great post.
Hi Ron, thanks for the comments! It sure can be easy to try to simplify things and I also think applying the Weick and Sutcliffe HRO Principle “Reluctance to Simplify” can be helpful. Sure there may be times when dynamic operations are unfolding in high-risk situations with operational crews and fast judgments and decisions must be made. In those situations, maybe some simple heuristics are what is needed. However, I think there are times when planners have time to work things out and in the haste to “fix” things it can be easy to oversimplify the situation. I like your point about the “sweet spot” in systems, and I see that as a sort of dynamic balancing. Thanks again for the comments!
Thanks for the help.
The event in my link was a mid-air of two Navy carrier jets.
I was a bit puzzled by the admiral’s focus on situational awareness, which is just one of many ineffective barriers.
When a mid-air occurs all of the barriers to the conflicting flight paths were missing or ineffective.
How come the admiral got locked-in on just one?
How come he decided to lock-in on situational awareness?
What was going on?
SA is a top priority for military aviators, and LSA – Loss of SA – is frequently cited in near-miss, shoot-down, and accident reports.
It is a useful construct, but actual SA is rarely optimal. At the end of the briefing and the beginning of the mission, team SA is high, and probably optimal, but it can decline as conditions change. One of the real reasons to practice air combat is to assess and improve SA: “lose sight, lose the fight,” for instance. But while eyeballs are locked on the adversary, the pilot must still track every other variable, such as other team aircraft, bad-guy aircraft, missile launches, and, most importantly, the ground or the artificial floor, typically 5,000 or 10,000 feet.
Virtually all military mid-air collisions are with one’s own “section mates,” not the adversary or a hapless civilian. It is an oversimplification, but LSA is the primary reason. Ironically, one can buy and use a FLARM device in gliders that notifies you of other nearby gliders, but no such 360-degree technology exists in any fighter, as far as I know. (Air-to-air radar is supremely capable of detecting any moving object, but only in the designated forward search area.)
I would also argue that the concept of SA subsumes all four HFACS layers of conditions, at least for the relevant future time interval in which one is operating.