Flaps, Coffee Cups and NVGs: A Tale of Two Safeties


Adapt and Overcome

I recently read an aircraft accident report about a C-130J that crashed shortly after takeoff. Reports state the accident was caused by a Night Vision Goggle (NVG) carrying case that had been placed in front of the aircraft control column to raise the elevator during cargo loading. Essentially, the crew was trying to raise a portion of the aircraft's tail to make it easier to load bulky cargo. Apparently the crew forgot to remove the case before takeoff, which led to a loss of control. The pilot may have been trying to create a form of safety for loading operations while also improving operational effectiveness and efficiency. The reports state that he had been holding the yoke back, but placed the case in front of the yoke to assist this manual method. There is a longer account of the story here, and if you watch the associated video you will hear the reporter cite pilot error as a cause.

But before we jump to conclusions and simply blame the pilot, perhaps we should take a step back, examine the context, and ask whether his actions made sense at the time. What were the hows and whys behind his actions? Examining the context of the work may help us understand them. In fact, as I read this story, I was not really all that surprised by the pilot's attempt to proactively improve the operational process. Similar things happen in all sorts of organizations, where workers actively attempt to improve operations and, in some cases, may trade one form of safety for another. In a way this seems like a tale of two "safeties," where operators try to maintain two types of safety simultaneously, such as the safety of their own crew, team, and/or equipment as well as the safety of another crew, team, equipment, or materiel.

I will illustrate with a personal story:

I am a former US Marine Corps Aviator. I used to fly the KC-130 Hercules, among other aircraft. At one point I learned to fly the newer, more automated version, the KC-130J Hercules (similar to the one described above). There was a steep learning curve, but we made it through the process and eventually deployed the aircraft into combat operations. One of the challenges operational personnel will likely face if they work in a job long enough is the need for adaptability. There are times when textbook procedures and checklists do not work exactly as written under new operational conditions and additional constraints, and crews must adapt to accomplish the goals of the organization. It is often natural for people and teams to adapt, and much of the world we live in is composed of complex adaptive systems, as Shane Parrish describes here.1 However, errors can occur during adaptation, and I experienced this firsthand. In the KC-130 there is a checklist item for checking and setting the flaps. Typically the flaps are checked and set for takeoff and then the aircraft is taxied toward the runway. This normally works well, but what happens when something interrupts the sequence, gets us out of order, or changes things in a checklist?

That is exactly what happened in this story. In this aircraft, which is propelled by turboprop engines, when the flaps are set at the takeoff position (50%), the wind blast from the propellers can be diverted toward the ground after flowing past the flaps, which can then create a wind blast on passengers and forklift drivers loading into the back of the aircraft. So, a technique we used to help protect passengers and forklift operators was to raise the flaps before they moved behind the aircraft. This seemed to reduce the level of wind blast they were exposed to. It was done to preserve their safety and well-being, and it was something I was taught by more senior pilots and mentors in my career. It was in no way a violation of any procedure and seemed like a helpful and normal technique. In a way, we were trading one safety for another safety. However, in the effort to create safety in some areas we can create vulnerabilities in others.

This adapted procedure had been working fine until one day when my crew and I were in a huge rush for a high-profile mission. It was "push, push, push, hurry up" to get going. After we loaded the aircraft we were rushing to take off. All the checklists were nearly complete and we were positioning the aircraft for takeoff. Unfortunately, because there was no additional checklist item to remind us to recheck the flaps, we didn't realize in the operational push that the flaps were still up. Fortunately, through the use of Crew Resource Management, we caught the error; our loadmaster, using Functional Leadership, pointed out the error before we lined up for takeoff, and we reset the flaps to 50%. The plane was also designed with a warning system to alert the crew of the error if we had not caught it ourselves.

I think this story is illustrative of the real world, and I would imagine that many who lead others and work in high-risk industries can relate. In the busyness of trying to meet production demands and actively create safety in one area, we nearly ended up unintentionally reducing safety in another. This is not unique to USMC Aviation, and I believe others have stories like this. In fact, Sidney Dekker addressed the need for operational workarounds in his keynote address at the 2014 American Society of Safety Engineers Professional Development Conference in Orlando, Florida. During his presentation he described how workers "finish the design" and make up for shortcomings designers may not have realized during the system design, construction, and deployment process. On page 158 of the third edition of The Field Guide to Understanding Human Error he describes how pilots placed a paper cup on the flap handle of a commercial airliner so as not to forget to place the flaps in the correct position.2

Sometimes designers and planners don't foresee every circumstance where humans may be required to adapt to the operational environment. Sure, designers and planners can (and should) attempt to develop a hierarchy of hazard controls to optimize the system for human performance, but in some cases the need for specific controls may not be understood at the time the system is designed or deployed. Alternatively, they may design hazard controls into the system, but those controls may still be bypassed (intentionally or unintentionally) as workers perform their tasks and make what Erik Hollnagel describes as Efficiency-Thoroughness Trade-Offs.3 In some cases workers may even adapt procedures in an attempt to make operations safer, given their perspective and the operational context. This holds true for multiple forms of hazard controls and performance tools, such as checklists.

However, these tools and checklists cannot account for every possibility. In fact, when describing the C-130J accident mentioned at the beginning of this article, leaders mentioned that there was no checklist to cover the situation, nor was the procedure prohibited, because it was a non-standard procedure. Designers and checklist developers cannot imagine every scenario. They cannot (nor should they) write a checklist for every conceivable technique, and they cannot foresee every potential workaround or bricolage that might be necessary to meet operational demands. Operational teams will often make do as necessary to meet the demands of the job. Sometimes this means creating two safeties, and there may not be a checklist or other safety tool to handle every single scenario.

I also realized this in a visceral way last year. In 2015, through my company, V-Speed, LLC, I worked as a subcontractor on a very large safety consulting project, and part of that project involved conducting focus groups and a benchmarking effort to learn about safety culture. During the course of the interviews and focus groups we came to realize that in many cases workers must contend with what we called "competing safeties," where they have to make trade-offs between one type of safety and another, such as public or customer safety versus personal safety. I realized how hard the teams felt they were working to create multiple safeties while getting the job done. Creating multiple safeties, however, often requires workarounds or bricolage.

Bricolage often works until it doesn't, but should we chastise workers for their ad hoc workarounds if they are successful more often than not? How can we balance resourcefulness, creativity, and tinkering for continuous improvement, prioritize the appropriate safeties at the right time, and learn and grow as an organization? I don't think there is a perfect answer, but I do think the wrong approach is to chastise employees for doing something that was likely taught to them along the way, and perhaps even tacitly permitted by supervisors or managers because it worked and didn't seem like a big safety issue at the time. As the saying goes, "what we permit, we promote." We need to engage workers so we can learn what they are doing to be successful and, in the process, help them do these things in a safer manner. Leaders also need to provide teams with decision-making tools.

Here are a few lessons learned that should be applicable to most operations and industries (not just military aviation).4

  1. System safety engineers must be brought in early during the design process. Hazard analysis and corrective action should be an iterative process, and line operator input should be sought when designing controls. Controls should be improved over time as user feedback is obtained, much like a product development lifecycle.
  2. Understand that even the best planners and designers will never get the system perfect, and line operational leaders and crews will make real-time adjustments in the field to get the job done. Rather than creating a fear-based culture that squashes user feedback about deficiencies and the workarounds they require, leaders should provide a clear process for learning from these field adaptations to 1) make sure they are safe and 2) determine whether they may be innovations worth implementing elsewhere.
  3. When mistakes happen, avoid chastising the operational crews. Reprimanding crews and operators and telling them to pay more attention and follow the procedures in the future does little to solve any problems and fails to reveal chinks in the system armor. Likewise, simply using admonitions such as "you need better Situational Awareness" fails to provide any tangible process for improvement. Instead, organizations should create a process for learning from error. Here are some suggestions.
  4. An ongoing dialogue is necessary to reduce the gap between plans and procedures as written and the ways crews actually have to implement them in the field, on the production floor, or wherever their operational environment may be. Learning is better than punishment, and understanding the system is better than simply spot-correcting performance deficiencies.

Planners will never be perfect, and neither will line operational crews. Leaders and managers at the blunt end will never be perfect either. If organizational leaders wish to understand how workers adapt, so that safety and operational performance may be improved, they should emphasize the overall important goals and then create a climate conducive to learning. That way leaders may understand why cups are placed on flap handles, why flaps are raised to protect forklift drivers loading aircraft, why NVG cases may be placed in front of aircraft yokes, and why modified tools may be found out in the field. Additionally, by gaining a better understanding of the various safety goals line operational teams face, leaders and managers may begin to understand the different types of safety employees work to create on a regular basis. This understanding is the beginning of system improvement. Isn't it better to learn and improve than to stick our heads in the sand and pretend these challenges don't exist?

Footnotes

  1. For an interesting perspective on complex adaptive systems, see Parrish, Shane. "Mental Model: Complex Adaptive Systems." Farnam Street, 22 Apr. 2014. Web. 14 July 2015. <https://www.farnamstreetblog.com/2014/04/mental-model-complex-adaptive-systems/>.
  2. For a description of how workers "finish the design," see Dekker, Sidney. The Field Guide to Understanding Human Error. 3rd ed. Burlington: Ashgate Publishing Company, 2014. Print.
  3. For a detailed explanation of the ETTO Principle, see Hollnagel, Erik. The ETTO Principle: Efficiency-Thoroughness Trade-Off: Why Things That Go Right Sometimes Go Wrong. Farnham, England: Ashgate, 2009. Print.
  4. For more information on balancing safety design with operational safety, see Cadieux, Randy E. Team Leadership in High-Hazard Environments: Performance, Safety and Risk Management Strategies for Operational Teams. Burlington: Gower Publishing Company, 2014. Print.
