- January 20, 2020: How Boeing’s Responsibility in a Deadly Crash ‘Got Buried’
- January 21, 2020: A Decade Later, Dutch Officials Publish a Study Critical of Boeing
- UPDATE February 20, 2020: Boeing Refuses to Cooperate With New Inquiry Into Deadly Crash
Jeff Lyth
Husband, father, grandfather. Big fan of doing safety differently, and proud caretaker of this website.
Great post!
Instead of designing for success, I suspect they should have designed for every possible failure. “How can we make this plane fail?” I tend to believe it is only then that we can build a resilient plane, one that is capable of flight operations even with multiple system or component failures. The rush to design, test, build, and bring to market may have missed this important step.
I seem to recall Sidney Dekker saying that the answer to complexity is transparency. If he didn’t, he probably thought it. The Dutch aviation safety board’s decision to hold Dekker’s study of the 737NG accident in house, rather than publish it, is an example of “lessons to be learned,” a phrase I learned from Bill Corcoran. In other words, a lesson learned that hasn’t been transmitted, and hasn’t had any action taken to do something differently, is actually not a lesson learned. Organizations are reluctant to publish otherwise well done accident investigations because of a fear of litigation. I’m sure the Dutch were no different.

The input of the FAA and Boeing to the Dutch report is pretty normal. The Dutch board would normally send the draft report out to other agencies, including Boeing, for comments on the facts. This does not mean they have to incorporate those comments verbatim, as the NYT article suggests. Boeing could have been offered the opportunity to submit a minority report if it disagreed strongly with the Dutch board’s report. That is ethically the cleanest way to voice disagreement, leaving it to the readers to decide. Allowing a manufacturer to rewrite an investigation that makes it look bad, as alleged, displays moral weakness. I wasn’t there, so I’ll withhold judgement.

In any event, Dekker appears to have done his best to highlight the pilots’ errors in the context of a single sensor that sent a strong computer signal when it failed. It’s too bad this message was hidden by the regulator.
I second – “Organizations are reluctant to publish otherwise well done accident investigations because of a fear of litigation. But a lesson learned that hasn’t been transmitted, and hasn’t had any action taken to do something differently, is actually not a lesson learned.”
The awfulness of the Boeing 737 Max 8 story unfolded (for me anyway) against the background of the release of the report (NTSB, in November) on the Uber autonomous vehicle fatality and reporting on the USS McCain collision (ProPublica, in December). One might wonder (that is, if one wanted to be deliberately provocative) whether the human factors profession might have decided to rest on its laurels after the success of the high-mounted brake light – or whether perhaps everyone had decided to follow the money and become ‘UX designers.’
OK – that’s unkind. And untrue. Dekker’s masterful (and thoroughly referenced) analysis/tutorial on TK1951 proves as much. We know quite a bit about human interaction with automation. But then something like the Max 8 story comes to light. Really tragically disappointing.
It is certain that Boeing – and Northrop Grumman in the case of the McCain (Uber I’m not so sure about) – have significant human engineering capabilities. So why do these things still happen? Perhaps there are organizational considerations.
The U.S. Nuclear Regulatory Commission’s Human Factors Engineering Program Review Model addresses the placement and authority of the human factors engineering team within an organization designing or modifying a complex, safety-significant system; it notes the following:
– The team should have the authority and organizational placement to reasonably assure that all its areas of responsibility are completed, and to identify problems in establishing the overall plan or modifying its design.
– The team should have the authority to control further processing, delivery, installation, or use of HFE products until a nonconformance, deficiency, or unsatisfactory condition has been resolved.
The above practices would seem to be applicable beyond just commercial nuclear power. Are they generally adopted in high-consequence contexts? That is, are people whose backgrounds allow them to understand certain vulnerabilities actually in positions where they might make a difference? If this is not the typical practice of developers (and the expectation of regulators), perhaps we shouldn’t be surprised by reports of events involving ‘automation surprises.’