On Tuesday, 10 December, I came across the following article published in The Australian (Fig. 1).
The incident involved an A320 operated by Jetstar Airways that departed Sydney bound for Ballina/Byron Gateway Airport, Australia. After an unstable visual approach flown by the first officer, the crew conducted a messy go-around that didn’t follow the company’s standard procedures. The second approach was even worse: at about 700 ft, the warning system alerted them that the landing gear had not been extended, even though they had (supposedly) completed the landing checklist. They nearly landed the plane gear-up, endangering everyone aboard. They went around again and, while in the circuit, almost collided with a light aircraft. Luckily, minutes later, they landed safely.
The ATSB (2019) investigation revealed that “a number of factors, such as distraction and limited use of aircraft automation, combined to result in the landing gear not being selected to down”, said Dr Stuart Godley, ATSB transport safety director. He continued: “this occurrence highlights the importance of adherence to standard operating procedures and correctly monitoring the aircraft’s approach and parameters to provide assurance a visual approach can be safely completed”. The investigation thus concludes that “following standard procedures mitigates the risk of the selection of inappropriate auto-flight modes, unexpected developments, or confusion about roles or procedures can contribute to decisions and actions that increase the safety risk to the aircraft and its passengers”. As a result, Jetstar advised that “[…] both flight crew members attended debriefings with flight operations management and were provided with specific simulator and line flying training related to the occurrence”.
After reading and rereading the article and the final report, my head started spinning, full of questions: how can distraction and limited use of automation be enough to explain such an intricate event? How can debriefings with the big bosses and simulator sessions guarantee safety? Is that all we can conclude from such an event? Really?
I was in shock. And couldn’t sleep either. For some reason, there is a massive gap between the incident narrative and the findings and conclusions. The report describes in detail what happened, yet the findings are generic and meaningless. Perhaps the ATSB’s team is constrained by resources or time, or lacks analytical skills. Or perhaps they are so distant from operators that they are satisfied to conclude the investigation with the simple label “human error”. What they don’t see is that we learned nothing from this investigation at all. At least not from the findings and conclusion. It’s a shame that they spent taxpayers’ money to arrive at conclusions that any of us could have reached with much less information than the investigators had at their disposal. Unbelievable.
I told myself I had a moral obligation to do something. I must tell the second story, unpack the event, make the pilots’ voices heard, provide the local rationale. As an airline pilot, I have been through similar events before. And certainly, so have all the pilots reading this post. Egos aside, we would never intentionally do anything to endanger our passengers. It was all normal work, a normal flight, full of unexpected problems in a fast-paced environment. I couldn’t disagree more with the ATSB’s findings and conclusion and Jetstar’s safety action. Or should I say, (un)safety action?
Let’s go back to before the first approach. The first officer’s (FO) decision to fly a visual approach manually was an attempt to keep up his proficiency. Plenty of research and many incidents show that pilots are losing their aircraft-handling proficiency because they use automation all the time. There is no better way to maintain these skills than performing visual approaches without the autopilot on regular flights from time to time. It’s much more effective than waiting for the once-a-year recurrent simulator training.
On top of that, the circumstances couldn’t have been better. It was a quiet morning at a quiet airfield in beautiful visual conditions. The captain was a check pilot flying on a non-check flight. The FO may have felt lucky to have one of the best pilots in the company beside him. More than that, check captains are far more familiar with manual flight and visual approaches than other captains and could help if things didn’t go as expected. I used to do the same: I felt more confident flying visual approaches manually in the company of instructors and check pilots. These guys are used to surprises, particularly those resulting from our mistakes. And they are good at anticipating our mistakes: they know where and when pilots will be trapped.
An approach is unstable when the pilots haven’t configured the aeroplane for landing before reaching a certain altitude, known as the safety window. The safety window is an operational policy that sets criteria for a safe approach: at 1,000 ft above the airport elevation, the aeroplane should be fully configured to land, with the speed within a certain range, flaps set for landing, landing gear down and engines operating above idle. To meet these limits and reduce speed, pilots may need to add extra drag with devices such as speedbrakes, or even extend the landing gear much earlier than intended. Unstable approaches are far more common than we think, particularly when pilots manage the descent path without the aid of computers.
In this event, by the time they realised a stable approach wouldn’t be possible, it was too late to create extra drag and reduce the aircraft’s speed, the captain said. Even so, they kept descending to a certain altitude before going around. Why? Why wait to get closer to the ground before going around when you already know you won’t be able to make the landing? First, since they were flying visually, the go-around also had to follow visual rules: the regulations require them to enter a visual circuit, flown at 1,500 ft. Second, you need to tell the aircraft automation that you won’t land and want to climb away, to reset the system. Pressing the TOGA (take-off/go-around) button tells the aeroplane you are going around and triggers many changes in the aircraft configuration, including setting the engines to maximum thrust. So, if you are on approach but still above 1,500 ft, you don’t want the engines at maximum thrust while descending. That could lead to all sorts of complications, including an overspeed.
Handling a go-around is also a good opportunity to put your skills into practice. So why not continue flying manually during the go-around, the captain may have thought? Of course, a go-around is always problematic, because pilots don’t expect it. What we do expect is to land, always, unless we’re in a simulator session. We can go a whole year without performing a single go-around. So it’s not uncommon for pilots to get a little confused about the sequence of actions, not follow it exactly, or skip some parts, especially if the last go-around was a while ago.
The pilots also left the flaps at 3 even after levelling off, instead of retracting them to 1 as the normal procedures state. At this point they should have retracted the flaps and accelerated, but the crew was worried about a possible overspeed. No more than 10 seconds passed from initiating the go-around at full power to levelling off. The auto-thrust could not react in time to manage the excess power and avoid exceeding the speed limit for flap 3, despite the overspeed protection. Concerned about this, the first officer instinctively pulled the throttles back from the climb setting to the idle detent. However, the Airbus doesn’t work like other aeroplanes: putting the throttles in the idle position causes the computer to de-activate the auto-thrust system, prompting error messages and removing some protections. This added extra work for the captain, who had to instruct the first officer to set the throttles back to climb (yes, the climb position, despite being levelled off) before re-engaging the auto-thrust. These small unexpected distractions were enough to direct the crew’s attention to other aspects of the flight, and away from retracting the flaps.
A visual circuit is a high-tempo manoeuvre, particularly in a jet, with many actions to be performed in a very short time. It becomes even trickier when other issues pull at the crew’s attention. A standard circuit in Australia is always flown to the left. In Ballina, however, it must be flown to the right, because of the noise-sensitive area over the town. When the first officer started to turn left, the captain asked him to turn right instead, as the local procedures state. Soon after, the first officer offered the captain the controls and reverted to pilot-monitoring duties. Why not take some rust off and fly it manually too, the captain may have reflected. I assume the captain accepted because, again, there is no more challenging and enjoyable way to maintain proficiency than flying the circuit by hand. Next, the plane drifted close to the runway during the circuit, and the captain made a slight correction to the left to increase separation. Lastly, after the first officer completed the after-take-off checklist, the crew realised the flaps were still at 3. Too many things had distracted them from retracting the flaps to position 1, as mandated by the standard operating procedures. Despite this, the captain opted to leave them where they were, since seconds later they would have to be set back to 3 anyway. Also, flap 3 limits the aircraft’s speed, buying them time to prepare for the next approach.
After turning onto the base leg, they commanded flaps full, monitored the flight path and performed the landing checklist. At this point, they completely forgot to extend the landing gear. There are two reasons for this. First, the normal sequence of actions leading to gear extension was broken. In the standard sequence, the landing gear is selected down after flaps 2 and before turning onto the base leg. Since they started the landing configuration from flaps 3, the natural trigger for the landing gear was lost. This may have been compounded by the high workload of flying manually.
This is also one of the explanations for why they couldn’t catch the mistake when actioning the landing checklist (the second opportunity). One of the checklist items is to check the ECAM memo, as shown in Fig. 2. This function, among other things, helps pilots confirm that the landing gear has been extended. However, the ECAM memo only resets if the aircraft climbs above 2,200 ft. Since they stayed at 1,500 ft, the memo not only didn’t reset but also wasn’t displayed. The absence of memos may have been perceived as an indication that all actions had been completed.
Despite these two missed opportunities, only when they descended below 800 ft did the master warning trigger and draw their attention to the landing gear. At that altitude, the ECAM memo resets and checks the positions of the landing gear, seat-belt signs, spoilers and flaps. The warning system worked as designed, albeit a little late. Given the alarm, the captain conducted another go-around, this time following the company’s procedures by the book.
After the second go-around, they entered the visual circuit again and coordinated and negotiated their position with a small aeroplane arriving at the same time. They had to do so because this airport has no control tower. Seconds later, however, the aircraft’s TCAS (traffic collision avoidance system) determined that the small plane was too close and triggered an aural and visual “traffic” alarm, requiring them to identify the other aircraft and monitor its position visually. Despite this further distraction, the Jetstar crew managed the third approach and landed safely.
What can we conclude from the event?
- The pilots showed they know how to conduct manually flown visual approaches and go-arounds. They demonstrated this by managing their regular flight duties and additional surprises simultaneously through the first two approaches and go-arounds, followed by a successful third approach.
- Pilot–automation interaction in the A320 isn’t smooth and creates many opportunities for mistakes. Bringing the thrust levers back to idle doesn’t simply reduce thrust; it tells the automation that the auto-thrust system is disabled. Moreover, the technology can’t reset the ECAM memos if you fly below a certain altitude. The pilots had to deal not only with constraints that arose during the flight but also with automation surprises. Perhaps the Airbus isn’t designed to be flown manually.
- The crew showed resilience, adapting to fast-changing circumstances while maintaining flight integrity.
- Forgetting to extend the landing gear emerged from a break in the landing configuration sequence, the additional demands from the visual circuit and flying manually, the inability of the automation to check the landing gear status and the inability of the checklist to serve as an independent tool from the automation.
This event could easily be used as a simulator scenario for all pilots, giving them the chance to experience such a complex, high-tempo situation. I see more value in giving all of Jetstar’s pilots the opportunity to experience this incident than in sending the pilots involved back to the simulator or to be “reoriented” by flight operations management. In fact, these safety actions may hinder safety, since pilots may see them as punishment. Also, if you conclude that the pilots involved in the incident were not well trained, then probably none of your pilots have been well trained either. Maybe the company’s training syllabus is the problem, failing to prepare pilots to fly visual approaches and go-arounds manually.
In conclusion, where the ATSB sees deviation from normal procedures, ineffective use of automation and distractions leading the pilots to forget the landing gear, I see pilots trying to keep up their handling skills by performing visual approaches and go-arounds. Using full automation all the time can actually create vulnerability in the system. I also see a skilful crew facing automation surprises and normal variability while responsibly maintaining flight integrity. As Sidney Dekker reminds us, this is a clear example of normal people doing normal work despite clunky technology in a highly variable context.
ATSB (2019). Incorrect configuration for landing involving Airbus A320, VH-VQK: Ballina/Byron Gateway Airport, New South Wales, on 18 May 2018. Canberra, Australia.
Ironside, R. (2019). Jetstar pilots forgot landing gear. The Australian, 10 December.
Great blog. Well done.
This is a great piece. Really instructive on so many levels about the difference between work as planned and work as done. Many thanks!
As I started reading this I thought it must be from the ’90s… we don’t blame the workers these days, that sort of response is a warning sign… surely not, we know about systemic failures etc. But no, it’s within the last two years. Eek. Please tell me Jetstar and other airlines are:
* always looking for systemic failures and trying to improve their systems and technology
* reassuring their greatest asset, their workers, by exploring and responding to their findings in a way that gives the workers, and the public they talk to, confidence and pride that the national carrier has their best interests (their lives) at the centre of the investigation.
The days of punitive responses should be behind us. They only cause adversity, litigation, resentment, mental illness, divisiveness and so on.
Long live Jetstar under enlightened, insightful management.
I can confirm that airlines generally do look for systemic failures rather than just focus on the worker’s actions. At Cathay Pacific we’ve been exploring and implementing systems thinking and resilience for a while now and you can learn about it here: https://www.resilience-engineering-association.org/blog/2020/01/28/cathay-pacific-airways-readies-for-take-off-applying-resilience-engineering/
Unfortunately many of the national air accident investigative bodies, despite stating they don’t apportion blame, still tend to fixate on proximal causes and only look at what the pilots (or other frontline workers) did wrong. Even without explicit blame, this often leads to fix-the-person activities, rather than fix-the-system.
Your analysis is thorough and I agree with all your conclusions. However, I believe you are being insufficiently critical of the Airbus design philosophy. An expected checklist memo doesn’t show because it was not triggered at a sufficient altitude? Auto-throttles that turn off, with no notice, and don’t move the throttles – a critical, second-order cue to what the automation is doing. Happily, no passengers died to identify these flaws. Did the FO ask the captain to take over because of excess stress… from insufficient practice?
I am an old fighter pilot who has made thousands of “closed circuits” visually… and visual, night and formation landings. I have also been “bitten” by the automation in the F-16. Sad that two contemporary airline pilots – together – could not perform a routine visual manoeuvre.
Excellent article, Guido. Regardless of the severity of an event or outcome, we should always be looking to understand the local rationale (why it made sense) of those involved. We should investigate and learn about the system and how it contributed to the incident first. A focus on the system doesn’t mean ignoring the role the front line people played in the event, but it means identifying what needs to be changed at the system level before focusing on the people. Pilots may well need extra training, but this action should be identified after we’ve learned what the system-at-large needs.
Highly informative article well written and presented by someone with experience at the ‘pointy’ end of the stick. Thank you for taking the time to put your thoughts into words.
Which begs a massive question for me: just who are the individuals at the ATSB who conducted the investigation? Before I continue I must stipulate that I am not professing I could do better; exactly the opposite, in fact. I have zero experience in deciding which processes should and shouldn’t have been followed, and when, or how flight situations can affect those processes. But that is exactly my point! It appears whoever wrote the initial report had little appreciation for the ‘work as done’, the complexity of the situation the pilots found themselves in and the system variances that flowed from it. And what are we to conclude were the motivations of the pilots? Laziness? Unprofessionalism? Neglect? The report ‘leads’ us towards concluding a combination of all three; additional training will fix that, right?!
A public report such as this could have negative ramifications for those mentioned, what about ‘Duty of Care’ for those involved in the incident? It appears the very people who demonstrated resilience, adaptability and professionalism (the pilots) are the only victims of this incident!
A great article Guido. If our investigation reports don’t provide us with anything useful to take away, it’s likely to be a poor investment of resources.
Excellent analysis. Many points to raise a debate. Congratulations on the article.
Very enlightening, thanks Guido.
Your approach in analyzing the event was very educative indeed.
Although many operators overlook some of the issues raised here about the challenges of manual flight in a highly automated aircraft like the Airbus, I would like to point out some important practices that any crew member should follow during a non-automated flight. Being a check airman, I would expect the captain to take the opportunity to review with the FO all the aspects that a visual approach at that particular airport would bring to their operation. All the implications for aircraft handling, such as managing thrust, configuration and checklists during the traffic pattern, and especially a missed approach, should have been part of their briefing. As we don’t really know whether this was done (the full report was not published), I leave this as a practice that could minimise many of the mistakes caused by the high workload.