Shortly after an Airbus A320 operated by Estonia's Smartlynx Airlines took off on February 28, 2018, the aircraft's four flight control computers stopped working. Each performed exactly as designed, disconnecting after (incorrectly) detecting a fault. The problem, discovered later, was an actuator that had been serviced with oil that was too viscous. A design created to avoid a problem had created one. Only the skill of the training pilot on board prevented a fatal accident.
Now that the Boeing 737 MAX is returning to service worldwide after a 21-month grounding, flight training and design are in the spotlight. Securing aviation's future ultimately requires an entirely new approach to automation design, using methods based on systems theory, but aircraft built with this technology are 10 to 15 years away. For now, we need to train pilots to respond better to the many inevitable quirks of automation.
As part of our research on the MAX, Air France 447 and other accidents, we spoke to hundreds of pilots and experts from regulatory agencies, manufacturers and major aviation universities. They agree that the best way to prevent accidents in the short term is to teach pilots to handle surprises creatively.
The slow pace of overdue pilot-training and design reform is a persistent problem. In 2016, seven years after Air France 447 crashed into the South Atlantic, airlines around the world began retraining pilots in a new approach to managing aerodynamic stalls at high altitude. The simulator training that Boeing convinced regulators was not needed for 737 MAX crews began only after the MAX's second crash, in 2019.
These remedies address only those two specific scenarios. Hundreds of other automation challenges may exist that cannot be anticipated using traditional risk-analysis methods; in the past they have included a computer preventing the use of reverse thrust because it "thought" the aircraft had not yet landed. An effective solution must acknowledge the limits of aircraft designers, who cannot create a perfectly fail-safe jet. As Captain Chesley Sullenberger points out, automation will never be a panacea for novel situations that training did not foresee.
In a recent interview with us, Sullenberger noted, "It takes a lot more training and experience, not less, to fly highly automated airplanes." Pilots must have a mental model of both the aircraft and its primary systems, as well as of how the flight automation works.
Contrary to popular myth, pilot error is not the cause of most accidents. This belief is a manifestation of hindsight bias and a false belief in linear causation. It is fairer to say that pilots sometimes find themselves in scenarios that overwhelm them, and greater automation can well mean more overwhelming scenarios. This may be one reason why the fatality rate of large commercial aircraft per million flights increased in 2020 compared with 2019.
Pilot training today tends to be scripted and based on known, probable scenarios. Unfortunately, in many recent accidents, experienced pilots had received no systems or simulator training for the unexpected challenges they encountered. Why can't designers anticipate the kinds of anomalies that nearly destroyed the Smartlynx aircraft? One problem is that they use outdated models created before the advent of computers, and this approach of anticipating the scenarios likely to present risks in flight is limited. Currently, the only available model that considers novel situations like these is Systems-Theoretic Process Analysis (STPA), created by Nancy Leveson at MIT.
Modern jets developed using classical methods harbor failure scenarios that lie in wait for the right combination of events. Unlike traditional airplanes built from basic electrical and mechanical components, the automation in these modern jets evaluates a complex set of conditions to "decide" how to operate.
In most modern airplanes, the software governing control response behaves differently depending on airspeed, whether the aircraft is on the ground or in flight, whether the flaps are extended and whether the landing gear is up. Each mode can apply a different set of rules, and can produce unexpected results if the software does not receive accurate information.
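To make this mode-dependence concrete, here is a minimal, purely illustrative sketch (not any real avionics code; all names and thresholds are invented) of how software might select a control mode from sensed state, and how one wrong input, such as a squat switch failing to report weight on wheels, can lock out a command the pilots need, echoing the reverse-thrust incident mentioned earlier:

```python
from dataclasses import dataclass

@dataclass
class SensorInputs:
    """Snapshot of inputs a hypothetical control-law module reads."""
    airspeed_kts: float
    weight_on_wheels: bool  # from landing-gear squat switches
    flaps_extended: bool
    gear_down: bool

def control_mode(s: SensorInputs) -> str:
    """Pick a control-law mode from sensed state (illustrative only)."""
    if s.weight_on_wheels:
        return "GROUND"
    if s.gear_down and s.flaps_extended and s.airspeed_kts < 180:
        return "APPROACH"
    return "FLIGHT"

def reverse_thrust_permitted(s: SensorInputs) -> bool:
    # Interlock: reverse thrust only in GROUND mode. If the squat
    # switches misreport, the computer "thinks" the aircraft is
    # still flying and refuses a command the pilots know is safe.
    return control_mode(s) == "GROUND"

# Aircraft is on the runway, but a sensor misreports weight on wheels:
landed = SensorInputs(airspeed_kts=120, weight_on_wheels=False,
                      flaps_extended=True, gear_down=True)
print(control_mode(landed))              # APPROACH, not GROUND
print(reverse_thrust_permitted(landed))  # False: reversers locked out
```

The software here does exactly what its rules say; the hazard comes from the gap between the sensed state and the real state, which is the kind of interaction a systems-theoretic analysis such as STPA is designed to surface.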
A pilot who understands these nuances might, for example, avoid a mode change by not retracting the flaps. In the MAX crashes, the pilots found themselves in a confusing situation: the automation was working exactly as designed, but not as they expected, because the software was receiving bad sensor data.