The System Was Meant to Help. It Lied About What It Was Doing.
Between October 2018 and March 2019, two Boeing 737 MAX aircraft crashed within five months of each other. Lion Air Flight 610 crashed near Jakarta on October 29, 2018, killing 189 people. Ethiopian Airlines Flight 302 crashed near Addis Ababa on March 10, 2019, killing 157 people. The technical cause was identical in both cases: a system called MCAS (Maneuvering Characteristics Augmentation System) repeatedly pushed the nose of the aircraft downward, and the pilots couldn’t stop it.
The MCAS system was designed to prevent the 737 MAX from stalling. It was a safety feature. But it was also a lie. Pilots weren’t told what MCAS was doing. The system was not visible to them. When it activated, they saw only the effects: the nose pitching down, the aircraft losing altitude. They didn’t know why.
What followed was a cascade of psychological and cognitive failures: pilots trusting a system they didn’t understand, trusting it despite evidence of malfunction, trusting it even as it killed them. The MCAS crashes reveal how automation bias and skill decay can transform a safety feature into a deathtrap.
How Layers of Automation Create “Cliffs” of Understanding
The Boeing 737 has been in production for over 50 years. Pilots worldwide know the aircraft intimately. The 737 MAX was supposed to be a simple update—new engines for fuel efficiency, slightly different handling characteristics, but fundamentally the same aircraft. Boeing’s certification strategy rested on this claim: pilots wouldn’t need new training because the MAX was essentially a 737.
But the new engines created a problem: they were larger and had to be mounted farther forward and higher on the wing. At high angles of attack, such as in steep climbs at high power, the engine nacelles generated extra lift ahead of the center of gravity, giving the aircraft a tendency to pitch up further toward a stall. Boeing’s solution was MCAS: a system that would automatically trim the nose down whenever a sensor indicated the angle of attack was approaching a stall.
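To make the structure of the problem concrete, here is a deliberately simplified sketch of MCAS-style control logic, based on public descriptions of the system. The function name, threshold, trim increment, and activation conditions are illustrative placeholders rather than Boeing’s actual parameters; what matters is the shape of the design: a single sensor input, automatic nose-down trim, and reactivation after the pilots trim back.

```python
# Simplified, illustrative sketch of MCAS-style logic (not Boeing's code).
# Thresholds and increments are placeholders. The structural features are
# the point: the logic reads a SINGLE angle-of-attack sensor, commands
# nose-down stabilizer trim, and acts again after the pilots counter it.

AOA_THRESHOLD_DEG = 15.0   # illustrative stall-approach threshold
TRIM_INCREMENT_DEG = 2.5   # illustrative nose-down trim per activation


def mcas_step(aoa_sensor_deg: float, flaps_up: bool, autopilot_off: bool,
              stabilizer_trim_deg: float) -> float:
    """Return an updated stabilizer trim setting for one control cycle."""
    if flaps_up and autopilot_off and aoa_sensor_deg > AOA_THRESHOLD_DEG:
        # No cross-check against a second vane: one faulty sensor is enough
        # to drive the trim nose-down, over and over.
        stabilizer_trim_deg -= TRIM_INCREMENT_DEG
    return stabilizer_trim_deg


# A vane stuck at a high reading reproduces the accident pattern: every
# cycle the system trims nose-down again, regardless of what the aircraft
# is actually doing or what the pilots just commanded.
trim = 0.0
for cycle in range(5):
    trim = mcas_step(aoa_sensor_deg=22.0, flaps_up=True,
                     autopilot_off=True, stabilizer_trim_deg=trim)
    trim += 1.0  # pilots trim nose-up, only partially counteracting
    print(f"cycle {cycle}: stabilizer trim {trim:+.1f} deg")
```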
346: Total deaths from both 737 MAX crashes
The two crashes killed 346 people—all in aircraft equipped with MCAS operating as designed.
This seems reasonable on paper. But it created a catastrophic problem: pilots flying the 737 MAX believed they were flying a 737 they understood. They didn’t know MCAS existed. When it activated, they experienced a nose-down pitch they couldn’t immediately explain. They didn’t know that a new system had taken control.
This is the “cliff of understanding.” Pilots were trained to understand and control the 737. Their trust in the aircraft’s systems had been built over decades, and it rested on understanding them. MCAS exploited that trust by operating invisibly. When the pilots encountered a malfunction, they were dealing with a system they didn’t know existed, with failure modes they hadn’t imagined.
Automation Bias: When Trust Overrides Vigilance
The first crash, Lion Air 610, provides a chilling example of automation bias in action. The aircraft experienced multiple false stall warnings. The MCAS system activated repeatedly, pushing the nose down. The pilots trimmed the nose back up with the electric trim switches, manually counteracting the MCAS inputs. MCAS reactivated. This cycle repeated dozens of times.
The pilots didn’t know what was happening. They suspected a sensor failure, but the system’s response didn’t match any malfunction scenario they’d trained for. In their confusion, they made a critical error: they trusted the system more than their own senses and training. When MCAS commanded a nose-down pitch, their training told them: “Systems are smart. They’re designed to prevent stalls. Trust the system.”
Automation bias is the human tendency to give disproportionate weight to information generated by automated systems, especially when that information conflicts with human judgment. The pilots were trained to fly the aircraft. They could feel the problem through the control inputs they were making. But the authority of an automated safety system overrode their judgment.
Research on automation bias shows that people defer to automated systems even when they have reason to doubt them. This tendency is amplified when:
- The system is unseen (pilots didn’t know MCAS existed)
- The system is presented as a safety feature (no pilot would want to disable safety)
- The system has authority (manufacturers’ design decisions are treated as infallible)
- The pilot is under stress (in both crashes, the cockpit was flooded with stick-shaker and conflicting airspeed warnings from the moment the faulty sensor began misreporting)
At least 132: MCAS activations before Lion Air 610 crashed
The MCAS system activated at least 132 times during the flight, each time pitching the nose down against the pilots’ control inputs.
Skill Decay and the Out-of-the-Loop Problem
This is where the story gets even darker. In modern aircraft, pilots spend most of their time monitoring systems rather than controlling the aircraft directly. The aircraft autopilot handles navigation, altitude, speed. The pilots manage the bigger picture: weather, fuel, route. This division of labor makes economic and operational sense.
But it creates a problem: pilots’ manual flying skills atrophy. When they need to take direct control, especially in emergency situations, they’re dealing with skills they haven’t actively used in months or years. This is called “skill decay” or “deskilling.”
The MCAS failures created exactly the kind of challenging conditions in which manual flying skills are essential. The pilots were confronted with a system behaving erratically and needed to disconnect it and fly the aircraft by hand. But 737 MAX pilots hadn’t trained extensively on manually flying the new aircraft type. Their understanding of its handling characteristics was theoretical, not experiential.
Lee and colleagues have documented this phenomenon extensively: operators who are removed from manual control of complex systems lose the ability to rapidly diagnose and recover from system failures. When the automated system fails, the human operators have lost the practical experience necessary to take over effectively.
Approximately 8 minutes: Time from first MCAS activation to crash, Lion Air 610
From the first MCAS activation to aircraft impact: approximately 8 minutes. The pilots were given 8 minutes to diagnose and correct a malfunction in a system they didn’t know existed.
The Design Assumption: Pilots Will Intuitively Know What’s Wrong
Boeing’s design assumed that pilots, confronted with the MCAS system’s behavior, would quickly understand what was happening and either disable the system or correct it manually. This assumption was fundamentally flawed.
The MCAS system was hidden from pilots. There was no training on how to recognize it, understand it, or disable it. When the system malfunctioned, pilots had no framework for interpreting its behavior. They knew the aircraft was behaving strangely. They didn’t know why. And in the absence of clear understanding, they fell back on a principle that had served them well throughout their careers: trust the systems. Automated systems are designed by brilliant engineers. They have multiple redundancies. They’re safer than human judgment.
This assumption worked fine when pilots had comprehensive understanding of the systems they were operating. It became lethal when Boeing introduced a hidden system that violated that expectation.
Peter Robison’s investigation, Flying Blind, documented how Boeing’s engineering culture had come to prioritize commercial objectives over pilot understanding. MCAS existed to make the MAX handle like the 737s pilots already knew, not because the aircraft couldn’t fly without it; and it was kept out of manuals and training materials because disclosing a significant new system threatened the no-simulator-training promise on which the MAX’s market entry depended. The system was made invisible to save money on training.
The Conflict Between “Safety Feature” and “Single Point of Failure”
MCAS was introduced as a safety feature—a system to prevent stalls. But by making the system invisible and non-obvious to pilots, Boeing had transformed it from a safety feature into a single point of failure. If MCAS malfunctioned, pilots couldn’t see the malfunction. They had no training on how to recognize it. And the system had enough authority over the aircraft’s control surfaces that pilots couldn’t easily override it.
Both crashes began with a single faulty angle-of-attack sensor feeding false data to MCAS (and to the stall-warning system, which set off the stick shaker). MCAS was operating exactly as designed—it was acting on bad data from one sensor, with no cross-check against the second vane on the other side of the nose. The failure wasn’t in the MCAS logic. It was in the assumption that pilots would understand the system and know how to disable it when something went wrong.
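A short, hypothetical sketch shows how thin the line was between safety feature and single point of failure. The software fix rolled out after the groundings reportedly compares the two angle-of-attack vanes and stands down when they disagree; the version below illustrates that idea only, with my own function name and a placeholder disagreement limit, and is not Boeing’s implementation.

```python
# Hypothetical illustration of a two-vane cross-check -- the kind of
# redundancy the original MCAS design lacked. The disagreement limit is a
# placeholder for illustration, not a certified parameter.

DISAGREEMENT_LIMIT_DEG = 5.5


def aoa_for_mcas(left_vane_deg: float, right_vane_deg: float) -> float | None:
    """Return a usable angle-of-attack value, or None if the vanes disagree.

    With a single sensor, one frozen or miscalibrated vane silently drives
    the whole system. With two sensors and a disagreement check, the same
    fault inhibits the system and can be annunciated to the crew instead of
    producing uncommanded nose-down trim.
    """
    if abs(left_vane_deg - right_vane_deg) > DISAGREEMENT_LIMIT_DEG:
        return None  # inhibit MCAS; raise an AOA-disagree style alert
    return (left_vane_deg + right_vane_deg) / 2.0


# A faulty left vane stuck at a high reading against a healthy right vane:
print(aoa_for_mcas(22.0, 4.0))  # None -> the system stands down
print(aoa_for_mcas(4.5, 4.0))   # 4.25 -> normal operation
```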
Perrow’s theory of “normal accidents” applies perfectly: in complex systems, component failures interact in unexpected ways. Boeing had added a layer of automation that seemed like an obvious safety improvement. But that layer created new failure modes that hadn’t been anticipated or designed for.
2: Crashes before regulators grounded the 737 MAX worldwide
Boeing continued delivering 737 MAX aircraft and denying knowledge of MCAS flaws for months after the first crash.
Trust, Vigilance, and the Illusion of Knowledge
The tragedy of the MCAS crashes is that the pilots did everything they were trained to do. They monitored the system. They responded to malfunction indications. They tried to diagnose the problem. They attempted corrective actions. But they were working with incomplete and false information. They didn’t know MCAS existed. They didn’t know how to recognize its failure. They didn’t know how to disable it.
In one sense, both crashes were “pilot error”—the pilots made decisions that, in retrospect, could have been different. But framing them as pilot error misses the deeper truth: the pilots were operating within a cognitive environment shaped by Boeing’s design decisions. Boeing had created a system where pilots would naturally trust an invisible safety feature more than their own judgment. Boeing had eliminated pilot training that might have helped them recognize the problem. Boeing had created an aircraft that appeared familiar but operated according to new rules.
Appropriate trust is trust that’s matched to understanding. The pilots of the Lion Air and Ethiopian Airlines flights had complete trust in the 737 MAX—because they thought they understood it. They didn’t. Their trust was misplaced not because they lacked skill but because Boeing had deliberately hidden from them the information necessary to make informed trust judgments.
The Design Assumption That Wasn’t Tested
Boeing’s fundamental assumption was: pilots will understand how MCAS works and will know how to disable it if something goes wrong. This assumption was never tested with pilots who didn’t know about MCAS. Boeing’s test pilots knew the system existed. Line pilots flying the aircraft didn’t. When MCAS malfunctioned, the gap between Boeing’s assumption and reality became lethal.
The investigation of the Ethiopian Airlines crash, led by Ethiopia’s accident investigation bureau with the U.S. NTSB participating, found that crew resource management was undermined by incomplete information. The pilots had never trained for an MCAS failure: Boeing hadn’t disclosed the system’s existence before the Lion Air crash, and no simulator training was required afterward. When the system malfunctioned, they were like surgeons operating on a piece of anatomy they didn’t know the body possessed.
What Happens When Automation Lies About Itself
The 737 MAX crashes represent a category of failure distinct from previous engineering disasters: failures caused not by visible flaws but by hidden systems. The Titanic’s watertight compartments were visible. The Challenger’s O-rings had been discussed in engineering meetings. The Tacoma Narrows Bridge’s oscillations were obvious. But MCAS operated in the shadows, invisible to the people whose lives depended on understanding it.
This represents a new phase of engineering risk. As systems become more automated, more invisible, and more complex, the gap between what pilots/operators understand and what the system is actually doing widens. That gap is where failures live.
346: Combined deaths from the two MCAS crashes
346 people died in aircraft where a safety system, operating exactly as designed, killed them because they couldn’t understand what was happening to their aircraft.
The solution isn’t to abandon automation. Modern aircraft are safer precisely because of sophisticated automated systems. The solution is to ensure that pilots understand the systems they’re relying on, that automation is transparent rather than hidden, and that people have meaningful ways to override systems when something doesn’t feel right.
The MCAS crashes killed people not because pilots made bad decisions, but because Boeing had structured the human-automation relationship in a way that made good decisions impossible. They hid the system. They eliminated training. They created an aircraft that demanded trust in something pilots couldn’t see.
The first rule of automation, Atul Gawande might say, is that it’s only safe if humans remain in control of their own understanding. Boeing violated that rule. And 346 people paid the price.
