The Adaptive Archive: Part 2—Failure as Information: What Ecosystems Do With Error
By Hisham Eltaher

The Forest That Incorporates Fire

On September 7, 2020, the Creek Fire in California produced what meteorologists call a “fire-generated thunderstorm.” The blaze, which would ultimately consume 379,895 acres, generated enough heat to build a pyrocumulonimbus cloud roughly 50,000 feet tall. To human observers, this was catastrophic failure. To the forest ecosystem, fire is not failure but feedback. Lodgepole pine cones remain sealed in resin until temperatures reach about 113°F (45°C), releasing seeds only after flames have cleared the forest floor. Chaparral plants carry flammable oils that encourage fire; their root crowns resprout in the nutrient-rich ash left behind.

This difference in error handling reveals a deeper divergence. Human engineering often follows what risk analyst Nassim Taleb derides as the “fragilista” mindset: systems designed to avoid every failure become increasingly vulnerable to catastrophic collapse. Biological systems follow ecologist C.S. Holling’s “adaptive cycle” of growth, conservation, release, and reorganization. Failure—through fire, flood, or disease—isn’t an aberration but a phase, supplying the information that reshapes the system.

The distinction matters profoundly. Between 2000 and 2020, the global economic cost of natural disasters rose from $1.16 trillion to $2.97 trillion, less because disasters became more frequent than because our systems became more exposed and more fragile. Biological systems, meanwhile, have persisted through five mass extinction events, reorganizing after each. This installment explores how ecosystems treat error not as something to be eliminated, but as essential information for adaptation.

The Ecology of Error Processing

Biological systems maintain what resilience theorist Brian Walker calls “redundancy of function”—multiple components capable of performing the same task. In tropical rainforests, multiple pollinator species visit the same flowers. If one disappears, others compensate. The human body exemplifies this: two kidneys, redundant circulatory pathways, overlapping immune responses. No single point of failure exists because failure points are distributed.
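As a rough software analogy, redundancy of function looks like a fallback chain: several independent components can satisfy the same request, and the caller fails only if all of them do. The provider names and failure rates below are hypothetical, purely for illustration.

```python
import random

def provider_a(task):
    # Hypothetical primary provider; fails randomly to simulate disturbance.
    if random.random() < 0.3:
        raise RuntimeError("provider_a unavailable")
    return f"{task} handled by provider_a"

def provider_b(task):
    if random.random() < 0.3:
        raise RuntimeError("provider_b unavailable")
    return f"{task} handled by provider_b"

def provider_c(task):
    return f"{task} handled by provider_c"

def handle(task, providers=(provider_a, provider_b, provider_c)):
    """Try each redundant provider in turn; the request fails only if all do."""
    for provider in providers:
        try:
            return provider(task)
        except RuntimeError:
            continue  # this "pollinator" is gone; another compensates
    raise RuntimeError("no provider available")

print(handle("pollinate"))
```

The cost is duplicated capacity; the benefit is that no single provider is a single point of failure.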

Human engineering typically treats redundancy as inefficiency to be eliminated. Modern supply chains run just-in-time inventory. Healthcare systems optimize for bed occupancy above 85%. Power grids maintain minimal reserve margins. Each efficiency gain increases vulnerability. When COVID-19 struck, hospitals optimized for high occupancy had no surge capacity, and lean manufacturing lines stalled for want of buffer stock.

A more sophisticated biological strategy is degeneracy—structurally different elements performing the same function. Neuroscientist Gerald Edelman first described this in 1978. In the immune system, different antibody configurations recognize the same pathogen. In metabolism, different enzyme pathways produce the same energy molecules. Degeneracy provides robustness against varied threats: if one pathway is blocked, alternatives function.

Human systems increasingly exhibit degeneracy without recognizing its value. The internet routes information through multiple protocols. Cryptocurrencies use different consensus mechanisms. Yet these are often criticized as “inefficient” rather than celebrated as robust. Financial regulators historically pushed for standardization that created systemic vulnerability—when one bank’s model failed, all failed similarly.

Some organizations intentionally design for degeneracy. Singapore’s water system draws on four different sources: imported water, desalination, recycled NEWater, and catchment reservoirs. Each relies on different technologies and carries different vulnerabilities. During droughts, desalination scales up; during energy shortages, catchment systems dominate. The system costs more than a single-source approach but stays secure against diverse disruptions.
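A minimal sketch of that idea, with invented capacities and dispatch rules rather than Singapore’s actual policy: structurally different sources meet the same demand, and which one dominates depends on current conditions.

```python
def plan_supply(demand, drought=False, energy_shortage=False):
    """Allocate demand across structurally different water sources (illustrative only)."""
    # Capacities in arbitrary units; each source has different constraints.
    capacity = {"imported": 30, "catchment": 40, "newater": 30, "desalination": 40}
    if drought:
        capacity["catchment"] = 10        # reservoirs run low
        capacity["desalination"] = 60     # energy-intensive source scales up
    if energy_shortage:
        capacity["desalination"] = 10     # too costly to run flat out
        capacity["catchment"] = 60        # gravity-fed sources dominate

    plan, remaining = {}, demand
    for source, cap in sorted(capacity.items(), key=lambda kv: -kv[1]):
        take = min(cap, remaining)
        plan[source] = take
        remaining -= take
    return plan, remaining  # remaining > 0 means unmet demand

print(plan_supply(100, drought=True))
print(plan_supply(100, energy_shortage=True))
```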

Error as Evolutionary Catalyst

Evolution’s central mechanism depends on error. DNA replication incorporates roughly one error per 10⁷ to 10⁹ base pairs copied, a rate that looks like a defect until one recognizes its necessity. For a human genome of 3.2 billion base pairs, that works out to roughly 3 to 320 new mutations per genome copied. Most are neutral, some harmful, a few beneficial. Population geneticist Motoo Kimura’s neutral theory, later refined by Tomoko Ohta into the “nearly neutral theory,” describes the balance between mutation pressure and selective constraint that maintains this diversity.
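The arithmetic behind that range is easy to check directly, using the genome size and error rates quoted above:

```python
genome_size = 3.2e9          # base pairs in the human genome
error_rates = [1e-7, 1e-9]   # errors per base pair copied, upper and lower bounds

for rate in error_rates:
    print(f"error rate {rate:g}: ~{genome_size * rate:.0f} new mutations per copy")
# error rate 1e-07: ~320 new mutations per copy
# error rate 1e-09: ~3 new mutations per copy
```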

Human systems often pursue zero-error states. Six Sigma aims for 3.4 defects per million opportunities. Aviation targets “zero accidents.” Admirable as these goals are, they can create fragility by eliminating the variation needed for adaptation. The 2009 crash of Air France Flight 447 revealed the paradox: as automation made flying statistically safer, pilots’ manual-handling skills atrophied. When the autopilot disengaged after the pitot tubes iced over, the crew mishandled a recoverable situation and stalled the aircraft, making basic errors that a less automated cockpit, by keeping manual skills in practice, might have prevented.

Some industries recognize the value of controlled error. Pharmaceutical companies apply “evolutionary pressure” during antibiotic development, exposing bacteria to sub-lethal concentrations of a compound to see how quickly resistance emerges and to reveal weaknesses early. Tech companies practice “chaos engineering,” intentionally injecting failures to test resilience. Netflix’s Simian Army randomly terminates servers to ensure applications can tolerate outages.
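In the same spirit as chaos engineering, an experiment can kill a random replica and verify that the remaining ones still serve traffic. This is a toy in-process illustration, not Netflix’s actual tooling.

```python
import random

class Server:
    def __init__(self, name):
        self.name, self.alive = name, True

    def handle(self, request):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def route(request, servers):
    """Naive load balancer: retry across replicas until one responds."""
    for server in random.sample(servers, len(servers)):
        try:
            return server.handle(request)
        except ConnectionError:
            continue
    raise RuntimeError("total outage")

servers = [Server(f"web-{i}") for i in range(3)]
random.choice(servers).alive = False   # chaos step: terminate one replica at random
print(route("GET /health", servers))   # the system should still answer
```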

The challenge lies in distinguishing productive from destructive errors. Biological systems achieve this through “population thinking”: variation at population level allows testing multiple solutions simultaneously. Some individuals fail, but the population learns. Human organizations struggle because failure carries stigma. However, some adapt. Eli Lilly holds “failure parties” for instructive clinical trial failures. SpaceX’s iterative rocket testing follows biological logic: test cheaply, fail early, learn quickly.
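Population thinking can be sketched as a toy evolutionary search: many cheap candidates are tried in parallel, most of them fail, and the next generation is built from the few that worked. The fitness function below is arbitrary, chosen only to show the loop.

```python
import random

def fitness(design):
    # Hypothetical score: closer to the (unknown) optimum of 0.7 is better.
    return -abs(design - 0.7)

population = [random.random() for _ in range(20)]        # many cheap experiments at once
for generation in range(50):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[:5]                               # most candidates "fail"
    population = [min(1.0, max(0.0, s + random.gauss(0, 0.05)))
                  for s in survivors for _ in range(4)]  # vary the survivors

best = max(population, key=fitness)
print(f"best design after 50 generations: {best:.3f}")
```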

Designing for Informative Failure

Children’s sandboxes offer a design metaphor: bounded spaces where experimentation has limited consequences. Biological systems create similar sandboxes through modularity. Cells compartmentalize reactions in organelles. Bodies localize infections through inflammation. Ecosystems contain disturbances through landscape features such as rivers and ridgelines that act as natural firebreaks. Failure in one module doesn’t necessarily propagate system-wide.

Human systems can design similar modularity. The Internet’s original design separated network layers so failures at one layer don’t collapse others. Singapore’s urban planning creates self-contained towns reducing transportation dependency. Microservice software architecture breaks applications into independent services that can fail individually.

The key is designing interfaces that allow interaction without tight coupling. Biological membranes achieve this through selective permeability. Human analogs include APIs that standardize communication between software components, or supply-chain “firebreaks” that limit contagion during disruptions. The 2011 Thailand floods demonstrated the danger of tight coupling: because hard-disk-drive manufacturing was concentrated in one region, shortages rippled through the global electronics industry.
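A software firebreak between modules is often implemented as a circuit breaker: a boundary that passes calls through in normal conditions but opens after repeated failures, so a failing service cannot keep dragging its callers down with it. The sketch below is a bare-bones illustration of the pattern, not a production library.

```python
import time

class CircuitBreaker:
    """Selective permeability for service calls: open the boundary after repeated failures."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures, self.reset_after = max_failures, reset_after
        self.failures, self.opened_at = 0, None

    def call(self, func, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.failures, self.opened_at = 0, None    # half-open: try again
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()       # contain the disturbance
            raise
        self.failures = 0
        return result

def flaky_inventory_service(order_id):
    # Hypothetical downstream dependency that is currently failing.
    raise TimeoutError("inventory service not responding")

breaker = CircuitBreaker()
for attempt in range(5):
    try:
        breaker.call(flaky_inventory_service, attempt)
    except Exception as exc:
        print(f"attempt {attempt}: {exc}")
```

After the threshold is reached, callers fail fast instead of piling up behind the broken dependency, which is exactly what a firebreak is for.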

Biological feedback systems report actual conditions rather than desired ones. Blood sugar levels trigger insulin response whether convenient or not. Predator-prey cycles reflect actual population sizes. Tree growth responds to actual sunlight. This truthful feedback enables accurate adaptation.
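The blood-sugar example is an ordinary negative feedback loop: the response is driven by the measured state, not by a reported or hoped-for one. A toy version, with invented constants and units, looks like this:

```python
def regulate(glucose, setpoint=90.0, gain=0.2, steps=20):
    """Toy negative feedback loop: the response is driven by the actual measurement."""
    for _ in range(steps):
        error = glucose - setpoint        # truthful signal: measured value minus target
        insulin_effect = gain * max(error, 0.0)
        glucose -= insulin_effect         # insulin lowers glucose when it runs high
        glucose += 0.5                    # small constant inflow between corrections
    return glucose

print(f"glucose settles near {regulate(160.0):.1f}")   # drifts back toward the setpoint
```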

Human systems often distort feedback through what organizational psychologist Chris Argyris called “defensive routines.” Organizations shoot messengers bearing bad news. Markets discount future risks. Political systems prioritize short-term popularity. The result is what systems theorist John Sterman calls “policy resistance”: interventions that address symptoms while worsening the underlying problem, because the feedback guiding them was distorted.

Designing truthful feedback requires structural independence. Central banks maintain independence from political cycles. Environmental monitoring agencies report directly to the public. Whistleblower protections enable insiders to report problems. The Netherlands’ Delta Commission operates independently with a mandate to speak truth about sea level rise regardless of political convenience.

From Fragility to Anti-Fragility

Taleb goes further, proposing that systems shouldn’t merely resist shocks (resilience) but can improve from them (anti-fragility). Biological systems are inherently anti-fragile: muscles strengthen from micro-tears, immune systems improve from exposure, ecosystems diversify after disturbance. Our engineered systems often grow more fragile with age: metals fatigue, software accumulates technical debt, institutions develop bureaucratic sclerosis.

The 2020 pandemic exposed this divergence. Biological systems responded with rapid evolution: new viral variants, immune adaptation. Human systems struggled: healthcare systems were overwhelmed, supply chains broke. Yet some anti-fragile responses emerged. mRNA vaccine technology proved adaptable, telehealth expanded, local supply networks formed. These successes shared biological characteristics: modularity, redundancy, and the ability to learn quickly.

Designing for anti-fragility requires embracing what Taleb calls “optionality”: creating systems with multiple possible responses. Biological systems achieve this through biodiversity, behavioral flexibility, and phenotypic plasticity. Human analogs include maintaining multiple energy sources, developing adaptable workforce skills, and building modular infrastructure that can be reconfigured as needs change.

The Creek Fire’s pyrocumulonimbus cloud looked like apocalypse. But in burned areas today, green shoots emerge from blackened soil, nutrients cycle faster, forest structure becomes more varied. The system didn’t just survive; it transformed.
