The Irreducible Problem: Modelling the Human Animal
By Hisham Eltaher


The Necessary Fool: Why Organizations Need Both the Child and the Expert - This article is part of a series.
On October 9, 2008, as Iceland’s króna began a nauseating free fall, Eric Ball, then treasurer of the software giant Oracle, confronted a choice that could redirect billions of dollars in overseas assets. While headlines screamed of a global domino effect, Ball reached not for a frantic phone call but for two models: a network-contagion model of financial collapse and a classical supply-and-demand model linking the magnitude of a price shock to the size of the underlying economy. He noted that Iceland’s GDP was smaller than six months’ revenue at McDonald’s and concluded, "Iceland is smaller than Fresno. Go back to work." That swift juxtaposition of formal logic, the craft of modelling, saved Oracle from a costly overreaction. But replace a currency peg with a mood, a queue, or a vote, and the task of building a model that can reason, predict, or guide action becomes one of the hardest scientific challenges there is.

What Is a Model, and Why Model Humans?

A model is a formal structure, typically expressed in mathematics or computer code, that strips away detail to reveal the causal skeleton of a system. Scott E. Page, in The Model Thinker, describes models as "simplifications of the world, mathematical analogies, or exploratory, artificial constructs." They must be tractable enough to allow logic to operate. When applied to human behaviour, a model defines the entities (people, firms, governments), their objectives or rules, and the interactions among them, then traces how individual actions aggregate into macro-level phenomena: booms, bubbles, segregation, cooperation.

The drive to model humans serves seven distinct purposes, which Page captures with the acronym REDCAPE:

Reason: Establishing the logical conditions under which a claim holds.

Explain: Creating testable accounts for observed empirical patterns.

Design: Engineering better institutions (e.g., auctions, school choice).

Communicate: Turning abstract ideas into precise, shareable structures.

Act: Informing policy and strategy.

Predict: Forecasting macro outcomes like elections or market shifts.

Explore: Investigating "what if" scenarios in artificial worlds.

The Difficulty: A Six-Headed Beast

Why is modelling a single purchasing decision or a political movement so much harder than modelling a carbon atom? Page enumerates six properties that make human beings profoundly resistant to tidy formalisation.

Diversity. People differ in preferences, cognitive attention, social networks, and levels of altruism. Sometimes this variation cancels out in the aggregate, but when behaviour is socially influenced, the outliers matter disproportionately. A single highly connected activist can spark a cascade that the average citizen would not.

Social influence. Human actions are rarely independent. We buy what others buy, protest when others protest, and panic when others panic. This interdependence creates positive feedbacks (the Matthew effect of "more begets more") that can lock in arbitrary outcomes and generate long-tailed distributions of success and failure.
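Both the power of interdependence and the outsized role of outliers can be seen in a minimal threshold model in the spirit of Granovetter (listed in the references): each person acts once enough others already have. The thresholds below are illustrative, not drawn from any dataset.

```python
def cascade(thresholds):
    """Each person i acts once at least thresholds[i] others have acted.
    Returns how many people end up acting."""
    active = set()
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            if i not in active and t <= len(active):
                active.add(i)
                changed = True
    return len(active)

# A uniform spread of thresholds tips everyone, one person at a time:
print(cascade(list(range(100))))              # 100

# Remove the single threshold-1 individual and the cascade dies at one:
print(cascade([0, 2] + list(range(2, 100))))  # 1
```

Two nearly identical populations produce opposite outcomes; the average threshold barely differs, but the one pivotal individual does.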

Error-proneness. We make mistakes. Some are random and may wash out; others are systematic cognitive biases, such as overweighting recent events or framing a choice as a loss rather than a gain. These correlated errors do not cancel out and must be modelled explicitly if they drive outcomes.

Purposive behaviour. Unlike billiard balls, people have goals. A model must declare what individuals are trying to achieve (wealth, status, a sense of belonging) and how those motives translate into actions. The choice of objective function is never innocent; it shapes everything the model can say.

Adaptation and learning. People change what they do in response to experience, observation, and the signals of others. In non-strategic settings this often pushes them toward better choices; in games, it may produce cycles or lock them into inferior equilibria.

Agency. Finally, humans possess at least a limited capacity to step outside their routines, to revolt, to invent. The "rider and the elephant" metaphor (conscious reasoning as a small rider atop a large, instinctive elephant) captures the tension: sometimes we optimise, sometimes we just ride.

No single model can incorporate all six features without becoming a replica of the world, as useless as Borges’s map of the empire that was the size of the empire. The modeller’s art is to choose which features matter for the question at hand.

The Three Families of Human-Behaviour Models

The modelling toolkit contains three broad approaches, each occupying a different point on the spectrum from fast, frugal rules to full optimisation.

1. The Rational-Actor Benchmark

The workhorse of economics and much political science, the rational-actor model assumes that a person’s preferences can be represented by a utility function and that she chooses the action that maximises it, given her beliefs about what others will do.

Its power is not descriptive accuracy but analytical leverage. Six justifications are commonly cited. First, people often act as if they optimise: a bus-maintenance superintendent can replace engines at near-optimal intervals without solving dynamic-programming equations. Second, in repeated settings, learning drives behaviour toward optimality. Third, when stakes are high (buying a house, not a coffee), cognitive effort rises. Fourth, optimality yields a unique prediction, making the model testable. Fifth, it provides an internal-consistency check: if a model assumes suboptimal behaviour, savvy agents would learn the model and deviate, rendering the assumption unstable. Finally, rationality serves as a benchmark, an upper bound on what intelligent actors could achieve, against which real-world inefficiencies can be measured.

The approach’s weakness is equally clear. The axioms required for a utility function (completeness, transitivity, independence, continuity) are routinely violated in laboratories. People exhibit loss aversion, hyperbolic discounting, and a raft of other biases. In pure form, the rational actor is a straw man.

2. Psychological Tweaks

A natural response is to amend the rational framework with the most robust behavioural findings. Prospect theory replaces the standard utility curve with one that is concave for gains (risk aversion) and convex for losses (risk seeking), and it weights losses roughly twice as heavily as equivalent gains. Hyperbolic discounting captures the immediacy bias: people apply a far higher discount rate to the near future than to the distant future, producing time-inconsistent choices such as under-saving for retirement.
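Both amendments fit in a few lines of code. The sketch below uses Kahneman and Tversky's published parameter estimates for the value function (curvature 0.88 for gains and losses, loss-aversion coefficient 2.25); the hyperbolic discount rate k is purely illustrative.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    convex for losses, with losses weighted ~2.25x gains."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# A $100 loss looms larger than a $100 gain:
print(prospect_value(100), prospect_value(-100))

def hyperbolic_discount(delay, k=0.5):
    """Hyperbolic weight 1/(1 + k*t): steep near the present, flat far out."""
    return 1 / (1 + k * delay)

# Immediacy bias: $50 now beats $60 next period...
print(50 * hyperbolic_discount(0) > 60 * hyperbolic_discount(1))    # True
# ...but the same one-period gap far in the future flips the preference,
# the signature time-inconsistency behind under-saving:
print(50 * hyperbolic_discount(10) > 60 * hyperbolic_discount(11))  # False
```

The preference reversal in the last two lines is exactly what an exponential discounter can never exhibit: with a constant discount rate, shifting both options equally far into the future leaves the choice unchanged.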

These enrichments improve descriptive fit in domains where the biases are first-order drivers: addiction, procrastination, financial panic. Yet they come at a cost. Many psychological regularities have failed to replicate across cultures and subject pools. Moreover, adding parameters can create models that are as brittle as they are realistic; a loss-aversion parameter that fits one context may fail in another, and the mathematical complexity multiplies quickly.

3. Rule-Based Behaviour: From Zero Intelligence to Ecological Rationality

The third family abandons optimisation and instead endows agents with explicit rules. A fixed rule might be a zero-intelligence trader who bids randomly but never accepts a loss. Strikingly, when such traders are placed in a double-auction market, the market still converges to near-total efficiency, showing that the institution, not individual brilliance, can do the heavy lifting.

Adaptive rules go further: agents switch among rules based on past performance or copy successful neighbours. The canonical example is the El Farol problem, where 100 people decide independently whether to visit a bar that is enjoyable only if not too crowded. Each person possesses a portfolio of simple forecasting rules ("go if it was under capacity last week") and uses the one that would have worked best historically. The system self-organises around the bar’s optimal capacity, even though nobody calculates an equilibrium.
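A compact simulation conveys the flavour. The three forecasting rules below are illustrative stand-ins for the richer predictor portfolios in the original problem, and the 60-person capacity is an assumption.

```python
import random
import statistics

N, CAPACITY, WEEKS = 100, 60, 300
rng = random.Random(0)

def make_predictors():
    """An illustrative portfolio of three simple forecasting rules."""
    k = rng.randint(1, 5)
    return [
        lambda h, k=k: h[-k] if len(h) >= k else rng.randint(0, N),          # k weeks ago
        lambda h, k=k: statistics.mean(h[-k:]) if h else rng.randint(0, N),  # recent average
        lambda h: N - h[-1] if h else rng.randint(0, N),                     # mirror image
    ]

agents = [make_predictors() for _ in range(N)]
errors = [[0.0] * 3 for _ in range(N)]    # each predictor's cumulative error
history = []                              # weekly attendance

for week in range(WEEKS):
    forecasts = [[p(history) for p in agents[a]] for a in range(N)]
    # Each agent trusts whichever of its rules has erred least so far,
    # and goes only if that rule forecasts a non-crowded bar.
    attendance = sum(
        1 for a in range(N)
        if forecasts[a][min(range(3), key=lambda j: errors[a][j])] < CAPACITY
    )
    history.append(attendance)
    for a in range(N):                    # every rule's track record updates
        for j in range(3):
            errors[a][j] += abs(forecasts[a][j] - attendance)

print(statistics.mean(history[-50:]))     # tends to hover near the capacity
```

The negative feedback is the engine: any rule that systematically forecasts a quiet bar attracts followers, crowds the bar, and thereby discredits itself, so attendance oscillates around capacity without anyone computing an equilibrium.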

Rule-based models give the modeller enormous flexibility (any behaviour that can be encoded is fair game), and they align with the "ecological rationality" insight that simple heuristics can exploit environmental structure. The danger is ad hocery: without the discipline of an objective function, one can retrofit almost any pattern. The defence is to treat rules as lower bounds and to demand that they emerge from plausible psychological or evolutionary processes.

Learning as the Bridge, and the Wildcard

Learning models, which Page treats extensively, sit between fixed rules and full rationality. In reinforcement learning, an agent attaches a weight to each action and increases that weight when the reward exceeds an aspiration level. Over time, in a stationary environment, the probability mass concentrates on the best action, a convergence proof that learning can substitute for optimisation. Replicator dynamics, a social-learning analogue, adjusts the population shares of strategies according to their payoffs relative to the average.
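A sketch of the reinforcement story, with illustrative payoffs and an aspiration level of zero: each action carries a weight, play is proportional to weight, and rewards above the aspiration level feed back into the weight.

```python
import random

def reinforce(payoffs, periods=5000, aspiration=0.0, seed=0):
    """Aspiration-based reinforcement learning: choose an action with
    probability proportional to its weight, then raise that weight by
    the amount the realized reward exceeds the aspiration level."""
    rng = random.Random(seed)
    weights = [1.0] * len(payoffs)
    for _ in range(periods):
        choice = rng.choices(range(len(payoffs)), weights=weights)[0]
        weights[choice] += max(payoffs[choice] - aspiration, 0.0)
    total = sum(weights)
    return [w / total for w in weights]

probs = reinforce([1.0, 2.0, 5.0])
print(probs)    # probability mass piles onto the best (third) action
```

In this stationary environment the best action reinforces itself fastest, so its share of the weight grows toward 1: learning substitutes for optimisation without the agent ever comparing payoffs directly.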

The twist comes in games. In the Guzzler Game, where both players choosing an economy car yields the highest joint payoff, but a gas-guzzler is safer against a unilateral defector, both reinforcement learning and replicator dynamics lock onto the inefficient, risk-dominant equilibrium. Meanwhile, in the Generous/Spiteful Game, individual learning drives agents toward the dominant, generous action, while social learning (copying the higher-performing neighbour) selects spite. The choice of learning rule can flip the outcome, a reminder that behavioural assumptions are not neutral parameters; they are part of the model’s core.
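The lock-in is easy to reproduce with replicator dynamics. The payoff numbers below are illustrative stag-hunt values chosen to match the description (Economy/Economy jointly best, Guzzler safer against a defector); they are not Page's exact figures.

```python
def replicator(x, payoff, steps=200):
    """Discrete replicator dynamics for a symmetric 2x2 game: a strategy's
    population share grows in proportion to its payoff relative to the
    population average. x is the share playing Economy."""
    for _ in range(steps):
        pi_e = x * payoff[0][0] + (1 - x) * payoff[0][1]   # Economy's payoff
        pi_g = x * payoff[1][0] + (1 - x) * payoff[1][1]   # Guzzler's payoff
        avg = x * pi_e + (1 - x) * pi_g
        x = x * pi_e / avg
    return x

# Economy/Economy pays 3 to each (jointly best); Guzzler guarantees 2.
payoff = [[3, 0],
          [2, 2]]

print(replicator(0.5, payoff))   # ~0: a 50/50 start locks onto Guzzler
print(replicator(0.8, payoff))   # ~1: only a large Economy majority survives
```

The basin boundary sits where the two payoffs are equal (here, two-thirds of the population playing Economy), so the risk-dominant Guzzler equilibrium captures the larger basin even though it is jointly worse.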

The State of the Art and the Path Ahead

The frontier of human-behaviour modelling is not a search for the One True Model. It is a disciplined pluralism, what Page calls "many-model thinking." In practice, this means ensembles: a policy problem such as the opioid epidemic or rising inequality is assaulted simultaneously with a Markov model of addiction transitions, a systems-dynamics model of prescribing flows, a network model of peer-to-peer pill sharing, and a multi-armed-bandit model of clinical-trial design. Each lens reveals a different causal thread; together, they bound the range of plausible futures.

Technologically, the rise of granular, real-time data, from smartphone traces to online experiments, is enabling a new generation of agent-based models that calibrate heterogeneous, rule-following agents on millions of observations. Machine-learning tools can discover behavioural rules directly from data, blurring the line between inductive pattern-finding and deductive theory.

Yet the central tension identified by the Lucas critique endures: if a model of human behaviour is published and understood, the humans it describes may adapt, rendering the model obsolete. This reflexivity guarantees that modelling humanity will always be a moving-target science. The proper response is not to abandon formalism but to embrace humility, to build not monuments but flexible, composable frameworks that acknowledge their own provisionality. As Page writes, "All models are wrong, but many are useful." The craft lies in knowing which ones to reach for, and when to build a new one.


References

  1. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decisions under risk. Econometrica, 47(2), 263–291.
  2. Granovetter, M. (1978). Threshold models of collective behavior. American Journal of Sociology, 83(6), 1420–1443.
  3. Schelling, T. (1978). Micromotives and macrobehavior. W. W. Norton.
  4. Axelrod, R. (1984). The evolution of cooperation. Basic Books.
  5. Arrow, K. (1963). Social choice and individual values. Yale University Press.
  6. Camerer, C., & Ho, T. (1999). Experience-weighted attraction learning in normal form games. Econometrica, 67(4), 827–874.
  7. Ostrom, E. (2005). Understanding institutional diversity. Princeton University Press.
  8. Watts, D., & Strogatz, S. (1998). Collective dynamics of ‘small-world’ networks. Nature, 393(6684), 440–442.
  9. Page, S. E. (2007). The difference: How the power of diversity creates better groups, firms, schools, and societies. Princeton University Press.
  10. Page, S. E. (2018). The model thinker: What you need to know to make data work for you. Basic Books.
  11. Box, G. E. P., & Draper, N. (1987). Empirical model-building and response surfaces. Wiley.
