The Intelligent Proxy - Part 2: The Attribution Gap: Who Bears the Weight of a Machine's Suggestion?
By Hisham Eltaher

This article is Part 2 of The Intelligent Proxy series.

In 2021, an architect using an AI-powered urban planning tool accepted its optimization for solar gain and traffic flow. The resulting building design, however, created a wind tunnel effect at street level, making the plaza below unusable for much of the year. The client sued for design failure. In the ensuing legal dispute, a novel question arose: could liability be shared with, or even transferred to, the AI software that proposed the flawed massing? The court’s answer, echoing the prevailing consensus in law and ethics, was a firm no. Responsibility remained entirely with the licensed human architect. The AI was deemed an instrument, a sophisticated calculator whose output required professional interpretation and validation. This case highlights the central, unresolved tension in human-AI collaboration: we increasingly rely on machines to generate ideas, but we insist that only humans can bear responsibility for them. This creates what philosophers of technology call the attribution gap—a disconnect between the source of an action and the bearer of its consequences.

The legal and ethical principle is clear: AI lacks moral agency. It has no consciousness, no intent, and no capacity for free will. As such, it cannot be held legally liable or be said to have “acted” in a morally meaningful way. Responsibility cascades to the human agents in the loop: the designer who used the tool, the firm that deployed it, or the company that developed it. They are the principals; the AI is merely their agent, or more accurately, their tool. This framework treats AI systems like power saws or calculators—objects that extend human capability but do not share in human accountability.

The Practical Quagmire of the “Many Hands Problem”

In practice, this clear principle collides with the messy reality of modern design processes. Attribution becomes fiendishly complex, giving rise to the “many hands problem.” Consider a faulty medical implant designed with AI assistance. Who is responsible? The surgeon who selected the final design? The biomedical engineer who tuned the AI’s parameters? The data scientist who trained the model on imperfect clinical data? The software company that sold the tool with a disclaimer? The chain of causation is long and opaque.

This complexity can create a responsibility vacuum. When an error emerges from the interplay of countless micro-decisions across a team and a software pipeline, it becomes easy for each contributor to argue they fulfilled their narrow duty. The surgeon trusted the engineer’s specs; the engineer trusted the AI’s optimization; the data scientist used the best available data. No single person feels wholly culpable, yet a harmful outcome occurred. This vacuum can erode professional diligence, encouraging a dangerous passivity where humans defer to the AI’s suggestion as a way to offload not just labor, but the cognitive burden of decision-making and its attendant risk.

From Instrument to Collaborator: A Shifting Psychological Burden

The legal doctrine may insist the AI is just a tool, but our psychological experience of working with it tells a different story. When a system generates compelling, novel options and explains its reasoning (however superficially), we naturally begin to relate to it as a collaborative partner. This shift is subtle but profound. We start to “trust” its judgment, to feel a sense of teamwork. This makes the subsequent attribution of sole responsibility to the human feel dissonant and unfair.

Studies on human-AI teaming show that joint performance surpasses that of either partner alone only when humans build an accurate mental model of the AI’s capabilities and limitations, a state known as calibrated trust. Without it, trust becomes miscalibrated: we either over-rely on the AI (automation bias) or underutilize it. The legal insistence on human responsibility therefore imposes a heavy epistemic burden on the designer. They must not only make the final choice but also maintain a vigilant, skeptical understanding of their silicon partner’s blind spots and failure modes, even as that partner becomes more sophisticated and its operations more inscrutable. This is the core paradox of the attribution gap: we demand human responsibility for system outputs that increasingly exceed any single human’s capacity to fully understand or audit. This gap becomes not just a legal concern, but a critical fault line when the stakes of design shift from aesthetics and efficiency to matters of life, safety, and profound social consequence.
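To make “calibrated trust” concrete, consider a minimal, purely hypothetical sketch in Python. It simulates a toy team facing binary decisions, where the AI is strong on routine cases but weak on edge cases; every number and label here is invented for illustration and is not drawn from the studies mentioned above.

    # Toy illustration of calibrated vs. miscalibrated trust in a
    # human-AI team. All accuracy figures are hypothetical.
    import random

    random.seed(42)

    # Assumed: the AI excels on "routine" cases but fails often on
    # "edge" cases; the human is moderately accurate on both.
    AI_ACC = {"routine": 0.95, "edge": 0.40}
    HUMAN_ACC = {"routine": 0.75, "edge": 0.75}

    def team_correct(case, policy):
        """Return True if the team's final decision is correct."""
        if policy == "over-rely":      # automation bias: always defer to the AI
            use_ai = True
        elif policy == "under-rely":   # never defer: the AI is ignored
            use_ai = False
        else:                          # calibrated: defer only where the AI is strong
            use_ai = (case == "routine")
        accuracy = AI_ACC[case] if use_ai else HUMAN_ACC[case]
        return random.random() < accuracy

    cases = [random.choice(["routine", "edge"]) for _ in range(100_000)]
    for policy in ("over-rely", "under-rely", "calibrated"):
        hits = sum(team_correct(c, policy) for c in cases)
        print(f"{policy:>10}: {hits / len(cases):.3f}")

With these invented numbers, always deferring lands near 0.675 (worse than the human’s 0.75 alone), never deferring merely matches the human, and the calibrated policy reaches roughly 0.85, beating both the human and the AI individually. The point of the sketch is only that trust must track the machine’s actual, case-dependent reliability, which is precisely the mental model the legal framework quietly demands of the designer.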
