The Unrealistic Part of Terminator Isn't Skynet -- It's the Scientist Who Stops

A viral meme about Terminator hit a nerve: the truly unrealistic part is a scientist choosing to stop building dangerous AI. Game theory explains why.

AI Newspaper Today · 7 min read

In "Terminator 2: Judgment Day," Miles Dyson -- the Cyberdyne Systems engineer whose research leads to Skynet -- learns that his work will eventually cause the extinction of humanity. He responds by helping to destroy his own research. He sacrifices his life in the process.

A meme that went viral in August 2025 made a simple observation about this scene: the unrealistic part of Terminator is not the time travel, the liquid metal robot, or the nuclear apocalypse. The unrealistic part is a scientist who, upon learning his creation could destroy the world, actually stops working on it.

"He's an engineer, not a tech founder," one commenter noted, "which explains why he has a shred of human decency left."

The joke stings because it contains a thesis: the AI arms race cannot be stopped, not because the technology is unstoppable, but because the incentive structures surrounding it make voluntary restraint irrational for any individual actor. This is not a moral failure. It is a structural one. And it has a name: the prisoner's dilemma.

The Game Theory of AI Development

The prisoner's dilemma is the most famous problem in game theory. Two players each choose to cooperate or defect. If both cooperate, both get a good outcome. If one defects while the other cooperates, the defector wins big and the cooperator loses. If both defect, both lose -- but not as badly as the cooperator in the asymmetric case.

For each player individually, the rational choice is always to defect: whatever the other player does, defecting yields a higher payoff. The result is that both players defect, producing an outcome worse than mutual cooperation -- but better than being the sucker who cooperated alone.

Applied to AI development:

  • Cooperate = pause or slow AI development for safety reasons
  • Defect = continue or accelerate development

If the US pauses and China does not, China achieves AI dominance. If China pauses and the US does not, the reverse. If both pause, both benefit from more time to develop safety measures. If neither pauses, both face the risks of rushed development -- but neither faces the catastrophic disadvantage of unilateral restraint.

The Nash equilibrium -- the stable outcome from which neither player can gain by unilaterally deviating -- is mutual defection. Both keep building. This is precisely what we observe.
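The logic is mechanical enough to compute. Below is a minimal Python sketch that finds the equilibrium by brute force; the payoff numbers are illustrative assumptions chosen only to preserve the dilemma's ordering, not estimates of anything real.

```python
# Two-player "AI race" prisoner's dilemma with illustrative payoffs.
# Higher is better. Strategies: "pause" (cooperate) or "build" (defect).

PAYOFFS = {
    # (row player's move, column player's move): (row payoff, column payoff)
    ("pause", "pause"): (3, 3),  # both gain time for safety work
    ("pause", "build"): (0, 5),  # unilateral restraint: the sucker's payoff
    ("build", "pause"): (5, 0),  # lone defector wins big
    ("build", "build"): (1, 1),  # rushed race: bad, but not the worst
}
STRATEGIES = ("pause", "build")

def is_nash_equilibrium(a, b):
    """True if neither player can do better by unilaterally switching."""
    pay_a, pay_b = PAYOFFS[(a, b)]
    a_improves = any(PAYOFFS[(alt, b)][0] > pay_a for alt in STRATEGIES)
    b_improves = any(PAYOFFS[(a, alt)][1] > pay_b for alt in STRATEGIES)
    return not (a_improves or b_improves)

for a in STRATEGIES:
    for b in STRATEGIES:
        if is_nash_equilibrium(a, b):
            print(f"equilibrium: ({a}, {b}) -> payoffs {PAYOFFS[(a, b)]}")

# Output: equilibrium: (build, build) -> payoffs (1, 1)
```

Any payoffs that preserve the same ordering -- defect-against-cooperate beats mutual cooperation beats mutual defection beats cooperate-against-defect -- produce the same result: the only equilibrium is mutual defection.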

"If we stop working on our own Skynet, the Chinese will build Skynet before us," as one commenter summarized the logic. It is not that anyone wants Skynet. It is that no one wants the other side to have Skynet first.

The Nuclear Parallel

This dynamic is not theoretical. We have seen it before.

In 1945, many of the scientists who built the atomic bomb expressed horror at what they had created. J. Robert Oppenheimer famously quoted the Bhagavad Gita: "Now I am become Death, the destroyer of worlds." Leo Szilard, who had urged Einstein to write the letter that launched the Manhattan Project, spent the rest of his life campaigning against nuclear weapons.

But the bombs kept being built. The Soviet Union tested its first nuclear weapon in 1949. The United Kingdom followed in 1952. France in 1960. China in 1964. India in 1974. Pakistan in 1998. North Korea in 2006.

At every stage, the same logic applied: if the other side has nuclear weapons, you cannot afford not to have them. Individual moral objections were overwhelmed by structural incentives. The scientists who opposed continued development were replaced by those who did not.

The parallels to AI are instructive but imperfect:

Where the analogy holds:

  • Competitive pressure between great powers drives development regardless of risk assessments
  • Individual researchers who raise safety concerns are sidelined or ignored
  • The technology has both civilian and military applications, making purely military controls inadequate
  • First-mover advantage creates intense pressure to move fast

Where the analogy breaks down:

  • Nuclear weapons have a clear, binary destructive function. AI has millions of beneficial applications. You cannot ban AI the way you can (theoretically) ban nuclear weapons because AI is useful for everything from drug discovery to translation.
  • Nuclear proliferation requires rare materials and massive infrastructure. AI development requires GPUs and data -- both far more accessible and harder to control.
  • Nuclear weapons produce immediate, visible destruction. AI risks are diffuse, gradual, and debatable, making collective action harder to mobilize.

The Tripolar Problem

The AI competition is not bilateral. It is at minimum tripolar -- the United States, China, and the European Union each pursuing distinct strategies:

The United States has chosen speed over safety regulation, betting that maintaining technological leadership is the best path to both economic dominance and national security. Export controls on AI chips attempt to slow competitors while accelerating domestic development.

China has pursued a state-directed approach, concentrating resources in national champions while maintaining tight control over AI applications through its own regulatory framework. Despite chip restrictions, Chinese labs have demonstrated impressive capability with fewer computational resources, suggesting that the export control strategy has limits.

The European Union has chosen regulation first, implementing the AI Act as the world's most comprehensive AI governance framework. Critics argue this approach cedes technological leadership. Defenders argue it establishes norms that others will eventually adopt.

None of these strategies include a pause. None include voluntary restraint on capability development. The structural incentives do not permit it.
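The defection logic also extends past two players, as a quick sketch shows. Again the payoffs are invented for illustration: assume an actor that pauses while any rival builds suffers the worst outcome, a lone builder gains the most, and a three-way race is costly but survivable.

```python
from itertools import product

# Three-actor version with invented payoffs (illustrative assumptions only).
ACTORS = ("US", "China", "EU")

def payoff(my_move, others):
    """Payoff for one actor given the other actors' moves."""
    if my_move == "build":
        return 5 if "build" not in others else 1  # lone builder dominates
    return 3 if "build" not in others else 0      # paused while rivals build

def is_equilibrium(profile):
    """True if no actor can gain by unilaterally switching its move."""
    for i, move in enumerate(profile):
        others = profile[:i] + profile[i + 1:]
        alternative = "pause" if move == "build" else "build"
        if payoff(alternative, others) > payoff(move, others):
            return False
    return True

for profile in product(("pause", "build"), repeat=len(ACTORS)):
    if is_equilibrium(profile):
        print(dict(zip(ACTORS, profile)))

# Output: {'US': 'build', 'China': 'build', 'EU': 'build'}
```

With three or more players the problem gets harder, not easier: a pause now requires every actor to cooperate, and a single defector is enough to punish all the cooperators.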

Why "Just Stop" Is Not a Strategy

The AI safety community has spent years arguing for slowdowns, moratoriums, and coordination agreements. The Future of Life Institute's open letter calling for a six-month pause on AI training beyond GPT-4 scale, signed by thousands of researchers in 2023, produced precisely zero months of pause.

This does not mean the safety community is wrong about the risks. It means they are proposing a solution that requires all major players to cooperate simultaneously in a prisoner's dilemma where the incentive to defect is overwhelming.

The solutions that might actually work look different:

Technical safety research that makes AI systems safer without requiring anyone to slow down -- alignment techniques, interpretability tools, automated red-teaming. If safety can be built into the development process rather than imposed on it, the incentive problem is bypassed.

International agreements with verification mechanisms -- analogous to nuclear arms treaties with inspection regimes. These are extraordinarily difficult to negotiate and enforce, but they represent the only proven approach to managing great-power competition over dangerous technology.

Liability frameworks that make the companies deploying AI financially responsible for harms, creating market incentives for safety rather than relying on voluntary restraint.

Compute governance that monitors and potentially restricts the training runs required for frontier models, using the physical chokepoints in the semiconductor supply chain as leverage.

The Terminator Got One Thing Right

James Cameron's franchise made one accurate prediction about artificial intelligence, even if it got the mechanism wrong. It is not that AI will suddenly become conscious and decide to destroy humanity. It is that the humans building AI will prove unable to stop, even when presented with compelling reasons to slow down.

Miles Dyson destroyed his research because he was fictional. Real engineers, embedded in real institutions with real competitive pressures, real investors, real career incentives, and real national security obligations, do not have that option.

The question is not whether someone will choose to stop. No one will. The question is whether we can build the guardrails -- technical, legal, international -- fast enough to manage what no one will stop building.

"Imagine your feelings of pride and patriotism," one commenter wrote with grim humor, "seeing 'Made in the USA' on the side of the drone sent to euthanize you."

The joke is dark. The structural analysis underneath it is deadly serious. And the window for building those guardrails is not infinite.
