By Eric Vandenbroeck and co-workers
War and Peace in the Age of Artificial
Intelligence
As artificial intelligence evolves from predefined narrow
applications to more capable general-purpose models, there is growing interest
in how this technology affects international security.
From the
recalibration of military strategy to the reconstitution of diplomacy,
artificial intelligence will become a key determinant of order in the world.
Immune to fear and favor, AI introduces a new possibility of objectivity in
strategic decision-making. But that objectivity, harnessed by both the
warfighter and the peacemaker, should preserve human subjectivity, which is
essential for the responsible exercise of force. AI in war
will illuminate the best and worst expressions of humanity. It will serve as the means both to wage war and to end it.
Humanity’s
long-standing struggle to constitute itself in ever-more complex arrangements,
so that no state gains absolute mastery over others, has achieved the status of
a continuous, uninterrupted law of nature. In a world where the major actors
are still human—even if equipped with AI to inform, consult, and advise
them—countries should still enjoy a degree of stability based on shared norms
of conduct, subject to the tunings and adjustments of time.
But if AI emerges as
a set of practically independent political, diplomatic, and military actors,
that emergence would force the exchange of the age-old balance
of power for a new, uncharted disequilibrium. The international concert of
nation-states—a tenuous and shifting equilibrium achieved in the last few
centuries—has held in part because of the inherent equality of the players. A
world of severe asymmetry—for instance, if some states
adopted AI at the highest level more readily than others—would be far less predictable.
Were humans to face off militarily or diplomatically against
a highly AI-enabled state, or against AI itself, they could struggle to
survive, much less compete. Such an intermediate order could witness an
internal implosion of societies and an uncontrollable explosion of external
conflicts.
Other possibilities
abound. Beyond seeking security, humans have long fought wars in pursuit of
triumph or in defense of honor. Machines—for now—lack any conception of either
triumph or honor. They might never go to war at all, choosing instead, for instance,
immediate and carefully apportioned transfers of territory based on complex calculations. Or they
might—prizing an outcome and deprioritizing individual lives—take actions that
spiral into bloody wars of human attrition. In one scenario, our species could
emerge so transformed as to avoid entirely the brutality of human conduct. In
another, we would become so subjugated by the technology
that it would drive us back to a barbaric past.
The AI Security Dilemma
Many countries are fixated
on how to “win the AI race.” In part, that drive is understandable. Culture,
history, communication, and perception have conspired to create among today’s
major powers a diplomatic situation that fosters insecurity and suspicion on
all sides. Leaders believe that an incremental tactical advantage could be
decisive in any future conflict, and that AI could offer just that advantage.
If each country
wished to maximize its position, then the conditions would be set for a
psychological contest among rival military forces and intelligence agencies, the likes of which humanity has never faced before.
An existential security dilemma awaits. The logical first wish for any human
actor coming into possession of superintelligent AI—that is, a hypothetical AI
more intelligent than a human—might be to attempt to guarantee that nobody else
gains this powerful version of the technology. Any such actor might also
reasonably assume by default that its rival, dogged by
the same uncertainties and facing the same stakes, would be pondering a similar
move.
Short of war, a
superintelligent AI could subvert, undermine, and block a competing program.
For instance, AI promises both to strengthen conventional computer viruses with
unprecedented potency and to disguise them thoroughly. Like the computer worm
Stuxnet—the cyberweapon uncovered in 2010 that was thought to have ruined a
fifth of Iran’s uranium centrifuges—an AI agent could sabotage a rival’s
progress in ways that obfuscate its presence, thereby forcing enemy scientists
to chase shadows. With its unique capacity for manipulation of weaknesses in
human psychology, an AI could also hijack a rival nation’s media, producing a
deluge of synthetic disinformation so alarming as to inspire mass opposition
against further progress in that country’s AI capacities.
It will be hard for
countries to get a clear sense of where they stand relative to others in the AI
race. Already the largest AI models are being trained on secure networks
disconnected from the rest of the internet. Some executives believe that AI
development will itself sooner or later migrate to
impenetrable bunkers whose supercomputers will be powered by nuclear
reactors. Data centers are even now being built on the ocean
floor. Soon they could be sequestered in orbit around Earth. Corporations or
countries might increasingly “go dark,” ceasing to publish AI research so as
not only to avoid enabling malicious actors but also to obscure their own pace
of development. To distort the true picture of their progress, others might
even try deliberately publishing misleading research, with AI assisting in the
creation of convincing fabrications.
There is a precedent
for such scientific subterfuge. In 1942, the Soviet physicist Georgy Flyorov
correctly inferred that the United States was building a nuclear bomb after he
noticed that the Americans and the British had suddenly stopped publishing scientific
papers on atomic fission. Today, such a contest would be made all the more unpredictable given the complexity and
ambiguity of measuring progress toward something so abstract as intelligence.
Although some see advantage as commensurate with the size of the AI models in
their possession, a larger model is not necessarily superior across all
contexts and may not always prevail over smaller models deployed at scale.
Smaller and more specialized AI machines might operate like a swarm of drones
against an aircraft carrier—unable to destroy it, but sufficient to neutralize
it.
An actor might be
perceived to have an overall advantage were it to demonstrate achievement in a
particular capability. The problem with this line of thinking, however, is that
AI refers merely to a process of machine learning that is embedded not in
any single technology but across a broad spectrum of technologies. Capability in
any one area may thus be driven by factors entirely different from capability
in another. In these senses, any “advantage” as ordinarily calculated may be
illusory.
Moreover, as
demonstrated by the exponential and unforeseen explosion of AI capability in
recent years, the trajectory of progress is neither linear nor predictable.
Even if one actor could be said to “lead” another by an approximate number of
years or months, a sudden technical or theoretical breakthrough in a key area
at a critical moment could invert the positions of all players.
In such a world,
where no leaders could trust their most solid intelligence, their most primal
instincts, or even the basis of reality itself, governments could not be blamed
for acting from a position of maximum paranoia and suspicion. Leaders are no doubt
already making decisions under the assumption that their endeavors are under
surveillance or harbor distortions created by malign influence. With
worst-case scenarios as the default, the strategic calculus of any actor at
the frontier would be to prioritize speed and secrecy over safety. Human leaders could be gripped
by the fear that there is no such thing as second place. Under pressure, they
might prematurely accelerate the deployment of AI as deterrence against
external disruption.
A New Paradigm of War
For almost all of human history, war has been fought in a defined space
in which one could know with reasonable certainty the capability and position
of hostile enemy forces. The combination of these two attributes offered each
side a sense of psychological security and consensus, allowing for the
informed restraint of lethality. Only when enlightened leaders were unified in
their basic understanding of how a war might be fought could opposing forces
determine whether a war should be fought.
Speed and mobility
have been among the most predictable factors underpinning the capability of any
given piece of military equipment. An early illustration is the development of
the cannon. For a millennium after their construction, the Theodosian Walls protected
the great city of Constantinople from outside invaders. Then, in 1452, a
Hungarian artillery engineer proposed to Emperor Constantine XI the
construction of a giant cannon that, firing from behind the defensive walls,
would pulverize attackers. But the complacent emperor, possessing neither the
material means nor the foresight to recognize the technology’s significance,
dismissed the proposal.
Unfortunately for
him, the Hungarian engineer turned out to be a mercenary. Switching tactics
(and sides), he updated his design to be more mobile—transportable by no fewer
than 60 oxen and 400 men—and approached the emperor’s rival, the Ottoman Sultan
Mehmed II, who was preparing to besiege the impregnable fortress. Winning the
young sultan’s interest with his claim that this gun could “shatter the walls
of Babylon itself,” the entrepreneurial Hungarian helped the Turkish forces
breach the ancient walls in only 53 days.
The contours of this
fifteenth-century drama can be seen again and again throughout history. In the
nineteenth century, speed and mobility transformed the fortunes first of
France, as Napoleon’s army overwhelmed Europe, and then of Prussia, under the
direction of Helmuth von Moltke (the Elder) and
Albrecht von Roon, who capitalized on the newly developed railways to enable
faster and more flexible maneuvering. Similarly, blitzkrieg—an evolution of the
same German military principles—would be used against the Allies in World War
II to great and terrible effect.
“Lightning war” has
taken on new meaning—and ubiquity—in the era of digital warfare. Speeds are
instantaneous. Attackers need not sacrifice lethality to sustain mobility, as
geography is no longer a constraint. Although that combination has largely
favored the offense in digital attacks, an AI era could increase the velocity
of response, allowing cyber defenses to match cyber offenses.
In kinetic warfare,
AI will provoke another leap forward. Drones, for instance, will be extremely
quick and unimaginably mobile. Once AI is deployed not only to guide one drone
but to direct fleets of them, clouds of drones will form and fly as a
single cohesive collective, perfect in their synchronicity. Future drone swarms
will dissolve and reconstitute themselves effortlessly in units of every size,
much as elite special-operations forces are built from scalable detachments,
each of which is capable of sovereign command.
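Such coordination is, at bottom, algorithmic. The following sketch is a minimal flocking toy in Python, purely illustrative: the classic cohesion, alignment, and separation rules, the gain values, and the one-dimensional positions are our assumptions, not a description of any fielded system. It shows how each drone can steer using only its neighbors' states, so that a unit of any size stays coherent without a central controller:

```python
# One-dimensional flocking toy. Every value here (the rule set, the 0.05
# and 0.1 gains, the neighborhood radius) is an illustrative assumption.
import random

def step(drones, dt=0.1):
    """Advance each (position, velocity) pair one tick using only local rules."""
    updated = []
    for i, (pos, vel) in enumerate(drones):
        others = [d for j, d in enumerate(drones) if j != i]
        center = sum(p for p, _ in others) / len(others)   # cohesion: drift toward the group
        mean_v = sum(v for _, v in others) / len(others)   # alignment: match neighbors' velocity
        repel = sum(pos - p for p, _ in others if abs(pos - p) < 1.0)  # separation
        vel += 0.05 * (center - pos) + 0.05 * (mean_v - vel) + 0.1 * repel
        updated.append((pos + vel * dt, vel))
    return updated

# A 12-drone swarm with random starting states...
swarm = [(random.uniform(-5.0, 5.0), random.uniform(-1.0, 1.0)) for _ in range(12)]
for _ in range(200):
    swarm = step(swarm)

# ...that can be split into detachments governed by the very same rules.
detachment_a, detachment_b = swarm[:4], swarm[4:]
detachment_a, detachment_b = step(detachment_a), step(detachment_b)
```

Because the rules are purely local, the same update runs unchanged on any detachment, large or small, which is what would let a swarm dissolve and reconstitute without renegotiating command.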
In addition, AI will
provide similarly speedy and flexible defenses. Drone
fleets are impractical if not impossible to shoot down with conventional
projectiles. But AI-enabled guns firing rounds of photons and electrons
(instead of ammunition) could disable a drone fleet much as a solar storm
fries the circuitry of exposed satellites.
AI-enabled weapons
will be unprecedentedly exact. Limits to the knowledge of an antagonist’s
geography have long constrained the capabilities and intentions of any warring
party. But the alliance between science and war has come to ensure increasing
accuracy in instruments, and AI can be expected to make more breakthroughs. AI
will thus shrink the gap between original intent and ultimate outcome,
including in the application of lethal force. Whether as land-based drone swarms,
machine corps deployed at sea, or possibly interstellar fleets, machines
will possess highly precise capabilities for killing humans with little
uncertainty and with limitless impact. The bounds of the potential
destruction will hinge only on the will, and the restraint, of both human and
machine.
That being so, the AI
age of warfare will be reduced primarily to an assessment not of an adversary’s
capabilities but rather of its intentions and strategic applications. In the
nuclear age, we have already entered such a phase—but its dynamics and significance
will come into much sharper focus as AI proves its worth as a weapon of war.
With such valuable
technology involved, humans may not even be the primary targets of AI-enabled
war. AI could remove humans as proxies in warfare entirely, making war less
deadly but potentially no less decisive. Similarly, territory alone seems
unlikely to provoke AI aggression—but data centers and other critical digital
infrastructure certainly could.
Surrender, then, will
come not when the opponent’s numbers are diminished and its armory empty but
when the survivors’ shield of silicon is rendered incapable of saving their
technological assets—and finally their human deputies. War could evolve into a
game of purely mechanical fatalities, the deciding factor being the
psychological strength of the human (or AI) who must decide whether to risk a
breakthrough moment of total destruction by contesting, or to prevent one by
forfeiting.
Even the motives
governing the new battlefield would be alien, to some extent. The English
writer G. K. Chesterton once quipped that “the true soldier fights not because
he hates what is in front of him, but because he loves what is behind him.” An
AI war is unlikely to involve love or hate, let alone a concept of soldierly
bravery. On the other hand, it may still incorporate ego, identity, and
loyalty—although the nature of those identities and loyalties may not be
consistent with those of today.
The calculation in
warfare has always been relatively straightforward: whichever side first finds
intolerable the pain of battle will likely be conquered. The consciousness of
one’s own shortcomings has in the past produced restraint. Without such awareness,
and with no sense of pain (and thus a great tolerance for it), one cannot but
wonder what, if anything, would prompt restraint in an AI that has been
introduced into warfare, and what would conclude the conflicts it wages. A
chess-playing AI, if it had never been informed of the rules dictating the end
of the game, could play to the very last pawn.
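The point can be made concrete with a toy. The following sketch is ours, not the authors'; the piece values, the greedy capture policy, and the bag-of-pieces abstraction are assumptions made purely for illustration. The agent's only stopping condition is an empty side of the board, because the human rule that the game ends when a king falls was never encoded:

```python
# A deliberately simplified "chess" toy: each side is just a bag of piece
# values, and the mover always captures the opponent's most valuable piece.
KING, QUEEN, ROOK, PAWN = 1000, 9, 5, 1

def play_to_the_last_pawn(white, black):
    sides = [sorted(white, reverse=True), sorted(black, reverse=True)]
    turn = 0
    # Deliberately missing: `if KING not in sides[1 - turn]: declare victory`.
    # The only stopping rule this agent knows is "nothing left to capture."
    while sides[0] and sides[1]:
        captured = sides[1 - turn].pop(0)  # take the most valuable piece
        print(f"side {turn} captures a piece worth {captured}")
        turn = 1 - turn
    print("one side is annihilated; only now does the agent stop")

play_to_the_last_pawn([KING, QUEEN, ROOK, PAWN], [KING, ROOK, PAWN, PAWN])
```

Run as written, a king falls on the very first move, yet play continues until one side has nothing left to lose.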
Geopolitical Restructuring
In every age of
humanity, almost as if in obedience to some natural law, there has emerged, as
one of us (Kissinger) once put it, a unit “with the power, the will, and the
intellectual and moral impetus to shape the entire international system in
accordance with its own values.” The most familiar arrangement of human
civilizations is that of the Westphalian system as conventionally understood.
The idea of the sovereign nation-state, however, is only a few centuries old,
having emerged from treaties that are collectively known as the Peace of
Westphalia in the mid-seventeenth century. It is not the preordained unit of
social organization, and it may not be suited for the age of AI. Indeed, as
mass disinformation and automated discrimination trigger a loss of faith in
that arrangement, AI may pose an inherent challenge to the power of national
governments. Alternatively, AI may well reset the relative positions of
competitors within today’s system. If its powers are harnessed primarily by
nation-states themselves, humanity could be forced toward a hegemonic stasis,
or else toward a new equilibrium of AI-empowered nation-states. But the technology could also be the catalyst of an even more
fundamental transition—a shift to an entirely new system, in which state
governments would in turn be forced to abandon their central role in the global
political infrastructure.
One possibility is
that the companies that own and develop AI will accrue totalizing social,
economic, military, and political power. Today’s governments are forced to
contend with their difficult position both as cheerleaders for private
corporations—lending their military power, diplomatic capital, and economic
heft to promote these homegrown firms—and as supporters of the average citizen
suspicious of monopolistic greed and secrecy. That may prove an untenable
contradiction.
Meanwhile, corporations
could form alliances to consolidate their already
considerable strength. Those alliances might be built on complementary
advantages and the profit of amalgamation or, alternatively, on a shared
philosophy of development and deployment of AI systems. These corporate
alliances might take on traditional nation-state functions, though rather than
seeking to define and expand bounded territories, they would cultivate diffuse
digital networks as their domains.
And there is still
another alternative. Uncontrolled, open-source diffusion could give rise to
smaller gangs or tribes with substandard but substantial AI capacities,
sufficient to administer, provide for, and defend themselves within some
limited scope. Among human groups that reject established authority in favor of
decentralized finance, communication, and governance, such technology-enabled
proto-anarchy could win out. Or such groupings might incorporate a religious
dimension. After all, in terms of reach, Christianity, Islam, and Hinduism have
all been larger and longer-lasting than any state in
history. In the age to come, religious denomination, more than national
citizenship, might conceivably prove the more relevant framework for identity
and loyalty.
In either future,
whether dominated by corporate alliances or diffused into loose religious
groupings, the new “territory” that each group would claim—and over which they
would fight—would be not inches of land but a digital landscape in which each
would seek the loyalties of individual users. Linkages between these users and any
administration would subvert the traditional notion of citizenship, and
agreements between the entities would be unlike ordinary alliances.
Historically,
alliances have been forged by individual leaders and have served to augment a
nation’s strength in case of war. By contrast, the prospect of citizenships and
alliances—and perhaps conquests or crusades—structured around the opinions,
beliefs, and subjective identities of ordinary people in times of peace would
require a new (or very old) conception of empire. It would also force a
reassessment of the obligations entailed in pledging allegiance and the cost of
exit options, if indeed any were to exist in the AI-entangled future.
Peace and Power
The foreign policies
of nation-states have been built and then adjusted by balancing idealism and
realism. The temporary balances struck by our leaders are seen in retrospect
not as end-states but as only ephemeral (if necessary) strategies for their time.
With each new age, this tension has produced a different expression of what
constitutes political order. The dichotomy between the pursuit of interests and
the pursuit of values—or between a particular nation-state’s advantage and the
global good—has been part of this unending evolution. In the conduct of their
diplomacy, leaders of smaller states historically have responded
straightforwardly, prioritizing the necessities of their own survival. By
contrast, those responsible for global empires, with the means to realize
additional goals, have faced a more agonizing predicament.
Since the beginning
of civilization, as human units of organization have grown, they have
simultaneously achieved new levels of cooperation. But today, perhaps because
of the scale of planetary challenges as well as the material inequalities
evident among and within states, a backlash against this trend has surfaced. AI
could prove commensurate with the demands of this still-grander scale of human
governance, capable of seeing with granularity and fidelity not merely the
imperatives of the country but also the interplay of the globe.
We harbor a hope that
AI, deployed for political ends at home and abroad, might do more than just
illuminate balanced tradeoffs. Ideally, it could provide new, globally optimal
solutions, acting on a longer time horizon and with greater precision than humans
are capable of, and thus bringing competing human interests into alignment. In
the coming world, machine intelligences navigating
conflict and negotiating peace might help clarify, or even surmount,
traditional dilemmas.
However, if AI were
indeed to fix problems that we should have hoped to solve ourselves, we could
face a crisis of confidence—of both overconfidence and a lack of confidence.
As to the former, once we understand the limits of our own capacity for self-correction,
it may be difficult to admit that we have come to cede too much power to
machines in handling existential issues of human conduct. As to the latter, the
realization that simply removing human agency from the handling of our affairs
has been enough to solve our most intractable problems might reveal too
explicitly the shortcomings of human design. If peace has always been but a
simple voluntary choice, the price of human imperfection has been paid in the
coin of perpetual war. To know that a solution has always existed but has never
been conceived by us would be crushing to human pride.
In the case of
security, unlike that of the displacement of people in scientific or other
academic endeavors, we may more readily accept the impartiality of a mechanical
third party as necessarily superior to the self-interestedness of a human—just
as humans easily recognize the need for a mediator in a contentious divorce.
Some of our worst traits will enable us to exhibit some of our best: the human
instinct toward self-interest, even at the expense of others, may prepare us
for accepting AI’s transcendence of the self.