By Eric Vandenbroeck and co-workers
Earlier, we pointed out that for most of history, people led lives comparable to those of their remote ancestors millennia before rather than to those of their present-day descendants. The dramatic spike in income per capita across world regions over the past two centuries follows thousands of years of stagnation.
A more graphic example comes from the Cretaceous Period, when a comet six miles wide, taller than Mount Everest and traveling at a hundred times the speed of a jet plane, struck the earth. Had Homo sapiens been alive when the comet hit, we would not have made it.
Equally, we could note that for every person alive today, ten
have lived and died in the past. But if human beings survive as long as the
average mammal species, then for every person alive today, a thousand people
will live in the future. We are the ancients. On the scale of a typical human
life, humanity today is barely an infant struggling to walk.
Although the future of our species may yet be long, it may instead be
fleeting. From climate change and nuclear war to engineered pandemics, uncontrolled artificial intelligence (AI), and
other destructive technologies not yet foreseen, a worrying number of risks
conspire to threaten the end of humanity.
Just over 30 years ago, as the Cold War
came to an end, some thinkers saw the future unfurling in a far more placid
way. The threat of apocalypse, so vivid in the Cold War imagination, had begun
to recede. The end of communism a few decades after the defeat of fascism
during World War II seemed to have settled the primary ideological debates.
Capitalism and democracy would spread inexorably. The political theorist Francis Fukuyama divided the world into
“post-historical” and “historical” societies. War might persist in certain
parts of the world in the shape of ethnic and sectarian conflicts, for
instance. But large-scale wars would become a thing of the past as more and
more countries joined the likes of France, Japan, and the United States on the
other side of history. The future offered a narrow range of political
possibilities, promising relative peace, prosperity, and ever-widening
individual freedoms.
Today, the prospect of a timeless future has given way to visions of no future.
Ideology remains a fault line in geopolitics, market globalization is fragmenting, and great-power conflict has become
increasingly likely. But the threats to the future are bigger still, with the
possibility of the eradication of the human species. In the face of that
potential oblivion, the range of political and policy debates is likely to be
wider in the years ahead than it has been in decades. The great ideological disputes are
far from settled. In truth, we are likely to encounter bigger questions and be
forced to consider more radical proposals that reflect the challenges posed by
the transformations and perils ahead. Our horizons must expand, not shrink.
Chief among those challenges is how humanity manages the dangers of its
genius. Advances in weaponry, biology, and computing could spell the end of the
species, either through deliberate misuse or a large-scale accident.
Societies face risks whose sheer scale could paralyze any concerted action. But
governments can and must take meaningful steps today to ensure the species'
survival without forgoing the benefits of technological progress. Indeed, the
world will need innovation to overcome several cataclysmic dangers it already
faces—humanity needs to be able to generate and store clean energy, detect novel
diseases when they can still be contained, and maintain peace between the great powers without relying on a
delicate balance of nuclear-enabled mutually assured destruction.
Far from a safe resting place, the technological and institutional
status quo is a precarious predicament from which societies need to escape. To
lay the groundwork for this escape, governments must become more aware of the risks they face and develop a robust institutional apparatus for managing them. This
includes embedding concern for worst-case scenarios into relevant areas of
policymaking and embracing an idea known as “differential technological
development”—reining in work that would produce potentially dangerous outcomes,
such as biological research that can be weaponized, while funding and otherwise
accelerating those technologies that would help reduce risk, such as wastewater
monitoring for pathogen detection.
The greatest shift needed is one of perspective. Fukuyama looked to the
future mournfully, seeing a gray, undramatic expanse—a tableau for technocrats.
“The end of history will be a very sad time,” he wrote in 1989, in which
“daring, courage, imagination, and idealism will be replaced by economic
calculation, the endless solving of technical problems, environmental concerns,
and the satisfaction of sophisticated consumer demands.” But at this beginning
of history, this critical juncture in the human story, it will take daring and
imagination to meet the various challenges ahead. Contrary to what Fukuyama
foresaw, the political horizon has not narrowed to a sliver. Enormous economic,
social, and political transformations remain possible—and necessary. If we act
wisely, the coming century will be defined by recognizing what we owe the
future, and our grandchildren’s grandchildren will look back at us with
gratitude and pride. If we mess up, they might never see the light of day.
Those who are yet to come
The fossil record indicates that the average mammal species lasts a
million years. By this measure, we have about 700,000 years ahead of us. During
this time, even if humanity remained earthbound at just one-tenth of the
current world population, a staggering ten trillion people would be born in the
future.
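As a rough back-of-the-envelope sketch of where a figure of that magnitude comes from (the population level, average lifespan, and remaining species lifetime below are illustrative assumptions, not data):

```python
# Illustrative back-of-the-envelope estimate of future births (assumed inputs).
YEARS_REMAINING = 700_000                # if we last as long as the average mammal species
FUTURE_POPULATION = 8_000_000_000 / 10   # one-tenth of today's world population (assumed)
AVG_LIFESPAN_YEARS = 70                  # assumed average lifespan

births_per_year = FUTURE_POPULATION / AVG_LIFESPAN_YEARS  # steady-state births to sustain the population
future_births = births_per_year * YEARS_REMAINING
print(f"{future_births:,.0f}")           # ~8,000,000,000,000, i.e., on the order of ten trillion
```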
Moreover, our species is not the average mammal, and humans may well be
able to outlast their relatives. If we survived until the expanding sun
scorched the earth, humanity would persist for hundreds of millions of years.
More time would separate us from our last descendants than from the earliest
dinosaurs. And if one day we settled space—entirely conceivable on the scale of
thousands of years—earth-originating intelligent life could continue until the
last stars burned out in tens of trillions of years.
Far from being an idle exercise in juggling unfathomable numbers,
appreciating the potential scale of humanity’s future is vital to understanding
what is at stake. Actions today could affect whether and how trillions of our
descendants might live—whether they will face poverty or abundance, war or
peace, slavery or freedom—placing inordinate responsibility on the shoulders of
the present. The profound consequences of such a shift in perspective are
demonstrated by a striking experiment conducted in the small Japanese town of Yahaba. Before debating municipal policy, half the
participants were asked to put on ceremonial robes and imagine they were from
the future, representing the interests of the current citizens’ grandchildren.
Not only did researchers observe a “stark contrast in deliberation styles and priorities between the groups,” but the concern for future generations also proved infectious: of the measures on which consensus could be achieved, more than half had been proposed by the imaginary grandchildren.
Thinking in the long term reveals how much societies can still achieve.
As little as 500 years ago, it would have been inconceivable that incomes would one day double every few generations, that most people would live to see
their grandchildren grow up, and that the world’s leading countries would be
secular societies whose leaders are chosen in free elections. Countries that
now seem so permanent to their citizens may not last more than a few centuries.
None of the world’s various modes of social organization appeared in history
fully formed. A short-term focus on days, months, or years obscures the potential
for fundamental long-term change.
The fact that humanity is only in its infancy highlights what a tragedy
its untimely death would be. There is so much life left to live, but in our
youth, our attention flits quickly from one thing to the next, and we stumble
around, not realizing that some of our actions place us at serious risk. Our
powers increase daily, but our self-awareness and wisdom lag behind. Our story
might end before it has truly begun.
How we could end history
In contrast to Fukuyama’s “end of history,” other observers of
international affairs have focused on the more literal meaning of the phrase:
the potential for humanity to perish altogether. Such views were especially
prevalent at the dawn of the Cold War, shortly after nuclear scientists enabled
a massive leap in humanity’s destructive potential. As the British statesman
Winston Churchill put it in 1946 with characteristic verve, “The Stone Age may
return on the gleaming wings of science, and what might now shower immeasurable
material blessings upon mankind, may even bring about its destruction.” A few
years later, U.S. President Dwight
Eisenhower echoed these concerns during his first
inaugural address. He warned that “science seems ready to confer upon us, as
its final gift, the power to erase human life from this planet.”
Human history is rife with catastrophe, from the horrors of the Black Death to those of slavery and colonialism.
But barring a few highly unlikely natural events, such as supervolcano
eruptions or meteors crashing into the planet, there were no plausible
mechanisms by which humanity as a whole could perish. In his book The
Precipice, the Oxford philosopher Toby Ord estimated that even accepting
all the most pessimistic assumptions, the accumulated risks of naturally
occurring extinction still afford humanity an expected lifespan of at least
100,000 years.
Serious concerns about “existential catastrophe”—defined by Ord as the
permanent destruction of humanity’s potential—emerged mainly in the second half
of the twentieth century, hand in hand with an acceleration of technological
progress. Lord Martin Rees, the former president of the Royal Society, wrote in
2003 that humanity’s odds of surviving this century are “no better than 50-50.”
Ord estimated the likelihood of humanity wiping itself out or otherwise
permanently derailing the course of civilization at one in six within the next
hundred years. If either is right, the most likely way an American born today
could die young is in a civilization-ending catastrophe.
Nuclear weapons exhibit several
crucial properties that future technological threats may also possess. When
invented in the middle of the twentieth century, they presented a sudden jump
in destructive capabilities: the atomic bomb was thousands of times more
powerful than pre-nuclear explosives; hydrogen bombs then allowed yields thousands of times greater still. Compared with the pace of increases in destructive
power in the pre-nuclear age, 10,000 years of advances occurred within just a
few decades.
These developments were hard to anticipate: the eminent physicist
Ernest Rutherford dismissed the idea of atomic energy as “moonshine” as late as
1933, one year before Leo Szilard, another acclaimed physicist, patented the
idea of a nuclear fission reactor. Once nuclear bombs had arrived, destruction
could have been unleashed either deliberately, such as when U.S. generals
advocated for a nuclear first strike on China during the 1958 Taiwan Strait crisis, or accidentally, as
demonstrated by the harrowing track record of false alarms in early warning
systems. Even worse, measures to defend against a deliberate attack often came
at the price of an increased risk of accidental nuclear Armageddon. Consider,
for instance, the United States’ airborne alert, its launch-on-warning
doctrine, or the Soviet “Dead Hand” system, which guaranteed that if Moscow
suffered a nuclear attack, it would automatically launch an all-out nuclear
retaliation. The end of the Cold War did not fundamentally change this deadly
calculus, and nuclear powers still balance safety and force readiness at the
heart of their policies. Future technologies might impose even more dangerous
tradeoffs between safety and performance.
Apocalypse soon?
But nuclear weapons are far from
the only risks we face. Several future technologies could be more destructive,
easier to obtain for a wider range of actors, pose more dual-use concerns, or
require fewer missteps to trigger the extinction of our species—and hence be
much harder to govern. A recent report by the U.S. National Intelligence
Council identified runaway artificial intelligence, engineered pandemics, and
nanotechnology weapons, in addition to nuclear war, as sources of existential
risks—“threats that could damage life on a global scale” and “challenge our
ability to imagine and comprehend their potential scope and scale.”
Take, for example, engineered pandemics. Progress in biotechnology has
been extremely rapid, with key costs, such as gene sequencing, falling ever
faster. Further advances promise numerous benefits, such as gene therapies for
as yet incurable diseases. But dual-use concerns loom large: some of the
methods used in medical research could, in principle, be employed to identify
or create more transmissible and lethal pathogens than anything in nature. This
may be done as part of open scientific enterprises—in which scientists sometimes
modify pathogens to learn how to combat them—or with less noble intentions in
terrorist or state-run bioweapons programs. (Such programs are not a thing of
the past: a 2021 U.S. State Department report concluded that both North Korea
and Russia maintain offensive bioweapons programs.) Bad actors could also
misuse research published with pro-social intentions, perhaps in ways the
original authors never considered.
Unlike nuclear weapons, bacteria and viruses are self-replicating.
As the COVID-19 pandemic tragically
proved, once a new pathogen has infected a single human being, there may be no
way to put the genie back in the bottle. And although just nine states have
nuclear weapons—with Russia and the United States controlling more than 90
percent of all warheads—the world has thousands of biological laboratories. Of
these, dozens—spread out over five continents—are licensed to experiment with
the world’s most dangerous pathogens.
Worse, the safety track record of biological research is even more
dismal than that of nuclear weapons. In 2007, foot-and-mouth
disease, which spreads rapidly through livestock populations and can easily
cause billions of dollars of economic damage, leaked not once but twice from
the same British laboratory within weeks, even after government intervention.
And lab leaks have already led to the loss of human life, such as when
weaponized anthrax escaped from a plant connected to the Soviet bioweapons
program in Sverdlovsk in 1979, killing dozens. Perhaps most worrying, genetic
evidence suggests that the 1977 “Russian flu” pandemic may have originated in
human experiments involving an influenza strain that had circulated in the
1950s. Around 700,000 people died.
Hundreds of accidental infections have occurred in U.S. labs alone—one
per 250 person-years of laboratory work. Since there are dozens of
high-security labs in the world, each of which employs dozens, perhaps even
hundreds, of scientists and other staff, such a rate amounts to multiple
accidental infections per year. Societies must significantly reduce this rate.
If these facilities ever start tinkering with extinction-level pathogens,
humanity’s premature end will be just a matter of time.
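To see why that rate is alarming at a global scale, here is a minimal sketch of the implied expected number of accidental infections per year; the one-in-250-person-years rate is the figure cited above, while the lab and staffing counts are hypothetical round numbers consistent with "dozens" of labs employing dozens to hundreds of staff:

```python
# Rough estimate of expected accidental lab infections per year worldwide (illustrative assumptions).
INFECTION_RATE = 1 / 250      # accidental infections per person-year of lab work (cited above)
NUM_LABS = 50                 # hypothetical: "dozens" of high-security labs worldwide
STAFF_PER_LAB = 100           # hypothetical: dozens to hundreds of staff per lab

person_years = NUM_LABS * STAFF_PER_LAB
expected_infections_per_year = person_years * INFECTION_RATE
print(expected_infections_per_year)   # 20.0 -- multiple accidental infections every year
```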
Governance at the end of the world
Despite this rising level of risk, it is far from assured that humanity
will be able to take the necessary steps to protect itself. There are several
obstacles to adequate risk mitigation.
The most fundamental issue is painfully familiar from the struggles
of climate diplomacy in recent
years. When burning fossil fuels, individual countries reap most of the
benefits, but other countries and future generations will bear most of the
costs. Similarly, engaging in risky biological research holds the promise of
patentable drugs that could boost a country’s economy and prestige—but a
pathogen accidentally released in that country would not respect borders. In
the language of economists, imposing a risk on the future is a negative
externality, and providing risk-reduction measures, such as establishing an
early warning system for novel diseases, is a global public good. (Consider how
the whole world would have benefited if COVID-19, like SARS between 2002 and 2004, had been contained and then eradicated.) This is
precisely the sort of good that neither the market nor the international system
will provide by default because countries have powerful incentives to free-ride
on the contributions of others.
Humanity has several avenues for escaping this structural tragedy. To
assuage concerns about losing ground in the struggle for security, countries
could enter into agreements to collectively refrain from developing especially
dangerous technologies such as bioweapons. Alternatively, a coalition of the
willing could band together to form what the economist William Nordhaus has called a “club.” Members of a club
jointly help provide the global public good the club was formed to promote. At
the same time, they commit to providing benefits to one another (such as
economic growth or peace) while imposing costs (through measures such as
tariffs) on nonmembers, enticing them to join. For instance, clubs could be
based on safety standards for artificial intelligence systems or a moratorium on
risky biological research.
Unfortunately, the resurgence of great-power competition casts doubt on
the likelihood of these feats of global cooperation. Worse, geopolitical
tensions could compel states to accept an increased risk to the world—and to
themselves—if they perceive it as a gamble worth taking to further their
security interests. (In the eight years the United States maintained bombers on
continuous airborne alert, five aircraft crashed while carrying nuclear
payloads.) And if even one state’s bioweapons program experimented with
extinction-level pathogens—perhaps on a foolhardy quest to develop the ultimate
deterrent—the next laboratory accident could precipitate a global pandemic much
worse than that of COVID-19.
In the worst case, the great powers could, in their struggle for global
hegemony, resort to outright war. For people who grew up in the West
after World War II, this notion might seem
far-fetched. The psychologist Steven Pinker has popularized the claim that
violence—including among states—has long declined. Subsequent analysis by the
political scientist Bear Braumoeller and others,
however, has substantially complicated the picture. The researchers have
suggested that the intensity of conflict appears to follow what is known as a
“power law,” meaning that after an interlude of relative peace, it is entirely
possible that war might return in an even more deadly incarnation. Calculations
by the computer scientist Aaron Clauset have
indicated that the “long peace” that followed World War II would need to endure
for another century before it would constitute significant evidence of an
actual long-term decline in war. Braumoeller asserted
that it is “not at all unlikely that another war that would surpass the two
World Wars in lethality will happen in your lifetime,” noting that in the
conclusion of his book on the topic, he “briefly considered typing, ‘We’re all
going to die,’ and leaving it at that.”
Staving off the risk of World War III while also achieving
unprecedented innovations in international governance is a tall order. But like
it or not, that is the challenge we face.
Innovate to survive
One response to this daunting challenge is retreat. If it is so
difficult to govern emerging technologies safely, some argue, then why don’t we
simply refrain from inventing them in the first place? Members of the
“degrowth” movement take precisely this stance, decrying economic growth and
technological progress as the main culprits behind alienation, environmental
destruction, and all kinds of other harms. In 2019, 11,000 scientists from more
than 150 countries signed an open letter demanding that the world's population
“be stabilized—and, ideally, gradually reduced” and that countries turn their
priorities away “from GDP growth.”
Despite its intuitive appeal, this response is unrealistic and
dangerous. It is unrealistic because it simply fails to engage with the
interdependence of states in the international system. Even if the world’s
countries came together temporarily to halt innovation, sooner or later,
someone would resume the pursuit of advanced technology.
In any case, technological stagnation is not desirable. To see why, note that new technologies can both exacerbate and reduce risk. Once a
new technological danger has been introduced—such as nuclear
weapons—governments might require additional technologies to manage that risk.
For example, the threat nuclear
weapons pose to the survival of the human species would be greatly
reduced if, during a potential nuclear winter, people could produce food without
sunlight or if early warning systems could more reliably distinguish between
intercontinental ballistic missiles and small scientific rockets. But if
societies stop technological progress, new technological threats may emerge
that cannot be contained because the commensurate strides in defense have not
been made. For instance, a wide variety of actors may be able to create
unprecedentedly dangerous pathogens while the early detection and eradication of novel diseases lag far behind.
The status quo, in other words, is already heavily mined with potential
catastrophes. And in the absence of defensive measures, threats from nature
might eventually lead to human extinction as they have for many other species:
to survive to their full potential, human beings will need to learn to perform
such feats as deflecting asteroids and quickly fighting off new pandemics. They must avoid the fate
of Icarus—but still fly.
The challenge is to continue reaping the fruits of technological
advancement while protecting humanity against its downsides. Some experts refer
to this as “differential technological development,” the idea being that if
people can’t prevent destructive technology or accidents from happening in the
first place, they can, with foresight and careful planning, at least attempt to
develop beneficial and protective technologies first.
We’re already in a game of what Richard Danzig, the former U.S.
secretary of the navy, has called “technology roulette.” No bullet has been
fired yet, but that doesn’t change how risky the game is. There will be many more pulls of the trigger in the future: a bad accident, perhaps a fatal one, is inevitable unless our species changes the game.
What we owe the future
Game-changers have so far been in short supply. Given the stakes,
societies have, to date, done little to protect their future. Consider, for
instance, the Biological Weapons Convention, which prohibits the development,
storage, and acquisition of biological weapons. The national security expert
Daniel Gerstein described it as “the most important arms control treaty of the
twenty-first century.” Yet it lacks a verification mechanism, and its budget is dwarfed by that of the Met Gala. As if this weren’t enough of a travesty, the BWC struggles
to raise even the meager contributions it is due—a 2018 report by the
convention’s chair lamented the “precarious and worsening state of the
financial situation of the BWC . . . due to long-standing non-payment of
assessed contributions by some States Parties.”
The management of nonbiological risks doesn’t inspire confidence,
either. Research aimed at preventing the loss of control over artificially
intelligent systems remains a minuscule fraction of overall AI research. And militaries are using
lethal autonomous weapons on the battlefield, while efforts to limit such
weapons systems have stalled for years at the UN. The domestic situation
doesn’t look much better—less than one percent of the U.S. defense budget is
dedicated to biodefense, and the majority of that goes to fending off well-known agents such as anthrax. Even after COVID-19 killed one in every 500
people in the world and inflicted $16 trillion worth of economic damage in the
United States alone, Congress couldn’t agree to provide a modest $15 billion to
bolster pandemic preparedness.
This kind of risk reduction is so neglected that opportunities for
positive change abound. One success story of existential risk mitigation is
NASA’s Spaceguard program. At a cost of less than $5
million per year, between its inception in 1998 and 2010, scientists tracked
more than 90 percent of extinction-threatening asteroids, in the process increasing
the accuracy of their predictions and reducing the best estimate of the risk
that one will strike the earth by a factor of ten. During the COVID-19
pandemic, the U.S. government spent $18 billion on Operation Warp Speed to
accelerate vaccine development. The program resulted in safe and effective
vaccines that the United States and other countries could buy at a price
constituting a small fraction of the vaccines’ social benefits, which have been
estimated to amount to tens of trillions of dollars. The economist Robert Barro
has estimated that between September 2021 and February 2022, these vaccines
saved American lives at a cost of between $55,000 and $200,000 each, making them more than 20 times as cost-effective as lifesaving policies usually need to be.
If the world’s best and brightest step up and governments or the
private sector provide funding, we can achieve even more impressive successes.
For instance, although it still must overcome major technical hurdles,
widespread metagenomic sequencing of wastewater would help detect novel
diseases at a stage when they can still be contained and eradicated. The
Nucleic Acid Observatory, based at the Massachusetts Institute of Technology,
is pursuing just this vision. The public and private sectors should also
develop better personal protective equipment and do further research on
sterilization technology such as far-UVC light, which, if proven safe and effective, could offer a near-universal defense against pathogens and be
installed in any building. Regarding artificial
intelligence, research aimed at making systems safe and
reliable must be scaled up tenfold. The common thread running through such
measures is an emphasis on defensive strategies that do not themselves create
or enhance other risks.
Progress is also possible in other domains. Intelligence collection and
analysis will be critical to monitoring the known sources of large-scale risks. And
although achieving complete certainty is impossible (as the
astronomer Carl Sagan once quipped, “Theories that involve the end of
the world are not amenable to experimental verification—or at least, not more
than once”), scanning and forecasting what is on the horizon can help identify
new concerns. In this vein, it is encouraging that the most recent Global
Trends report by the National Intelligence Council included a discussion of the
concept of existential risk, calling for “the development of resilient
strategies to survive.”
More governments, institutions, and firms need to take such ideas
seriously. Regulatory reform will also be important. In Averting
Catastrophe, Cass Sunstein, a former head of the
regulatory office at the White House, showed how the government’s current
approach to cost-benefit analysis cannot sufficiently account for potential
catastrophic risks. Sunstein argued for what he called the “maximin principle”:
in the face of extreme risks—and human extinction certainly qualifies as
such—governments must focus on eliminating the worst outcomes. As it happens,
the White House is currently modernizing its framework for reviewing
regulations. It should use this opportunity to make its approach to dealing
with low-probability risks of extreme damage fit for the twenty-first century,
whether by adopting Sunstein’s maximin principle or something similar that
takes global catastrophic risks seriously.
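As a minimal sketch of what a maximin rule means in practice (the policies, scenarios, and payoffs below are invented purely for illustration):

```python
# Toy illustration of a maximin decision rule: pick the option whose worst-case outcome is least bad.
# All payoffs are made up for illustration; they are not estimates of real policy outcomes.
policies = {
    "status quo":            {"no catastrophe": 10, "engineered pandemic": -1000},
    "invest in biodefense":  {"no catastrophe":  8, "engineered pandemic":   -50},
}

# For each policy, find its worst-case payoff, then choose the policy with the best worst case.
best_policy = max(policies, key=lambda p: min(policies[p].values()))
print(best_policy)   # "invest in biodefense": slightly costlier normally, far better in the worst case
```

The numbers are arbitrary; the point is that the rule ranks options by their worst case rather than their average case.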
Fukuyama prophesied “centuries of boredom at the end of history.”
Nothing could be further from the truth. Powerful and destructive technologies
will present an unprecedented challenge to the current political system.
Advanced AI could undermine the balance of power between individuals and
states: an entirely automated workforce would give the government little reason
to treat its citizens well; a dictatorship with an AI army and police force
could prevent the possibility of an uprising or a coup. The government could
use the prospect of a third world war as a reason to expand the state and crack
down on individual liberties such as free speech on the grounds of protecting
national security. The possibility of easily accessible bioweapons could be
used to justify universal surveillance.
With humanity’s future in mind, we should resist such pressures. We
must fight to ensure both that we have a future and that it is a future worth
having. The cultural shift toward liberalism over the past three centuries
created an engine of moral progress that led to the spread of democracy, the abolition of slavery, and expanded rights for women and people of
color. That engine can’t be turned off now. If anything, we need to further
promote moral and political diversity and experimentation. Looking back
millennia, moderns see the Romans’ slaveholding practices, torture for
entertainment, and ultra-patriarchy as barbaric. Perhaps future generations
will see many of our current practices as little better.
So we must walk a tightrope. We must ensure that global cooperation
reduces the risks of global catastrophe to near zero while maintaining the
freedom and diversity of thought and social structures that would enable us to
build a future that our grandchildren’s grandchildren would thank us for.
Contemplating large-scale political change is daunting, but past innovations in
governance, such as the UN system and the EU, provide reasons for hope.
We are not used to seeing ourselves as one of history’s first
generations; we tend to focus on what we have inherited from the past, not what
we could bequeath to the future. This is a mistake. To tackle the task before
us, we must reflect on where we stand in humanity’s full lineage. We, in the
present day, recklessly gamble, not just with our lives and our children’s
lives but with the very existence of all who are yet to come. Let us be the
last generation to do so.