By Eric Vandenbroeck
and co-workers
Why Technology Defines The Future Of Geopolitics
Few thought Ukraine
could survive when Russian forces marched on Kyiv in February 2022. Russia had
more than twice as many soldiers as Ukraine. Its military budget was more than
ten times as large. The U.S. intelligence community estimated that Kyiv would
fall within one to two weeks.
Outgunned and
outmanned, Ukraine turned to one area where it held an advantage over the
enemy: technology. Shortly after the invasion, the Ukrainian government
uploaded critical data to the cloud to safeguard information and keep
functioning even if Russian missiles turned its ministerial offices into
rubble. The country’s Ministry of Digital Transformation, which Ukrainian
President Volodymyr Zelensky had established just two years earlier,
repurposed its e-government mobile app, Diia, for open-source intelligence
collection so citizens could upload photos and videos of enemy military units.
With their communications infrastructure in jeopardy, the Ukrainians turned to Starlink satellites and ground stations provided by SpaceX
to stay connected. When Russia sent Iranian-made drones across the border,
Ukraine acquired drones specially designed to intercept their attacks—while its
military learned how to use unfamiliar weapons supplied by Western allies. In
the cat-and-mouse game of innovation, Ukraine proved nimbler. And so what
Russia had imagined would be a quick and easy invasion has turned out to be
anything but.
Ukraine’s success can
be credited partly to the Ukrainian people's resolve, the Russian military's
weakness, and Western support. But it also owes much to the defining new force of international politics: innovation power. Innovation power is the ability to invent, adopt, and adapt new technologies. It contributes to both hard and soft power. High-tech weapons systems increase military might, new platforms and the standards that govern them provide economic leverage, and cutting-edge research and technologies enhance a country’s global appeal. There is a long tradition of states harnessing innovation to
project power abroad, but what has changed is the self-perpetuating nature of
scientific advances. Developments in artificial intelligence, in
particular, not only unlock new areas of scientific discovery; they also speed
up that process. Artificial intelligence supercharges the ability of scientists
and engineers to discover ever more powerful technologies, fostering advances
in artificial intelligence itself and other fields—and reshaping the world in
the process.
The ability to
innovate faster and better—the foundation on which military, economic, and
cultural power rest—will determine the outcome of the great-power competition
between the United States and China. For now, the United States remains in the
lead. But China is catching up in many areas and has already surged
ahead in others. To emerge victorious from this century-defining contest, the United States cannot rely on business as usual. Instead, the U.S. government will have to
overcome its stultified bureaucratic impulses, create favorable conditions for
innovation, and invest in the tools and talent needed to kick-start the
virtuous cycle of technological advancement. It needs to commit itself to
promoting innovation in the service of the country and the service of
democracy. At stake is nothing less than the future of free societies, open
markets, democratic government, and the broader world order.
The nexus between
technological innovation and global domination dates back centuries, from the
muskets the conquistador Francisco Pizarro wielded to defeat the Inca Empire to
the steamboats Commodore Matthew Perry commanded to force the opening
of Japan. But the sheer speed at which innovation is happening has no
precedent. Nowhere is this change clearer than in one of the foundational
technologies of our time: artificial intelligence.
Today’s AI systems
can already provide key advantages in the military domain, where they can parse
millions of inputs, identify patterns, and alert commanders to enemy activity.
The Ukrainian military, for example, has used AI to efficiently scan
intelligence, surveillance, and reconnaissance data from various sources.
However, AI systems will increasingly move beyond merely assisting human
decision-making and start making decisions themselves. John Boyd, a military strategist and U.S. Air Force colonel, coined the term “OODA loop”—observe,
orient, decide, act—to describe the decision-making process in combat.
Crucially, AI will be able to execute each part of the OODA loop much
faster. Conflict can happen at the speed of computers, not the speed of
people. As a result, command-and-control systems that rely on human
decision-makers—or, worse, complex military hierarchies—will lose out to
faster, more efficient systems that team machines with humans.
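To make the loop concrete, here is a minimal sketch in Python of what an automated OODA cycle might look like. The sensor feed, threat list, and function names are hypothetical illustrations, not drawn from any real system.

```python
import time

def observe(sensor_feed):
    """Pull the next raw input (e.g., a radar track) from the sensor feed."""
    return sensor_feed.pop(0) if sensor_feed else None

def orient(observation, known_threats):
    """Fuse the observation with prior knowledge to assess the situation."""
    return {"observation": observation, "threat": observation in known_threats}

def decide(assessment):
    """Choose a course of action based on the assessment."""
    return "alert_commander" if assessment["threat"] else "continue_patrol"

def act(action):
    """Execute the chosen action."""
    print(f"Executing: {action}")

# Hypothetical inputs: a queue of detections and a set of known threat signatures.
sensor_feed = ["friendly_convoy", "hostile_drone"]
known_threats = {"hostile_drone"}

while sensor_feed:
    start = time.perf_counter()
    act(decide(orient(observe(sensor_feed), known_threats)))
    # A machine closes the loop in microseconds; a human staff takes minutes.
    print(f"Loop closed in {(time.perf_counter() - start) * 1e6:.1f} microseconds")
```

The point of the sketch is the structure, not the contents: each stage is a function call that a machine can execute orders of magnitude faster than a human chain of command.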
In previous eras, the technologies that shaped geopolitics—from bronze and steel to steam power and nuclear fission—were largely singular. There was a clear threshold of technological mastery, and once a country reached it, the playing field was leveled.
Artificial intelligence, by contrast, is generative. By presenting a platform
for continuous scientific and technological innovation, it can lead to yet more
innovation. That phenomenon fundamentally differentiates the AI age from the
Bronze Age or the steel age. Rather than natural resource wealth or mastery of
a given technology, the source of a country’s power now lies in its ability to
innovate continuously.
This virtuous cycle
will only get faster and faster. Once quantum computing comes of age, superfast
computers will allow for processing ever-larger amounts of data, producing
ever-smarter AI systems. These AI systems, in turn, will be able to produce
breakthrough innovations in other emerging fields, from synthetic biology to
semiconductor manufacturing. Artificial intelligence will change the very
nature of scientific research. Instead of making progress one study at a time,
scientists will discover the answers to age-old questions by analyzing massive
data sets, freeing the world’s smartest minds to devote more time to developing
new ideas. As a foundational technology, AI will be critical in the race for
innovation power, lying behind countless future developments in drug discovery,
gene therapy, material science, and clean energy—and in AI itself. Faster
airplanes did not help build faster airplanes, but faster computers will help
build faster computers.
Even more powerful
than today’s artificial intelligence is a more comprehensive technology—for
now, given current computing power, still hypothetical—called “artificial
general intelligence,” or AGI. Whereas traditional AI is designed to solve a
discrete problem, AGI should be able to perform any mental task a human can and
more. Imagine an AI system that could answer seemingly intractable questions,
such as the best way to teach a million children English or to treat a case of
Alzheimer’s disease. The advent of AGI remains years, perhaps even decades,
away, but whichever country develops the technology first will have a massive
advantage since it could then use AGI to develop ever more advanced versions of
AGI, gaining an edge in all other domains of science and technology in the
process. A breakthrough in this field could usher in an era of predominance,
not unlike the short period of nuclear superiority the United States enjoyed in
the late 1940s.
Whereas many of AI’s most
transformative effects are still far off, drone innovation is already
upending the battlefield. In 2020, Azerbaijan employed Turkish- and
Israeli-made drones to gain a decisive advantage in its war against Armenia in
the disputed Nagorno-Karabakh region, racking up battlefield victories after
more than two decades of military stalemate. Similarly, Ukraine’s fleet of
drones—many low-cost commercial models repurposed for reconnaissance behind
enemy lines—has played a critical role in its successes.
Drones offer distinct advantages over traditional weapons: they are smaller and cheaper, offer unmatched surveillance capabilities, and reduce soldiers’ risk exposure. Marines in urban
warfare, for example, could be accompanied by microdrones that serve as their
eyes and ears. Over time, countries will improve the hardware and software
powering drones to out-innovate their rivals. Eventually, autonomous weaponized
drones—not just unmanned aerial vehicles but also ground-based ones—will
replace soldiers and manned artillery. Imagine an autonomous submarine that
could quickly move supplies into contested waters or an autonomous truck that
could find the optimal route to carry small missile launchers across rough
terrain. Swarms of drones, networked and coordinated by AI, could overwhelm
tank and infantry formations. In the Black Sea, Ukraine has used
drones to attack Russian ships and supply vessels, helping a country with a
minuscule navy constrain Russia’s mighty Black Sea Fleet. Ukraine offers a
preview of future conflicts: wars that will be waged and won by humans and
machines working together.
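As a rough illustration of what “networked and coordinated by AI” could mean at its simplest, the Python sketch below has a central coordinator greedily assign each drone to its nearest unclaimed target. The names and coordinates are invented, and a real swarm would rely on far more sophisticated, distributed algorithms.

```python
import math

def assign_drones(drones, targets):
    """Greedily pair each drone with the nearest target not yet claimed."""
    assignments, remaining = {}, dict(targets)
    for drone, position in drones.items():
        if not remaining:
            break  # more drones than targets
        # Choose the closest remaining target by straight-line distance.
        closest = min(remaining, key=lambda t: math.dist(position, remaining[t]))
        assignments[drone] = closest
        del remaining[closest]
    return assignments

# Hypothetical positions (in km) for a five-drone swarm and three targets.
drones = {"d1": (0, 0), "d2": (1, 4), "d3": (5, 2), "d4": (6, 6), "d5": (3, 1)}
targets = {"tank_a": (5, 3), "tank_b": (0, 1), "infantry": (6, 5)}

for drone, target in assign_drones(drones, targets).items():
    print(f"{drone} -> {target}")
```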
As developments in
drones make clear, innovation power underlies military power. First and
foremost, technological dominance in crucial domains bolsters a country’s
ability to wage war and thus strengthens its deterrent capabilities. But
innovation also shapes economic power by giving states leverage over supply
chains and the ability to make the rules for others. Countries reliant on
natural resources or trade, especially those that must import rare or
foundational goods, face vulnerabilities others do not.
Consider China's
power over the countries it supplies with communications hardware.
Unsurprisingly, countries dependent on Chinese-supplied infrastructure—such as
many countries in Africa, where components produced by Huawei make up
about 70 percent of 4G networks—have been loath to criticize Chinese human
rights violations. Taiwan’s primacy in semiconductor manufacturing provides a
powerful deterrent against invasion since China has little interest in
destroying its largest source of microchips. Leverage also accrues to countries
pioneering new technologies. The United States, thanks to its role in the
foundation of the Internet, has for decades enjoyed a seat at the table in defining
Internet regulations. During the Arab Spring, for example, the fact that the United States was home to technology companies that provided the backbone of the Internet enabled those companies to refuse Arab governments’ censorship requests.
Less obvious but also
crucial, technological innovation buoys a country’s soft power. Hollywood and
tech companies like Netflix and YouTube have built a trove of content for an
increasingly global consumer base while helping spread American values. Such
streaming services project the American way of life into living rooms
worldwide. Similarly, the prestige associated with U.S. universities and the
opportunities for wealth creation created by U.S. companies attract strivers
from across the globe. In short, a country’s ability to project power in the
international sphere—militarily, economically, and culturally—depends on its
ability to innovate faster and better than its competitors.
Race To The Top
The main reason
innovation now lends such a massive advantage is that it begets more
innovation. In part, it does so because of the path dependency that arises from
clusters of scientists attracting, teaching, and training other great
scientists at research universities and large technology companies. But it also
does so because innovation builds on itself. Innovation relies on a loop of
invention, adoption, and adaptation—a feedback cycle that fuels more
innovation. If any link in the chain breaks, so, too, does a country’s ability
to innovate effectively.
An invention that puts a country in the lead is typically built on years of prior research. Consider how the United
States led the world into the 4G era of telecommunications. The rollout of
4G networks across the country facilitated the early development of mobile
applications such as Uber that required faster cellular data connections. With
that lead, Uber was able to refine its product in the United States so it could
roll it out in developing countries. This led to many more customers—and much
more feedback to incorporate—as the company adapted its product for new markets
and new releases.
But the moat around
countries that enjoy structural advantages in technology is shrinking. Thanks
in part to more accessible academic research and the rise of open-source
software, technologies now diffuse more quickly worldwide. The availability of
new advances has helped competitors catch up at record speed, as China
eventually did in 4G. Although some of China’s recent technological success
stems from economic espionage and a disregard for patents, much of it traces
back to innovative, rather than derivative, efforts to adapt and implement new
technology.
Indeed, Chinese companies
have successfully adopted and commercialized foreign technological
breakthroughs. In 2015, the Chinese Communist Party launched its “Made in China
2025” strategy to achieve self-sufficiency in high-tech industries such as
telecommunications and AI. As part of this bid, it announced an economic “dual
circulation” plan whereby China intends to boost domestic and foreign demand
for its goods. Through public-private partnerships, direct subsidies to private
companies, and support for state-backed companies, Beijing has poured billions
of dollars into ensuring it comes out ahead in the race for technological
supremacy. So far, the record is mixed. China is ahead of the United States in
some technologies yet lags in others.
It is hard to say
whether China will seize the lead in AI, but top officials in Beijing certainly
think it will. In 2017, Beijing announced plans to become the global leader in
artificial intelligence by 2030, and it may achieve that goal even earlier than
expected. China has already accomplished its goal of becoming the world’s
leader in AI-based surveillance technology, which it uses to control dissidents
at home and sells to authoritarian governments abroad. China still ranks behind
the United States in attracting the best minds in AI, with almost 60 percent of
top-tier researchers working in U.S. universities. But China’s lax privacy
laws, mandatory data collection, and targeted government funding give the
country a key advantage. Indeed, it already leads in the production of autonomous vehicles.
For now, the United
States still retains an edge in quantum computing. Yet over the past decade,
China has invested at least $10 billion in quantum technology,
roughly ten times as much as the U.S. government. China is working to build
quantum computers so powerful that they will easily crack today’s encryption.
The country is also investing heavily in quantum networks—a way of transmitting
information in the form of quantum bits—presumably in the hope that such
networks would be impervious to monitoring by other intelligence agencies. Even
more alarming, the Chinese government may already be storing stolen and
intercepted communications to decrypt them once it possesses the computing
power to do so, a strategy known as “store now, decrypt later.” When quantum
computers become fast enough, all communications encrypted through non-quantum methods will be at risk of interception, raising the stakes of achieving this breakthrough first.
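To see why “store now, decrypt later” is plausible, consider textbook RSA, whose security rests entirely on the difficulty of factoring a large number. The Python sketch below uses the standard classroom parameters (a real key would use a modulus of 2,048 bits or more). The brute-force factoring step at the end is trivial for a toy modulus but classically infeasible at scale; removing that barrier is exactly what Shor’s algorithm on a sufficiently large quantum computer would do.

```python
# Toy RSA with the classic textbook parameters (requires Python 3.8+ for pow(e, -1, m)).
p, q = 61, 53                 # secret primes
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent: 2753

message = 65
ciphertext = pow(message, e, n)            # encrypt: 2790
assert pow(ciphertext, d, n) == message    # decrypt recovers 65

# An adversary who stores `ciphertext` today needs only to factor n later.
for candidate in range(2, n):              # trivial here; hopeless classically at scale
    if n % candidate == 0:
        break
p2, q2 = candidate, n // candidate
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))       # rebuild the private key from the factors
assert pow(ciphertext, d2, n) == message   # the stored traffic is now readable
```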
China is also trying
to catch up with the United States in synthetic biology. Scientists in this field
are working on various new biological developments, including microbe-made
cement that absorbs carbon dioxide, crops with an increased ability to
sequester carbon, and plant-based meat substitutes. Such technology holds enormous promise to fight climate change and create jobs. Yet since 2019, Chinese private investment in synthetic biology has outpaced U.S. investment.
When it comes to
semiconductors, China has ambitious plans, too. The Chinese government is
funding unprecedented efforts to become a leader in semiconductor manufacturing
by 2030. Chinese companies are already producing what are known in the industry as “seven-nanometer” chips, and Beijing has set its sights further, announcing plans to produce the next generation of “five-nanometer” chips domestically. The
United States outperforms China in semiconductor design, as do U.S.-aligned
Taiwan and South Korea. In October 2022, the Biden
administration took the important step of blocking leading U.S. companies
producing AI computer chips from selling to China as part of a package of
restrictions released by the Department of Commerce. Yet Chinese companies control
85 percent of the processing of the rare-earth minerals that go into these
chips and other critical electronics, offering important leverage over their
competitors.
A Battle Of Systems
The competition
between the United States and China is as much a competition between systems as
between states. In the Chinese model of civil-military fusion, the government
promotes domestic competition and funds emerging winners as “national
champions.” These companies play a dual role, maximizing commercial success and
advancing Chinese national security interests. The American model, on the other
hand, relies on a more disparate set of private actors. The federal government
provides funding to basic science but largely leaves innovation and
commercialization to the market.
The trifecta of
government, industry, and academia was the primary source of American
innovation for a long time. This collaboration drove many technological
breakthroughs, from the moon landing to the Internet. But with the end of
the Cold War, the U.S. government grew averse to allocating funding for
applied research, and it even lowered the amount devoted to fundamental
research. Although private spending has taken off, public investments have
plateaued over the past half-century. In 2015, the share of government funding
for basic research dropped below 50 percent for the first time since the end
of World War II, having hovered around 70 percent in the 1960s. Meanwhile,
the geometry of innovation—the respective roles of public and private players in driving
technological progress—has changed since the Cold War in ways that have not
always yielded what the country needs. The rise of venture capital helped
accelerate adoption and commercialization, but it did little to address
higher-order scientific problems.
The reasons for
Washington’s reluctance to fund the science that serves as the
foundation of innovation power are structural. Innovation requires risk and,
sometimes, failure—something politicians are loath to accept. Innovation can
demand long-term investments, but the U.S. government operates on a single-year
budget cycle and, at most, a two-year political cycle. Despite these obstacles,
Silicon Valley (along with other hot spots in the United States) has encouraged
innovation. The American success story relies on a potent mix of inspiring
ambition, startup-friendly legal and tax regimes, and a culture of openness
that allows entrepreneurs and researchers to iterate and improve on new ideas.
That may be too
little, however. Government support has long played a critical role in
jump-starting innovation in the United States. Research in technologies that
seem outlandish now may prove critical in the not-too-distant future. In 2013,
for example, the Defense Advanced Research Projects Agency invested in
messenger RNA vaccines, working with the biotech company Moderna, which would
later develop and deliver a COVID-19 vaccine in record time. But such examples are rarer than they should
be.
Competition with
China demands reenergizing the interplay among the government, the private
sector, and academia. Just as the Cold War led to the creation of the National
Security Council, today’s tech-fueled competition should spur a rethinking of
existing policymaking structures. As the National Security Commission on
Artificial Intelligence recommended, a new “technology competitiveness
council,” inspired by the NSC, could help coordinate action among private
actors and develop a national plan to advance crucial emerging technologies. In
a promising sign, Congress has recognized the need for decisive support: in 2022, it passed the CHIPS and Science Act in a bipartisan vote, directing $200 billion in funding for scientific R&D over the next ten years.
Investing In The Future
To ensure that it
remains an innovation superpower, the United States must invest billions of
dollars in key areas of technological competition. In semiconductors, perhaps
the most vital technology today, the U.S. government should redouble its
efforts to onshore and “friend-shore” supply chains, relocating them to the United States or friendly countries. In renewable energy, it should fund R&D for microelectronics, stockpile the critical minerals (such as lithium and cobalt) needed for batteries and electric vehicles, and invest in new technologies that can replace lithium-ion batteries and offset China’s resource dominance. Meanwhile, the rollout of 5G in the United States has
been slow, partly because government agencies—most notably, the Department of
Defense—control most of the high-frequency radio spectrum that 5G uses. The
Pentagon should open more of the spectrum to private actors to catch up with
China.
The United States
must invest in all parts of the innovation cycle, funding basic research and
commercialization. Meaningful innovation requires both invention and
implementation, the ability to execute and commercialize new inventions at
scale. This is often the main stumbling block. Research in electric cars, for example, helped General Motors bring its first model onto the market in 1996, but it took two more decades before Tesla mass-produced a commercially viable model. Every new technology, from AI to quantum computing to synthetic
biology, must be pursued with the clear goal of commercialization.
In addition to
directly investing in the technologies that fuel innovation power, the United
States must invest in the input at the core of innovation: talent. The United
States boasts the world’s top startups, incumbent companies, and universities,
all of which attract the best and the brightest from around the world. Yet too
many talented people are prevented from coming to the United States by its
outdated immigration system. Instead of creating an easy path to a green card
for foreigners who earn STEM degrees from American schools, the current system
makes it needlessly difficult for top graduates to contribute to the U.S.
economy.
The United States has
an asymmetric advantage in employing highly skilled immigrants. Its enviable
living standards and abundant opportunities explain why the country has
attracted most of the world’s brightest AI minds. More than half of all AI
researchers working in the United States hail from abroad, and the demand for
AI talent still far exceeds the supply. If the United States closes its doors
to talented immigrants, it risks losing its innovative edge. Just as the
Manhattan Project was largely led by refugees and émigrés from Europe, the next
American technological breakthrough will almost certainly rely on immigrants.
The Best Defense
As part of its
efforts to translate innovation into hard power, the United States must
fundamentally rethink some defense policies. During the Cold War, the country
designed various “offset” strategies to counterbalance Soviet numerical
superiority through military strategy and technological innovations. Today,
Washington needs what the Special Competitive Studies Project has called an
“Offset-X” strategy, a competitive approach through which the United States can
maintain technological and military superiority.
Given how much modern
militaries and economies rely on digital infrastructure, any future great-power
war will likely start with a cyber strike. The United States’ cyber defenses therefore need response times faster than any human’s. Facing constant cyberattacks even in peacetime, the country should armor itself with redundancy, creating backup systems and alternative paths for data flows.
What starts in cyberspace
could easily escalate into the physical realm, and there, too, the United
States will need to meet new challenges. It must invest in defensive artillery
and missile systems to counter possible swarm drone attacks. The U.S. military
should focus on deploying a network of inexpensive AI-powered sensors to
monitor contested areas to improve battlefield awareness. This approach is
often more effective than a single, exquisitely crafted system. As human intelligence becomes harder to obtain, the United States must increasingly rely on a constellation of sensors larger than any other country’s, ranging from the undersea to outer space. It will also need to focus more on open-source intelligence, since most of today’s data is publicly available. Without these capabilities, the United States risks being blindsided by intelligence failures.
When it comes to
actual fighting, military units should be networked and decentralized to
outmaneuver opponents better. Facing adversaries with rigid military
hierarchies, the United States could gain an advantage by using smaller, more
connected units whose members are adept at network-based decision-making,
employing artificial intelligence tools to their advantage. For example, a
single unit could bring together capabilities in intelligence collection,
long-range missile attacks, and electronic warfare. The Pentagon needs to
provide battlefield commanders with all the best information and allow them to
make the best choices on the ground.
The Pentagon’s
burdensome procurement process is a major bottleneck: major weapons systems
take more than ten years to design, develop, and deploy. The Department of
Defense should look for inspiration in how the tech industry designs products.
It should build missiles like companies now build electric cars, using a design
studio to develop and simulate software. It should aim for development processes ten times as fast and as cost-effective as today’s. The current procurement
system is especially ill-suited for a future in which software primacy proves decisive
on the battlefield.
The United States
spends four times as much as any other country to procure military systems, but price is a poor metric for judging innovation power. In April 2022,
Ukrainian forces fired two Neptune missiles at the Moskva, a
600-foot Russian warship, sinking the vessel. The ship cost $750 million; the
missiles $500,000 apiece. Likewise, China’s state-of-the-art hypersonic
antiship missile, the YJ-21, could someday sink a $10 billion U.S. aircraft
carrier. The U.S. government should think twice before committing another $10
billion and ten years to such a vessel. Buying many low-cost items often makes
more sense than investing in a few high-ticket prestige projects.
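The arithmetic behind that judgment is stark. Using the figures cited above, a back-of-the-envelope calculation gives the cost-exchange ratio of the Moskva strike:

```python
ship_cost = 750_000_000      # Moskva, in USD
missile_cost = 500_000       # Neptune missile, in USD, apiece
missiles_fired = 2

ratio = ship_cost / (missile_cost * missiles_fired)
print(f"Dollars of warship destroyed per attack dollar: {ratio:,.0f}")  # 750
```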
Playing To Win
In the contest of the
century—the U.S. rivalry with China—the deciding factor will be innovation
power. Technological advances in the next five to ten years will determine
which country gains the upper hand in this world-shaping competition. The challenge for the United States is that government officials are incentivized to avoid risk and focus on the short term, leaving the country chronically underinvested in the technologies of the future.
If necessity is the
mother of invention, war is the midwife of innovation. Speaking to Ukrainians on
a visit to Kyiv in the fall of 2022, I heard from many that the first months of
the war were the most productive of their lives. The United States’ last truly global war—World War II—led to the widespread adoption of penicillin, a revolution in nuclear technology, and breakthroughs in computer science. Now, the
United States must innovate in peacetime faster than ever before. Failing to do
so erodes its ability to deter—and, if necessary, to fight and win—the next
war.
The alternative could
be disastrous. Hypersonic missiles could leave the United States defenseless,
and cyberattacks could cripple the country’s electric grid. Perhaps even more
important, the warfare of the future will target individuals in completely new
ways: authoritarian states such as China and Russia may be able to collect
individual data on Americans’ shopping habits, location, and even DNA profiles,
allowing for tailor-made disinformation campaigns and even targeted biological
attacks and assassinations. The United States must remain ahead of its
technological competitors to avert these horrors.
The principles that
have defined life in the United States—freedom, capitalism, individual
effort—were the right ones for the past and remain so for the future. These
basic values lie at the foundation of an innovation ecosystem that is still the envy of the world. They have enabled breakthroughs that have transformed everyday
life around the world. The United States started the innovation race in the
pole position, but it cannot rest assured it will remain there. In industry and
geopolitics, Silicon Valley’s old mantra holds: innovate or die.