By Eric Vandenbroeck and co-workers
The Cost of the AGI Delusion
In early August, one
day before releasing GPT-5, OpenAI CEO Sam Altman
posted an image of the Death Star on social media. It was just the latest
declaration by Altman that his new AI model would change the world forever. “We
have discovered, invented, whatever you want to call it, something
extraordinary that is going to reshape the course of human history,” Altman
said in a July interview. He compared his company’s research to the Manhattan
Project and said that he felt “useless” compared with OpenAI’s newest
invention. Altman, in other words, suggested that GPT-5 would bring society
closer to what computer scientists call artificial general intelligence: an AI
system that can match or exceed human cognition, including the ability to learn
new things.
For years, creating
AGI has been the holy grail of many leading AI researchers. Altman and other
top technologists, including Anthropic CEO Dario Amodei
and computer science professors Yoshua Bengio and Stuart Russell, have been
dreaming of constructing superintelligent systems for decades—as well as
fearing them. And recently, many of these voices have declared that the day of
reckoning is near, telling government officials that whichever country invents
AGI first will gain enormous geopolitical advantages. Days before U.S.
President Donald Trump’s second inauguration, for example, Altman told Trump
that AGI would be achieved within his term—and that Washington needed to
prepare.
These declarations
have clearly had an effect. Over the last two years, Democratic and Republican
politicians alike have been discussing AGI more frequently and exploring
policies that could unleash its potential or limit its harms. It is easy to see
why. AI is already at the heart of a range of emerging technologies, including
robotics, biotechnology, and quantum computing. It is also a central element of
U.S.-China competition. AGI could
theoretically unlock more (and more impressive) scientific advancements,
including the ability to stop others from making similar breakthroughs. In this
view, if the United States makes it first, American economic growth might
skyrocket and the country could attain an unassailable military advantage.
There is no doubt
that AI is a very powerful invention. But when it comes to AGI, the hype has
grown out of proportion. Given the limitations of existing systems, it is
unlikely that superintelligence is actually imminent, even though AI systems
continue to improve. Some prominent computer scientists, such as Andrew Ng,
have questioned whether artificial general intelligence will ever be created.
For now, and possibly forever, advances in AI are more likely to be iterative,
like other general-purpose technologies.
The United States
should therefore treat the AI race with China like a marathon, not a sprint.
This is especially important given the centrality of AI to Washington’s
competition with Beijing. Today, both China’s new tech firms, like
DeepSeek, and its existing powerhouses, like Huawei, are increasingly keeping pace
with their American counterparts. By emphasizing steady advancements and
economic integration, China may now even be ahead of the United States in terms
of adopting and using robotics. To win the AI race, Washington thus needs to
emphasize practical investments in the development and rapid adoption of AI. It
must not distort U.S. policy by dashing toward something that may never exist.

Wildest Dreams
In Washington, AGI is
a hot topic. In a September 2024 hearing on AI oversight, Connecticut Senator
Richard Blumenthal declared that AGI is “here and now—one to three years has
been the latest prediction.” In July, South Dakota Senator Mike Rounds introduced
a bill requiring the Pentagon to establish an AGI steering committee. The
bipartisan U.S.-China Economic and Security Review Commission’s 2024 report
argued that AGI demanded a Manhattan Project–level effort to ensure the United
States achieved it first. Some officials even believe AGI is about to
jeopardize human existence. In June 2025, for instance, Representative Jill
Tokuda of Hawaii said that “artificial superintelligence, ASI, is one of the
largest existential threats that we face.”
The fixation on AGI
goes beyond rhetoric. Former Biden administration officials issued executive
orders that regulated AI in part based on concerns that AGI is on the horizon.
Trump’s AI Action Plan, released in July, may avoid explicit mentions of AGI. But
it emphasizes frontier AI, infrastructure expansions, and an innovation-centric
race for technological dominance. It would, in the words of Time magazine,
fulfill “many of the greatest policy wishes of the top AI companies—which are
all now more certain than ever that AGI is around the corner.”
The argument for
dashing toward AGI is simple. An AGI system, the thinking goes, might be able
to self-improve simultaneously along multiple dimensions. In doing so, it could
quickly surpass what humans are capable of and solve problems that have vexed society
for millennia. The company and country that reaches that point first will thus
not only achieve enormous financial returns, scientific breakthroughs, and
military advancements but also lock out competitors by monopolizing the
benefits in ways that restrict the development of others and establish
the rules of the game. The AI race, then, is really a race to a predetermined
AGI finish line in which the winner not only bursts triumphantly through the
ribbon but picks up every trophy and goes home, leaving nothing for even the
second- and third-place competitors.
Yet there is reason
to be skeptical of this framing. For starters, AI researchers can’t even agree
on how to define AGI and its capabilities; in other words, no one agrees on
where the finish line is. That makes any policy based on achieving it inherently
dubious. Instead of a singular creation, AI is more of a broad category of
technologies, with many different types of innovations. That means progress is
likely to be a complex and ever-changing wave, rather than a straight-line
trip.
This is evident in
the technology’s most recent developments. Today’s models are making strides in
usability. The most advanced large language models, however, still face many of
the same challenges they faced in 2022, including shallow reasoning, brittle
generalization, a lack of long-term memory, and a lack of genuine metacognition
or continual learning—as well, of course, as hallucinations. Since its release,
for instance, GPT-5 has looked more like an incremental advance than a
transformative breakthrough. As a result, some of AGI’s biggest proponents have
started tempering their enthusiasm. At the start of the summer, former Google
CEO Eric Schmidt said that AI wasn’t hyped enough; now, he argues that people
have become too obsessed with “superintelligent” systems. Similarly,
in August, Altman declared that AGI is “not a useful concept.” In some ways,
when it comes to AGI, the computer science world may still be where it was in
2002, when the then director of MIT’s AI lab joked that the true definition of AI
was “almost implemented.”
Even if some AI
models do prove transformative, their effects will be mediated by adoption and diffusion
processes—as happens with almost every invention. Consider, for example,
electricity. It has generated untold value and utterly transformed the global
economy, but it became useful thanks to the thousands of scientists, engineers,
inventors, and companies who worked on it over the course of decades. Benjamin
Franklin proved lightning was electricity in 1752, Alessandro Volta invented
the first battery in 1799, and Nikola Tesla developed alternating current in the
late 1880s. Even then, it took many more years before most homes had power
outlets. All of these innovations were critical to reaching that eventual
endpoint, and no one actor captured the global market for electricity or
effectively prevented others from continuing to innovate.
The modern combustion
engine provides another case in point. It was invented in 1876 by the German
engineer Nikolaus Otto but was advanced and improved upon over the course of
several decades before automobiles went mainstream. Companies around the world
ultimately achieved massive gains from automobiles, not just German ones
(although the German auto industry is, of course, very successful). Perhaps the
most prominent early leader, the Ford Motor Company, was American, and it first
dominated the car market thanks to its innovations in production, not engines.

Innovation and Adaptation
If AI competition is
more likely to span a generation than just a few more years, American officials
need to think more about how the country can quickly adopt AI advances and less
about how to summon AI’s speculative potential. This is closer to what Beijing
does. Although the United States and China are very different and the latter’s
approach has its limits, China is moving faster at scaling robots in society,
and its AI Plus Initiative emphasizes achieving widespread industry-specific
adoption by 2027. The government wants AI to essentially become a part of the
country’s infrastructure by 2030. China is also investing in AGI, but Beijing’s
emphasis is clearly on quickly scaling, integrating, and applying current and
near-term AI capabilities.
To avoid falling
behind in AI adoption within the bureaucracy, the United States
should launch a large-scale AI literacy initiative across the government.
Public employees of all kinds need to know how to use both general AI systems
and ones tailored to their jobs. American officials should offer expanded
access to AI training both for their particular roles and for general use,
including training on issues like automation bias (in which people overestimate
the accuracy of AI systems). To do so, Washington can take advantage of the
fact that major American companies, including OpenAI and Anthropic, are willing
to give public employees and agencies more exposure and access to their
technologies, allowing the state, at least for now, to use their large language
models virtually for free.
The United States
must also modernize its infrastructure and data practices, including within the
national security apparatus. Advanced AI models require sophisticated hardware,
adequate computing power, and state-of-the-art knowledge management systems to
operate effectively. And today, Washington is behind on each. The government
has started to make some progress on upgrading its systems, but decades of
siloing and bureaucratic processes have created entrenched lags that are
hindering innovation. To achieve AI adoption at scale, Washington will likely
need to invest billions of dollars in procurement over the next few years,
especially for the Pentagon.
Done right, AI could
revolutionize the government’s efficiency. Even if it helps only in mundane areas,
such as energy load optimization, cybersecurity and IT, predictive maintenance,
logistics, supply chain management, and acquisition paperwork, it will allow
large bureaucracies to overcome or eliminate regulatory hurdles. That could,
in turn, fuel more private-sector adoption. Right now, private sector pilot
projects with frontier AI sometimes fail to successfully transition from
prototype to full capability, often because of integration challenges or
misalignment between a proposed AI solution and the problem it targets. By some
estimates, more than 80 percent of AI projects fail to deliver results.
Industry surveys report that 88 percent of pilots never reach production. The
research and advisory firm Gartner projects that 40 percent of “agentic AI” deployments—autonomous
AI systems capable of planning and executing multi-step tasks with minimal or
no human oversight—will be scrapped by 2027. By placing greater value on and
demonstrating how AI can be integrated into large, complex bureaucracies, the
government can help forge a pathway for private companies, lowering their
perceived risks. By adopting AI, Washington can also create a demand signal for
scalable, near-term AI applications.
But protecting
American AI leadership will require the government to do more than just help
itself and the private sector. The United States will also need to invest in
universities and researchers who can make invaluable technical breakthroughs in
AI safety, efficiency, and effectiveness, but lack the capacities of big firms.
The Trump administration must therefore follow through on its plan to expand
support for the National AI Research Resource, a nascent, government-provided
consortium of AI infrastructure that would provide researchers, educators, and
students with the specialized tools they need for advanced AI work.
None of these steps
means U.S. officials should abandon thinking about AGI. In fact, some of the
best policies for ensuring AI leadership today will also hasten the arrival of
more advanced systems. Any policy that supports AI research and development, such
as the immense investment in technology mandated by the 2022 CHIPS and Science
Act, will lead to more sophisticated algorithms. So will continued investment
in the country’s power infrastructure, which helps the energy-intensive AI
industry grow and function.
But Washington must
ensure that the pursuit of AGI does not come at the expense of near-term
adoption. Racing toward a myth is not sound policy. Instead, the country’s
primary goal must be rapidly scaling practical AI applications—improvements
that meet government needs and deliver real efficiencies today and tomorrow.
Otherwise, the United States could keep producing the world’s fanciest models.
It could lead in algorithm creation. But it will still fall behind countries
that make better use of AI innovations.