By Eric Vandenbroeck and co-workers

The United States’ lead in artificial intelligence might seem unassailable. U.S. companies—Anthropic, Google, OpenAI, and xAI—are leading the way across almost all assessments of the technology’s general capabilities. American AI models are outperforming doctorate-level scientists on challenging questions in physics, chemistry, and biology. Just a few American AI and chip giants are worth more than the entire Chinese stock market, and investors from across the world are plowing ever more resources into the American AI ecosystem.

This breakneck progress is, in many ways, a testament to the strengths of the model of American AI development that has dominated for the last decade: letting the private sector operate on its own, with remarkably little direct government meddling or resourcing. This approach is quite different from those that ushered in past breakthrough technologies. Nuclear weapons and power, space travel, stealth systems, personal computing, and the Internet emerged either directly from U.S. government efforts or on the back of significant public funding. AI also has roots in government-funded science, including in personal computing and the Internet, and it benefits from ongoing government-supported research. But scaling up AI has been essentially a private-sector activity.

Yet there is reason to think the American way of developing AI is reaching its limits. Those limits will likely become increasingly evident in the coming months and years, and they will start to erode—and perhaps even end—U.S. dominance. Eventually, they will place the United States at a disadvantage against China, which has an alternative approach to the AI contest.

To avoid that outcome, Washington will need to embrace new ways of advancing AI development, ones that demand much tighter mutual support between the private sector and the state. Further progress now depends on resources and capabilities that only the government can provide or facilitate: the energy to power ever-larger data centers, a pipeline of international talent, and effective defenses against sophisticated foreign espionage efforts. The U.S. government, for its part, will need the cooperation of the private sector to integrate AI into the national security apparatus and to make sure the technology does not undermine democracy across the world.

The new American model of AI, in other words, must rest on a grand bargain between the tech industry and the government. The tech sector can help the state make sense of and deploy AI. The state can help the tech sector continue to grow in a way that advances everyone’s interests.

Data storage tapes at a computing center in Berkeley, California, May 2025

 

Maxing Out

It is easy to see why Washington’s light-touch approach to AI has, by and large, paid dividends. Past revolutionary technologies, such as nuclear weapons and space flight, did not have immediate commercial applications. But the business case for modern AI is already highly compelling. AI firms have found huge user demand, resulting in skyrocketing revenues, and they have promised to automate myriad valuable tasks, such as coding. As a result, capital markets are funding AI projects at scales that would historically have required government resources. Moreover, the computation-centric nature of today’s AI means that it builds neatly on the cloud computing infrastructure that the private sector, not the government, has mastered.

That private capital has sufficed to fund AI advances is a boon for taxpayers, but the limits of this approach are becoming apparent. To see why, look at infrastructure. The vast fleets of computer chips needed to develop and use today’s AI require extraordinary amounts of energy, so U.S. companies will need more power to fuel the data centers they plan to build in the coming years. An analysis by Anthropic estimated that the United States will need to produce 50 gigawatts of new power just for AI by 2028—roughly equivalent to what the entire country of Argentina uses today. (One of us, Buchanan, advises AI and cybersecurity companies, including Anthropic.) By then, data centers could consume up to 12 percent of American electricity production. Without more electricity, the AI build-out will stall. Amazon’s CEO, Andy Jassy, for example, has labeled power the “single biggest constraint” to AI progress. And building this level of new infrastructure will require government help.

For too long, Washington did too little to add new power to its grid. From 2005 to 2020, the United States added close to zero net new power capacity. After U.S. President Joe Biden took office, in 2021, and passed a law subsidizing the construction of clean energy infrastructure, the country added more than 100 gigawatts in new capacity. In the last days of his term, he signed an executive order specifically aimed at further expediting the AI and clean energy build-out. But although his successor, Donald Trump, has said the right things about building new energy infrastructure for AI, he has not delivered. He signed an executive order to accelerate federal permitting for data centers, but implementation remains nascent. Worse yet, his signature “One Big Beautiful Bill,” passed in July, and other executive actions gutted key parts of Biden’s energy expansion efforts, such as vital transmission projects. An area that could have been a bipartisan success fell prey to politics and has now become a major concern for business and AI competitiveness.

Executed well, an AI-fueled energy boom would have benefits far beyond AI development itself. Leading AI companies are investing hundreds of billions of dollars in infrastructure development, creating employment opportunities. They have committed to carbon-free operations and demonstrated a willingness to pay higher prices for clean energy. These massive investments can accelerate the domestic development of better energy sources, many of which have bipartisan appeal, such as advanced geothermal power and next-generation nuclear facilities. Powerful AI models could also accelerate climate-related research.

If the United States does not construct more energy capacity, however, American AI firms will feel pressure to outsource the development of strategically critical facilities—likely to oil-rich regions such as the Gulf that run on dirtier fuel. For Washington, any prospect of offshoring AI should set off alarm bells. An American company shifting advanced AI training to a foreign country, especially an autocratic one, would pose huge risks as AI begins to power more of the U.S. economy and to play an integral role in defense. If a host country became unhappy with American behavior, it could punish Washington with the flick of a switch. A failure to build domestic energy capacity would thus echo the outsourcing mistakes of past decades in other important industries, such as semiconductors, in which the United States is now dependent on foreign suppliers.

The United States has the technology and industrial capacity needed to build new energy facilities. But it remains inhibited by a thicket of government and utility regulations and by procedural delays—some backed by good reason, some not. These restrictions impose huge delays in interconnection (the process of connecting a new power source or data center to the grid) and require years-long environmental assessments. On top of federal and utility hurdles, state and local policies can be cumbersome, especially for projects that cross multiple states, such as transmission lines. Companies—not citizens—should pay for the energy build-out, but government policies must make it possible for them to undertake these projects on reasonable timelines.

Deeper collaboration between the public and private sectors, as well as with civil society, does not guarantee that the state will make the right calls. But it does give Washington a fighting chance of securing a net-positive outcome. With a stronger technical foundation, officials can better understand how reliably AI systems follow instructions, how they handle dangerous tasks, in which areas they can replace human labor, and to what extent they favor offense versus defense in security and safety domains.

The rise of the Center for AI Standards and Innovation at the Department of Commerce (founded as the AI Safety Institute under the Biden administration) represents a valuable initial step to build meaningful collaboration. Since its inception, CAISI has brought together government officials and companies to collaborate on safety issues. It has also aided in the development of standardized testing mechanisms for AI. CAISI has worked alongside other agencies with domain-specific expertise to carry out additional voluntary testing on particularly critical topics, such as partnering with the Department of Energy and the AI company Anthropic to assess whether frontier AI models have dangerous knowledge about nuclear weapons. CAISI featured prominently in Trump’s AI Action Plan, and the administration must empower it to carry out voluntary collaboration with companies, to set standards, and to conduct safety testing.

Thanks to CAISI’s work and the voluntary commitments that leading AI companies made to the Biden White House, AI firms have already promised to conduct independent safety testing of their models, often based on CAISI guidance. In some cases, companies have even agreed to grant CAISI access to new systems before they are released and have praised the government for the national security–specific expertise it has offered in return. Both sides should deepen this collaboration, spending more time and resources building high standards and conducting rigorous assessments of new models.

At the World Artificial Intelligence Conference in Shanghai, July 2025

 

From the Government, Here To Help

Grand bargains often work better as tag lines than as policy, and getting the right kind of deal when it comes to AI is easier said than done. The technology, after all, is rapidly progressing along an unpredictable path. As AI improves, ever-larger amounts of infrastructure, power, and money will be required; the need for improved security from foreign intelligence threats will increase; and the urgency of collaboration with the defense apparatus will grow. So will the risks of misuse, prompting new policy tradeoffs. More startups will arrive on the scene, and legacy companies that today look unstoppable may fall by the wayside. Everyone involved in the AI world should prepare for constant renegotiation and rebalancing. U.S. officials, for their part, will almost certainly have to remain agile, experimenting with different AI policies as time goes on.

But amid this uncertainty, Washington must take a more active role in enabling and shaping the American AI ecosystem. The technology does not need to develop as nuclear weapons did—under strict state control—but Washington cannot sit this one out. Instead, AI should perhaps evolve as the American railroads did in the 1800s. The private sector handled most planning and construction, but the government played a vital role, as well. It organized laws and permits for building the infrastructure. It passed carefully calibrated, common-sense safety requirements—such as standardized track gauges, rules for the use of air brakes, and requirements for car coupling—which all helped make trains both faster and safer. The collaboration was not perfect, but it worked: American railroads became a national asset that increased the United States’ security and prosperity. Advanced AI, too, can promote U.S. power and interests, provided it is developed in the right way and under the right set of arrangements. Now, as before, it is time for the public and private sectors to stand shoulder to shoulder.

 

 
