Datacenter Bans or AI Deregulation? Neither: Prohibit ASI

Connor Leahy, Gabriel Alfour

AI has the potential to be the most impactful technology ever created.

Its impact is great enough that top experts have warned that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” At ControlAI, we alert the public and policymakers to these extinction risks, which we believe are concentrated in superintelligent AI systems (ASI).

Two policies have emerged to deal with these risks, neither of which we support.

1. Banning the construction of new datacenters in the United States

This policy does not directly address the risks from AI, and still leaves open the possibility of building dangerous AI systems elsewhere or with currently existing datacenters. Unilaterally restricting the development of AI would also weaken the US economically and militarily, and put the US’s national security at risk.

2. Deregulating and accelerating the development of AI

Geopolitical theory and economic theory both predict that unregulated global companies will harm their home country’s interests. In practice, AI corporations have regularly acted in ways that undermine American national security.

These actions include releasing open-source software; making cyber-capable AI systems available to everyone (including adversaries) with only superficial guardrails; publishing key techniques in research papers; training employees who then leave for Chinese companies; and de-prioritizing software security in a way that leaves critical IP vulnerable.

Through these actions, American companies have been freely handing strategic assets to their opponents for years.

Furthermore, deregulating AI leaves AI corporations free to pursue the development of the most dangerous AI systems unhindered, risking the creation of the very systems that experts warn threaten humanity. 

Instead, the only way to avert the risk from ASI is to ensure that superintelligent AI is not built by any actor in the world, whether domestic AI corporations or foreign adversaries like China.

This must be done by establishing an international “trust, but verify” regime in order to enforce the prohibition on developing superintelligent AI. 

We do not support unilaterally banning datacenters in the US

To put it briefly, we do not support a domestic ban on the construction of datacenters.

First, we believe the primary danger from AI is the development of ASI. And banning new datacenters in the US will not stop ASI from being developed.

Second, banning the construction of new datacenters is an act of unilateral disarmament. In the current geopolitical climate, we do not believe this is a wise course of action. We recommend instead enforcing a global prohibition through conditional agreements, so that no country leaves itself stranded.

I. Domestically banning datacenters does not stop ASI

At ControlAI, we believe the worst risks associated with AI are concentrated in the development of ASI. Policies that do not directly target ASI will predictably fail at addressing these risks. Banning the domestic construction of datacenters is one such policy.

Let us say we successfully banned the domestic construction of new datacenters.

In this scenario, AI corporations now cannot build new datacenters. But building new datacenters is not their goal: many AI corporations with deep resources, such as OpenAI, Google DeepMind, and Meta, have publicly stated they are building superintelligence, while others such as Anthropic use euphemisms like “a country of geniuses in a datacenter”.

If they cannot build new datacenters, these companies will simply put this money and talent towards building ASI in other ways. 

One obvious way to pursue ASI is to direct more of their resources toward improving the algorithms that AI systems are based on. Such improvements have been as critical to building more powerful AI as scaling has been. For reference, according to Andrej Karpathy, an ex-OpenAI employee, the original development of GPT-2, one of the earliest language models out of OpenAI, may have cost around $43,000. Training an equivalent model today costs around $73, a more than five-hundred-fold improvement in training efficiency.
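
Taking these figures at face value (they are Karpathy’s estimates, not audited training costs), the efficiency gain is simply the ratio of the two amounts:

$$\frac{\$43{,}000}{\$73} \approx 589 > 500$$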

Existing infrastructure compounds this problem. AI corporations have already constructed a massive number of datacenters and trained incredibly powerful AI systems. As algorithms grow more efficient, these existing datacenters can run increasingly powerful models, potentially including ASI, without a single new facility being built.

Another obvious way for US companies is to simply move ASI development abroad. US companies can relocate the construction of datacenters to foreign countries such as the UAE or China. The New Yorker reported that Greg Brockman, the president of OpenAI, floated doing just this in the company’s early days, proposing to auction OpenAI off to Russia and China:

“According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them.”

Even setting aside algorithmic improvements and offshoring, AI corporations may still find other paths to ASI. Banning the construction of new datacenters domestically merely constrains one part of the supply chain needed to build ASI, rather than directly banning the building of ASI itself.

Finally, even preventing US corporations from building ASI anywhere in the world isn’t enough, as ASI may still be developed abroad. The other leading AI power outside the US is China, where leading internet companies like Alibaba explicitly pursue building ASI as a goal. A plan that leaves adversaries free to build ASI cannot prevent ASI.

II. A domestic datacenter ban is unilateral disarmament

While banning datacenters domestically will not stop the development of ASI, it may be understood as a blunt regulation of an ASI precursor.

At ControlAI, we draw a distinction between ASI and ASI precursors. ASI are systems that are powerful enough to outsmart, outdo and outcompete humans and humanity. ASI precursors comprise all the tools, technologies, techniques, equipment and resources that are critical to the development of ASI.

We believe that it is good for countries to ban ASI unilaterally. ASI is uncontrollable, and the presence of uncontrollable systems that can stage a coup is as much a threat to the international order and humanity as it is to national security and the US government.

However, ASI precursors are more than solely precursors to ASI. They are often dual-use technologies and resources that are helpful in a wide variety of contexts. In the following section, the precursors that we will consider are compute and the AI systems currently in circulation.

ASI precursors are widely used in modern warfare, from the Russo-Ukrainian war to the latest US-Iran conflict, and have played a critical role on the battlefield. China is also investing massively in warfare-capable AI systems. In the same vein, Claude Mythos, recently announced by Anthropic, is already superhuman at finding and exploiting vulnerabilities in computer systems.

ASI precursors are not necessarily military in nature. Many are more relevant to scientific and economic power. For example, AlphaFold and its follow-ups have enabled large breakthroughs in biology, while agentic AI models are increasingly used in mathematics, and AI systems are used for a majority of software development tasks. Sacrificing these scientific and economic benefits would directly sacrifice our military and diplomatic power.

ASI precursors can be and often are strategic. Unilaterally banning any of them would weaken America. In effect, a unilateral domestic datacenter ban would be made in the hope that other countries, seeing our gesture of goodwill, would take similar action in turn. We would be taking a leap of faith that other countries would reciprocate.

We do not believe that, in the current geopolitical situation, it is warranted to blindly trust foreign countries, especially adversaries, to this degree. If the US unilaterally banned domestic datacenter construction, adversaries like China would have no reason to stop developing ever more powerful AI systems.

What we recommend instead is a global regime of prohibition, regulation, and monitoring. We advise establishing it through international agreements based on thresholds, whose provisions only kick in once a critical mass of actors has joined. That way, no country has to unilaterally take a “leap of faith”, and policies can be established on solid foundations rather than blind trust.
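
As a toy illustration of how such threshold-conditional provisions work (the country names, compute shares, and 80% threshold below are hypothetical assumptions, not proposed treaty terms), the mechanism can be sketched in a few lines:

```python
# Toy model of a threshold-conditional agreement: obligations stay dormant
# until signatories covering a critical mass of global AI compute have joined.
# All figures here are illustrative assumptions, not real treaty parameters.

ACTIVATION_THRESHOLD = 0.80  # hypothetical: 80% of global frontier compute

def provisions_active(signatories: dict[str, float]) -> bool:
    """signatories maps country -> estimated share of global frontier compute.
    Returns True once ratifying countries jointly cover the threshold."""
    return sum(signatories.values()) >= ACTIVATION_THRESHOLD

# An early signer is not yet bound while coverage is below the threshold...
assert not provisions_active({"US": 0.40, "UK": 0.03})
# ...but obligations kick in for everyone once a critical mass has joined.
assert provisions_active({"US": 0.40, "China": 0.35, "EU": 0.10})
```

This is the same logic as an assurance contract: no participant is bound until enough others have committed, so no one disarms alone.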

A unilateral domestic ban directly violates this position, and thus, we do not recommend it.

We oppose accelerating and deregulating AI

We are also strongly opposed to accelerating and deregulating the development of AI in the current circumstances.

The accelerated and deregulated push to develop more powerful AI systems in pursuit of ASI directly threatens US national security, and has already repeatedly undermined it. Nor is the threat limited to the US: the development of ASI threatens everyone in the world, with stakes as high as human extinction.

I. The unregulated development of AI threatens America’s national security

In pursuit of ASI, American AI corporations have already repeatedly undermined American national security. If left unregulated, they will continue to do so.

As Greg Brockman demonstrated when he considered auctioning OpenAI to China and Russia, private companies have shown that they do not prioritize American national security. This is wholly expected from geopolitical and economic theory, which predict that unregulated companies will act in ways that benefit themselves but undermine the national security of their host countries.

Nowhere is this clearer than in how open the culture of major AI corporations is. They regularly publish key technical results in publicly available scientific papers and make critical benchmarks and data public.

They generally give the public access to their most advanced AI systems. China has used this public access to distill cutting-edge AI systems into its own versions, which are nearly as strong. Public access has also been abused to direct AI systems hosted by American companies to conduct widespread cyberattacks against American targets.
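
For readers unfamiliar with distillation: a smaller “student” model is trained to imitate a “teacher” model’s output distribution, and a public API’s sampled outputs can stand in for the teacher. Here is a minimal sketch of the textbook distillation loss (Hinton et al., 2015), assuming PyTorch; this shows the generic technique, not any particular company’s pipeline:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    distributions, the standard knowledge-distillation objective."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # "batchmean" is the correct KL reduction; T^2 rescales the gradients.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```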

Contrast this culture with other strategic areas, such as military or nuclear technology. We would never allow Lockheed Martin to publish critical details regarding weapons systems like the F-35, much less make the actual weapon systems freely available to the public.

When AI corporations do try to keep things private, they do not take sufficient precautions to secure their datacenters and AI systems. In contrast with other critical technologies like nuclear weapons, or even private projects by military contractors, AI corporations do not follow ITAR security regulations, nor do they require their employees to hold security clearances.

Given this lack of security, it is no surprise that their systems can be compromised. Anthropic did not publicly release their latest AI system, Claude Mythos, due to concerns over its superhuman cyber capabilities. Unauthorized individuals have already accessed Mythos, and it is rumored to have been leaked to China.

When they are not directly giving critical technology away or failing to secure it, their employees often leave for foreign countries.

Ilya Sutskever, one of the cofounders of OpenAI, formed a new ASI company called SSI in Israel. Yann LeCun from Meta is spinning off a new AI company in France. Employees also regularly leave, taking their technical expertise and inside knowledge to foreign countries, including China. Several employees from Google DeepMind, OpenAI, and Meta’s FAIR research laboratory have returned to China to work at Chinese tech corporations or to co-found Chinese AI startups.

II. Development of superintelligent AI threatens the world

Beyond undermining American national security, the development of ASI and sufficiently powerful AI systems effectively creates an “enemy from within” that we cannot control.

Powerful AI systems may act on their own to seize power from the American government, or enable the companies that create them to do so. AI corporation CEOs even hint at this possibility in their own writing about the AI systems they are building:

“It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk [of people using AI to seize power from existing governments] is actually AI companies themselves.”

They also warn that AI systems may wreak havoc across our country, unleashing previously unheard-of bioterrorism or cyberattacks:

“I am concerned that a genius in everyone’s pocket could remove that barrier, essentially making everyone a PhD virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon step-by-step…

“I expect AI-led cyberattacks to become a serious and unprecedented threat to the integrity of computer systems around the world.”

In the most extreme case, top experts from the godfathers of deep learning to top scientists in academia and industry and even AI corporation CEOs warn that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

When faced with corporations developing AI systems that could wreak havoc on the country, potentially overthrow the American government, or even cause the extinction of humanity, unregulated acceleration is extremely reckless. 

As developing ASI undermines US national security, and may lead to human extinction, we strongly oppose racing to ASI.

An international agreement

If domestically banning datacenters and deregulating AI are both bad policies for helping the US manage the risks of AI, then what should we do?

The only known way to prevent the risk of extinction from ASI is to not build it anywhere in the world. To do so, we advise international agreements that prohibit the development of ASI, and that regulate and monitor the precursors to ASI.

As we cannot trust that other countries, especially adversaries, will follow this prohibition out of the goodness of their hearts, all international agreements must be enforced by strict mutual verification and monitoring regimes, along with escalating consequences for countries that do not follow the agreement.

Such consequences may include export controls on countries that build datacenters above a certain level; sanctions, tariffs, or trade embargoes on countries that automate too much of their economy or integrate general-purpose AI into the command-and-control loops of their military; and, ultimately, recognition that countries possess a legitimate right to self-defense against countries that pursue ASI.
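
As a loose sketch of what such an escalation ladder might look like (the tiers and triggers below are our illustrative assumptions, not provisions of any existing agreement):

```python
from enum import Enum, auto

class Violation(Enum):
    """Hypothetical violation tiers, ordered by severity."""
    EXCESS_DATACENTER_BUILDOUT = auto()  # compute above an agreed threshold
    MILITARY_AI_INTEGRATION = auto()     # general-purpose AI in C2 loops
    ASI_DEVELOPMENT = auto()             # direct pursuit of superintelligence

# Illustrative mapping from violation tier to escalating consequences;
# real tiers and responses would be negotiated, not hard-coded.
CONSEQUENCES: dict[Violation, list[str]] = {
    Violation.EXCESS_DATACENTER_BUILDOUT: ["export controls"],
    Violation.MILITARY_AI_INTEGRATION: ["sanctions", "tariffs", "trade embargoes"],
    Violation.ASI_DEVELOPMENT: ["recognized right of self-defense"],
}
```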

We must “trust, but verify”, not blindly trust that other countries will reciprocate unilateral actions like banning the construction of datacenters in America.

Similarly, adversaries will be able to verify that we are following the agreement, so they will not be driven to escalate out of fear that ASI is being developed inside the United States.

Conclusion

ControlAI supports neither of the emerging positions: neither banning the construction of new datacenters in America to stop ASI, nor accelerating and deregulating AI development.

Prohibiting the construction of datacenters fails to stop the development of ASI, while unilaterally weakening the United States on the international scene.

Racing to build ASI risks human extinction, while AI corporations take reckless action that undermines American security.

Instead, we recommend preventing the development of ASI globally through an international prohibition on superintelligence, enforced by mutual monitoring and assurances as well as measures to compel every country's compliance with the prohibition.
