A Race We All Lose

ControlAI

Aug 2, 2024

Control AI’s Response to Situational Awareness

With increasing calls for the US to initiate a dangerous AI development arms race with China, we thought it was important to reflect on some of the key claims made in Leopold Aschenbrenner’s recent ‘Situational Awareness’ essay series, which advocated for such a race and brought the discussion to the fore.

In ‘Situational Awareness’, Leopold Aschenbrenner delivers a striking message: the development of artificial superintelligence — a form of advanced AI that surpasses human intelligence in practically every field — is akin to an arms race. His starkest claim is that superintelligence will give a decisive military advantage to the first country to build it, allowing that country to impose its will internally, externally, and decisively.

Faced with this potential reality, Aschenbrenner concludes that the only realistic way to ensure the free world prevails is for a US-led alliance of Western democracies to consolidate a ‘healthy lead’ in the race to develop superintelligence. A ‘healthy lead’, he claims, would give the alliance the opportunity to win the race while also devoting time to making superintelligence safe.

Due to the inherent risks of developing this technology, including risks to national security, Aschenbrenner believes responsibility for orchestrating this effort would fall to the US government.

However, does the development of superintelligence inevitably require an arms race? We believe this is not the case. Aschenbrenner’s solution hinges on several assumptions:

  • First, that a country can win the race and develop superintelligence safely, despite there being no current understanding of how to control such an intelligence.

  • Second, that racing to build a ‘decisive strategic advantage’ over other countries, especially those with nuclear capabilities, will not be perceived as a national security threat worthy of a pre-emptive military response.

  • Third, that positive international negotiation with China is not possible.

In this article, we show that these assumptions are flawed and put us at greater risk by encouraging a dangerous race in AI development. In our view, it is crucial to build a collaborative international approach instead of rushing into dangerous racing dynamics. We seek a more ambitious and mature future for humanity where we are able to harness the benefits of AI while ensuring we stay in control.

Discussion of Aschenbrenner’s assumptions

Flawed assumption 1: The race is winnable

Aschenbrenner argues that superintelligence confers a ‘decisive military advantage’: whoever ‘wins’ this race will not only succeed in developing superintelligence but also be able to prevent others from developing their own.

However, Aschenbrenner himself acknowledges that there is currently no known method of ensuring advanced AI systems, including superintelligence, remain under our control — whether they are being developed by private actors like OpenAI or public actors like the US government. To assert that the first country to build superintelligence will gain a decisive military advantage through this technology, one must first assume that it is possible to develop and deploy superintelligence safely. But this is not feasible in a scenario where safety is compromised in favour of rapid AI development.

In other words, a race dynamic makes it more difficult for actors to develop superintelligence and control it. If safety takes a back seat, it is likely that countries developing superintelligence will inadvertently create dangerous models in the lab or cause catastrophic events, potentially leading to human extinction, well before any military advantage can be realised. In this context, the balance of power among competitors in the hypothetical race is irrelevant: we are all subject to the same risks from superintelligence, and we all lose.

Flawed assumption 2: Racing to build a ‘decisive strategic advantage’ over other countries will not risk a pre-emptive response

A race to build superintelligence creates a world where everyone fears that coming in second will result in a decisive and irreversible economic and military disadvantage. Racing to build superintelligence will therefore likely be perceived as a national security threat and will incentivise countries to prevent others from securing such an advantage — whether through diplomatic, economic, or potentially military means.

This could, for instance, mean sabotaging compute facilities and research labs, potentially escalating to military strikes and even outright war. This dynamic is particularly worrying in a world where most major countries or alliances have nuclear capabilities. A scenario in which several countries are incentivised to use nuclear weapons to avoid being decisively defeated is one where no one can win.

A race to build superintelligent AI is therefore one we cannot win: it makes it more difficult to control the technology and unnecessarily risks catastrophic outcomes.

Flawed assumption 3: Positive international negotiation is not possible

Aschenbrenner asserts that the idea of an international treaty on safety is ‘fanciful’. He believes China would only agree to negotiate if it had somehow already given up any hope of contesting dominance in AI. However, we should explore other strategies to facilitate negotiation.

Historically, even the most bitter adversaries of the US have engaged in negotiations on areas of mutual interest, as exemplified by the Strategic Arms Limitation Talks (SALT) with the Soviet Union during the Cold War. Moreover, there is limited reason to believe that negotiation with China is impossible.

China has cooperated internationally on previous societal-scale risks (e.g., the Montreal Protocol on the ozone layer and the nuclear Non-Proliferation Treaty). Most recently, China joined other major countries in signing the Bletchley Declaration, which acknowledged the potential for catastrophic harm stemming from capable AI models, and in which countries resolved to work together to address AI risks.

China’s signing of the Declaration reflects some awareness of the risks involved in advanced AI development — risks which will only multiply if AI development is transformed into an arms race. There is therefore no reason to skip these steps and jeopardise the potential for collaboration by turning AI development into an arms race that we all lose.

Conclusion

Given the stakes, it is absolutely worth attempting to negotiate with China and other countries on the risks of advanced AI, and to try to build a ‘trust-but-verify’ approach before dashing headlong into an arms race that no side can be sure it will survive.

The choice now is whether we first try to build that international agreement proactively, or whether nations race to build their own superintelligent AI and then seek to prevent proliferation through force or subsequent agreement.

The latter approach opens us all up to disaster: in such a race, countries are likely to fail to build superintelligence without triggering catastrophe, possibly even human extinction. Moreover, nation states do not usually respond well to threats, and failing to at least substantively attempt a collaborative approach first is not realism but a dereliction of duty.

N.B.: The geopolitical implications of advanced AI are matters of serious debate. While we disagree with much of Aschenbrenner’s answer, we agree that the issue demands detailed and serious thinking and writing.
