Cold War to Code War

ControlAI

Aug 15, 2024


In an eerie echo of the warnings given by the most prominent scientists of the 20th century on the risks of nuclear development, the developers of frontier AI systems are warning of the potential dangers of this technology. In the words of OpenAI CEO Sam Altman, "The bad case — and I think this is important to say — is, like, lights out for all of us."

This is not unlike what Leo Szilard, one of the key scientists involved in the Manhattan Project, had to say on the development of nuclear armament: “We turned the switch, saw the flashes, watched for ten minutes, then switched everything off and went home. That night I knew the world was headed for sorrow.”

We knew then what we know now: uncontrolled proliferation of technology with devastating capabilities has the potential to unleash catastrophic events that could well mean the extinction of humanity. Sam Altman has echoed many world leaders, top scientists, and other CEOs in saying that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The echoes of our nuclear past are far from reassuring. History shows that humanity has teetered on the brink of nuclear war multiple times: in 1962, during the Cuban Missile Crisis; in 1983, when the realism of NATO's Able Archer exercise led Soviet leaders to fear an imminent strike; and, even after the Cold War, in 1995, when Russian leaders briefly mistook a Norwegian scientific rocket launch for a nuclear attack… The list goes on.

These narrow escapes from catastrophe underscore the fragility of a proliferation regime where the only guardrail is the belief that people will act sufficiently rationally so as not to trigger a nuclear apocalypse. While catastrophe has been averted so far, our repeated brushes with disaster suggest we are collectively accepting much more risk than we should be.

There was an alternative to these close calls: the so-called ‘Baruch Plan’ of international coordination and cooperation, which would prevent nuclear proliferation and minimise the risks of nuclear conflict. This plan, presented to the United Nations in 1946, proposed the complete nuclear disarmament of the US and a blanket prohibition on nuclear armament. It also entailed comprehensive international control of atomic energy development and use, and envisioned the removal of veto power on atomic matters in the United Nations Security Council, which would enable that organisation to swiftly and effectively punish violations of the treaty.

The plan was met with considerable scepticism by Soviet officials and was eventually rejected. As a consequence, the world gradually went MAD (“mutually assured destruction”).

The failure of these negotiations led to the emergence of the “mutually assured destruction” deterrence strategy. Each side's capacity to destroy its opponent created a fragile equilibrium in which neither would initiate an attack, knowing it would trigger immediate retaliation and almost certainly lead to all-out nuclear war. While this strategy prevailed beyond the fall of the Soviet regime, diplomatic efforts to address nuclear proliferation continued.

The Strategic Arms Limitation Talks (SALT), while falling short of the Baruch Plan's ambitious disarmament goals, helped slow down the arms race. SALT introduced verification mechanisms that built a foundation of trust between the superpowers: through the use of National Technical Means, both sides could monitor compliance with less reliance on often-fraught on-site inspections. This significant step forward in arms control demonstrated that even bitter rivals could find ways to cooperate on issues of existential importance. While the SALT treaties did not fully succeed in reducing existing nuclear arsenals, they paved the way for more comprehensive arms reduction treaties in later decades. 

The lessons of the Baruch Plan and the SALT agreements should inform the design of international proposals to govern AI.

The failure of the Baruch Plan highlights countries’ resistance to relinquishing sovereignty even when doing so would be in humanity’s best interest. In the case of arms control, however, relinquishing such power is largely symbolic, primarily signalling that a country has ceded authority over technology development. Therefore, governance frameworks should include measures to make this easier and less politically damaging.

These measures could include verification mechanisms like those in SALT. The success of SALT highlights the importance of trust-building measures in any governance framework. For AI, this might involve shared technical standards, shared code repositories, or third-party audits of AI systems. In deploying these initial measures, countries could establish a common basis of understanding from which to deepen the commitments made under a more ambitious plan.

To achieve lasting benefit and safety, an international organisation should promote and fund safe and beneficial uses of AI technology, much as early nuclear non-proliferation plans envisioned promoting peaceful uses of nuclear technology. This would give all signatories an important stake in setting the speed of advancement at the frontier, without the risk of existing asymmetries in capabilities fuelling an arms-race dynamic.

Drawing on the lessons of the Baruch Plan and SALT, and of the undesirable period of proliferation that followed, with its attendant high risk of a nuclear exchange, we might revisit the proposal for an international organisation with ambitious non-proliferation aims for potentially dangerous AI systems. This requires first establishing a common understanding of the risks that AI poses to humanity, then taking seriously the need to build trust between all parties, addressing the limitations of previous proposals, and finding viable solutions to sovereignty concerns in a context of mutual distrust.
