Politicians should make unilateral and multilateral commitments to treat superintelligent AI as a catastrophic and extinction threat.

Main Objectives:

  1. Create a common international acknowledgment of the risks superintelligent AI poses.

  2. Facilitate collaboration on how to coordinate internationally to address these risks.

Why this intervention:

Managing the risks from superintelligent AI requires an international and collective response. One of the main arguments given at present against regulating AI companies to require greater safety measures is the risk that this may cause Western companies to fall behind competitors in more authoritarian countries, and thereby undermine national security.

Despite this, countries are already recognizing the importance of AI and its potential risks. We have seen statements by individual leaders as well as by international summits and groups of researchers; these are important building blocks and we are thrilled to see them. However, the next step is to recognize not only the importance of AI, but specifically the risks of developing superintelligent AI, and to commit to treating it as posing potentially catastrophic and global extinction-level risks. Given the significant number of national elections happening in 2024, now is a prime time for prospective leaders to set clear expectations on how they will safely govern humanity forward.

These national acknowledgments and commitments should be made unilaterally and multilaterally, through fora including, but not limited to, the G7, G20, IMF, and UN. Public commitments by international leaders carry significant weight in international relations, and are a first step toward building global common knowledge that this issue is a global security risk, rather than solely a national security risk or a mere domestic economic question. Public commitments enable further national and international work to proceed on a solid, shared foundation and with a baseline view on the risks to be prevented and their sources.

How to implement:

There are numerous avenues through which these statements can be introduced. Below are the methods we believe will be most effective and suitable.

  1. Heads of State should include a clear statement in line with the above in any upcoming public speech.

  2. Secretaries of State for departments covering technology (or the relevant national equivalent) should include a clear statement in line with the above in any upcoming speech.

  3. Candidates in national elections should include a clear commitment in line with the above within their manifestos and campaign messaging.

  4. Sherpas and/or government officials involved in agreeing communiqué or statement language for upcoming international meetings should recommend including a clear statement in line with the above during negotiations.

  5. Policy officials and political aides within governments should recommend that their respective departmental Secretary, Undersecretary, or Minister (or equivalent) include a clear statement in line with the above in any upcoming speech.

  6. Civil society groups and citizens should write to their respective Heads of State, urging them to include a clear statement in line with the above in any upcoming public speech.