How to Help

Our plan to address the extinction risks posed by the development of superintelligence is:

  1. Design policies that target ASI development and its precursor technologies.

  2. Convince policymakers in key jurisdictions around the world to implement these policies.

For Step 1, we have written a policy memo outlining the measures needed, and discussed them in more detail in A Narrow Path.

What remains is scaling up the civic engagement necessary for the success of Step 2: promoting and implementing these policies in as many countries as possible. As demonstrated by our proof-of-concept campaign in the UK, directly contacting elected representatives works: it is possible to raise the risks from superintelligence with them and advance the proposed policies.

Our Recommendations

We have tailored policy recommendations for key jurisdictions, as well as policy recommendations for the rest of the world. Recommend these policies when reaching out to policymakers in your jurisdiction. You can download country-specific policy briefings below: click the button for your country.

If you find more than five officials interested in our policies and draft bills in your jurisdiction, let us know and we may add your jurisdiction here.

State Of AI Regulation

China has already implemented various regulations on digital technology and AI, focusing on exercising control over how AIs can be used.

While none of these regulations is sufficient to curtail the risks from superintelligence, they demonstrate the Chinese government's willingness to regulate advanced AI.

And along with the UK, the US, and many other countries, China has established its own AI Safety Institute, which takes the form of a network of institutions: the China AI Safety & Development Association (CNAISDA).

Concrete Actions

The most important point to make is that there is no winner in a race to superintelligence. The first country to build superintelligence gains no advantage – only the dubious honour of being first to cause a global catastrophe, with possible loss of control and human extinction.

This means that the question to emphasise for each country’s policymakers is not “How do we build superintelligence faster than X?” but “How can we coordinate globally to ensure nobody builds this catastrophic technology?”

We are not directly working on advocacy in China. But if you are able to work on this as an individual or an organization, contact us at hello@controlai.com, and we will connect you with organizations focused on China.

Get Updates

Sign up to our newsletter if you'd like to stay updated on our work, learn how you can get involved, and receive a weekly roundup of the latest AI news.