How to Help

Our plan to address the extinction risks posed by the development of superintelligence is:

  1. Design policies that target ASI development and its precursor technologies.

  2. Convince policymakers in key jurisdictions around the world to implement these policies.

For Step 1, we have written a policy memo outlining the necessary policies, and discussed the measures in more detail in A Narrow Path.

What remains is scaling up the civic engagement necessary for the success of Step 2: promoting and implementing these policies in as many countries as possible. As demonstrated by our proof-of-concept campaign in the UK, directly contacting elected representatives works: it is possible to put both the risks from superintelligence and our proposed policies in front of them.

Our Recommendations

We have tailored policy recommendations for key jurisdictions, as well as recommendations for the rest of the world. Recommend these policies when reaching out to policymakers in your jurisdiction. You can download country-specific policy briefings below: click on the button for your country.

If you find more than five officials in your jurisdiction who are interested in our policies and draft bills, let us know and we may add your jurisdiction here.

State of AI Regulation

Most countries do not have any regulation targeting superintelligence. The EU is an exception with its AI Act, which has recently entered into force: while the Act covers general-purpose AI systems, it contains no provisions that meaningfully prevent the development of superintelligence.

Multiple countries have established AI Safety Institutes (AISIs), modelled after the pioneering UK AISI. These governmental organizations currently have only an advisory role, but they are natural candidates for the regulatory powers we are pushing for.

Policy Brief

Here is our country-agnostic policy brief:

Concrete Actions

Organizations

If you are an organization, you can participate in this civic engagement by replicating in your jurisdiction what we did in the UK:

  1. Identify the public officials most relevant to AI policy and regulation in your jurisdiction

    1. Elected officials’ contact details should be publicly available: you can email them directly

    2. Non-elected officials’ details may not all be publicly available

  2. Contact the relevant public officials; for those who are not publicly reachable, leverage your network to reach out and meet them

  3. Make the case for the risks from AI and the need for targeted regulation

  4. Offer a policy brief of concrete policies they can promote and implement

If the public officials you engage with are interested in taking these policy proposals forward, you can also put us in touch with them to discuss next steps and concrete legislative proposals.

Once you have contacted more than five public officials, let us know by reaching out to hello@controlai.com. We will be happy to schedule a call and offer advice and ideas.

Individuals

If you’re an individual citizen, here is what you can do:

  1. Find out who your elected representative (or the closest equivalent) is, and get their contact details

    1. Many countries provide websites to search for your representative based on where you live, as in the following examples for France, the UK, or the US (for a scripted lookup, see the sketch after this list)

  2. Contact them by email (we provide a template in English for inspiration)

    1. Make the case for the risks from ASI and the need for regulation (see our risks page for inspiration)

    2. Propose policy solutions from our generic policy brief below

    3. Demand that your government enact these policies by putting them into law
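
For technically inclined supporters, the representative lookup in step 1 can also be scripted. The sketch below is a minimal, hypothetical example: it assumes the UK Parliament's public Members API at https://members-api.parliament.uk and its constituency search endpoint, so check the official developer documentation (or your own country's equivalent service) before relying on it.

```python
# Hypothetical sketch: look up a UK constituency (and its current MP, if the
# response includes one) from a postcode via the public Parliament Members API.
# The endpoint path and response layout are assumptions -- verify them against
# the official API documentation before relying on this.
import json
import urllib.parse
import urllib.request


def search_constituency(postcode: str) -> dict:
    """Return the raw JSON response for a constituency search by postcode."""
    url = (
        "https://members-api.parliament.uk/api/Location/Constituency/Search"
        "?searchText=" + urllib.parse.quote(postcode)
    )
    with urllib.request.urlopen(url, timeout=30) as response:
        return json.load(response)


if __name__ == "__main__":
    # Example postcode (Westminster); replace with your own.
    print(json.dumps(search_constituency("SW1A 0AA"), indent=2))
```

Printing the raw response avoids hard-coding field names that may change between API versions; for most people, the manual search pages linked above remain the simplest route.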

You can also look at our resources for supporters for more actions you can take.

In either case, whenever you find yourself overextended or out of your depth with regard to this plan, the arguments about AI risk, or the policy proposals, feel free to contact us at hello@controlai.com.

Get Updates

Sign up to our newsletter to stay updated on our work, learn how you can get involved, and receive a weekly roundup of the latest AI news.