Superintelligence
There is a simple truth: humanity's extinction is possible. Recent history has shown us another truth: we can create artificial intelligence (AI) that rivals humanity.
We do not know how to control AI vastly more powerful than us. Should attempts to build superintelligence succeed, this would risk our extinction as a species. But humanity can choose a different future: there is a narrow path through.
The Problem
AI progress is converging on building Artificial General Intelligence (AGI): AI systems as intelligent as, or more intelligent than, humanity.
Today, ideologically motivated groups are driving an arms race to AGI, backed by Big Tech and vying for the support of nations. There is currently no known way to keep AI safe. If these actors succeed in creating AI more powerful than humanity without the necessary safety solutions, it is game over for all of us.
If you'd like to learn more about the potential risks of AI, we recommend reading The Compendium.
The Solution
If you'd like to learn more about a potential solution, we recommend reading A Narrow Path.
Our Projects
Recent Campaigns
DEC 2023 - JUN 2024
Campaign against deepfakes
Deepfakes are a growing threat to society, and governments must act.
OCT 2023
Campaign to prevent an international endorsement of further scaling
At the AI Safety Summit, we successfully campaigned against the Summit formally endorsing Responsible Scaling Policies.
Get Updates
Sign up to our newsletter to stay updated on our work, learn how you can get involved, and receive a weekly roundup of the latest AI news.