Superintelligence

There is a simple truth: humanity's extinction is possible. Recent history has shown us another truth: we can create artificial intelligence (AI) that rivals humanity.

We do not know how to control AI vastly more powerful than us. Should attempts to build superintelligence succeed, this would risk our extinction as a species. But humanity can choose a different future: there is a narrow path through.

The Problem

AI progress is converging on Artificial General Intelligence (AGI): AI systems as intelligent as, or more intelligent than, humans.

Today, ideologically motivated groups, backed by Big Tech and vying for the support of nations, are driving an arms race to AGI. If these actors succeed in creating AI more powerful than humanity without the necessary safety solutions, it is game over for all of us. There is currently no known solution for keeping such AI safe.

If you'd like to learn more about the potential risks of AI, we recommend reading The Compendium.

The Solution

A new and ambitious future lies along that narrow path: a future driven by human advancement and technological progress, one where humanity fulfills the dreams and aspirations of our ancestors to end disease and extreme poverty, achieves virtually limitless energy, lives longer and healthier lives, and travels the cosmos. That future requires us to remain in control of what we create, including AI.

If you'd like to learn more about a potential solution, we recommend reading A Narrow Path.

Get Updates

Sign up for our newsletter to stay updated on our work, learn how you can get involved, and receive a weekly roundup of the latest AI news.