The Direct Institutional Plan
Keeping Humanity in Control
AI companies are racing to build Artificial Superintelligence (ASI): systems more intelligent than all of humanity combined. If ASI is created in the next few years, humanity risks losing control over its future. Top AI scientists, world leaders, and even AI company CEOs warn that it could lead to human extinction.
Given this, we have a clear imperative: prevent the development of artificial superintelligence and keep humanity in control.
We have a plan that anyone can follow to help turn the tide: the Direct Institutional Plan (DIP). It is composed of two straightforward steps:
Design policies that target ASI development and its precursor technologies
Then, inform every relevant actor in the democratic process, not only lawmakers but also the executive branch, the civil service, the media, civil society, and others, and convince them to take a stance on these policies
This plan is simple, obvious even, and that is the point. It is the most direct way imaginable to tackle the problem, going through the very institutions that power our societies.
That no one has genuinely tried it is a massive failure of civic engagement that should be rectified as soon as possible.
The DIP offers a clear path to solving the problem of superintelligence, one that follows the way civilizational problems are best solved: awareness, civic engagement, and applying to AI the standards of high-risk industries.
Laying the Groundwork
We have spent the last few months laying the groundwork for everyone to be able to participate in the DIP.
First, we wrote A Narrow Path, developing concrete policy measures to tackle superintelligence and keep humanity durably in control. We also developed country-specific policy briefs on the policies that can be implemented at the national level immediately. These include:
Banning the deliberate development of superintelligence
Prohibiting dangerous AI capabilities and superintelligence precursors such as automated AI research and hacking
Requiring companies to demonstrate that an AI will not exhibit prohibited capabilities before they run it
A licensing system for advanced AI development
Every country that commits to taking action makes it more likely that ASI development will be restricted globally.
Then, we launched a pilot campaign focused on UK lawmakers that validated our approach. In less than three months, over 20 cross-party UK parliamentarians publicly supported our campaign. This amounts to more than one in three of the lawmakers we briefed recognizing extinction risks from AI and supporting binding regulation of the most powerful AI systems.
This was not achieved through complex political maneuvering, but through clear, honest, direct engagement with elected officials about the risks and the solutions. Engaging with and educating elected officials in this way can lead to meaningful change that keeps humanity in control.
Succeeding at Scale
To prevent the development of ASI, countries must ultimately agree to enforce international rules on it. This requires individual nations to lead the way by implementing these measures domestically first, then building enough consensus and pressure for an international treaty.
We cannot accomplish this alone.
That is why we designed the DIP as a collaborative plan that any citizen or organization can participate in independently. Everyone can take action by reaching out to lawmakers, the executive branch, the civil service, the media, and civil society in their jurisdiction, and making the case for the risks posed by superintelligence and what can be done about them.
At ControlAI, we are already scaling up the DIP in two ways.
First, we are building on our UK momentum by also engaging the executive branch, asking it to take a stance and proposing our draft AI Bill to pass our policies into law.
At the same time, we are looking to apply the same approach in the US, and to support those doing this in other countries.
The extinction risk posed by developing smarter-than-human AI has been an open secret among AI scientists and company executives for over a decade. Yet the acknowledgment of these risks has been coupled with fatalism, inaction, and a pervasive narrative that gambling with the future of our entire species is a technological inevitability. We reject this attitude.
We can take back control of our future, and this is the first step toward doing so.
If you want to join us in making this happen, see How you can help below.