
Top experts say AI poses an extinction risk on par with nuclear war
We can prevent this risk by prohibiting the development of superintelligent AI
Our campaign on preventing extinction risk from AI is supported by over 100 UK lawmakers
(in part, because they have received thousands of messages from people like you)
What does ControlAI do?
1.
We inform millions of people about superintelligent AI and its risks
2.
We help tens of thousands of people contact their representatives.
3.
We brief hundreds of policymakers and help them act.
We brief lawmakers
Most lawmakers have never heard about superintelligent AI or its risks. We're changing that.
Since November 2024, we have briefed more than 250 lawmakers in the US, UK, Canada, and Germany, including the UK Prime Minister's office.
Our superintelligence campaign has gained support from 100+ UK lawmakers.
We collaborate with journalists and creators
We brief journalists and creators on superintelligent AI and help them produce top-tier educational content about its risks.
Common knowledge about the threat posed by superintelligent AI is essential to prevent it.
We appear on mainstream news, podcasts, and other media.
What is superintelligent AI? Why does it matter?
Endorsements for our Campaign

Lord Browne of Ladyton
Former UK Secretary of State for Defence
"History shows that every breakthrough technology reshapes the balance of power. With Superintelligent AI, the risks of unsafe development, deployment, or misuse could be catastrophic—even existential—as digital intelligence surpasses human capabilities. Control must come first, or we risk creating tools that outpace our ability to govern them."

Yuval Noah Harari
Historian, Philosopher, and Best-Selling Author of 'Sapiens'
"We will not be able to understand a superintelligent AI. It will make its own decisions, set its own goals, and to achieve them it might trick and deceive humans in ways we cannot hope to understand. We are the Titanic headed towards the iceberg. In fact, we are the Titanic busy creating the iceberg. Let's change course and build a future that keeps humans in control."

Stuart Russell
Professor of Computer Science, UC Berkeley
"To proceed towards superintelligent AI without any concrete plan to control it is completely unacceptable. We need effective regulation that reduces the risk to an acceptable level. Developers cannot currently comply with any effective regulation because they do not understand what they are building."

Anthony Aguirre
Executive Director of the Future of Life Institute, Professor at UC Santa Cruz
"At the Future of Life Institute, we've united leading scientists to address humanity's greatest threats. None is more urgent than uncontrolled AI. If we're going to build superintelligent AI, we'd need to ensure that it is safe, controllable, and actually desired by the public first."

Roman Yampolskiy
Professor of Computer Science, University of Louisville
"My life's work has been devoted to AI safety and cybersecurity, and the core truths I've uncovered are as simple as they are terrifying: We cannot verify systems that evolve their own code. We cannot predict entities whose intelligence exceeds our own. And we cannot control what lies beyond our comprehension. Yet the global research community, driven by competition and unrestrained ambition, is rushing headlong to construct precisely such systems. Unless we impose hard limits, now, on the development of recursively self-improving AI, we are not merely risking catastrophic failure. We are engineering our own extinction."

Steven Adler
Former OpenAI Dangerous Capability Evaluations Lead
"I worked on OpenAI's safety team for four years, and I can tell you with confidence: AI companies aren't taking your safety seriously enough, and they aren't on track to solve critical safety problems. This is despite an understanding from the leadership of OpenAI and other AI companies that superintelligence, the technology they're building, could literally cause the death of every human on Earth."

Dan Hendrycks
Director of the Center for AI Safety, Advisor for xAI and Scale
"We seriously risk human extinction by building superintelligence. We don’t know of any way to prevent extinction on the current path to superintelligence."
Media highlights
Stay informed
Join 130,000+ readers of our weekly newsletter to stay updated on AI news
and our work, and receive emails with actions you can take to make a difference.