an aerial view of a city at night

Top experts say AI poses an extinction risk on par with nuclear war

We can prevent this risk by prohibiting the development of superintelligent AI

Featured in

TIME
The Guardian
The Times

Our campaign on preventing extinction risk from AI is supported by over 100 UK lawmakers

(in part, because they have received thousands of messages from people like you)

The Viscount Camrose, Former Minister for AI

The Lord Clement-Jones CBE

The Rt Hon Anneliese Dodds, MP

The Rt Hon Sir John Whittingdale OBE, MP

What does ControlAI do?

Two people in a meeting

We brief lawmakers

Most lawmakers have never heard of superintelligent AI or its risks. We're changing that.

Since November 2024, we have briefed more than 250 lawmakers in the US, UK, Canada, and Germany, including the UK Prime Minister's office.

Our superintelligence campaign has gained support from 100+ UK lawmakers.

We collaborate with journalists and creators

We brief journalists and creators on this issue and help them produce top-tier educational content covering it.

Common knowledge of the threat posed by superintelligent AI is essential to preventing it.

We appear on mainstream news, podcasts, and other media.

Collection of thumbnails from ControlAI's media appearances

What is superintelligent AI? Why does it matter?

Superintelligent AI means AI that is more powerful than any individual, any company, or any nation.

The pace of AI development means superintelligent AI could arrive before 2030. No method exists to contain or control such a system.

If superintelligent AI is developed anywhere, this could cause the extinction of humanity.

We must coordinate to prevent superintelligent AI from being created by anyone, anywhere.

Watch this video for a short introduction.

Endorsements for our Campaign

Lord Browne of Ladyton

Former UK Secretary of State for Defence

"History shows that every breakthrough technology reshapes the balance of power. With Superintelligent AI, the risks of unsafe development, deployment, or misuse could be catastrophic—even existential—as digital intelligence surpasses human capabilities. Control must come first, or we risk creating tools that outpace our ability to govern them."

Yuval Noah Harari

Historian, Philosopher, and Best-Selling Author of 'Sapiens'

"We will not be able to understand a superintelligent AI. It will make its own decisions, set its own goals, and to achieve them it might trick and deceive humans in ways we cannot hope to understand. We are the Titanic headed towards the iceberg. In fact, we are the Titanic busy creating the iceberg. Let's change course and build a future that keeps humans in control."

Stuart Russell

Professor of Computer Science, UC Berkeley

"To proceed towards superintelligent AI without any concrete plan to control it is completely unacceptable. We need effective regulation that reduces the risk to an acceptable level. Developers cannot currently comply with any effective regulation because they do not understand what they are building."

Anthony Aguirre

Executive Director of FLI; Professor, UC Santa Cruz

"At the Future of Life Institute, we've united leading scientists to address humanity's greatest threats. None is more urgent than uncontrolled AI. If we're going to build superintelligent AI, we'd need to ensure that it is safe, controllable, and actually desired by the public first."

Roman Yampolskiy

Professor of Computer Science, University of Louisville

"My life's work has been devoted to Al safety and cybersecurity, and the core truths l've uncovered are as simple as they are terrifying: We cannot verify systems that evolve their own code. We cannot predict entities whose intelligence exceeds our own. And we cannot control what lies beyond our comprehension. Yet the global research community, driven by competition and unrestrained ambition, is rushing headlong to construct precisely such systems. Unless we impose hard limits, now, on the development of recursively self-improving Al, we are not merely risking catastrophic failure. We are engineering our own extinction."

Steven Adler

Ex-OpenAI Dangerous Capability Evaluations Lead

"I worked on OpenAI's safety team for four years, and I can tell you with confidence: AI companies aren't taking your safety seriously enough, and they aren't on track to solve critical safety problems. This is despite an understanding from the leadership of OpenAI and other AI companies that superintelligence, the technology they're building, could literally cause the death of every human on Earth."

Dan Hendrycks

Director of the Center for AI Safety, Advisor for xAI and Scale

"We seriously risk human extinction by building superintelligence. We don’t know of any way to prevent extinction on the current path to superintelligence."

Stay informed

Join 130,000+ readers of our weekly newsletter to stay updated on AI news and our work, and to receive emails with actions you can take to make a difference.