What is AI Extinction Risk?

AI companies are racing to build superintelligent AI systems.

These AIs would be vastly smarter than the brightest human individuals and vastly more capable than any human organization or country.

Today's AI systems are already highly capable, and they are improving rapidly.

Many leading AI experts warn that we may lose control of future systems, and that superintelligent AIs could overpower and wipe out humanity.

This isn't a niche concern. It's held by hundreds of the top AI scientists, Nobel Prize winners, and even the CEOs of the largest AI companies.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

–2023 CAIS Statement on AI Risk, signed by hundreds of top AI experts, including:

Sam Altman

CEO of OpenAI

(ChatGPT)

Demis Hassabis

Nobel Prize winner

CEO of Google DeepMind

Geoffrey Hinton

Nobel Prize winner

"Godfather of AI"

Yoshua Bengio

World's most-cited scientist

"Godfather of AI"

Bill Gates

Co-founder of Microsoft

Dario Amodei

CEO of Anthropic

(Claude)

Why can't we control AI?

AI systems aren't programmed like traditional software. They're grown, more like an organism.

There is no reliable way to prevent dangerous behavior in AI or to set its goals precisely.

In testing scenarios, AIs have already lied, blackmailed, and even attempted to kill in order to avoid being shut down. This isn't a bug an engineer can fix: no one at the AI companies knows how to stop this behavior.

Right now, AI systems aren't powerful or autonomous enough for this to cause catastrophic harm. But as they grow more capable, the danger scales with them.

Geoffrey Hinton

Nobel Prize Winner & "Godfather of AI"

"The idea that they [AI systems] might overtake us quite soon suddenly became much more plausible. For a long time I'd said that would be 30 to 50 years but now I think it may only be a few years."

How much time do we have?

Before ChatGPT launched in 2022, most experts saw superintelligent AI as a distant problem for future generations. That has changed.

Due to the accelerating pace of AI progress, many experts now believe superintelligent AI could arrive within just a few years unless we act soon.

Yet there is a large gap between how experts see this threat and how much the public knows about it. There has been little broad public discussion, and most politicians are neither aware of nor preparing for the consequences.

What can we do to prevent this risk?

As a first step, we must prohibit the development of superintelligent AI.

AI progress is accelerating. Companies are racing against each other, and countries are increasingly joining in. This race assumes there can be a human "winner", but since no one can control a superintelligent AI, whoever "wins" simply hands control to the AI itself.

To avoid this future and keep humanity in control, we must restrict the key capabilities that make superintelligent AI possible.

A great future for humanity is possible, and you can help turn the tide.

You can take action in less than 2 minutes, and you can read our full plan here.

Common questions and answers

How can AI lead to human extinction?

What is AGI? How does it relate to superintelligent AI?

Can we actually prevent the development of superintelligent AI?

Won't some other country develop a superintelligent AI, even if we don't?

What can a single country change? What can a single person do?

Why are governments allowing this to happen? Why isn't anyone doing anything about it?

Quotes from experts

Daniel Kokotajlo

Ex-OpenAI Safety Researcher

"By 2027, we may automate AI R&D leading to vastly superhuman AIs ('artificial superintelligence' or ASI). In [our scenario that is titled] AI 2027, AI companies create expert-human-level AI systems in early 2027 which automate AI research, leading to ASI by the end of 2027."

Dario Amodei

CEO of Anthropic

"Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage — a 'country of geniuses in a datacenter' — with the profound economic, societal, and security implications that would bring."

Want to learn more?