Handling Common Reactions

When discussing the existential risks of AI, you may hear some common reactions. Below are resources to help you handle these reactions.


"This sounds like science fiction"

Practical Conversation Tips

  • Acknowledge the concern

  • Show current AI capabilities

  • Connect to historical examples of "impossible" technology becoming reality

  • Computing Power

    • In 1943, Thomas Watson (IBM chairman) predicted "there is a world market for maybe five computers"

    • In 1977, Ken Olsen (Digital Equipment Corp founder) said "There is no reason anyone would want a computer in their home"

    • Today, most people carry more computing power in their pocket than was used for the entire Apollo moon landing program

  • Aviation

    • In 1901, astronomer Simon Newcomb declared "no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances."

    • In 1903, the Wright brothers achieved powered flight

    • By the late 1950s, commercial jet travel was becoming routine

"We have more immediate problems"

Practical Conversation Tips

Point to expert timelines: most heads of leading AI companies expect AGI within a few years.

"Here's the thing: the biggest AI companies - OpenAI, Anthropic, Google DeepMind - are saying they're probably only a few years away from AI systems that can match or exceed human intelligence across most tasks. Some say it could be even sooner. We're not talking about some far-off future problem."

Connect the dots to current issues:

"Once we have AI systems that can match human intelligence, they'll be able to work on all our current problems - but they'll do it millions of times faster than humans can. That's incredibly exciting, but it also means we need to get it right. If we mess this up, none of our other problems matter because we won't be the ones in control anymore."

Use concrete examples:

"Look at how fast things are moving: in 2022, AI couldn't reliably write code; now it outperforms many professional programmers on coding tasks. In 2021, AI struggled with basic math; now it solves complex mathematical problems. Capabilities are improving so rapidly that waiting even a few years to think about safety could be too late."

End with the key point:

"We don't have to choose between solving today's problems and ensuring AI is developed safely. In fact, we can't afford to ignore either. Safe AI could be the key to solving our current challenges - but only if we keep it under control."

"This is anti-progress"

Practical Conversation Tips

  • Clarify that you support AI development

  • Emphasise that safety enables innovation

  • Draw parallels to safety measures in other industries
