Talking to friends and the public
Getting Started
Talking to the public or friends about the existential risk of AI can sometimes feel frustrating. There’s a fine line to walk between convincing someone of the issue’s seriousness and sounding like you’ve been reading too much science fiction.
Start Where They Are
Begin with topics and concerns that already matter to the person you’re speaking with
Connect AI risk to existing frameworks they understand and care about (e.g., environmental protection, public safety)
"You know how climate change became an existential threat because we developed powerful technologies without fully understanding their impact? We're at a similar inflection point with AI - once these systems become advanced enough, we might not get a second chance to course-correct.”
"We put intense scrutiny on things like bridge construction and airplane safety because failure could cost lives. But AI systems are already making critical decisions in healthcare, transportation, and energy grids. If advanced AI systems malfunction or are misaligned with human values, the impact could be far more widespread than any single infrastructure failure.”
"Think about how facial recognition started as a convenience feature but became a tool for mass surveillance. AI development is following a similar pattern, but at a much larger scale. Without proper oversight, these systems could enable unprecedented levels of control and manipulation - we need to act now to ensure they respect human rights and dignity.”
Example Implementation:
"Consider the development of nuclear power in the 1940s-50s. Early scientists recognised that while nuclear fission offered immense benefits, it also carried catastrophic risks if proper safety systems weren't in place. They didn't wait until reactors were operational to start thinking about containment, cooling systems, and fail-safes - those safety measures had to be designed and integrated from the beginning. Once a reactor is running, it's too late to add fundamental safety features. Similarly, if we only begin preparing to mitigate the risks of AI when we are on the verge of losing control, it will be too late."
Focus on statements from three key groups:
Industry Leaders:
Sam Altman, OpenAI CEO:
“The bad case — and I think this is important to say — is like lights out for all of us.”
Dario Amodei, Anthropic CEO:
“My chance that something goes really quite catastrophically wrong on the scale of human civilisation might be somewhere between 10 per cent and 25 per cent.”
Demis Hassabis, DeepMind CEO:
Signed the 2023 Center for AI Safety statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Scientific Authorities:
Geoffrey Hinton (Nobel Prize winner, "godfather of AI"):
"The existential risk is what I'm worried about, the existential risk is that humanity gets wiped out because we've developed a better form of intelligence that decides to take control."
Yoshua Bengio (Turing Award recipient):
"There is too much uncertainty. We are not sure this won't pass some point where things get catastrophic."
Government Recognition:
The Bletchley Declaration (2023), signed by 28 countries and the EU at the first global AI Safety Summit, acknowledges the "potential for serious, even catastrophic, harm" from frontier AI.
Demonstrate progression through specific capabilities:
Current Capabilities:
Talk about how AI systems already surpass humans in specific domains (e.g., playing Go, predicting protein structures)
Use specific examples of AI systems exhibiting concerning behaviours, such as GPT-4 deceiving a human worker into solving a CAPTCHA for it during pre-release safety testing
Structure the conversation around solutions:
Positive Vision: "Look, AI could do amazing things for all of us - help cure diseases, solve climate change, make our lives better in countless ways. We just need to make sure we develop it carefully so we actually get to those good outcomes."
Practical Solutions: Regulatory frameworks exist and work in other sectors:
Food safety
Construction
Aviation
Nuclear technology
Emphasise specific policy proposals, such as mandatory pre-deployment safety evaluations and independent oversight of the most capable systems
When faced with scepticism:
Acknowledge the reasonableness of skepticism
Focus on evidence rather than rhetoric
Stay calm and reasonable
Use the bridging technique:
Acknowledge their point
Build a bridge to core messages
Communicate key information
Example Response to Scepticism:
"I understand your concern about whether this is a real risk. What's interesting is that the CEOs and developers of these AI systems - the people who know the technology best - are themselves raising serious concerns. For instance [refer to the quotes above]"
Do:
Start with capabilities we can already observe
Use clear language, free of AI jargon
Connect to existing regulatory frameworks in other industries
Focus on specific, actionable solutions
Avoid:
Technical jargon or academic language
Dismissing people's other concerns about AI, such as bias or job displacement
Start with Current Reality
Begin with capabilities that exist today
Show clear progression of advancement
Use specific examples of AI surpassing human abilities in various domains
Bridge to Near-Term Concerns
Discuss immediate challenges we're facing
Connect to issues people already worry about
Show how current trends lead to bigger challenges
Introduce Longer-Term Risks
Explain why current trajectories create future risks
Use expert quotes to support your points
Connect to historical examples of preparing for future challenges
The UN's Near-Earth Object monitoring program and NASA's Planetary Defense Coordination Office were established to track and prepare for potential asteroid impacts. This includes developing deflection capabilities, such as NASA's DART mission, which successfully demonstrated asteroid deflection in 2022.
Climate change preparation has seen varying levels of government response. The Netherlands' Delta Works project, started in the 1950s and still ongoing, is a massive infrastructure program designed to protect against rising sea levels. Similarly, Indonesia began planning in 2019 to move its capital city due to Jakarta's vulnerability to rising seas.
Some governments have also created dedicated offices for long-term risk assessment. For example, Finland's Committee for the Future was established in 1993 to study and prepare for various long-term risks and challenges. Singapore's Centre for Strategic Futures, created in 2009, similarly works on identifying and preparing for emerging threats.
This is a Current Issue
AI capabilities are advancing rapidly right now
Leading AI companies themselves are raising concerns
We need to act before problems become unmanageable
This is a Practical Issue
These are engineering and policy challenges, not science fiction
Other industries have safety regulations and oversight
We can develop AI responsibly while maintaining innovation
This is a Limited Issue
This is about regulating only a narrow sliver of AI - the kind that is outsizedly dangerous and offers little benefit - while letting the valuable kind thrive.
We are not asking for an especially harsh regime for AI. On the contrary, we are simply asking that AI meet the same standards and burden of proof we require of any other high-risk engineering.
This is a Solvable Issue
We have examples of managing powerful technologies safely
There are specific policy proposals and technical solutions
Many experts are working on these challenges
Listen First
Understand their current knowledge and concerns
Find out what aspects of AI interest or worry them
Build on their existing understanding
Use Natural Language
Avoid technical terms unless necessary
Explain concepts using everyday analogies
Keep explanations simple and clear
Be Humble
Acknowledge uncertainties
Present multiple viewpoints when appropriate
Don't claim to have all the answers
Stay Constructive
Focus on solutions and positive actions
Emphasise human agency in shaping outcomes
End with concrete next steps or ways to learn more
Get Updates
Sign up to our newsletter to stay updated on our work, learn how you can get involved, and receive a weekly roundup of the latest AI news.