AI Industry Leaders

Sam Altman – CEO, OpenAI

“Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.” - Feb 2015

Source

“The bad case — and I think this is important to say — is like lights out for all of us.” - Jan 2023

Source

“A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.” - Feb 2023

Source

Dario Amodei – Co-founder & CEO, Anthropic; former VP of Research, OpenAI

“I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.” - Jul 2017

Source

"We are finding new jailbreaks. Every day people jailbreak Claude, they jailbreak the other models. [...] I’m actually deeply concerned that in two or three years, we’ll get to the point where the models can, I don’t know, do very dangerous things with science, engineering, biology, and then a jailbreak could be life or death." - Jul 2023

Source

“My chance that something goes really quite catastrophically wrong on the scale of human civilisation might be somewhere between 10 per cent and 25 per cent.” - Oct 2023

Source

Elon Musk – CEO, Tesla & SpaceX; Co-founder, OpenAI

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. [...] Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish.” - Oct 2014

Source

“Mark my words — AI is far more dangerous than nukes.” - Mar 2018

Source

“One of the biggest risks to the future of civilization is AI.” - Feb 2023

Source

"We are seeing the most destructive force in history here. We will have something that is smarter than the smartest human.” - Nov 2023

Source

Anthropic

“So far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless. …The results of [rapid AI progress] could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations.” - Mar 2023 (Anthropic’s website)

Source

Ilya Sutskever – Co-founder & Chief Scientist, Safe Superintelligence Inc.; Co-founder & former Chief Scientist, OpenAI

“It’s not that it’s going to actively hate humans and want to harm them, but it is going to be too powerful and I think a good analogy would be the way humans treat animals.” - Nov 2019

Source

Mustafa Suleyman – CEO, Microsoft AI; Co-founder, Inflection AI; Co-founder, DeepMind

“Until we can prove unequivocally that it is [safe], we shouldn’t be inventing it.” - Apr 2024

Source

“It certainly would not be desirable to have very, very powerful AIs that can take massive actions in our world that can, you know, campaign and persuade and you know, do stuff just like a really smart group of humans.” - Apr 2024

Source

Demis Hassabis – Co-founder & CEO, DeepMind

“We must take the risks of AI as seriously as other major global challenges, like climate change [...] It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.” - Oct 2023

Source

John Schulman – Co-founder & Research Scientist, OpenAI

“If AGI came way sooner than expected we would definitely want to be careful about it. We might want to slow down a little bit on training and deployment until we’re pretty sure we know we can deal with it safely, and we have a pretty good handle on what it’s going to do and what it can do. We would have to be very careful if it happened way sooner than expected because I think our understanding is rudimentary in a lot of ways still.” - May 2024

Source

“You’d also want to make sure that whatever you’re training on doesn’t have any reason to make the model turn against you.” - May 2024

Source

Greg Brockman – Co-founder & President, OpenAI

“The core danger with AGI is that it has the potential to cause rapid change. This means we could end up in an undesirable environment before we have a chance to realize where we’re even heading.” - Jun 2018

Source

Jack Clark – Co-founder, Anthropic; former Policy Director, OpenAI

“People are worried about - they're worried that at some point in the future AI systems will spawn other AI systems and will improve themselves at machine speed, making human oversight difficult to impossible. There's nothing about the technology that forbids this, as crazy as it sounds.” - Jun 2024

Source

Bill Gates – Co-founder & former CEO, Microsoft

“Then there’s the possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.” - Mar 2023

Source

Leopold Aschenbrenner – Researcher, formerly OpenAI

“AGI will effectively be the most powerful weapon man has ever created [...] we must have reliable command and control over this immensely powerful weapon — but we’re not currently technically on track to be able to do this.” - Apr 2023

Source

“Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.” - Jun 2024

Source

Vitalik Buterin – Co-founder, Ethereum

“Thanks to recursive self-improvement, the strongest AI may pull ahead very quickly, and once AIs are more powerful than humans, there is no force that can push things back into balance.” - Nov 2023

Source

“[I]t's a new type of mind that is rapidly gaining in intelligence, and it stands a serious chance of overtaking humans' mental faculties and becoming the new apex species on the planet.” - Nov 2023

Source

Emmett Shear – Co-founder & former CEO, Twitch; former Interim CEO, OpenAI

“If you build something that is a lot smarter than us, not like somewhat smarter… but like it's much smarter than we are as we are than like dogs, right? Like a big jump. That thing is intrinsically pretty dangerous.” - Jun 2023

Source

“We need to use the engineering to bootstrap ourselves into a science of AIs before we build the super intelligent AI so that it doesn't kill us all.” - Jun 2023

Source

Tech Leaders

Jeff Bezos – Founder & Executive Chairman, Amazon; former President & CEO

“Even specialized AI could be very bad for humanity. I mean, just regular machine learning models that can make certain weapons of war that could be incredibly destructive are very powerful. And they're not general AIs, they're just, they could just be very smart weapons. And so we have to think about all of those things.” - Dec 2023

Source

Jan Leike – Machine learning researcher, Anthropic; former Head of Alignment, OpenAI

“Building smarter-than-human machines is an inherently dangerous endeavour. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. We are long overdue in getting incredibly serious about the implications of AGI. We must prioritise preparing for them as best we can.” - May 2024

Source

Chamath Palihapitiya – Founder & CEO, Social Capital

“If you think about what's the societal consequences of letting the worst case outcomes happen, the AGI type outcomes happen… I think those are so bad they're worth slowing some folks down.” - Apr 2023

Source

Dustin Moskovitz – Co-founder, Facebook & Asana

“At the beginning, I thought I knew AI would unfold safely and even helped to build it faster, but I have tried to look as objectively as possible at the arguments and the evidence and concluded that the risks were big enough and real enough to be taken much more seriously than humanity currently does.” - Feb 2024

Source

David Sacks – Founder & former CEO, Yammer; former COO, PayPal; former CEO, Zenefits

“AI is a wonderful tool for the betterment of humanity; AGI is a potential successor species.” - Nov 2023

Source

AI Researchers & Scientists

Geoffrey Hinton – “Godfather of AI”, Turing Award laureate

“The existential risk is what I'm worried about, the existential risk is that humanity gets wiped out because we've developed a better form of intelligence that decides to take control.” - Oct 2023

Source

“The idea that this stuff could actually get smarter than people — a few people believed that … But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” - May 2023

Source

“It may keep us around for a while to keep the power stations running, but after that, maybe not.” - May 2023

Source

Stuart Russell – Professor of Computer Science, UC Berkeley

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” - Apr 2014

Source

“If you make systems that are more intelligent than humans, they will have more power over the world than we do, just as we have more power over the world than other species on earth.” - Jul 2023

Source

Max Tegmark – Professor, MIT; AI safety researcher

“Even if we “win” the global race to develop these uncontrollable AI systems, we risk losing our social stability, security, and possibly even our species in the process.” - Oct 2023

Source

“We are, as a matter of fact, right now, building creepy, super-capable, amoral psychopaths that never sleep, think much faster than us, can make copies of themselves and have nothing human about them whatsoever. What could possibly go wrong?” - Nov 2023

Source

Yoshua Bengio – Deep learning pioneer; Turing Award laureate

“We don't know how to build an AI system that will not turn against humans or that will not become a super powerful weapon if [in] the hands of bad actors or be used to destroy our democracies.” - Apr 2024

Source

“Since we don’t really know how fast technological advances in AI or elsewhere (e.g., biotechnology) will come, it’s best to get on with the task of better regulating these kinds of powerful tools right away. [...] We must regulate these new technologies, just as we did for aeronautics or chemistry, for example, to protect people and society.” - Jan 2022

Source

Paul Christiano – AI safety researcher; former OpenAI researcher

“Powerful AI systems have a good chance of deliberately and irreversibly disempowering humanity. This is a much more likely failure mode than humanity killing ourselves with destructive physical technologies.” - Jun 2022

Source

Eliezer Yudkowsky – AI safety researcher; co-founder, Machine Intelligence Research Institute (MIRI)

“Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second".” - Jun 2022

Source

“We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again.” - Jun 2022

Source

Joscha Bach – Cognitive scientist & AI researcher

"If a singularity happens, it is unlikely that human minds will play an important role afterwards (unlike the AI is whimsical enough to leave a zoo for us). To compete with optimized AI, human minds would have to change in ways that would remove all relevant traces of humanity." - Mar 2018

Source

Intellectuals, Philosophers & Authors

Stephen Hawking – Physicist, cosmologist, & author

"The development of full artificial intelligence could spell the end of the human race … It would take off on its own, and re-design itself at an ever increasing rate" - Dec 2014

Source

Sam Harris – Philosopher, author, & podcaster

“The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us… The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.” - Oct 2016

Source

“I take the existential risk scenario seriously enough that I would pause it.” - Aug 2023

Source

Yuval Noah Harari – Historian & author

“We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. [...] Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe.” - Apr 2023

Source

Michio Kaku – Theoretical physicist & science writer

“But, as the decades go by, they will become as smart as a mouse, then rat, then a cat, dog, and monkey. By that point, they might become dangerous and even replace humans, near the end of the century.” - Mar 2018

Source

Toby Ord – Philosopher

“I do think that there is substantial risk of misuse of AI systems, leading to possible existential risk from that.” - 2020

Source

Scott Aaronson – Professor of computer science, UT Austin

“The idea that AI now needs to be treated with extreme caution strikes me as far from absurd. I don’t even dismiss the possibility that advanced AI could eventually require the same sorts of safeguards as nuclear weapons.” - Mar 2023

Source

“An AI wouldn't necessarily have to hate us or want to kill us; we might just, you know, be in the way or irrelevant to whatever alien goal it has.” - Mar 2024

Source

Policy & Governance Leaders

Dan Hendrycks – Director, Center for AI Safety

“If I see international coordination doesn't happen, or much of it, it'll be more likely than not that we go extinct.” - Jul 2023

Source

“Overall, for AIs to create a safer, not more dangerous, world, we need rules and regulations, cooperation, auditors, and the help of AI tools to ensure the best outcomes.” - Mar 2023

Source

Ian Hogarth – Co-founder, Songkick; Co-author, State of AI Report

“Recently the contest between a few companies to create God-like AI has rapidly accelerated. They do not yet know how to pursue their aim safely and have no oversight. They are running towards a finish line without an understanding of what lies on the other side.” - Apr 2023

Source

Jaan Tallinn – Co-founder, Skype & Kazaa; Co-founder, Future of Life Institute

“I have not met anyone right now in these [AI] labs who says that sure, the risk is less than 1% of blowing up the planet. So, it’s important that people know that their lives are being risked.” - May 2023

Source

“First of all, we need to realise that this is a suicide race. This is not a race with winners.” - Apr 2023

Source

Holden Karnofsky – Co-CEO, Open Philanthropy

“The kind of AI I've discussed could defeat all of humanity combined, if (for whatever reason) it were pointed toward that goal. By "defeat," I don't mean "subtly manipulate us" or "make us less informed" or something like that — I mean a literal "defeat" in the sense that we could all be killed, enslaved or forcibly contained.” - Jun 2022

Source

“And when we talk about whether AI could defeat humanity… they don’t have to be more capable than humans. They could be equally capable, and there could be more of them.” - Jul 2023

Source

Jason Gaverick Matheny – CEO, RAND Corporation

“While AI will bring many benefits, it is also potentially dangerous; it could be used to create cyber or bio weapons or to launch massive disinformation attacks. And if an AI is stolen or leaked even once, it could be impossible to prevent it from spreading throughout the world.” - Aug 2023

Source

“Once training is complete, a powerful AI model should be subject to rigorous review by a regulator or third-party evaluator before it is released to the world. Expert red teams, pretending to be malicious adversaries, can try to make the AI perform unintended behaviors, including the design of weapons. Systems that exhibit dangerous capabilities should not be released until safety can be assured.” - Aug 2023

Source

Matt Clifford – CEO, Entrepreneur First; External Advisory Board Vice-Chair, UK AI Safety Institute

“The idea that superhuman AIs pose an existential threat to humanity is easy to mock… but seems plausibly one of the most important problems facing the world in the coming decades.” - Jan 2022

Source