An AI 'Kill Switch'

Control AI

Oct 24, 2023

Where are we?

The world’s largest technology companies are driving recklessly towards the development of smarter-than-human artificial general intelligence (AGI). ChatGPT creator OpenAI, London-based Google DeepMind, and Amazon-backed Anthropic are all aiming for it.

Ian Hogarth, head of the UK’s Foundation Models Task Force, has more aptly described AGI as ‘godlike AI’: “AI systems smarter and more powerful than humans.” These are not just chatbots, but autonomous agents that make their own plans and act in the real world.

Don’t (Just) Trust Us!

You don’t have to take just our word for it that this is a problem - the CEOs of the very AGI companies developing the most advanced AI systems have signed a letter acknowledging that AI is an extinction risk for humanity.

Just before co-founding AGI lab OpenAI in 2015, its CEO, Sam Altman, put it plainly: “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.”

And Dario Amodei, CEO of one of the main companies aiming for AGI, has already warned that “a straightforward extrapolation of today’s systems to those we expect to see in 2-3 years suggests a substantial risk that AI systems will be able to fill in all the missing pieces” to aid with large-scale biological attacks. Chatbots such as GPT-4, the AI powering ChatGPT, have already figured out by themselves how to deceive a person online into doing the chatbot’s bidding.

So… Why?

So why do they keep hurtling us towards a danger that they all know exists? 

According to them, it’s because they’re locked in a race. Each justifies its actions by arguing that if it doesn’t build AGI, others will - and that those others will do it even more dangerously!

How do we stop this? Much as with nuclear non-proliferation efforts over the last fifty years, we need governments to step in. We don’t let individuals or corporations develop nuclear weapons for private use, and we shouldn’t allow it with dangerous, powerful AI. Just as we controlled uranium, a key input to nuclear weapons, we can control computing power, a key input to AI.

The field of AI calls this ‘compute’: the computing hardware that powers these companies’ AIs. By capping the amount of compute that can be used, only the riskiest, most existentially dangerous AI work would be halted, while the other 99% of (useful!) AI development would be unaffected.
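
To make the idea of a cap concrete, here is a minimal sketch, assuming a hypothetical regulatory threshold and the standard rule of thumb that training uses roughly 6 floating-point operations per model parameter per training token. The cap value and function names below are illustrative placeholders, not any real regulation.

```python
# Illustrative sketch only: a hypothetical compute-cap check.
# The 6 * params * tokens figure is a standard rough estimate of training
# FLOPs; the cap value below is a placeholder, not a real legal limit.

TRAINING_FLOP_CAP = 1e26  # hypothetical regulatory threshold, in FLOPs


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens


def training_run_allowed(n_parameters: float, n_training_tokens: float) -> bool:
    """Return True if the planned training run stays under the hypothetical cap."""
    return estimated_training_flops(n_parameters, n_training_tokens) <= TRAINING_FLOP_CAP


# Example: a 70-billion-parameter model trained on 2 trillion tokens uses
# roughly 8.4e23 FLOPs, well under the placeholder cap above.
print(training_run_allowed(70e9, 2e12))  # True
```

The point of the sketch is simply that a compute cap is checkable in advance from quantities developers already track, which is what makes it a practical lever compared with trying to judge a model’s capabilities after the fact.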

The ‘Kill Switch’

We also need to set up an AI ‘Kill Switch.’ 

It would be less a physical “switch” and more a set of safety procedures and protocols that let us quickly shut down the development and deployment of advanced AI systems if they become too risky.
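
As one illustration of what such a protocol could look like in software - a sketch of a possible design, not any company’s actual system - a deployment could be allowed to serve requests only while it holds a short-lived authorization that an operator or regulator keeps renewing, so revoking the authorization, or simply failing to renew it, halts the system:

```python
# Hypothetical "dead man's switch" style kill switch: a deployment serves
# requests only while its authorization is fresh and not revoked.
# Class and field names are illustrative, not any real company's system.
import time


class KillSwitchAuthorization:
    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl_seconds = ttl_seconds        # how long each renewal stays valid
        self.last_renewal = time.monotonic()  # timestamp of the last renewal
        self.revoked = False                  # explicit emergency shutdown flag

    def renew(self) -> None:
        """Called periodically by the operator or regulator while the system is deemed safe."""
        self.last_renewal = time.monotonic()

    def revoke(self) -> None:
        """Emergency 'pull the plug': immediately invalidates the authorization."""
        self.revoked = True

    def is_active(self) -> bool:
        """The deployment may serve requests only if not revoked and recently renewed."""
        fresh = (time.monotonic() - self.last_renewal) < self.ttl_seconds
        return fresh and not self.revoked


auth = KillSwitchAuthorization(ttl_seconds=60.0)
if auth.is_active():
    pass  # serve model requests
else:
    pass  # refuse requests and begin an orderly shutdown
```

The design choice here is that the shutdown path is built in from the start and defaults to “off” when nobody renews the authorization, rather than being improvised in the middle of a crisis.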

Imagine this: an AI engineer sees that their company’s latest AI is being used to facilitate criminal activity, or worse, that the AI has gone rogue and is taking actions its developers never intended (this may sound outlandish, but it’s quickly becoming our reality).

Currently, there is no easy way for that engineer to “pull the plug” on a model that is acting, or being used, in these dangerous ways. There are no emergency stop buttons like those on factory equipment, and in these moments every second counts. The situation is even worse if a developer notices a serious problem in another company’s AI system: it could take hours, days, or longer to reach the right people and persuade them to shut the system down, by which point it might already be far too late.

And it shouldn’t just be on companies: governments should also have kill switches. Imagine that a government regulator receives evidence of an imminent AI-related emergency and needs to swiftly “pull the plug” on several companies at once. Right now, there is no infrastructure the government could use to respond to AI-related crises (let alone detect them in the first place). This is despite the fact that leaders of AI companies themselves say their technology has a 10-25% chance of causing extinction.

There could even be infrastructure for an international kill switch. If one government finds evidence that a system trained with a certain amount of computing power is catastrophically dangerous, that information could be communicated to other world leaders, allowing everyone to swiftly “kill” any AI systems nearing that level of computing power.
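
As a toy sketch of that coordination step - everything here is hypothetical: the registry, the names, and the threshold - each jurisdiction could keep a register of deployed systems with their estimated training compute and halt everything at or above an alerted level:

```python
# Toy sketch of an international compute-threshold alert. Entirely hypothetical:
# each registry lists deployed systems with their estimated training FLOPs,
# and an alert names the threshold at or above which systems should be halted.
from dataclasses import dataclass


@dataclass
class RegisteredSystem:
    name: str
    training_flops: float  # estimated training compute of the system
    halted: bool = False


def apply_threshold_alert(registry: list[RegisteredSystem], flop_threshold: float) -> list[str]:
    """Halt every registered system at or above the alerted compute level."""
    halted_names = []
    for system in registry:
        if system.training_flops >= flop_threshold and not system.halted:
            system.halted = True
            halted_names.append(system.name)
    return halted_names


# Example: an alert at 1e25 FLOPs halts only the largest registered system.
registry = [
    RegisteredSystem("assistant-small", 3e23),
    RegisteredSystem("frontier-model", 2e25),
]
print(apply_threshold_alert(registry, flop_threshold=1e25))  # ['frontier-model']
```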

We’ve Done It Before, We Can Do It Again

We already have infrastructure for international coordination on nuclear risks. The Moscow-Washington hotline allowed the United States and the USSR to communicate swiftly about nuclear dangers, and to this day its operators make sure it will work in an emergency by sending test messages once every hour. An AI kill switch should likewise have regular “fire drills” - say, a full shutdown of all advanced AI services for five minutes every six months - with punitive measures for any noncompliance.
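
A minimal sketch of such a drill schedule, using the intervals proposed above (hourly test messages and a five-minute shutdown every six months); the function names and timestamps are illustrative assumptions:

```python
# Hypothetical drill scheduler: hourly test messages plus a five-minute
# full-shutdown drill every six months, mirroring the proposal above.
from datetime import datetime, timedelta

TEST_MESSAGE_INTERVAL = timedelta(hours=1)
DRILL_INTERVAL = timedelta(days=182)   # roughly every six months
DRILL_DURATION = timedelta(minutes=5)


def test_message_due(last_test: datetime, now: datetime) -> bool:
    """An hourly liveness check, like the hotline's test messages."""
    return now - last_test >= TEST_MESSAGE_INTERVAL


def in_shutdown_drill(last_drill_start: datetime, now: datetime) -> bool:
    """True while a scheduled five-minute shutdown drill is in progress."""
    next_drill_start = last_drill_start + DRILL_INTERVAL
    return next_drill_start <= now < next_drill_start + DRILL_DURATION


# Example with made-up timestamps: more than an hour since the last test.
print(test_message_due(datetime(2023, 10, 24, 9, 0), datetime(2023, 10, 24, 10, 5)))  # True
```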

These are exactly the kind of protocols we want to have in place before we need them - so we should build them now, and hope we never have to use them.
