Disrupting The Deepfake Supply Chain

ControlAI

Jan 31, 2024


[Image: broken chain with sunlight behind it]

You can find a shareable, visual overview of these policy priorities here.

Deepfakes are AI-generated voices, images, or videos of a person, created without their consent, that a reasonable person would mistake for real. This definition excludes innocuous image or voice manipulation such as satire, memes, or parody. Deepfakes usually involve sexual imagery, fraud, or misinformation.

The Problem

Deepfakes are a growing threat to society, and governments must act. 

Unprecedented progress in AI is making deepfake creation fast, cheap, and easy. From 2022 to 2023, deepfake sexual content increased by over 400%, and deepfake fraud increased by 3,000%. With half the world's population facing elections this year, the widespread creation and proliferation of deepfakes pose a growing threat to democratic processes around the world. No laws effectively target or limit deepfake production and dissemination, and what few requirements fall on creators (who are often underage) are negligible and ineffective.

The public strongly supports a ban on deepfakes. 

A series of recent polls across many countries found overwhelming cross-party support for a ban on deepfakes. In the UK, for example, 86% support a deepfake ban, while just 5% oppose it.

The Solution

Governments must impose obligations throughout the supply chain to stop the creation and spread of deepfakes. The supply chain starts small (a few companies supply the AI systems used to make deepfakes) and ends large (billions of people worldwide can access products that use those systems to create deepfakes). The only reliable and effective countermeasure is to hold the whole supply chain responsible for deepfake creation and proliferation. All parties must show that they have taken reasonable steps to preclude deepfakes. This approach is similar to how society combats child abuse material and malware.

Effective legislation will:

  • Make the creation and dissemination of deepfakes a crime, and allow people harmed by deepfakes to sue for damages.

  • Hold AI developers liable. Developers of AI systems must show that they have applied reasonable techniques to prevent deepfakes. This includes: 1) precluding an AI's ability to generate deepfake sexual material or fraudulent content, 2) showing that such techniques cannot be easily circumvented, and 3) guaranteeing that the datasets they use to train their AI do not contain illegal material (e.g., child sexual abuse material).

  • Hold AI deployers liable. Those providing or facilitating access to generative AI systems must show that they have applied reasonable efforts to prevent the generation and distribution of deepfakes. This includes techniques to detect and disrupt deepfake creation attempts. 
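To make the deployer obligation concrete, the sort of screening it calls for can be illustrated with a minimal sketch. Everything here is hypothetical: the term list, the function names, and the stub `generate` model call are illustrative stand-ins, not any real product's API.

```python
# Minimal, illustrative sketch of a deployer-side guardrail that screens
# prompts before they reach a generative model. Real deployments layer far
# more on top: output classifiers, provenance watermarking, rate limits,
# and human review.

BLOCKED_PROMPT_TERMS = {  # hypothetical term list, for illustration only
    "undress", "nude photo of", "clone the voice of", "impersonate",
}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt appears to target deepfake creation."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_PROMPT_TERMS)

def generate(prompt: str) -> str:
    """Stub standing in for the underlying generative model."""
    return f"[generated content for: {prompt}]"

def handle_request(prompt: str) -> str:
    """Screen the prompt first; only call the model if it passes."""
    if screen_prompt(prompt):
        return "REFUSED: request appears to target deepfake creation"
    return generate(prompt)
```

A keyword screen alone is trivially circumvented, which is why the developer obligations above also demand that safeguards "cannot be easily circumvented"; robust deployments combine such screens with learned classifiers on both prompts and outputs.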

Deepfakes must be banned, and governments must act. 

Join us. Call on lawmakers to tackle this growing and urgent threat.

