ControlAI’s Work in the US


What We Do

Since mid-2025, ControlAI has briefed over 100 US members of Congress and staff on the extinction risk posed by superintelligent AI and the need to globally prohibit its development.

Led by our US Director Connor Leahy, we are strictly non-partisan and focus on informing lawmakers across the political spectrum about the unprecedented global and national security risks from superintelligent AI.

ControlAI has helped citizens send over 195,000 messages to their elected officials urging them to support legislation that prohibits the development of superintelligence.

100+ briefings for members of Congress and staff

195,000+ messages sent to elected officials from constituents

ControlAI advocates for a global prohibition on the development of superintelligent AI

Nobel Prize winners, top AI experts, and even the CEOs of AI companies warn that superintelligent AI poses an extinction risk to humanity. In light of this, ControlAI advocates for a global prohibition on the development of superintelligent AI.

We do not support policies that fail to address the extinction risk posed by superintelligence, such as AI data center moratoriums or the deregulation of AI development. The only way to avert this risk is to ensure that superintelligence is not built by any actor, whether domestic AI corporations, foreign adversaries such as China, or non-state actors.

Get Involved

If you want to do more about the risk from superintelligent AI alongside a community sharing your concerns, sign up for a ControlAI Citizens Chapter near you. These chapters are social, welcoming spaces where you can meet people locally and maximize your impact on this issue. We’d love for you to join us.