Key learnings from our engagement with lawmakers
By Leticia Garcia
Between November 2024 and May 2025, ControlAI met with 84 cross-party UK parliamentarians. Roughly 4 in 10 were MPs, 3 in 10 were Lords, and 2 in 10 represented devolved legislatures: the Welsh Senedd, Scottish Parliament, and Northern Ireland Assembly. We briefed these parliamentarians about the risk of extinction from AI that arises from loss of control of advanced AI systems. 1 in 3 lawmakers that we met during this period supported our campaign.
Until recently, civic engagement on AI risk has been largely overlooked. Yet it is now more critical than ever. Despite warnings from Nobel laureates, AI scientists, and CEOs of leading AI companies that artificial intelligence poses an “extinction” threat to humanity, no legislation currently protects the British public. While these risks originate from a handful of companies with the resources to develop advanced AI systems, their impact threatens all of society. The public plays a vital role in determining acceptable levels of risk and demanding appropriate safeguards from lawmakers.
To solve this problem, parliamentarians must first be aware of it. Like much of society, they are still getting up to speed with AI and will remain unaware of the risks unless someone informs them. At ControlAI, we are working to build this common knowledge. You can help too!
To help you get started, we are sharing key learnings from our engagement with parliamentarians during our Superintelligence campaign. This document covers: (i) how parliamentarians typically receive our AI risk briefings; (ii) practical outreach tips; (iii) effective leverage points for discussing AI risks; (iv) recommendations for crafting a compelling pitch; (v) common challenges we've encountered; (vi) key considerations for successful meetings; and (vii) recommended books and media articles that I’ve found helpful.
(i) Overall reception of our briefings
Very few parliamentarians are up to date on AI and AI risk: Around 80–85% of parliamentarians were only somewhat familiar with AI, with their engagement largely limited to occasional use of large language models (LLMs) like ChatGPT for basic tasks (e.g., getting assistance with writing a speech). Their staff were slightly more familiar with AI, but few were well-versed in the broader conversation surrounding it.
Capacity is the main limiting factor: MPs typically have 3–5 staffers, many of whom focus primarily on constituency work. Members of devolved legislatures usually have 2–4 staffers, while Peers often have even less support – some have no dedicated staff at all.
As a result, there is rarely anyone on these teams who can dedicate significant time to researching AI. Except for a few staffers with a personal interest in AI, most staffers we spoke to had little or no familiarity with it. While most of those we spoke to expressed a desire to learn more, they often cited lack of time and bandwidth as an impediment.
Overall, the briefings were very well received: Parliamentarians valued the chance to ask basic questions about AI and often said they learned a great deal. Both they and their staff welcomed a setting where they could ask “silly questions”. Several, especially MPs and their staffers, noted they are often lobbied by tech firms focused on AI’s benefits and found it refreshing to hear from an organisation addressing the risks and how to manage them.
Tangible signals confirm this: Parliamentarians and their staffers are typically polite and non-confrontational. They won’t say things like “I think this is stupid” or “this wasn’t a productive use of my time.” It is important to pay attention to tangible signals when assessing whether their feedback is genuinely positive. These signals include actions such as supporting our campaign, offering or agreeing to make introductions, or volunteering to sponsor events in Parliament.
The most important signal for us has been that, when presented with a clear ask, 1 in 3 lawmakers we met chose to take a public stance by supporting our campaign. In doing so, they acknowledged the concern that AI poses an extinction risk to humanity and called for targeted regulation of the most advanced AI systems. At the outset, we were told that a statement with such strong wording would never gain support from lawmakers. Yet, once they were presented with the problem – along with the need for open discussion to address it, and warnings from the very people developing advanced AI – we succeeded in gaining their support in 1 out of every 3 cases.
(ii) Outreach tips
Cold outreach worked better than I expected: Initially, I focused on identifying parliamentarians with an interest in AI. Although this approach was helpful, it was slow and had limited reach. By comparison, cold outreach proved worthwhile: it is low-cost, and more parliamentarians than I expected chose to engage. Many found the 45-minute briefing valuable given their limited capacity to access such information through staff or their own research.
Relentlessly follow up: If you have contacted a parliamentarian once or twice without receiving a response, do not assume that they are uninterested. Parliamentarians receive an overwhelming volume of correspondence, so success often comes down to being at the top of their inbox at the right moment.
I have relentlessly followed up with people, and nobody has been angry with me – quite the contrary, some have thanked me for it. It is important to always be kind when following up and never reprimand someone for taking time to respond – they are extremely busy, and doing so would not help anyway. They will appreciate your understanding.
Ask for introductions: At the end of each meeting, I try to remember to ask whether there is another colleague who might be interested. If I have trouble reaching that person directly, I ask for an introduction.
(iii) Key talking points
Statements from relevant authorities
Extinction risk
In 2023, Nobel Prize winners, AI scientists, and CEOs of leading AI companies stated that “mitigating the risk of extinction from AI should be a global priority.” Communicating this concern effectively is key. Consider the difference between these two approaches:
Approach 1: “AI poses an extinction risk.”
The immediate response is likely: “How so?” – placing the burden of proof on the advocate. As a policy advisor at a civil society organisation, I lack the authority or perceived credibility to make this case convincingly on my own. Moreover, raising scenarios like AI escaping containment or unaligned superintelligence can seem abrupt without first laying the groundwork (see my note on inferential distances below).
Approach 2: “In 2023, Nobel Prize winners, AI scientists, and CEOs of leading AI companies stated that ‘mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.’”
Now, present the list of signatories. Briefly explain who Geoffrey Hinton and Yoshua Bengio are, and highlight the CEOs of major AI companies – Sam Altman, Dario Amodei, and Demis Hassabis. Watch as the parliamentarian scans the page, taking in the weight of these names, sometimes remarking, “Oh, there’s even Bill Gates.” Suddenly, the claim is not just coming from a stranger – it’s backed by a broad coalition of experts.
This also creates space for a personal connection. Some parliamentarians react with surprise – even discomfort – and I acknowledge that I felt the same when I saw the statement was signed by the very people building this technology. “The question I asked myself was: what is driving this concern?” From there, I can begin explaining the deeper issues with how advanced AI is being developed. At this point, they understand that what they are about to hear matters – not just to me, but to Nobel laureates, AI scientists, and the CEOs shaping the future of AI.
Sometimes, parliamentarians will argue that tech CEOs are simply hyping up AI in order to attract more investment. This is a fair concern. When this issue arises, it is important to highlight two key points: Firstly, the warnings are not only coming from CEOs who have a financial interest in the success of AI. AI scientists, including Yoshua Bengio and Geoffrey Hinton, are also raising awareness; the latter quit Google to speak out about the risks of AI. Secondly, current and former employees within these companies have echoed these warnings. Some were willing to forfeit millions of dollars in equity to speak out publicly about the risks. In recent months, several staff members from AI safety teams, particularly at OpenAI, have resigned after losing trust in their organisations.
Loss of control
In raising the issue of loss of control, it is worth keeping in mind the many authoritative sources that acknowledge the issue. Risks of losing control are acknowledged in the 2025 International AI Safety Report, the Singapore Consensus on AI safety priorities, and sometimes by government officials themselves! The UK Secretary of State for Science, Innovation and Technology, for example, publicly addressed this concern at the 2025 Munich Security Conference:
“We are now seeing the glimmers of AI agents that can act autonomously, of their own accord. The 2025 International AI Safety Report, led by Yoshua Bengio, warns us that - without the checks and balances of people directing them - we must consider the possibility that risks won’t just come from malicious actors misusing AI models, but from the models themselves. [...] Losing oversight and control of advanced AI systems, particularly Artificial General Intelligence (AGI), would be catastrophic. It must be avoided at all costs.”
Public attention
Parliamentarians must prioritise among numerous competing issues, and they are more likely to engage with a topic when they see it resonates with the public and their constituents. Two key resources can help make that case.
Polls: At ControlAI, we partnered with YouGov to conduct in-depth public opinion research on AI and its regulation across the UK. Notably, 79% support creating a UK AI regulator, and 87% support requiring developers to prove their systems are safe before release. While some policymakers are more poll-sensitive than others, this poll has generally been well received. In addition to our own polling, we sometimes refer to polling from the AI Policy Institute, which has run numerous representative polls of US citizens.
Media coverage: Press attention also signals public interest, and there is an increasing amount of media coverage of AI risks. I usually bring a selection of recent articles to meetings, and more often than not, as soon as I take them out, the parliamentarian asks: “Can I keep them?” Some examples of articles I have shared are listed at the end of this post.
High-risk standards in other industries
“Predictability and controllability are fundamental prerequisites for safety in all high-risk engineering fields.” [Miotti, A., Bilge, T., Kasten, D., & Newport, J. (2024). A Narrow Path (p. 11).]
“In other high-risk sectors, demonstrating safety is a precondition for undertaking high-risk projects. Before building and deploying critical systems for public use, companies must meet verifiable safety standards. Why should AI be treated any differently?”
This argument rests on the following structure:
P1: AI is comparable to other high-risk sectors.
P2: High-risk sectors are subject to strict safety standards.
C: Therefore, AI should also be subject to strict safety standards.
To challenge this reasoning, one must dispute either P2 (arguing that existing safety standards in other industries are excessive or unwarranted) or P1 (arguing that AI is not sufficiently analogous to those domains).
This point is usually understood, but a supporting example can help. The risk, however, is that the conversation drifts into the example’s domain rather than AI. I do not mind discussing this when time allows – but with parliamentarians, time is limited, and you need to spend it wisely.
To build a bridge, you must prove it can withstand several times the maximum expected load, including vehicles, pedestrians, and environmental stress. Engineers follow strict structural standards, and designs are reviewed by regulators and independent experts. No one accepts a bridge built on intuition or best guesses.
To develop a new drug, companies must complete a multi-phase testing process to assess safety, efficacy, and side effects. Agencies like the MHRA or FDA require robust, peer-reviewed evidence before granting approval for public use.
Similarly, aircraft manufacturers must meet rigorous aviation safety standards. Every component is stress-tested, and regulators like the UK Civil Aviation Authority or EASA must certify the plane before it carries passengers.
Empirical evidence
Examples are helpful, particularly when discussing loss of control. Consider the following research paper: Meinke, A., Schoen, B., Scheurer, J., Balesni, M., Shah, R., & Hobbhahn, M. (2024). Frontier models are capable of in-context scheming. arXiv. https://arxiv.org/abs/2412.04984
This video by Apollo Research explains the most interesting results in under two minutes. Of note, The Times published an article on this issue, which I often reference to illustrate its relevance.
Of course, other relevant research could also illustrate this point, and it is worth keeping an eye out for new studies to keep examples current and relevant.
(iv) Crafting a good pitch
Mind the gap
“When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. [...] A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.”
— Eliezer Yudkowsky, Expecting Short Inferential Distances, LessWrong.
Never assume your audience shares your background – or any prior knowledge at all. Always ask: would someone new to this topic understand the concepts being introduced?
AI is full of buzzwords like “AGI”, “machine learning”, “frontier systems”, and “jailbreaking”. If these appear in your pitch, there is a good chance of confusion. Recurse as needed to introduce ideas clearly, and whenever possible, replace jargon with plain explanations of the underlying concept or phenomenon.
Similarly, avoid introducing complex ideas, such as the notion that some AI systems are capable of scheming, without first laying the groundwork for how AI systems work and why such issues may arise.
Make it memorable
Parliamentarians care not only about understanding an issue, but also about being able to explain it – to constituents, colleagues, and the public. If they support a campaign and are asked why, they need to respond in their own words. They cannot just say, “It was a compelling pitch from nice people.”
A pitch aimed at building common knowledge should not be dense with detail or technical complexity. That can be counterproductive – arguments may be persuasive in the moment but quickly forgotten. If a parliamentarian cannot easily recall or repeat the message, they will be reluctant to speak on it.
Ideally, a pitch should combine clear explanations with simple, memorable talking points they can use to explain why the issue matters and why they have chosen to engage.
Some examples of memorable arguments:
“AI is grown, not built.” [Leahy, C., Alfour, G., Scammell, C., & Others. (2024). The Compendium (pp. 16–18).]
Traditional software is coded line by line by engineers, who need a broad understanding of how the program works. In contrast, AI capabilities are not explicitly programmed by developers; they are not “built into” the system.
Instead, researchers use algorithms known as neural networks, which are inspired by the structure and function of the human brain. These networks are fed large volumes of data and learn from the patterns in that data.
Unlike conventional code, which is written by and legible to developers, modern AI systems are not really understood even by the people who create them: inspecting a neural network offers little insight into how it produces its behaviour. This is why such systems have been referred to as “black boxes.”
Consider that, in a recent episode of the In Good Company podcast (Norges Bank Investment Management), Dario Amodei, CEO of Anthropic, the second-largest AI company, said: “Maybe we now like understand 3% of how they [AI systems] work.”
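To make “grown, not built” concrete, here is a minimal, purely illustrative sketch in Python (the toy data, layer sizes, and names are invented for illustration, not taken from any real system). The developer writes only a generic training loop; the rule the network ends up implementing is never written down as code and exists only as numbers in the weight matrices.

```python
# Purely illustrative sketch: a tiny neural network "grown" from data.
# The developer writes a generic training loop, not the decision rule itself.
import numpy as np

rng = np.random.default_rng(0)

# Toy data with a hidden pattern the developer never writes down as code:
# the label is 1 exactly when the two inputs have the same sign.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Randomly initialised weights -- the "program" starts as noise.
W1 = rng.normal(size=(2, 16))
W2 = rng.normal(size=(16, 1))

def predict(X):
    """Forward pass: tanh hidden layer, then a sigmoid output."""
    h = np.tanh(X @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))

# Training loop: repeatedly nudge the weights to better fit the data.
learning_rate = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))
    error = p - y                      # gradient of cross-entropy w.r.t. the output logit
    dW2 = h.T @ error / len(X)
    dW1 = X.T @ ((error @ W2.T) * (1 - h ** 2)) / len(X)
    W2 -= learning_rate * dW2
    W1 -= learning_rate * dW1

# The learned "rule" now lives in 48 opaque numbers; printing them
# offers little insight into how the network solves the task.
print("training accuracy:", ((predict(X) > 0.5) == (y > 0.5)).mean())
```

Nothing in the file specifies when the output should be 1; that behaviour is induced from the examples, which is why reading the finished “program” (the weights) tells you so little about how it works.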
"It’s not only tools that are being developed, but also agents."
Progress in AI capabilities is rapidly outpacing our understanding of how these systems work and how to ensure they behave as intended. Despite this, billions are being invested to make them not only more powerful but also increasingly autonomous. As Secretary of State for DSIT Peter Kyle warned at the Munich Security Conference, “novel risks will emerge from systems acting as autonomous agents to complete tasks with only limited human instruction.”
Keep innovating and improving
Exploit and explore: Build on the strongest parts of your current pitch, but continue testing new angles, arguments, or examples. A good rule of thumb is to keep 80% of the pitch consistent and use the remaining 20% to explore and innovate.
Improve through iteration: Pay attention to what resonates – whether it is specific narratives, examples, or materials – and refine your pitch based on that feedback.
Do not obsess over context: The broader landscape is always changing. While it is useful to have responses to timely issues, such as the UK's decision not to support the Paris AI Action Summit declaration, context-specific questions tend to be short-lived. It is generally not worth trying to incorporate these into your core arguments.
Avoid the rot: Without regular practice, even a strong pitch can lose its edge – you risk omitting key points or falling back on weaker phrasing. Like athletes, we perform best with consistent training!
(v) Some challenges
Not feeling the AGI
“Even though we now have A.I. systems contributing to Nobel Prize-winning breakthroughs, and even though 400 million people a week are using ChatGPT, a lot of the A.I. that people encounter in their daily lives is a nuisance. I sympathize with people who see A.I. slop plastered all over their Facebook feeds, or have a clumsy interaction with a customer service chatbot and think: This is what’s going to take over the world?”
— Kevin Roose, Powerful A.I. Is Coming. We’re Not Ready, New York Times.
This quote highlights a core challenge in communicating AI risks and potential. A staffer once told me ChatGPT is “only good for chicken recipes, and not even very good ones!” That view is relatively common. As Kevin Roose observes, most people’s experience with AI is underwhelming or frustrating. Many parliamentarians and staffers I have spoken with have had limited, unimpressive interactions. Those who have asked ChatGPT to generate a speech in their own style are often surprised – but even then, that does not amount to feeling the AGI.
Most people use AI for simple tasks – interactions that do not convey the scale of what is coming. Few truly grasp what AGI will bring. I often joke that one robot doing a backflip creates more of a gut-level understanding of the coming transformation than any polished pitch. Concrete, real-world examples of concerning AI behaviour help bridge this gap, even if only partially.
Defeatist views
The unexpected harms of social media – from tools meant to connect to platforms linked with isolation, addiction, and low self-esteem – have helped some recognise the need to think ahead. They show why we must proactively consider the consequences of new technologies and plan how to manage them before the harms emerge.
Others, however, take a more defeatist view: “We haven’t even been able to remove videos of people dying by suicide from the internet – how are we supposed to manage something as powerful as this?”
In those moments, it is useful to flip the point: exactly. Once technologies are released, they cannot be uninvented. That is why now is the time to act!
Underlying beliefs
A parliamentarian told us, after hearing about the risks: “If a company is developing an AI system that poses unacceptable risks, the board will stop it. That is what boards are for!”
Such assumptions often remain unspoken until they become a bottleneck. If someone simply says, “That won't happen”, or offers vague reasoning, do not accept this as an answer. Ask questions to uncover the underlying belief. You may not be able to dismantle the belief entirely, but gently challenging it can help to keep the conversation productive. For example: “Do you think boards always function perfectly to prevent harm?” Most people will quickly recognise this as unrealistic and become more receptive to your argument.
Misconceptions
Just like underlying beliefs, some common misconceptions can quietly derail a conversation. For example, some people assume that for a system to scheme, it must be conscious or evil. But that is not true: a system can simply correctly infer that the strategy intended by its developer is not the one that best serves its long-term goals.
It is helpful to keep these misconceptions in mind – and sometimes even address them proactively. A quick clarification before introducing your main point can prevent confusion and make the rest of the discussion more effective.
(vi) General tips
Before a meeting, prepare
It is helpful to understand a parliamentarian’s involvement with AI, their role in Parliament and their party, and whether they are leading any related projects or campaigns. In the UK, Hansard lets you read all their contributions and search by keyword; devolved legislatures also provide some records of parliamentary activity.
Good to see you, <name>, <smile>
Remember names: As Dale Carnegie said, “a person’s name is to that person the sweetest and most important sound in any language.” It matters. If you are unsure how to pronounce a name, look it up or ask – mispronouncing it throughout a meeting can be distracting and disengaging. Knowing names in advance also helps in unexpected encounters; I have landed meetings just by greeting someone by name in the lobby.
Smile: It puts others (and you) at ease. As Carnegie put it, “your smile is a messenger of your goodwill.”
And a personal tip: Speak slowly!
Everyone wants to talk about their book
"We’re wrapping up the programme, and my book – which is right there on the table – hasn’t been discussed at all, and it looks like it won’t be. [...] I’ve come here to talk about my book, not about what people think – which I couldn’t care less about."
While I understand this journalist’s frustration, which made for an iconic moment in Spanish television, I often recall it differently: as a reminder that everyone wants to talk about their book.
Show a genuine interest in what the parliamentarian has to say. The goal is to understand their perspective and find ways to collaborate. If you do not let people speak, they will feel ignored – and you will miss the chance to build a connection. People love talking about themselves! As Dale Carnegie said, “To be interesting, be interested.” Ask about their concerns, acknowledge their questions – even if you eventually need to steer the conversation back to your message. You want to inform them, but also to bond with them and be able to work together.
It takes both Michael and Jan
Lessons from sales often apply to advocacy. In terms of style, some like to build rapport through informal conversation, while others focus on providing structured arguments and evidence.
A scene from The Office illustrates this well: Jan takes a formal, strategic approach to the sales meeting; Michael wins the deal by being relaxed, friendly, and relatable – without even pitching the product.
In my view, charm alone is not enough. Policymakers should understand and care about the issue, rather than just liking the messenger. However, excessive formality can also be limiting; trust is important!
Striking the right balance is key. Be clear, but also human. Take the time to connect. If you are presenting with someone else, lean into complementary strengths – one can lead with warmth, the other with clarity.
The devil is in the details – and so is some of the feedback
Every meeting offers subtle and non-verbal feedback on both your message and delivery. I pay close attention to when a parliamentarian writes something down. It is not about what they write down – it is important not to be intrusive – but when they write it: a quick note after a key point often signals interest or relevance.
You also start to sense shifts in the room’s energy – when attention drifts, when you regain it, or when something resonates. With time, you develop a feel for how your message is landing.
The quiet value of staffer conversations
New arguments or strategies are best tested in low-risk settings where feedback is easy to gather. Meetings with staffers are ideal for this. Staffers filter and distil information for their MPs. Focused on getting the message right, they ask more questions, are candid about what they do and do not understand, and often give direct feedback on what works and what does not.
Parliamentarians are people too!
Running for office involves many personal sacrifices. It is not always a glamorous job, and the hours are long. In parliament, elected officials juggle meetings with civil society, committee work, debates, votes, and events. Outside of parliament, they are often buried in constituency casework.
The parliamentarian you are speaking with chose this path because they wanted to make the world better. Keep that in mind when you engage with them! Respect their time, and be honest. They deserve to hear the truth from you. Do not aim to be the slickest advocate, but to sincerely convey what you believe. If you believe humanity faces an extinction risk from AI, you are not doing anyone any favours by concealing that fact.
Write it down
I always bring a notebook to meetings to capture key questions and comments about our message, which are valuable for learning and iteration. That said, I have sometimes taken notes too frantically, once making a parliamentarian slightly uneasy – perhaps because they said something not meant for broad sharing. It is important to take notes calmly and discreetly, focusing on key words that will help you recall the exchange later.
Be kind to yourself
After a meeting, you will often spot things to improve: regretting a missed point, a poorly chosen example, or awkward phrasing. That is normal, and it is a good sign: it means you are learning. Many of the lessons I have shared here come from my own mistakes. You will make yours too. It takes time. Be kind to yourself.
(vii) Books & media articles
How Westminster Works and Why It Doesn’t, by Ian Dunt: I found it a useful introduction to the UK political system — covering frontbenchers vs. backbenchers, first-past-the-post, what MPs actually do (split between Parliament and constituency), the hidden value of the House of Lords, how the civil service works, the roles of the Treasury and No. 10, and the perverse incentives embedded throughout the system.
How to Win Friends and Influence People, by Dale Carnegie: A cringe-worthy title, but ultimately a charming book; and a great reminder of basic principles: show genuine interest in others, smile, remember names, listen well, be truthful, and avoid arguments.
How Parliament Works (9th Edition) by Nicolas Besly and Tom Goldsmith: An excellent guide to all things Parliament – from the roles of the two chambers and the King, to key actors, the structure of a parliamentary day, how bills are made and progress through both Houses, the function of questions, and how committees operate.
For those looking to engage with parliamentarians in the US, I recommend reading Akash Wasil’s post about his experience speaking with congressional staffers about AI risk in 2023. While the political landscape has changed significantly since then, I believe there is still much to learn from his approach and insights.
Some examples of media articles:
New York Times (14/03/25) - Powerful AI Is Coming. We're Not Ready.
The Times (06/12/24) - ‘Scheming’ ChatGPT tried to stop itself from being shut down.
Guardian (28/01/25) - Former OpenAI safety researcher brands pace of AI development ‘terrifying'.
The Spectator (29/01/25) - DeepSeek shows the stakes for humanity couldn’t be higher.
Newsweek (31/01/25) - DeepSeek, OpenAI, and the Race to Human Extinction | Opinion
Financial Times (12/09/24) - OpenAI acknowledges new models increase risk of misuse to create bioweapons.
Wall Street Journal (21/11/24) - The AI Effect: Amazon Sees Nearly 1 Billion Cyber Threats a Day.
Vox (19/05/24) - “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded.