Artificial Guarantees

This is a collection of inconsistent statements, baseline-shifting tactics,
and broken promises from major AI companies and their leaders, showing
that what they say doesn't always match what they do.

Anthropic

Founded: 2021

Reported Valuation: $60 billion

CEO: Dario Amodei

What They Say: Anthropic CEO Dario Amodei warns of the dangers of US-China AI racing

What They Do: Amodei proposes a dangerous race, advocating the use of recursive self-improvement

What They Say: AI systems have the potential to cause large-scale destruction within 1-3 years

What They Do: Lobby against the enforcement of AI safety standards in California

What They Say: The mitigation of extinction risks as a result of AI should be a global priority

What They Do: Lobby for AI companies to only be fined after a catastrophic event occurs

What They Say: Governments around the world should create testing and auditing regimes

What They Do: Lobby against testing and auditing proposed by California regulators

What They Say: Anthropic will not work to advance state-of-the-art capabilities

What They Do: Release Claude 3.5 Sonnet and state that it “raises the industry bar for intelligence, outperforming competitor models… on a wide range of evaluations”

OpenAI

Founded: 2015

Reported Valuation: $300 billion

CEO: Sam Altman

What They Say: OpenAI is cautious with the creation and deployment of their models

What They Do: OpenAI is “pulling up” releases in response to competition

What They Say: OpenAI CEO Sam Altman argues that building AGI fast is safer because it avoids a compute overhang

What They Do: OpenAI announces Stargate, a $500 billion AI infrastructure project

What They Say: OpenAI understands the need for AI regulations and cares about AI safety

What They Do: Lobby the EU to reduce AI regulations

What They Say: OpenAI is a non-profit so they can "stay accountable to humanity as a whole"

What They Do: OpenAI's CEO, Sam Altman, tells shareholders that OpenAI is considering becoming a for-profit corporation

What They Say: Altman states that the risks of AI development going bad would result in “lights out for all of us”

What They Do: Altman now claims that his worst fear is industry impacts

What They Say: The risks of AI development going bad would result in “lights out for all of us”

What They Do: Shift their framing, now promising that AGI will “change the world much less than we think”

What They Say: OpenAI will require fine-tuned versions of their models to be safety tested

What They Do: OpenAI quietly alters their Preparedness Framework so that this applies only when the model weights are released

What They Say: OpenAI launches a “superalignment” team, pledging to dedicate 20% of their compute to this team’s efforts over the next four years

What They Do: OpenAI dissolves the superalignment team less than a year later

What They Say: OpenAI commits to "publish reports for all new significant model public releases" when models are "more powerful than the current industry frontier"

What They Do: OpenAI launches GPT-4.1 with no plans for a system card

Google DeepMind

Founded: 2010

Owned by: Alphabet Inc (Google)

CEO: Demis Hassabis

What They Say: Google commits to "publish reports for all new significant model public releases" when models are "more powerful than the current industry frontier"

What They Do: Publicly deploy their “most intelligent AI model” without a model card, even though model cards are supposed to “provide essential information on Gemini models, including known limitations, mitigation approaches, and safety performance”

What They Say: Google’s AI won’t be used for military purposes

What They Do: Drop their ban on AI for weapons and surveillance
