The Current State of AI Development
AI progress has exploded over the last decade, reaching near-human-level capabilities in writing, coding, art, math, and many other fields of human activity. This progress has been driven by deep learning: modern AIs are grown by feeding them massive amounts of data and letting them evolve in response, rather than built piece by piece by humans.
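To give a loose sense of what "grown, not built" means in practice, here is a deliberately toy sketch in plain Python that we have added for illustration (it is not from any real AI system): no one writes the rule y = 2x into the program; a single parameter is shaped by data through an optimization loop.

```python
# Toy illustration of "grown, not built": the parameter w is never
# hand-coded; it is shaped by data through an optimization loop.
# (Illustrative sketch only; real deep learning scales this idea to
# billions of parameters and vast datasets.)

data = [(x, 2.0 * x) for x in range(1, 6)]  # training pairs (input, target)
w = 0.0    # a one-parameter "model": prediction = w * x
lr = 0.01  # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x               # model's guess
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # nudge w to shrink the error

print(f"learned w = {w:.3f}")  # ends near 2.0, discovered from data alone
```

Scaled up by many orders of magnitude, this same recipe produces the capable yet opaque systems discussed below.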
Researchers and engineers don't need to understand AIs to create them; indeed, experts consistently fail to anticipate how quickly new capabilities will emerge, or to explain how existing AIs work. Progress is bottlenecked only by resources (such as AI chips, electrical power, and data) rather than by scientific insight. As tech giants and frontier AI companies collaborate to unlock ever more resources, the path leads to increasingly intelligent yet increasingly opaque AIs.
The Path to Superintelligence
How far can this trend of smarter AI go? Looking at how humanity has historically increased its intelligence, three strategies emerge: tools (offloading work onto external artifacts), groups (coordinating many minds), and methods (better techniques for thinking). AIs can leverage these same strategies; indeed, each is already in use in current AI R&D.
Although skeptics claim that various components of intelligence cannot be automated, no existing scientific theory of intelligence supports these claims. We should therefore expect the trend toward smarter AIs to continue, eventually leading to Artificial General Intelligence (AGI): AIs able to perform the same intellectual tasks as humans.
The Emergence of AGI and Beyond
Since AGI could perform any intellectual task a human can, it could do AI research and improve its own intelligence. AGI companies and AI researchers are already pushing hard in this direction. And because software is cheaper and more efficient than biological brains, an AGI would improve far faster than any human, eventually reaching artificial superintelligence (ASI): AI that surpasses humanity's collective intelligence.
As it continues to scale, ASI would unlock abilities to shape matter and energy that would look godlike compared to human engineering. Even without malicious intent, such godlike AIs would by default wipe out humanity as collateral damage in the pursuit of their own goals, much as ants are collateral damage to contractors building a house.
The Challenge of AI Alignment
Godlike AIs lead to catastrophe because of the immense difficulty of aligning an AI's goals with those of humanity. Alignment is a harder version of problems humanity already struggles with: making companies and governments actually serve what their citizens care about.
Solving alignment requires massive progress on questions like identifying what we value, reconciling contradictions between values, and predicting consequences to avoid harmful side effects. This would take decades of research and trillions of dollars in investment; yet only a handful of people and a couple hundred million dollars are currently devoted to it, with most effort going toward making AIs more powerful. The little existing work merely patches current issues and passes the buck to future AIs. We are not on track to solve alignment, and thus building godlike AI would cause human extinction.
The Regulatory Gap
Lacking a solution to alignment, we need to ensure godlike AIs are not built. This requires institutions with the authority to regulate frontier AI research at both the national and international levels.
Yet these institutions simply do not exist, very little is being done to create them, and what governance work does exist is already being undermined by the very AI companies racing to AGI.
The Industry Response
This lack of effort on alignment and regulation is not accidental: frontier AI companies systematically undermine both in order to race to AGI without obstacles. These companies are largely run by utopists who want to build AGI because they believe it will usher in their ideal world.
Fearing that AGI will otherwise be built by the "wrong" people, these utopists increasingly cut corners, undermining safety. They employ classic industry tactics of fear, uncertainty, and doubt (FUD): spreading fear by stoking geopolitical tensions, sowing doubt by shifting their public stances, capturing regulatory efforts under the cover of self-regulation, and undermining research that might slow them down.
A Path Forward
We are on a dark trajectory toward human extinction. Yet giving up would be a mistake, and precisely the mistake those racing to AGI want us to make. Instead, there is a narrow path forward through civic duty. The people racing toward AGI are a tiny minority putting everyone else at risk for their delusions.
Because no one wants humanity to go extinct, this issue can unite people across party lines and national borders. The work starts with spreading awareness and exercising civic duty: contacting representatives about the extinction risk posed by AI.