Strength Before Judgement
What Amodei Gets Right (and Wrong) About AI's Adolescence
Dario Amodei's 19,000-word essay frames our current moment as a technological adolescence: strength arriving before judgement. It is the most commercially significant piece of AI risk writing since his own "Machines of Loving Grace" essay in 2024. And it deserves serious engagement, not the reflexive dismissal or uncritical praise it has mostly received.
The core thesis is sound. Powerful AI, defined as agents smarter than Nobel Prize winners across multiple fields, is likely within one to three years. These systems will act autonomously on multi-week tasks, scale to millions of instances, and operate at speeds no human workforce can match. Amodei calls this a "country of geniuses in a datacentre." Whether or not you buy the specific timeline, the directional claim is hard to dispute. The scaling laws have held. AI is already accelerating its own development.
The essay is strongest on economic disruption. Amodei predicts that 50% of entry-level white-collar jobs could be displaced within one to five years, and the data supports that trajectory. Stanford's Digital Economy Lab found that entry-level hiring in AI-exposed jobs has already dropped 13%. Microsoft's 2025 data identifies 5 million white-collar roles facing extinction. New graduates now make up only 7% of hires at big tech companies, down more than 50% from pre-pandemic levels. This is not a forecast; it is already happening.
The essay also correctly identifies the authoritarian capture risk. China's Digital Silk Road has exported AI-powered surveillance and social-control tools to dozens of countries. MIT research shows a feedback loop: authoritarian investment in AI for political control stimulates further AI innovation, which in turn entrenches the regime. Democratic nations face a genuine strategic choice: maintain a technology lead, or cede norms-setting power to states that will use AI to make dissent impossible.
But the essay has blind spots. The biggest is the implicit assumption that the companies building powerful AI should also be the primary architects of its governance. As one critic put it, there is a persistent claim that "a small set of deeply capitalised, fast-moving institutions should be both the authors of risk and the arbiters of safety." Fortune noted that the essay reads as much like a marketing message as a prophecy, and it is hard to disagree. The risks Amodei describes are real. The convenient alignment between his proposed solutions and Anthropic's commercial positioning is also real. Both things can be true.
The "meaning" risk is the one that gets the least attention and may matter the most. In a world where AI outperforms humans at essentially every cognitive task, what gives people purpose? Geoffrey Hinton, when asked about the biggest short-term threat to human happiness, did not say extinction or bioweapons. He said joblessness. People need to feel useful. They need to feel they are contributing something. Emad Mostaque's framework is useful here: if intelligence becomes non-rivalrous and infinitely scalable, the human domain becomes consciousness, meaning-making, and connection. But our economic system, built on scarcity, will process this abundance as poverty through job displacement unless we redesign it.
The practical question for anyone in business or technology is what to do with this. Three things:
First, don’t dismiss the timelines. Whether powerful AI arrives in 2027 or 2030, the economic displacement is already underway. Plan for it.
Second, think about your own workforce. If 50% of entry-level roles are at risk, your talent pipeline is about to break. The companies that figure out how to develop people alongside AI, not instead of them, will have a structural advantage.
Third, engage with governance. The worst possible outcome is regulation designed by people who do not understand the technology and imposed after the damage is done. If you are building or deploying AI, you have an obligation to be part of the conversation about guardrails, not just the conversation about capability.
Amodei is right that this is a test. He is less convincing that the companies sitting the exam should also be marking it.