Anthropic CEO warns that AI could bring slavery, bioterrorism, and unstoppable drone armies. I’m not buying it.


Dario Amodei in glasses, making a point with two hands on stage.

In a new 38-page essay published on his personal website, Anthropic CEO and co-founder Dario Amodei makes a plea for urgent action to address the risks of super-intelligent AI.

Amodei writes that this type of self-improving AI could be just one to two years away — and warns that the risks include the enslavement and “mass destruction” of mankind.

The essay, “The Adolescence of Technology,” deals with AI risks both known and unknown. The CEO talks at length about the potential for AI-powered bioterrorism, drone armies controlled by malevolent AI, and AI making human workers obsolete at a society-wide scale.

To address these risks, Amodei suggests a variety of interventions — from self-regulation within the AI industry all the way up to amending the U.S. Constitution.

Amodei’s essay is thoughtful and well-researched. But it also commits the cardinal sin of AI writing: Amodei can’t resist anthropomorphizing AI.

And by treating his product like a conscious, living being, Amodei falls into the very trap he warns against.

Tellingly, at the same time, the New York Times published a major investigation into “AI psychosis.” The term has no precise medical definition; it’s an umbrella for a wide range of mental health problems exacerbated by AI chatbots like ChatGPT or Claude, including delusions, paranoia, and total breaks from reality.

These cases often have one thing in common: A vulnerable person spends so long talking to an AI chatbot that they start to believe the chatbot is alive. The Large Language Models (LLMs) that power platforms like ChatGPT can produce a very lifelike facsimile of human conversation, and over time, users can develop an emotional reliance on the chatbot.

When you spend too long talking to a machine that’s programmed to sound empathetic — and when that machine is ever-present and optimized for engagement — it’s all too easy to forget there’s no mind at work behind the screen.

LLMs are powerful word-prediction engines, but they have no consciousness, feelings, or empathy. Reading “The Adolescence of Technology,” I started to wonder if Amodei has formed too deep an emotional connection to his own machine.

Amodei is responsible for creating one of the most powerful chatbots in the world. He has no doubt spent countless hours using Claude, talking to it, testing it, and improving it. Has he, too, started to see a god in the machine?

The essay describes AI chatbots as “psychologically complex.” Amodei talks about AI as if it has motives and goals of its own. He describes Anthropic’s existing models as having a robust sense of “self-identity” as a “good person.”

In short, he’s anthropomorphizing generative AI — and not merely some future, super-intelligent form of AI, but the LLM-based AI of today.

Why AI doom is always around the corner

So much of the conversation around the dangers of AI is pulled straight from science fiction, as Amodei himself admits. And yet he is guilty of the same overreach.

The essay opens with a section titled “Avoiding doomerism,” where Amodei criticizes the “least sensible” and most “sensationalistic” voices discussing AI risks. “These voices used off-putting language reminiscent of religion or science fiction,” he writes.

Yet Amodei’s essay also repeatedly evokes science fiction. And as for religion, he seems to harbor a faith-like belief that AI super-intelligence is nigh.

Stop me if you’ve heard this one before: “It cannot possibly be more than a few years before AI is better than humans at essentially everything. In fact, that picture probably underestimates the likely rate of progress.”

To AI doomers, super-intelligence is always just around the corner. In a previous essay with a more utopian bent, “Machines of Loving Grace,” Amodei wrote that super AI could be just one or two years away. (That essay was published in October 2024, which was one to two years ago.)

Now here he is making the same estimate: super-intelligence is one to two years away. Again, it’s just around the corner. Soon, very soon, generative AI tools like Claude will learn how to improve themselves, achieving an explosion of intelligence like nothing the planet has ever seen before. The singularity is coming soon, the AI boosters say. Just trust us, they say.

But something cannot be perpetually imminent. Should we expect generative AI to keep progressing exponentially, even as the AI industry seems to be banging its head against the wall of diminishing returns?

Certainly, any AI CEO would have a strong incentive to think so. An unprecedented amount of money has already been invested in developing AI infrastructure. The AI industry needs that money spigot to stay open at all costs.

At Davos last week, Jensen Huang of NVIDIA suggested that the investment in AI infrastructure is so large that it can’t be a bubble. From the people who brought you “too big to fail” comes a new hit song: “too big to pop.”

I’ve seen the benefits of AI technology, and I do believe it’s a powerful tool. However, when an AI salesman tells you that AI is an unstoppable world-changing technology on the order of the agricultural revolution, or a world-altering threat on the order of the atom bomb, and that AI tools will soon “be able to do everything” you can do, you should take this prediction for what it is: a sales pitch.

AI doomerism has always been a form of self-flattery. It attributes to human beings god-like powers to create new forms of life, and casts Silicon Valley oligarchs as titans with the power to shape the very foundations of the world.

I suspect the truth is much simpler. AI is a powerful tool. And all powerful tools can be dangerous in the wrong hands. Laws are needed to constrain the unchecked growth of AI companies, their impact on the environment, and their contribution to growing wealth inequality.

To his credit, Amodei calls for industry regulation in his essay, mentioning the r-word 10 times. But he also mistakes science fiction for science fact in the process.

There is growing evidence that LLMs will never lead to the type of super-intelligence that Amodei believes in with such zeal. As one Apple research paper put it, LLMs seem to offer only “the illusion of thinking.” The long-awaited GPT-5 largely disappointed ChatGPT’s biggest fans. And many large-scale enterprise AI projects, by some estimates as many as 95 percent, seem to be crashing and burning.

Instead of worrying about the bogeyman of a Skynet-like apocalypse, we should focus on the concrete harms of AI: unnecessary layoffs inspired by overconfident AI projections and nonconsensual deepfake pornography, to name just two.

The good news for humans is that these are solvable problems if we put our human minds together — no science fiction thought experiment required.


This article reflects the opinion of the writer.
