Inside the AI Breakup That Turned Into a $300 Billion Rivalry
You probably know that Anthropic was founded by people who left OpenAI. What you probably don’t know is why they left, or how personal it got on the way out. It was a personal falling-out that became an ideological split, one that’s now shaping trillion-dollar decisions about how AI gets built, who controls it, and what it’s allowed to do.
A new WSJ investigation by reporter Keach Hagey, based on interviews with current and former employees at both companies, gives the most complete account yet of how it started. Here’s the arc.
It started as a disagreement about principles…
Dario Amodei joined OpenAI in 2016, eventually rising to VP of Research and becoming a key architect of GPT-2 and GPT-3. But behind the scenes, he and Sam Altman were increasingly at odds over one core question. How fast should you move, and who should be in charge?
The cracks widened fast
- The Russia/China proposal. Co-founder Greg Brockman floated a fundraising plan that included selling AGI access to rival nations. Dario reportedly considered it borderline treasonous and nearly quit on the spot.
- The broken promise. Altman assured Dario that Brockman and Ilya Sutskever (OpenAI’s chief scientist at the time) wouldn’t have authority over him. Dario later discovered a quiet handshake deal that gave both parties the power to fire him.
- The confrontation. In 2020, Altman called Dario and his sister Daniela into a conference room and accused them of organizing negative board feedback against him. It ended in a shouting match.
Dario’s conditions to stay were simple. Report directly to the board, and never work with Brockman again. Both were rejected. He left in 2021, taking Daniela and a dozen other OpenAI employees with him to found Anthropic.
At its core, the split came down to this. Dario believed you could scale AI fast AND build it safely. Altman believed speed and market dominance came first. They never resolved it. They just built two separate companies around it. The WSJ report ends there, but the feud absolutely did not.
The same disagreements about control, safety, and power now show up in every major public decision both companies make. Just in February:
- The Super Bowl shots. Anthropic ran a four-ad campaign with one message front and center. “Ads are coming to AI. But not to Claude.” A direct dig at OpenAI’s plan to run ads in ChatGPT. Altman publicly called the ads “clearly dishonest.” Marketing professor Scott Galloway’s verdict was blunt. Market leaders don’t acknowledge the competition. Altman blinked.
- The India photo op. At an AI summit, India’s Prime Minister Modi tried to manufacture a unity moment with rival CEOs, hands raised together. Google’s Sundar Pichai and Meta’s AI chief joined in. Altman and Amodei raised separate fists and didn’t make eye contact. The internet had a field day. Altman later said he was “just confused.” Sure.
- The Pentagon standoff. When the Defense Department came calling, the two companies went in opposite directions, revealing exactly where their priorities lie. Anthropic refused to sign without hard safeguards against autonomous weapons and mass domestic surveillance. OpenAI signed within hours. Defense Secretary Hegseth responded by threatening to designate Anthropic a national supply chain risk, which would effectively bar it from working with any government contractor.
Privately, the rhetoric is even harsher. Per the WSJ, Dario has reportedly compared the Altman-Musk lawsuit to “Hitler vs. Stalin” internally, called co-founder Brockman’s $25 million donation to a pro-Trump super PAC “evil,” and likened OpenAI to a tobacco company.
Why this matters
Two companies valued at north of $300 billion are making decisions about ads, weapons contracts, and who gets access to the world’s most powerful AI.
Those decisions trace back to a shouting match in 2020. And the gap keeps widening. This week, leaked Anthropic documents revealed a new model called Claude Mythos that’s described as “dramatically” ahead of anything else on the market. It’s also described as “very expensive to serve.”
Meanwhile, Claude users on $100/month plans are already hitting rate limits within an hour. The more powerful the model, the harder it is to keep it accessible. And accessibility is exactly the kind of problem Dario said he was leaving OpenAI to fix.
Our take…
Dario built Anthropic around the idea that safety and scale aren’t opposites.
But right now, his most powerful model is too expensive to serve, his paying users are hitting walls, and his company just got threatened by the Pentagon for standing on principle. OpenAI, meanwhile, signed the defense deal, is running ads, and is winning the distribution war. The irony is hard to miss.
The “responsible” path is turning out to be the harder business to run. Whether that changes, or whether Anthropic’s bet eventually pays off, is probably the most important question in AI right now.
Editor’s note: This content originally ran in the newsletter of our sister publication, The Neuron. To read more from The Neuron, sign up for its newsletter here.
The post Inside the AI Breakup That Turned Into a $300 Billion Rivalry appeared first on eWEEK.