OpenAI's Scarlett Johansson saga reveals an 'ask forgiveness, not permission' problem
OpenAI has faced backlash from artists, authors, and creators who say the company's AI models have been trained on their material without permission.
- OpenAI's long-running strategy might be in hot water.
- The ChatGPT maker has faced claims that it has used artists' work to train AI, without permission.
- Its latest run-in with Scarlett Johansson could deepen those claims.
OpenAI's biggest critics have long held the view that Sam Altman's success has been built on an "ask forgiveness, not permission" strategy that could come back to haunt him.
They might be proven right.
The ChatGPT maker has been embroiled in fresh controversy since Monday after Scarlett Johansson lashed out at the company over a new voice feature for its chatbot. In her view, it sounded an awful lot like the AI assistant she played in the 2013 movie "Her."
"When I heard the released demo, I was shocked, angered and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine," Johansson said in a statement first published by NPR.
The Hollywood star has every reason to be frustrated.
Despite declining an offer last September to voice ChatGPT, per her statement, Johansson found herself alluded to by Altman after OpenAI's big launch last week of a new AI model that brought a real-time, Johansson-like voice called "Sky" to the chatbot.
"Her," Altman wrote in a one-word post to X following the event, seemingly referring to the 2013 film, "Her," in which the main character Theodore, played by Joaquin Phoenix, develops a relationship with his AI personal assistant, voiced by Johansson.
OpenAI has responded to the criticisms by pulling the Sky voice entirely. It has also issued a statement claiming Sky was "never intended to resemble Johansson." That said, the entire saga highlights a deeper problem facing the startup.
On multiple fronts, the San Francisco company at the heart of the current AI boom faces a growing chorus of critics who say that it has trained its AI models with intellectual property from authors, publishers, and artists — without their explicit permission.
Although OpenAI asked for Johansson's permission in this instance, it ended up creating an AI voice that many say sounded just like hers — after she politely declined to get involved.
Others seem not to have been asked for permission at all.
Sora, a text-to-video AI model unveiled by OpenAI in February, is suspected of using videos from YouTube in its development. In an interview with The Verge, published Monday, Google CEO Sundar Pichai said he thinks OpenAI may have broken YouTube's terms and conditions.
Though Johansson and Pichai have not filed lawsuits against OpenAI, the "ask forgiveness, not permission" strategy that critics accuse the company of has already landed it in legal hot water.
Several authors represented by the Authors Guild are in the midst of a tense legal battle with OpenAI over concerns that their books were used without permission to train an older OpenAI model.
The New York Times is fighting a legal case against OpenAI, too, arguing that the similarity of ChatGPT's responses to the text of its articles is a sign that the AI company is taking its journalism for a "free ride."
OpenAI could face more trouble from the music industry. Sony Music, whose artist roster includes the likes of Beyoncé, sent a letter to OpenAI and other companies last week over fears that they had "made unauthorized uses" of songs from its artists to train AI.
With AI already unleashed on the world, it's hard to know what the path forward might be. In recent months, OpenAI has been scrambling to sign licensing agreements with Reddit and publishers such as Business Insider and the Financial Times.
Creators who suspect their work has been used to train OpenAI's models without their permission will probably wonder why they weren't offered an agreement in the first place.