Summary:
- Scarlett Johansson accused OpenAI of using a voice strikingly similar to hers for the ChatGPT bot without her permission, prompting legal action and calls for clearer laws on AI replication.
- OpenAI paused the use of the voice, claimed it was from a different actress, and apologized for the miscommunication.
The actress Scarlett Johansson accused OpenAI of using a voice strikingly similar to hers for its ChatGPT bot, despite her having declined to take part in the project.
The dispute comes at a moment when AI voice replication is pressing against increasingly blurry ethical lines.
Accusations and Company’s Response
Scarlett Johansson, who voiced the AI assistant in the film Her, said OpenAI’s new voicebot, Sky, sounds too much like her. An Associated Press story reported that the actress turned down an offer from OpenAI CEO Sam Altman in September to lend her voice to ChatGPT 4.0.
Speaking of her shock and disappointment, she said, “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.”
Amid the ensuing controversy, OpenAI said it would pause the use of ‘Sky,’ stating as a principle that its AI voices should not deliberately mimic a celebrity’s distinctive voice:
“Sky is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company stated, adding that the actress’s identity was being withheld to protect her privacy.
Legal Actions and the Call for Legislative Changes
Johansson soon escalated the matter through legal counsel, with her lawyers sending letters to OpenAI and Sam Altman demanding details of how the ‘Sky’ voice was created.
OpenAI then agreed to remove the voice, though only reluctantly. Johansson also pointed to a one-word tweet by Altman, “her,” an apparent nod to the AI character in the film Her, as further evidence of the resemblance she believed was intended.
As the conflict continues, Johansson is urging clearer laws governing AI technology and the protection of individual rights.
“In a time where we are all pretty concerned about deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” she said.
Her point: safeguarding personal identity in the digital era will require transparency and well-crafted legislation.
For his part, Altman told Entertainment Weekly that the voice was never intended as an impersonation of Johansson, and that he regretted not making that clear beforehand, adding:
“We are sorry to Ms. Johansson that we didn’t communicate better.”
This unfolding scenario underlines the complex ethical and legal dilemmas that arise when AI replicates human attributes, shaping the ongoing dialogue about digital rights and the protection of identity.