The ongoing dispute between Scarlett Johansson and OpenAI intensifies scrutiny of CEO Sam Altman's credibility and leadership, fueling longstanding concerns about trust and transparency in AI development.
Why It Matters
Altman and OpenAI, which aim to ensure AI benefits humanity, must maintain public trust. Any erosion of credibility can hinder their mission, especially as AI's impact grows.
Background
Last fall, Altman faced a dramatic boardroom battle when directors fired him, citing his lack of consistent candour. Although this reason puzzled many at the time, recent events are shedding light on these concerns.
Current Situation
OpenAI and Altman insist that ChatGPT’s female voice, "Sky," wasn't modelled after Johansson, despite her iconic AI role in the 2013 film "Her." However, Johansson revealed that Altman had approached her twice to model the voice, a fact OpenAI hadn’t disclosed. OpenAI has now paused the use of Sky’s voice.
Compounding Issues
This controversy follows the disbanding of OpenAI's "superalignment" team, which was focused on researching long-term AI risks. Both team leaders, Ilya Sutskever and Jan Leike, have left the company. Sutskever, who initially voted to fire Altman, later signed a letter demanding his return. Leike criticized the company, stating that safety processes had been sidelined in favour of flashy products.
When the superalignment team was announced in July 2023, OpenAI committed 20% of its computing resources to this effort. Leike, however, highlighted the team's struggles with limited resources and internal resistance.
The Intrigue
Reports surfaced that OpenAI's off-boarding agreements prevented departing employees from criticizing the company, under threat of losing vested stock options. After these revelations, Altman apologized publicly, saying he had been unaware of the harsh terms and promising to change the agreements.
Public Reaction
Calls for Altman's resignation have emerged, though his departure remains unlikely absent further significant missteps. The newly reconstituted board appears less inclined to challenge his leadership than the previous board was.
Transparency Issues
One of the most critical issues is OpenAI's lack of transparency about the data used to train its AI models. In a Wall Street Journal interview, OpenAI CTO Mira Murati could not confirm whether YouTube videos were used to train the Sora video-making tool, saying only that the company used "publicly available data and licensed data."
Looking Ahead
The AI industry, including OpenAI, could alleviate much of this distrust by being more transparent about training data. However, full transparency might reveal extensive use of copyrighted material, inviting legal challenges.
One reason for settling disputes like Johansson's outside of court is to avoid disclosing sensitive information. Trials could force companies to reveal data practices they prefer to keep confidential.
By addressing these challenges head-on and improving transparency, OpenAI can work towards restoring trust and demonstrating its commitment to ethical AI development. The outcome of this controversy will significantly impact the perception and future of AI technology.