Helen Toner, a former OpenAI board member and a director at Georgetown's Center for Security and Emerging Technology, made a compelling argument on Tuesday for increased disclosure and external auditing of AI companies. Speaking at the TED2024 conference, Toner said major AI players, including Google, Microsoft, and OpenAI, should open their advanced systems to outside scrutiny. Without that transparency, she warned, these companies are left "grading their own homework," a practice she believes could send AI firms down a problematic path similar to the one taken by social media companies.
Toner's call to action carries particular weight given her recent departure from OpenAI's board following the controversy surrounding the attempted ouster of CEO Sam Altman. Her remarks underscore the importance of clear regulatory frameworks and accountability in the rapidly evolving AI industry. Toner expressed concern over the current AI development landscape, likening the prevailing debate to a choice between "stepping on the gas or slamming on the brakes." Instead, she advocates for "a clear windshield and better steering": a well-defined vision and roadmap for AI development shaped by a broader range of societal input.
Highlighting the necessity for public engagement, Toner encouraged individuals from all walks of life to participate in shaping the future of AI. "Don’t be intimidated by the technology or the people building it," she urged, asserting that one does not need to be a scientist or engineer to contribute to the conversation about AI's role in society.
Her comments contrasted with the more optimistic views on AI's future presented at the same event by figures such as Google DeepMind CEO Demis Hassabis and investor Vinod Khosla, a reminder of how widely perspectives on the technology's direction diverge, even as AI remains a central theme at global tech conferences.
Toner's push for greater transparency and independent auditing amounts to a call to action for the AI industry: prioritize ethical practices and ensure that AI technologies are developed and deployed in ways that are safe, secure, and beneficial to society as a whole. The aim is to mitigate risks and build trust and collaboration between AI companies and the global community they serve.