Sam Altman, CEO of OpenAI, has shared his vision for how the U.S. government should regulate AI companies, including his own. His recommendations emphasize the need for safety standards, auditing requirements, and a new regulatory agency. Altman and fellow OpenAI co-founders Greg Brockman and Ilya Sutskever have suggested that the International Atomic Energy Agency (IAEA) be used as a blueprint for regulating artificial general intelligence (AGI).
Advanced AI models that surpass human intelligence would be categorized as AGI and would fall under an international authority. That authority would have the power to inspect systems, conduct audits, enforce safety standards, and restrict the deployment of the technology. Altman suggests that companies initially work together to establish a list of requirements for countries to adopt, with a focus on reducing existential risks. More specific issues, such as determining what AI is allowed to say, would be left to individual nations.
OpenAI’s ChatGPT continues to captivate netizens and experts alike, and at the same time it has ignited a debate over the future of AI and its inherent risks. In March, an open letter signed by tech leaders, including Elon Musk and Steve Wozniak, called for a pause in training systems more powerful than OpenAI’s GPT-4, citing worries over unforeseen risks.
The IAEA itself has faced challenges in the past, including instances where countries expelled inspectors or failed to comply with regulations, and those precedents raise concerns about the feasibility and effectiveness of creating a similar international regulatory body for AI.