Time for Independent Review in Artificial Intelligence: The Future of Life Institute argues that a pause on AI development should be public and verifiable
As noted in the letter, even OpenAI itself has expressed the potential need for “independent review” of future AI systems to ensure they meet safety standards. The signatories argue that that time has now come.
Signatories include Yuval Noah Harari, Steve Wozniak, Jaan Tallinn, Andrew Yang, Stuart Russell, Yoshua Bengio, and Gary Marcus. The full list of signatories can be seen here, though new names should be treated with caution: there are reports of names being added as a joke, including that of OpenAI CEO Sam Altman, an individual partly responsible for the current race dynamic in AI.
The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not suggest how a halt on development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.
Requests for comment sent to OpenAI, Microsoft, and other companies went unanswered. Even so, many employees of tech companies such as Microsoft and Google appear among the letter’s signatories.
The excitement surrounding Microsoft’s maneuvers in search may have pushed Google to accelerate its own plans. The company recently debuted Bard, a competitor to ChatGPT, and has made a language model called PaLM, similar to OpenAI’s offerings, available through an API. “It feels like we are moving too quickly,” says Peter Stone, a professor at the University of Texas at Austin and chair of the One Hundred Year Study on AI, a report aimed at understanding the long-term implications of AI.