Elon Musk, Steve Wozniak, and more than 1,000 other AI and IT experts have called for a six-month moratorium on training AI systems that perform better than OpenAI’s GPT-4 model.
In an open letter published by the non-profit Future of Life Institute, the signatories warned of potential societal risks if powerful AI systems are developed without shared protocols and security standards. They noted that extensive research has shown that AI systems with human-competitive intelligence can pose a serious danger to humanity, and urged developers to work closely with regulators to ensure that AI development benefits society and that its risks remain manageable.
The letter came after Europol joined a growing chorus of concern, warning that attackers could misuse systems like ChatGPT for phishing, disinformation, and cybercrime. Since its release last year, ChatGPT, the chatbot from Microsoft-backed OpenAI, has demonstrated capabilities that pushed competitors to accelerate development of their own large language models, and companies have rushed to integrate generative AI into their products.
Sam Altman, the head of OpenAI, did not sign the letter, and the company declined to comment. According to one expert, the work should be slowed until humanity better understands its consequences, since AI systems can cause serious damage, especially while the major players keep secret what they are working on.
The authors of the open letter also called on AI developers to cooperate with politicians to create a governance system in this area, including new regulatory bodies, a reliable verification and certification system, and control of more advanced AI systems.
The signatories proposed using the pause to create and implement security protocols for more powerful systems. Laboratories and outside experts, they argued, should spend the break ensuring that the risks of AI development are manageable and establishing shared protocols and security standards for the industry, with compliance verified by independent auditors.
In summary, the call for a moratorium by experts including Elon Musk and Steve Wozniak highlights the need for shared protocols and security standards and for the risks of AI development to be kept manageable. The open letter also stresses the potential threats to society and civilization from AI systems capable of competing with humans, and it urges developers to work closely with regulators to ensure positive outcomes for humanity.