Leading artificial intelligence companies have come together to form the Frontier Model Forum, a coalition aimed at developing safe and responsible AI. Members of the new group include chatbot pioneers OpenAI and Anthropic, as well as tech giants Google and Microsoft.
The goal of the Frontier Model Forum is to promote the ethical development and deployment of large language models and other advanced AI systems known as frontier models. In a blog post announcing the group, Google said the forum will focus on AI safety research, identifying best practices, collaborating with stakeholders, and developing beneficial applications.
The need for cooperation comes as AI chatbots like OpenAI’s ChatGPT have raised concerns about potential risks from advanced AI systems. ChatGPT’s human-like conversational abilities have sparked worries about misinformation and security vulnerabilities.
In a statement accompanying the announcement, Microsoft President Brad Smith said companies creating AI have a responsibility to ensure it remains safe, secure, and under human control. He called the forum a vital step toward responsible AI development that can address challenges and benefit humanity.
The forum plans to pursue its initial goals over the coming year. Members said it is imperative for leading AI firms to align on safety measures so that powerful AI tools can deliver broad benefits.
For now, the Frontier Model Forum consists of OpenAI, Anthropic, Google, and Microsoft, but the group said it welcomes other companies that demonstrate a strong commitment to frontier model safety.