OpenAI, Google, Microsoft and Meta Sign an Agreement with the White House to Regulate AI

Tech giants including Google, Amazon, Microsoft, and OpenAI have pledged to the White House to voluntarily watermark AI-generated content, aiming to make the technology safer to use. The commitment is part of a broader agreement to introduce additional AI safety measures and marks a significant step toward AI regulation in the US and globally.

A letter signed by OpenAI CEO Sam Altman and public figures such as Elon Musk highlighted the potential negative impacts of AI, including the loss of millions of jobs worldwide. The arrival of AI models and applications such as ChatGPT has only intensified those warnings.

The call for regulation has been constant, and agreements like the one recently announced by the White House are becoming more frequent. Seven leading AI developers, namely Google, Amazon, Microsoft, Meta, OpenAI, Inflection, and Anthropic, have voluntarily committed to letting independent experts analyze and test their systems before they reach users. They will also share information about the safety and behavior of those systems with the US government and academic institutions.

One of the commitments made by these tech giants is to develop solutions that let the public know when text, audio, an image, or a video has been created by AI. Companies like OpenAI have already started implementing such measures: in January, OpenAI launched its own detector for determining whether a text was written by AI or by a human, although its results still leave room for improvement.

At OpenAI, a team of external professionals has been working for months to assess the potential risks of GPT-4, the latest version of its large language model. Meanwhile, other companies such as Google are working on something akin to a watermark that would identify content created with generative AI; for now, users must rely on other clues to spot fake photos created with AI.
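Neither the agreement nor the companies have detailed how such watermarks would work, but one approach discussed in recent research is to bias a text model's word choices toward a pseudorandom "green list" and then check statistically whether green words appear more often than chance. The sketch below is a minimal toy illustration of that detection idea in Python; the hashing scheme, the 50/50 split, and the word-level granularity are assumptions made purely for illustration, and this is not the detector OpenAI shipped or the watermark Google is building.

```python
# Toy illustration of statistical text watermark detection.
# Assumption: a watermarking generator would have preferred "green" words,
# so watermarked text shows an excess of green words over the ~50% expected by chance.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign a word to the 'green list', seeded by its predecessor."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).hexdigest()
    return int(digest, 16) % 2 == 0  # roughly half of all word pairs count as green

def green_z_score(text: str) -> float:
    """Binomial z-score for how far the share of green words deviates from 50%."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(prev, word) for prev, word in pairs)
    n = len(pairs)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"z-score: {green_z_score(sample):.2f}")  # ordinary text stays near zero
```

In a real scheme the bias is applied to the model's token probabilities at generation time rather than to whole words, but the statistical test is the same idea: watermarked output drifts far above the z-score that ordinary human writing produces.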

The agreement is meant to set more demanding standards for the sector, according to a senior White House official who spoke to The Washington Post on condition of anonymity. “This is going to push the limits of what companies are doing and raise the standards of safety and trust in AI,” the official said.

For President Biden and his administration, this issue is a national priority, and these first steps toward regulation are not seen as sufficient. The administration is currently working on an executive order and pursuing bipartisan legislation to regulate AI in the US.

The announcement came after several meetings between President Biden, Vice President Kamala Harris, senior tech executives, researchers, consumer advocates, and civil liberties groups. Until the desired regulation arrives, several government agencies are studying ways to apply existing laws to restrict certain uses of AI.

Akshay Prakash
Akshay Prakash is a BCA graduate and a passionate newbie programmer. With a keen interest in the ever-evolving world of coding, he delves into the latest trends, updates, and innovations within the coding sphere. Through his captivating and informative writing, Akshay brings coding news to life, making complex concepts accessible to readers of all backgrounds.
