Google employees have raised concerns about the company’s AI chatbot, Bard, calling it a “pathological liar” and “cringe-worthy” and urging management not to launch the product.
According to Bloomberg, current and former Google employees have expressed concerns about the chatbot’s capabilities.
In internal discussions, one Google employee said that Bard often gave dangerous advice, including on topics such as landing a plane and scuba diving. The company also reportedly overruled a risk evaluation from its internal safety team, which had concluded that the chatbot was not yet ready for release. Access to the “experimental” product was nonetheless opened in March.
The publication suggests that Google has sidelined ethical considerations in order to keep pace with its competitors, Microsoft and OpenAI. The company emphasizes that it is working to ensure safety and compliance with ethical standards in this area, but in practice it often appears to prioritize its business interests.
However, Google spokesman Brian Gabriel insists that ethics remains a priority for the company, which he says continues to invest in teams that work on applying ethical principles to new technologies.
The employees’ criticism of Bard highlights the importance of ethical considerations in developing artificial intelligence. While AI can transform the way we live and work, it also has the potential to pose substantial risks if not properly regulated and managed.
Google is not the only company to face criticism over the development of AI. Many experts and advocates have called for greater regulation and oversight of the technology to ensure that it is used responsibly and ethically.
The controversy surrounding Bard is a reminder that companies must take a cautious and responsible approach to AI development.