Microsoft Proposes Using OpenAI Tech for US Military, Ignoring Ethical Guidelines

Key Takeaways:
  • Microsoft proposed military applications for OpenAI’s technology without OpenAI’s approval.
  • Microsoft suggested using DALL-E to generate imagery for battle management systems and ChatGPT to automate administrative tasks.
  • The proposal sparked ethical concerns and internal opposition regarding the use of AI in military settings.

Microsoft has stirred significant controversy by proposing the integration of OpenAI’s artificial intelligence into US military operations, in apparent violation of OpenAI’s ethical guidelines. The revelation emerged from an investigative report by The Intercept, which detailed Microsoft’s plans to profit from its hefty investments in AI technology through military applications.

During a presentation to the Department of Defense titled “Generative AI with DoD Data”, Microsoft showcased various military uses for OpenAI’s technology, including analysing surveillance imagery and managing battlefield operations. The presentation outlined how generative AI such as DALL-E could create images for battle management systems, and how tools like ChatGPT could automate bureaucratic processes such as report writing.

Surprisingly, Microsoft initiated these discussions without seeking prior approval from OpenAI, contradicting OpenAI’s firm stance against the militarization of its technologies. OpenAI spokesperson Liz Bourgeois clarified, “Our policies explicitly prohibit the use of our technology to develop weapons, cause harm, or destroy property. OpenAI does not partner with defence agencies for such purposes.”

Despite this stance, Microsoft acknowledged making the presentation to the Department of Defense but clarified that the US military has not yet employed these technologies. The company characterised the presentation as an exploration of possible applications for AI, emphasising that no actual deployment had begun.

The incident highlights Microsoft’s ongoing strategy of leveraging its technological advancements, including those developed in partnership with OpenAI, for military purposes. This is not Microsoft’s first venture into such territory; the company holds multiple ongoing contracts with the US military, including the use of Azure for defence tasks, which Microsoft promotes as enhancing operational efficiency and decision-making on the battlefield.

The technology giant has faced internal dissent over such military collaborations in the past. In 2019, employees protested a contract with the Department of Defense that involved using HoloLens for military missions, stating, “We were not hired to develop weapons, and we demand to be able to decide how our work is used.” Under this contract, Microsoft was set to provide 100,000 HoloLens units for IVAS (the Integrated Visual Augmentation System), an augmented reality system designed to enhance soldiers’ effectiveness in combat.

The use of AI in defence settings has been met with scepticism from experts. Brianna Rosen, a professor at the Blavatnik School of Government at the University of Oxford, commented on the inherent risks, noting, “It is not possible to build a battle management system in a way that does not contribute, at least indirectly, to civilian harm.” Heidy Khlaaf, a machine learning security engineer, also expressed doubts about the reliability of AI technologies like DALL-E in accurately reflecting real-world scenarios. “They can’t even accurately generate a correct number of limbs or fingers, how can we trust them to be accurate with respect to a realistic field presence?” she questioned.
