Microsoft Pitched DALL-E from Azure OpenAI to the US Military for Combat Operations

Microsoft Azure’s version of OpenAI’s image generator, DALL-E, was pitched as a battlefield tool for the US Department of Defense (DoD), as first reported by The Intercept on Wednesday. According to the report, Microsoft made its pitch for Azure OpenAI tools in October 2023, likely in hopes of capitalizing on the U.S. military’s growing interest in using generative AI for warfare.

“Use of the DALL-E models to create images for training battle management systems,” reads a line from Microsoft’s proposal to the Defense Department, according to a presentation obtained by The Intercept. The phrase about DALL-E’s potential military application appears in a slide deck titled “Generative AI with DoD Data,” carrying Microsoft’s branding.

Azure offers many of OpenAI’s tools, including DALL-E, thanks to Microsoft’s $10 billion partnership with the non-profit organization. When it comes to military use, Microsoft Azure has the advantage of not being bound by OpenAI’s mission: “to ensure that artificial general intelligence benefits all of humanity.” OpenAI’s policies prohibit the use of its services to “harm others” or for spyware. Microsoft, however, offers OpenAI’s tools under its own corporate umbrella, and according to a Microsoft spokesperson, the company has worked with the armed forces for decades.

“This is an example of potential use cases uncovered through conversations with customers about the art of the possible with generative AI,” a Microsoft spokesperson said in an email in response to the presentation.

Just last year, OpenAI (not Azure OpenAI) banned the use of its tools for “military and warfare” and “weapons development,” as archived versions of its website on the Internet Archive show. However, in January 2024, OpenAI quietly removed that line from its Universal Policies, a change first spotted by The Intercept. Just a few days later, Anna Makanju, vice president of global affairs at OpenAI, told Bloomberg that the company was beginning to work with the Pentagon. OpenAI noted at the time that several national security use cases aligned with its mission.

“OpenAI’s policies prohibit the use of our tools to develop or use weapons, harm others, or destroy property,” an OpenAI spokesperson said in an email. “We were not involved in this presentation and have not had discussions with U.S. defense officials about the hypothetical use cases described in it.”

Governments around the world seem to be looking at AI as the future of warfare. We recently learned that Israel used an AI system called “Lavender” to compile a “kill list” of 37,000 people in Gaza, as first reported by +972 Magazine. Since July of last year, American military officials have been experimenting with large language models for military tasks, according to Bloomberg.

The tech industry has undoubtedly taken note of this massive financial opportunity. Former Google CEO Eric Schmidt is building AI kamikaze drones under the name White Stork. Schmidt has bridged the tech industry and the Pentagon for years, and he is leading efforts to put AI on the front lines.

The tech industry has long been backed by the Pentagon, dating back to the first semiconductor chips in the 1950s, so it is no surprise that AI is being embraced in the same way. Although OpenAI’s goals sound lofty and peaceful, its partnership with Microsoft allows it to obscure them and sell its world-leading AI to the American military.
