Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military

Any battlefield use of the software would be a dramatic turnaround for OpenAI, which describes its mission as developing AI that can benefit all of humanity.


Microsoft last year proposed using OpenAI’s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI quietly ended its prohibition against military work.

The Microsoft presentation deck, titled “Generative AI with DoD Data,” provides a general breakdown of how the Pentagon can make use of OpenAI’s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. (Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.)

The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense “AI literacy” training seminar hosted by the U.S. Space Force in Los Angeles. The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.

The publicly accessible files were found on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition, and discovered by journalist Jack Poulson. On Wednesday, Poulson published a broader investigation into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal, and since last year has contracted with the Pentagon’s main AI office. The firm did not respond to a request for comment.

One page of the Microsoft presentation highlights a variety of “common” federal uses for OpenAI, including for defense. One bullet point under “Advanced Computer Vision Training” reads: “Battle Management Systems: Using the DALL-E models to create images to train battle management systems.” Just as it sounds, a battle management system is a command-and-control software suite that provides military leaders with a situational overview of a combat scenario, allowing them to coordinate things like artillery fire, airstrike target identification, and troop movements. The reference to computer vision training suggests artificial images conjured by DALL-E could help Pentagon computers better “see” conditions on the battlefield, a particular boon for finding — and annihilating — targets.

In an emailed statement, Microsoft told The Intercept that while it had pitched the Pentagon on using DALL-E to train its battlefield software, it had not begun doing so. “This is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI.” Microsoft, which declined to attribute the remark to anyone at the company, did not explain why a “potential” use case was labeled as a “common” use in its presentation.

OpenAI spokesperson Liz Bourgeous said OpenAI was not involved in the Microsoft pitch and that it had not sold any tools to the Department of Defense. “OpenAI’s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,” she wrote. “We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.”

Bourgeous added, “We have no evidence that OpenAI models have been used in this capacity. OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes.”

At the time of the presentation, OpenAI’s policies seemingly would have prohibited a military use of DALL-E. Microsoft told The Intercept that if the Pentagon used DALL-E or any other OpenAI tool through a contract with Microsoft, it would be subject to Microsoft’s usage policies. Still, any use of OpenAI technology to help the Pentagon more effectively kill and destroy would be a dramatic turnaround for the company, which describes its mission as developing safety-focused artificial intelligence that can benefit all of humanity.

“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm.”

“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm,” said Brianna Rosen, a visiting fellow at Oxford University’s Blavatnik School of Government who focuses on technology ethics.

Rosen, who worked on the National Security Council during the Obama administration, explained that OpenAI’s technologies could just as easily be used to help people as to harm them, and that their use for the latter by any government is a political choice. “Unless firms such as OpenAI have written guarantees from governments they will not use the technology to harm civilians — which still probably would not be legally binding — I fail to see any way in which companies can state with confidence that the technology will not be used (or misused) in ways that have kinetic effects.”

The presentation document provides no further detail about how exactly battlefield management systems could use DALL-E. The reference to training these systems, however, suggests that DALL-E could be used to furnish the Pentagon with so-called synthetic training data: artificially created scenes that closely resemble relevant real-world imagery. Military software designed to detect enemy targets on the ground, for instance, could be shown a massive quantity of fake aerial images of landing strips or tank columns generated by DALL-E in order to better recognize such targets in the real world.
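
To make the mechanics concrete, the snippet below is a minimal sketch of that synthetic-data pipeline, written against OpenAI’s publicly documented Python client. The prompts, class labels, and folder layout are illustrative assumptions for this article, not details drawn from the Microsoft presentation, which does not describe an implementation.

```python
# A sketch of the synthetic-training-data idea described above, using
# OpenAI's documented Python client (openai>=1.0, reads OPENAI_API_KEY).
# Prompts, labels, and file layout are illustrative assumptions only.
import urllib.request
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Hypothetical target classes a vision model might be trained to detect.
LABELS = {
    "landing_strip": "overhead aerial photograph of a remote dirt landing strip",
    "tank_column": "overhead aerial photograph of a column of tanks on a road",
}

def generate_synthetic_set(out_dir: str = "synthetic_data", per_label: int = 4) -> None:
    """Generate a small labeled image set; a real pipeline would make thousands."""
    for label, prompt in LABELS.items():
        target = Path(out_dir) / label
        target.mkdir(parents=True, exist_ok=True)
        for i in range(per_label):
            # DALL-E 3 returns one image per request as a temporary URL.
            resp = client.images.generate(
                model="dall-e-3", prompt=prompt, size="1024x1024", n=1
            )
            # Save into a folder-per-label layout that standard training
            # loaders (e.g., torchvision's ImageFolder) consume directly.
            urllib.request.urlretrieve(resp.data[0].url, str(target / f"{label}_{i}.png"))

if __name__ == "__main__":
    generate_synthetic_set()
```

In practice, the resulting labeled folders would be fed into an ordinary image classifier training loop; the dispute raised below is whether images generated this way resemble reality closely enough for such training to be trustworthy.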

Even putting aside ethical objections, the efficacy of such an approach is debatable. “It’s known that a model’s accuracy and ability to process data accurately deteriorates every time it is further trained on AI-generated content,” said Heidy Khlaaf, a machine learning safety engineer who previously contracted with OpenAI. “DALL-E images are far from accurate and do not generate images reflective even close to our physical reality, even if they were to be fine-tuned on inputs of [a] battlefield management system. These generative image models cannot even accurately generate a correct number of limbs or fingers, how can we rely on them to be accurate with respect to a realistic field presence?”

In an interview last month with the Center for Strategic and International Studies, Capt. M. Xavier Lugo of the U.S. Navy envisioned a military application of synthetic data exactly like the kind DALL-E can crank out, suggesting that faked images could be used to train drones to better see and recognize the world beneath them.

Lugo, mission commander of the Pentagon’s generative AI task force and member of the Department of Defense Chief Digital and Artificial Intelligence Office, is listed as a contact at the end of the Microsoft presentation document. The presentation was made by Microsoft employee Nehemiah Kuhns, a “technology specialist” working with the Space Force and Air Force.

The Air Force is currently building the Advanced Battle Management System, its portion of a broader multibillion-dollar Pentagon project called Joint All-Domain Command and Control, which aims to network together the entire U.S. military for expanded communication across branches, AI-powered data analysis, and, ultimately, an improved capacity to kill. Through JADC2, as the project is known, the Pentagon envisions a near future in which Air Force drone cameras, Navy warship radar, Army tanks, and Marines on the ground all seamlessly exchange data about the enemy in order to better destroy them.

On April 3, U.S. Central Command revealed it had already begun using elements of JADC2 in the Middle East.

The Department of Defense didn’t answer specific questions about the Microsoft presentation, but spokesperson Tim Gorman told The Intercept that “the [Chief Digital and Artificial Intelligence Office’s] mission is to accelerate the adoption of data, analytics, and AI across DoD. As part of that mission, we lead activities to educate the workforce on data and AI literacy, and how to apply existing and emerging commercial technologies to DoD mission areas.”

While Microsoft has long reaped billions from defense contracts, OpenAI only recently acknowledged it would begin working with the Department of Defense. In response to The Intercept’s January report on OpenAI’s military-industrial about-face, company spokesperson Niko Felix said that even under the loosened language, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

“The point is you’re contributing to preparation for warfighting.”

Whether the Pentagon’s use of OpenAI software would entail harm or not might depend on a literal view of how these technologies work, akin to arguments that the company that helps build the gun or trains the shooter is not responsible for where it’s aimed or pulling the trigger. “They may be threading a needle between the use of [generative AI] to create synthetic training data and its use in actual warfighting,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “But that would be a spurious distinction in my view, because the point is you’re contributing to preparation for warfighting.”

Unlike OpenAI, Microsoft makes little pretense of forgoing harm in its “responsible AI” document and openly promotes the military use of its machine learning tools.


Following its policy reversal, OpenAI was also quick to emphasize to the public and business press that its collaboration with the military was of a defensive, peaceful nature. In a January interview at Davos responding to The Intercept’s reporting, OpenAI vice president of global affairs Anna Makanju assured panel attendees that the company’s military work was focused on applications like cybersecurity initiatives and veteran suicide prevention, and that the company’s groundbreaking machine learning tools were still forbidden from causing harm or destruction.

Contributing to the development of a battle management system, however, would place OpenAI’s military work far closer to warfare itself. While OpenAI’s claim of avoiding direct harm could be technically true if its software does not directly operate weapons systems, Khlaaf, the machine learning safety engineer, said, its “use in other systems, such as military operation planning or battlefield assessments” would ultimately impact “where weapons are deployed or missions are carried out.”

Indeed, it’s difficult to imagine a battle whose primary purpose isn’t causing bodily harm and property damage. An Air Force press release from March, for example, describes a recent battle management system exercise as delivering “lethality at the speed of data.”

Other materials from the AI literacy seminar series make clear that “harm” is, ultimately, the point. A slide from a welcome presentation, given the day before Microsoft’s, asks the question, “Why should we care?” The answer: “We have to kill bad guys.” In a nod to the “literacy” aspect of the seminar, the slide adds, “We need to know what we’re talking about… and we don’t yet.”

Update: April 11, 2024
This article was updated to clarify Microsoft’s promotion of its work with the Department of Defense.

