Microsoft Bans US Police Departments from Using AI Tools for Businesses | TechCrunch

Microsoft has changed its policy to ban US police departments from using generative AI through the Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper for OpenAI technologies.

On Wednesday, new language was added to the Azure OpenAI Service terms of service that prohibits the use of integrations with Azure OpenAI Service “by or for” police departments in the United States, including integrations with OpenAI’s text- and speech-analyzing models.

A separate new bullet point covers “any law enforcement agency worldwide” and specifically prohibits the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dash cams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.

The changes in terms come a week after Axon, a maker of technology and weapons products for military and law enforcement, announced a new product that uses OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out potential pitfalls, such as hallucinations (even the best generative AI models today make up facts) and racial bias emerging from the training data (which is particularly concerning since people of color are far more likely to be stopped by police than their white peers).

It is unclear whether Axon used GPT-4 through the Azure OpenAI Service and, if so, whether the updated policy was in response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition via its APIs. We’ve contacted Axon, Microsoft and OpenAI and will update this post if we hear back.

The new conditions give Microsoft leeway.

The complete ban on using the Azure OpenAI Service applies only to US police, not international police. And it doesn’t cover facial recognition performed with stationary cameras in controlled environments, such as a back office (although the terms prohibit any use of facial recognition by US police).

This aligns with the recent approach Microsoft and its close partner OpenAI have taken to AI-related law enforcement and defense contracts.

In January, a report from Bloomberg revealed that OpenAI was working with the Pentagon on a number of projects, including cybersecurity capabilities – a departure from the startup’s previous ban on providing its AI to the military. Elsewhere, according to The Intercept, Microsoft has proposed using OpenAI’s DALL-E image generation tool to help the Department of Defense (DoD) develop software to conduct military operations.

The Azure OpenAI Service became available in Microsoft’s Azure Government product in February, providing additional compliance and management capabilities for government agencies, including law enforcement. In a blog post, Candice Ling, SVP of Microsoft’s government-focused division Microsoft Federal, promised that the Azure OpenAI Service would be “submitted for additional approval” to the Department of Defense for workloads in support of defense missions.

Microsoft and OpenAI did not immediately respond to requests for comment.
