New Rules from Microsoft Ban the Use of AI for Facial Recognition by Law Enforcement

Microsoft has reaffirmed its stance against law enforcement use of its Azure OpenAI generative artificial intelligence (AI) service for facial recognition, joining other tech giants such as Amazon and IBM in similar decisions.

The Washington-based tech giant has changed the terms of use of its Azure OpenAI offering to expressly prohibit its use “by or for” police departments for facial recognition in the US.

Also now expressly prohibited is the use of “real-time facial recognition technology on mobile cameras used by any law enforcement globally to attempt to identify individuals in uncontrolled, ‘in the wild’ environments,” which includes (without limitation) police officers on patrol using body-worn or dashboard-mounted cameras with facial recognition technology to attempt to identify individuals present in a database of suspects or former inmates.

The company has since claimed that its original terms-of-service change contained an error. It told TechCrunch that the ban on facial recognition applies only to police in the US and is not an outright ban on police departments using the service.

Why did Microsoft ban facial recognition from its generative AI service?

This update to Microsoft’s Azure OpenAI terms of service comes just a week after Axon, a company that makes technology and weapons for police and the military, announced a tool built on OpenAI’s GPT-4 to summarize body camera audio.

Features that use generative AI like this come with many pitfalls, such as the tendency of these tools to “hallucinate” and make false claims (OpenAI is currently facing a privacy complaint for failing to correct inaccurate data produced by ChatGPT), and the well-documented racial bias in facial recognition caused by biased training data (late last year, for example, a facial recognition error led to the wrongful imprisonment of an innocent Black man).

These recent changes reinforce a stance that Microsoft has held for several years. During the Black Lives Matter protests in 2020, Microsoft President Brad Smith said in a conversation with The Washington Post: “We will not sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”

The current spate of protests around the world against the Palestinian genocide in Gaza has prompted renewed commitments from technology companies to protecting human rights, as reports of police brutality against protesters surface in the press.

Featured image credit: generated with Ideogram
