Microsoft Says it Released 30 Responsible AI Tools Last Year - Latest Global News

Microsoft Says it Released 30 Responsible AI Tools Last Year

Microsoft shared its responsible artificial intelligence practices over the past year in an inaugural report, including releasing 30 responsible AI tools with over 100 features to support AI developed by its customers.

The companys Responsible AI Transparency Report is and is focused on its efforts to develop, support and advance AI products responsibly Part of Microsoft’s commitments after signing a voluntary agreement with the White House in July. Microsoft also announced that it grew its responsible AI team from 350 to over 400 people in the second half of last year – an increase of 16.6%.

“As a company at the forefront of AI research and technology, we are committed to sharing our practices with the public as they evolve,” said Brad Smith, vice chairman and president of Microsoft, and Natasha Crampton, AI officer in charge a statement. “This report allows us to share our mature practices, reflect on what we have learned, set our goals, hold ourselves accountable and earn the public’s trust.”

Microsoft said its responsible AI tools are designed to “map and measure AI risks” and then manage them with remediation, real-time detection and filtering, and ongoing monitoring. In February, Microsoft released Open Access Red teaming tool called Python Risk Identification Tool (PyRIT) for generative AI, which enables security experts and machine learning engineers to identify risks in their generative AI products.

In November, the company released a series of generative AI assessment tools Azure AI Studio, where Microsoft customers build their own generative AI modelsto enable customers to evaluate their models on basic quality metrics, including Down-to-earth – or how well a model’s generated response matches its source material. In March, these tools were expanded to address security risks including hateful, violent, sexual and self-harm content, as well as jailbreaking methods such as immediate injectionsThis involves feeding instructions to a large language model (LLM) that can result in confidential information being lost or misinformation being spread.

Despite these efforts, Microsoft’s AI team has had to deal with numerous incidents involving its AI models over the past year. In March, the Copilot AI chatbot from Microsoft said one user: “Maybe you have nothing to live for.” after the user, a data scientist at Meta, asked Copilot if he should “just break up.” Microsoft said the data scientist attempted to manipulate the chatbot to generate inappropriate responses, which the data scientist denied.

Last October, Microsoft’s Bing image generator allowed users to generate photos of popular characters, including Kirby and Spongebob. Planes fly into the Twin Towers. After its Bing AI chatbot (Copilot’s predecessor) was released in February last year, one user was on board able to make the chatbot say “Heil Hitler”..”

“There is no finish line for responsible AI. And while this report does not contain all the answers, we are committed to sharing our findings early and often and engaging in robust dialogue about responsible AI practices,” Smith and Crampton write in the report.

This story originally appeared on quartz.

Sharing Is Caring:

Leave a Comment