OpenAI’s Model Specification Describes Some Basic Rules for AI

AI tools that misbehave – like Microsoft’s Bing AI losing track of the current year – have become a subgenre of AI reporting. But it is often difficult to tell the difference between a bug and poor design of the underlying AI model that analyzes incoming data and predicts what an acceptable response will be, such as Google’s Gemini image generator drawing racially diverse Nazis because of a filter setting.

Now OpenAI is releasing the first draft of a proposed framework called Model Spec that aims to shape how AI tools like its own GPT-4 model respond in the future. The OpenAI approach proposes three general principles: AI models should assist the developer and end user with helpful responses that follow instructions, benefit humanity while weighing potential benefits and harms, and reflect well on OpenAI with respect to social norms and laws.

It also contains several specific rules.

According to OpenAI, the idea is also to give companies and users the ability to toggle how “spicy” AI models can get. One example the company points to is NSFW content, where it says it is “exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT.”

A section of the Model Spec that relates to how an AI assistant should handle information risks.
Screenshot: OpenAI

Joanne Jang, product manager at OpenAI, explains that the idea is to get public input to guide how AI models should behave, and says this framework would help draw a clearer line between what is intentional and what is a bug. The default behaviors OpenAI suggests for the model include assuming the best intentions of the user or developer, asking clarifying questions, not overstepping, taking an objective point of view, discouraging hate, not trying to change anyone’s mind, and expressing uncertainty.

“We believe we can provide building blocks for people to have more nuanced conversations about models and ask questions, such as: if models should follow the law, whose law?” Jang tells The Verge. “I hope that we can decouple the discussions about whether something is a bug from whether a response is a principle that people disagree on, because that would make the conversations about what we should bring to the policy team easier.”

The model specification has no immediate impact on OpenAI’s currently released models such as GPT-4 or DALL-E 3, which will continue to operate under their existing usage guidelines.

Jang calls model behavior a “nascent science” and says Model Spec is intended to be a living document that can be updated frequently. For now, OpenAI will wait for feedback from the public and from the various stakeholders (including “policymakers, trusted institutions, and domain experts”) that use its models, although Jang did not offer a time frame for releasing a second draft of the Model Spec.

OpenAI hasn’t said how much of the public’s feedback it may incorporate or who exactly will determine what needs to change. Ultimately, the company has the final say on how its models will behave, saying in a post: “We hope this will provide us with early insights as we develop a robust process for collecting and incorporating feedback to ensure that we are responsibly building towards our mission.”
